As artificial intelligence (AI) pervades ever more aspects of our lives, from autonomous driving to banking and medical diagnostics, the security of AI systems becomes increasingly critical. Attacks on these high-stakes, security-sensitive systems could have severe consequences, jeopardizing people's lives, finances, and well-being. Adherence to the fundamental security properties of Confidentiality, Integrity, and Availability (CIA) is therefore essential for AI systems to comply with security standards. In this paper, we first define a simplified abstraction of AI systems built around four independent viewpoints: data, models, inputs, and deployment. We then conduct a detailed analysis of the attack vectors targeting each of these viewpoints, rigorously assessing their impact on the CIA properties. Proactively identifying attack vectors across the entire AI lifecycle is a prerequisite for establishing a secure-by-design framework for AI systems.