Apache Spark has revolutionized the landscape of big data processing by harnessing the power of distributed computing to handle massive datasets. However, as Spark applications grow in size and complexity, effective performance tuning becomes essential. Optimizing Spark jobs is crucial for maximizing resource utilization, accelerating job completion, and minimizing operational costs. This article presents an overview of the Hadoop and Apache Spark architectures and then examines the critical aspects of performance tuning in Apache Spark, focusing on techniques and strategies for improving data processing, resource allocation, and job execution. By leveraging Spark's features and optimization tactics, users can significantly improve application performance, leading to more efficient and cost-effective big data solutions.