The development of autonomous vehicles (AVs) is rapidly transforming transportation, promising gains in safety, efficiency, and accessibility. This paper presents an end-to-end machine learning approach to autonomous vehicle control that spans perception, decision-making, and actuation. Unlike traditional modular pipelines, which rely on separately designed components for tasks such as object detection, path planning, and control, our method integrates these stages into a single unified model. Leveraging deep learning techniques, particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the model maps raw sensory inputs, such as camera images and LIDAR data, directly to driving commands. This design allows the system to learn and adapt to complex driving environments, reducing the need for hand-engineered features and rule-based logic. The proposed model is evaluated in simulation and in real-world testing, demonstrating its ability to handle diverse scenarios, including urban traffic, highway driving, and obstacle avoidance. The results highlight the advantages of an end-to-end strategy in scalability, robustness, and generalization, marking a significant step toward fully autonomous vehicles.
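To make the described CNN + RNN mapping concrete, the sketch below shows one plausible way such an end-to-end model could be assembled in PyTorch. It is an illustrative assumption, not the paper's actual architecture: the `EndToEndDriver` name, all layer sizes, the frame resolution, and the two-command output (e.g., steering and throttle) are chosen purely for exposition, and LIDAR input is omitted for brevity.

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Illustrative CNN + RNN pipeline: camera frames -> driving commands.

    Hypothetical sketch; not the architecture from the paper.
    """

    def __init__(self, hidden_size=128, num_commands=2):
        super().__init__()
        # CNN encoder extracts spatial features from each camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),                      # -> (batch, 64 * 4 * 4)
        )
        # RNN integrates per-frame features over time, capturing
        # temporal context such as vehicle dynamics.
        self.rnn = nn.LSTM(64 * 4 * 4, hidden_size, batch_first=True)
        # Linear head maps the hidden state to continuous commands,
        # e.g. steering angle and throttle.
        self.head = nn.Linear(hidden_size, num_commands)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) sequence of RGB camera images
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                  # (batch, time, num_commands)

# Example: one 8-frame clip at an assumed 66x200 resolution.
model = EndToEndDriver()
commands = model(torch.randn(1, 8, 3, 66, 200))
```

In this framing, the whole mapping from pixels to commands is trained jointly (for instance, by regressing against recorded human driving commands), which is what removes the hand-engineered interfaces between perception, planning, and control that a modular pipeline would require.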