Traffic congestion is a critical challenge in urban environments, leading to increased delays, fuel consumption, and inefficiencies in logistics. Traditional traffic light control methods, which rely on preset schedules and historical data, often fail to adapt to real-time traffic conditions, resulting in suboptimal performance. Recent advances in vehicular communication and computational methods offer new opportunities to enhance traffic signal control. This paper explores the application of Deep Q-Networks (DQN) to optimizing traffic signal timings at a single intersection. By leveraging reinforcement learning (RL) with function approximation, I aim to overcome the difficulties that high-dimensional traffic data poses for classical approaches, namely intractably large state spaces and slow convergence. I compare DQN against several baselines: Random, Periodic Cycle, Greedy, and Naive (tabular) Q-learning. My findings show that DQN outperforms these methods both in minimizing overall vehicle waiting time and in distributing wait times more fairly across vehicles. The results highlight the potential of DQN for improving traffic flow and lay the foundation for future work on multi-intersection scenarios and more complex urban environments.
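To make the approach described above concrete, the following is a minimal sketch of a DQN-style agent controlling a single intersection. It is illustrative only: the `ToyIntersection` environment, its four-approach queue model, the two-phase action space, and the negative-total-queue reward are all simplifying assumptions of mine, not the paper's actual simulation setup, and the Q-function here is linear (pure NumPy) rather than a deep network, though it keeps DQN's experience replay and target-network components.

```python
import random
import numpy as np

class ToyIntersection:
    """Hypothetical toy environment: state = queue length on each of four
    approaches; action 0/1 selects which pair of approaches gets green."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.queues = self.rng.integers(0, 5, size=4).astype(float)
        return self.queues.copy()

    def step(self, action):
        arrivals = self.rng.poisson(1.0, size=4)
        # Queues saturate at 30 vehicles in this toy model.
        self.queues = np.minimum(self.queues + arrivals, 30.0)
        served = (0, 1) if action == 0 else (2, 3)
        for i in served:  # green phase discharges up to 3 vehicles per lane
            self.queues[i] = max(0.0, self.queues[i] - 3.0)
        reward = -float(self.queues.sum())  # penalize total queued vehicles
        return self.queues.copy(), reward

class LinearDQN:
    """DQN skeleton with a linear Q-function, replay buffer, target network."""
    def __init__(self, n_state=4, n_actions=2, lr=1e-4, gamma=0.95):
        self.W = np.zeros((n_actions, n_state))
        self.W_target = self.W.copy()
        self.lr, self.gamma = lr, gamma
        self.buffer = []

    def q(self, s, target=False):
        return (self.W_target if target else self.W) @ s

    def act(self, s, eps):
        if random.random() < eps:  # epsilon-greedy exploration
            return random.randrange(self.W.shape[0])
        return int(np.argmax(self.q(s)))

    def train_step(self, batch_size=32):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        for s, a, r, s2 in batch:
            # TD target uses the frozen target network, as in DQN
            target = r + self.gamma * np.max(self.q(s2, target=True))
            td_error = target - self.q(s)[a]
            self.W[a] += self.lr * td_error * s  # SGD on squared TD error

    def sync_target(self):
        self.W_target = self.W.copy()

env, agent = ToyIntersection(), LinearDQN()
s, eps = env.reset(), 1.0
for t in range(500):
    a = agent.act(s, eps)
    s2, r = env.step(a)
    agent.buffer.append((s, a, r, s2))
    agent.train_step()
    if t % 50 == 0:
        agent.sync_target()  # periodic target-network update
    s = s2
    eps = max(0.05, eps * 0.99)  # decay exploration over time
```

A full DQN would replace the linear `W` with a multi-layer network (and gradient steps via an autodiff framework), but the control loop, replay sampling, and target-network synchronization shown here are the pieces that distinguish DQN from the tabular Q-learning baseline.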