Cricket is a dynamic sport requiring strategic decision-making from both batsmen and bowlers. Traditional coaching methods rely on human expertise and statistical analysis, but advances in artificial intelligence make reinforcement learning (RL) a data-driven approach to optimizing cricket strategy. This paper introduces a Deep Q-Network (DQN)-based Cricket Strategy Optimizer designed to enhance decision-making in batting and bowling. The proposed model uses reinforcement learning to train two AI agents, one for batsmen and one for bowlers, enabling them to learn optimal actions under varying game conditions. For the batting agent, the model predicts the best shot selection based on factors such as bowler type, field placements, and match scenario. The bowling agent learns to select the most effective deliveries and field setups to maximize wicket-taking potential and minimize runs conceded. Training involves simulating thousands of game situations in which the AI receives rewards for successful outcomes (e.g., scoring runs efficiently, taking wickets) and penalties for suboptimal decisions (e.g., losing wickets, conceding boundaries). The DQN iteratively optimizes its decision-making by modeling the game environment as a Markov Decision Process (MDP), refining each state-action transition through continuous learning. Extensive simulations demonstrate that the trained agents outperform conventional heuristics and adapt dynamically to different game contexts. This study establishes a foundation for AI-powered cricket analytics, with potential applications in player training, strategy formulation, and match simulation. Future work will explore multi-agent reinforcement learning and integration with real match data for further validation.
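The abstract does not specify the state encoding or reward values, so the following is only a minimal tabular sketch of the batting agent's reward-and-penalty scheme. It uses single-step Q-learning over two assumed bowler types and three assumed shot choices; a full DQN would replace the Q-table with a neural network over richer state features (field placements, match situation). All state names, action names, success probabilities, and reward magnitudes here are illustrative assumptions, not the paper's actual parameters.

```python
import random

STATES = ["pace", "spin"]            # assumed bowler types
ACTIONS = ["drive", "sweep", "defend"]  # assumed shot choices


def reward(state, action, rng):
    """Hypothetical reward model: runs scored are positive, a
    dismissal carries a large penalty, a defensive shot is neutral."""
    if state == "pace" and action == "drive":
        return 4 if rng.random() < 0.9 else -10  # boundary vs. wicket
    if state == "spin" and action == "sweep":
        return 4 if rng.random() < 0.9 else -10
    if action == "defend":
        return 0                                 # safe, but no runs
    return 1 if rng.random() < 0.5 else -10      # risky shot mismatch


def train(episodes=5000, alpha=0.05, epsilon=0.2, seed=0):
    """Epsilon-greedy Q-learning; each simulated ball is one step."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < epsilon:               # explore
            a = rng.choice(ACTIONS)
        else:                                    # exploit current estimate
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a, rng)
        # One-step update toward the observed reward (bandit-style,
        # since each ball is treated as an independent episode here)
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q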