This paper presents a novel Reinforcement Learning (RL) algorithm for optimizing control in continuous action spaces. The proposed method uses a "Target Interval Segmentation" (TIS) technique to discretize the action space into smaller segments and gradually narrow them, enabling a more efficient search for the optimal action. The method is evaluated on two problems: the classical inverted pendulum as a benchmark and drug dosage control for leukemia patients. Simulation results indicate that the proposed method outperforms the DQN, DDPG, TD3, SAC, and PPO algorithms on the drug dosage control problem. On the inverted pendulum benchmark, although SAC and TD3 delivered superior control performance, the proposed method exhibited markedly better stability: unlike the other algorithms, which experienced severe fluctuations in reward values, it maintained consistent performance within a narrow band and avoided the large negative spikes that could jeopardize system stability. Overall, the simulation results suggest that this approach offers strong stability and accuracy, particularly in scenarios requiring precise control, and shows promise as an effective solution for continuous action spaces.
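To make the segmentation idea concrete, the following is a minimal sketch of an interval-narrowing action search, not the paper's exact algorithm. It assumes a one-dimensional action interval and a scoring function (e.g., a learned Q-value for the current state); the function name tis_select, the segment count, and the narrowing schedule are illustrative assumptions.

```python
import numpy as np

def tis_select(score_fn, low, high, n_segments=5, n_rounds=4):
    """Illustrative interval-narrowing search (assumed form of TIS):
    split [low, high] into segments, score each segment's midpoint,
    then zoom into the best-scoring segment and repeat."""
    for _ in range(n_rounds):
        edges = np.linspace(low, high, n_segments + 1)
        mids = (edges[:-1] + edges[1:]) / 2.0      # segment midpoints
        scores = [score_fn(a) for a in mids]       # evaluate each candidate action
        best = int(np.argmax(scores))
        low, high = edges[best], edges[best + 1]   # narrow to the best segment
    return (low + high) / 2.0

# Toy usage: locate the maximizer of a smooth 1-D value function.
best_action = tis_select(lambda a: -(a - 0.3) ** 2, low=-1.0, high=1.0)
print(best_action)  # approximately 0.3
```

Each round shrinks the search interval by a factor of n_segments, so a few rounds yield fine-grained action resolution from a coarse initial discretization.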