This paper introduces a parallel evolution design algorithm for robots that leverages a module network to optimize the learning of collision avoidance, approach, and wall-switching behaviors in evolutionary robots. The proposed algorithm is validated in simulation, where evolutionary robots autonomously exhibit behaviors such as collision avoidance, movement, replication, and attack. The learning methodology focuses on refining the neural-network strategies underlying collision avoidance, approach, and wall switching. Operating in a simulated environment, the evolutionary robots adapt and improve their performance over time. The environment contains randomly generated rectangular obstacles of varying side lengths, placed to represent real-world challenges, together with randomly scattered approach targets that serve as goals for the robots. The modular design of the neural network allows fundamental behaviors such as collision avoidance and approach to be integrated, enabling progressive enhancement of the robot's capabilities. As the neural network evolves, the robots demonstrate an increasingly sophisticated ability to navigate their surroundings, avoid obstacles, approach targets, and adapt to dynamic scenarios. Extensive simulations show that the proposed algorithm effectively trains evolutionary robots to navigate complex environments autonomously. The study contributes to evolutionary robotics by presenting a modular neural-network approach that supports the gradual acquisition and integration of diverse behaviors, highlighting the potential of autonomous, adaptive robotic systems in dynamic and challenging environments.
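The abstract does not specify the module network's implementation. As a minimal sketch only, assuming each behavior (e.g., obstacle avoidance and target approach) is a small feedforward module evolved by Gaussian weight mutation and combined by a simple distance-threshold arbiter, the idea could be organized as follows; all class names, sensor layouts, and parameters here are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of a module-network controller (assumed structure, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

class BehaviorModule:
    """One evolvable module: a tiny two-layer network mapping sensor inputs to wheel speeds."""
    def __init__(self, n_in, n_hidden=6, n_out=2):
        self.w1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.w2 = rng.normal(0, 0.5, (n_out, n_hidden))

    def forward(self, x):
        h = np.tanh(self.w1 @ x)
        return np.tanh(self.w2 @ h)          # left/right wheel commands in [-1, 1]

    def mutate(self, sigma=0.1):
        # Gaussian mutation, one possible operator an evolutionary loop might apply.
        self.w1 += rng.normal(0, sigma, self.w1.shape)
        self.w2 += rng.normal(0, sigma, self.w2.shape)

class ModuleNetwork:
    """Combines separately evolved behavior modules; an assumed threshold arbiter
    switches from the approach module to the avoidance module when an obstacle is near."""
    def __init__(self, n_ranges=8):
        self.avoid = BehaviorModule(n_in=n_ranges)   # collision-avoidance module
        self.approach = BehaviorModule(n_in=2)       # target-approach module (bearing as sin/cos)

    def act(self, ranges, target_bearing, danger_dist=0.3):
        if ranges.min() < danger_dist:               # obstacle nearby: avoidance takes over
            return self.avoid.forward(ranges)
        bearing_vec = np.array([np.sin(target_bearing), np.cos(target_bearing)])
        return self.approach.forward(bearing_vec)    # otherwise: head toward the target

# Example use with fabricated sensor readings (illustration only).
controller = ModuleNetwork()
ranges = rng.uniform(0.2, 2.0, 8)   # distances to randomly placed rectangular obstacles
wheels = controller.act(ranges, target_bearing=0.5)
print("wheel commands:", wheels)
```

In this sketch the arbiter is hand-coded; in the paper's setting the switching behavior itself is among the capabilities acquired through evolution, so this example should be read only as an illustration of how modular behaviors can be composed.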