This paper presents a proof of concept showing that the thousands of training episodes typically required in deep reinforcement learning (DRL) can be condensed into just a few. Its central contribution is a refined version of the algorithm that was foundational to my PhD several years ago, further developed and improved in this work. The refined algorithm, detailed at https://codeocean.com/capsule/0436858/tree/v1 , accelerates learning substantially by exploiting a key insight in DRL: learning speeds up when the agent is supplied with abundant and diverse information. To achieve this, complex strategies are sliced into manageable steps, and those steps are distributed across multiple artificial neural networks (ANNs). Each ANN then receives plentiful and varied data, which enhances exploration and allows rapid convergence to a stable and effective policy.
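The slicing idea can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: a fixed-horizon task is split into stages, and each stage is handled by its own small learner (a value table stands in for an ANN), so every learner sees only the experience relevant to its slice of the strategy. The stage count, action set, and `CORRECT` action sequence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STAGES = 3          # the complex strategy is sliced into 3 steps
N_ACTIONS = 4
CORRECT = [1, 3, 0]   # hypothetical correct action for each stage

# One tiny learner per stage (a value table here, an ANN in the paper's
# setting), instead of one monolithic learner for the whole sequence.
q_tables = [np.zeros(N_ACTIONS) for _ in range(N_STAGES)]

def run_episode(epsilon=0.2, lr=0.5):
    """Play one episode; each stage's experience updates only its own learner."""
    total = 0.0
    for stage in range(N_STAGES):
        q = q_tables[stage]
        # epsilon-greedy exploration within each slice
        a = int(rng.integers(N_ACTIONS)) if rng.random() < epsilon else int(q.argmax())
        r = 1.0 if a == CORRECT[stage] else 0.0
        q[a] += lr * (r - q[a])   # independent per-stage update
        total += r
    return total

for _ in range(200):
    run_episode()

greedy = [int(q.argmax()) for q in q_tables]
print(greedy)
```

Because each learner faces only a short, simple sub-problem and is fed every episode's data for its stage, the greedy actions converge to `CORRECT` within a few hundred episodes, whereas a single learner over the full action sequence would explore a combinatorially larger space.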