The field of artificial intelligence and deep learning is constantly evolving, and many researchers rely on frameworks such as TensorFlow and PyTorch to accelerate their work. While these pre-built frameworks speed up development and relieve much of the burden of low-level programming, they can also leave practitioners with only a superficial grasp of the mathematical mechanics underlying deep learning. This gap in understanding can impede optimization and prevent deep networks from reaching their best performance. To address these challenges, this paper aims to simplify and clarify the mechanics of deep learning networks, streamlining the development process for researchers. Our exploration includes deriving the gradients of various methods and activation functions, providing deeper insight into how these networks learn.
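
As a brief illustration of the kind of derivation developed in this paper, consider the sigmoid activation function (chosen here purely as a representative example); its derivative follows from the chain rule and can be expressed in terms of the function itself:

\[
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\frac{d\sigma}{dx} = \frac{e^{-x}}{\left(1 + e^{-x}\right)^{2}} = \sigma(x)\bigl(1 - \sigma(x)\bigr)
\]

Writing the derivative in terms of \(\sigma(x)\) itself is what makes backpropagation through sigmoid layers inexpensive, since the value computed in the forward pass can be reused directly when computing gradients.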