Contemporary language models achieve impressive results across many natural language processing tasks, yet their reasoning abilities remain comparatively underdeveloped. To address this gap, we introduce architectural modifications to GPT-Neo that substantially improve its performance on logical reasoning and multi-step problem-solving tasks. The modifications integrate specialized reasoning modules, dynamic memory components, and diversified attention mechanisms, yielding measurable gains in accuracy, logical consistency, and inference correctness. In a systematic evaluation, the modified GPT-Neo consistently outperformed baseline models such as BERT and T5 across a range of reasoning tasks. These findings indicate that targeted architectural innovations can advance the reasoning capabilities of language models, supporting their application to more complex and demanding real-world scenarios and contributing to the broader effort to build more reliable artificial intelligence systems capable of sophisticated reasoning.
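The abstract does not specify how the reasoning modules, dynamic memory, or diversified attention are implemented. The following is a minimal, illustrative PyTorch sketch of one plausible reading: an extra block combining self-attention, a feed-forward layer, and a small bank of learnable memory slots that the hidden states can read from via cross-attention. All class names (`DynamicMemory`, `ReasoningBlock`) and design choices here are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class DynamicMemory(nn.Module):
    """Hypothetical memory component: a small bank of learnable slots that the
    hidden states attend over, so intermediate results can be read back later."""
    def __init__(self, d_model, n_slots=16):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.read = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, h):  # h: (batch, seq, d_model)
        mem = self.slots.unsqueeze(0).expand(h.size(0), -1, -1)
        out, _ = self.read(h, mem, mem, need_weights=False)  # read from memory
        return h + out

class ReasoningBlock(nn.Module):
    """Illustrative extra block: self-attention plus feed-forward, followed by
    the memory read; several such blocks could be interleaved with the
    pretrained GPT-Neo layers (an assumption, not the paper's stated design)."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.memory = DynamicMemory(d_model)

    def forward(self, x):
        a, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + a)
        x = self.norm2(x + self.ff(x))
        return self.memory(x)

# Quick shape check with GPT-Neo-125M-like dimensions (hidden size 768, 12 heads).
block = ReasoningBlock(d_model=768, n_heads=12)
x = torch.randn(2, 10, 768)   # (batch, seq, d_model)
print(block(x).shape)         # torch.Size([2, 10, 768])
```

Under this reading, the block is shape-preserving, so it could in principle be inserted between existing decoder layers without altering the rest of the model; whether that matches the paper's actual placement of the modules is not stated in the abstract.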