Language models have achieved remarkable success in generating coherent, contextually appropriate text, yet they continue to struggle with tasks that demand formal logical reasoning. Introducing structured guidance into training lets models move beyond purely statistical pattern matching, providing a systematic means of enforcing logical consistency across multi-step inferences. This paper presents an approach to improving logical reasoning through program-guided learning, in which predefined logical programs direct the model in applying formal operations such as conjunction, disjunction, and quantification. A Llama model integrated with the program-guided learning framework shows marked gains in task accuracy, consistency, and error reduction across logical reasoning benchmarks spanning symbolic logic, mathematical reasoning, and program synthesis. Experimental results show that the enhanced model outperforms the baseline in both accuracy and efficiency, and that it handles ambiguous tasks more robustly through probabilistic logic mechanisms. The approach contributes to a broader understanding of how language models can be adapted to complex reasoning tasks, offering new insights into structured learning methodologies.
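To make the notion of a "logical program" concrete, the following is a minimal, hypothetical sketch of the kind of interpreter a program-guided framework might use to check a model's outputs against formal semantics. All names and the formula encoding here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a tiny logical-program interpreter.
# Formulas are nested tuples; this is NOT the paper's framework,
# only an illustration of conjunction, disjunction, and quantification.

def evaluate(formula, domain, assignment=None):
    """Recursively evaluate a formula over a finite domain.

    Supported forms:
      ("and", f, g), ("or", f, g), ("not", f),
      ("forall", var, f), ("exists", var, f),
      ("pred", fn, var)  -- unary predicate applied to a bound variable.
    """
    assignment = assignment or {}
    op = formula[0]
    if op == "and":                      # conjunction
        return (evaluate(formula[1], domain, assignment)
                and evaluate(formula[2], domain, assignment))
    if op == "or":                       # disjunction
        return (evaluate(formula[1], domain, assignment)
                or evaluate(formula[2], domain, assignment))
    if op == "not":
        return not evaluate(formula[1], domain, assignment)
    if op == "forall":                   # universal quantification
        _, var, body = formula
        return all(evaluate(body, domain, {**assignment, var: v})
                   for v in domain)
    if op == "exists":                   # existential quantification
        _, var, body = formula
        return any(evaluate(body, domain, {**assignment, var: v})
                   for v in domain)
    if op == "pred":
        _, fn, var = formula
        return fn(assignment[var])
    raise ValueError(f"unknown operator: {op}")

# Example: over the domain {1..5}, check
# "every x is positive AND some x is even".
domain = range(1, 6)
f = ("and",
     ("forall", "x", ("pred", lambda v: v > 0, "x")),
     ("exists", "x", ("pred", lambda v: v % 2 == 0, "x")))
print(evaluate(f, domain))  # True
```

A checker of this shape gives a training loop a crisp, binary signal of logical consistency, which is one plausible way structured guidance could be enforced across multiple reasoning steps.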