Fine-tuning large-scale neural networks for specific tasks typically demands substantial computational power and memory, making full adaptation an expensive process. To address this, Parameter-Efficient Fine-Tuning (PEFT) methods adapt models by training only a small fraction of their parameters. Among these techniques, Low-Rank Adaptation (LoRA) has gained widespread adoption for its efficiency and ease of integration. This survey provides a thorough exploration of LoRA, covering its mathematical foundations, implementation techniques, and real-world applications. By comparing LoRA with traditional full fine-tuning and other PEFT methods, we highlight its ability to maintain strong performance while substantially reducing computational overhead. Empirical evaluations across standard NLP benchmarks further demonstrate that it reduces memory consumption without sacrificing accuracy. Beyond these advantages, we examine the challenges associated with LoRA, including sensitivity to rank selection, compatibility across model architectures, and opportunities for hardware-level optimization. We also discuss promising future directions, including hybrid fine-tuning strategies, dynamic rank adjustment, and extension to multi-modal learning. By enabling efficient adaptation of complex models, LoRA represents a crucial advance in scalable fine-tuning. This survey aims to serve as a practical reference for researchers and practitioners seeking to leverage LoRA for efficient model adaptation across diverse applications.
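To make the parameter savings concrete, the core LoRA update can be sketched as follows, using the standard formulation of Hu et al. (2021); the dimensions in the example ($d = k = 4096$, $r = 8$) are illustrative assumptions, not results reported in this survey.

\[
h = W_0 x + \Delta W\, x = W_0 x + \frac{\alpha}{r}\, B A\, x,
\qquad B \in \mathbb{R}^{d \times r},\quad A \in \mathbb{R}^{r \times k},\quad r \ll \min(d, k),
\]

where the pretrained weight $W_0 \in \mathbb{R}^{d \times k}$ remains frozen and only the low-rank factors $B$ and $A$ are trained (with $B$ initialized to zero, so training starts from the pretrained model). The number of trainable parameters per adapted matrix thus falls from $dk$ to $r(d + k)$: with $d = k = 4096$ and $r = 8$, that is $8 \times 8192 = 65{,}536$ parameters instead of $4096^2 \approx 16.8$ million, under $0.4\%$ of the original.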