This paper presents a comprehensive approach to fine-tuning the Mistral-7B-Instruct-v0.3 large language model (LLM) for text-to-SQL generation. We leverage Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning technique, to enhance the model's ability to understand natural language questions and generate accurate SQL queries. Our approach fine-tunes the Mistral model on a combination of the b-mc2/sql-create-context and gretelai/synthetic_text_to_sql datasets, which together provide a diverse range of text-to-SQL examples. To evaluate the effectiveness of the fine-tuned model, we introduce a novel evaluation framework that goes beyond exact-match accuracy: we incorporate semantic similarity assessment using ChromaDB, a vector database designed for semantic search, to capture query meaning more comprehensively, and we validate the generated SQL queries by executing them against the corresponding database schemas to confirm their correctness and functionality. Our experiments demonstrate the efficacy of this approach, achieving state-of-the-art results in exact-match accuracy, semantic similarity, and query success rate on the benchmark datasets. We also examine the challenges posed by schema mismatches in text-to-SQL generation and discuss potential solutions for addressing this issue. Our findings contribute to the advancement of text-to-SQL generation and offer practical guidance for fine-tuning and evaluating LLMs in this domain.
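As one concrete illustration of the execution-based validation step described in the abstract, the check can be sketched with Python's built-in `sqlite3` module: create an in-memory database from the schema DDL, then attempt to execute the generated query. The function name, example schema, and queries below are illustrative assumptions, not artifacts from the paper.

```python
import sqlite3


def validate_sql(schema_ddl: str, query: str) -> bool:
    """Return True if `query` executes successfully against a fresh
    in-memory database built from `schema_ddl`, else False.

    (Hypothetical helper; the paper's validation harness may differ.)
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_ddl)  # build the schema
        conn.execute(query)             # run the generated query
        return True
    except sqlite3.Error:
        # Syntax errors, missing tables/columns (schema mismatches), etc.
        return False
    finally:
        conn.close()


schema = "CREATE TABLE head (age INTEGER, name TEXT);"
print(validate_sql(schema, "SELECT name FROM head WHERE age > 56"))  # → True
print(validate_sql(schema, "SELECT salary FROM head"))               # → False (schema mismatch)
```

A check like this catches the schema-mismatch failures the paper discusses, since a query referencing a nonexistent column raises an error at execution time even when it is syntactically valid SQL.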