The increasing sophistication and capabilities of artificial intelligence systems have brought about significant advancements in natural language processing, yet they have also exposed these systems to various security vulnerabilities, particularly targeted prompt injection attacks. The introduction of a moving target defence mechanism offers a novel and significant approach to mitigating these attacks through continuously altering the model’s parameters and configurations, thereby creating an unpredictable environment that complicates adversarial efforts. This research provides a comprehensive evaluation of the moving target defence mechanism, detailing the selection and categorization of prompt injection attacks, the development of dynamic defence techniques such as random parameter perturbation, model re-initialization, and dynamic context adjustments, and their seamless integration with the Mistral LLM. The experimental results indicate a substantial reduction in the attack success rate, maintaining high performance metrics while managing computational overhead efficiently. The findings highlight the practical applicability and potential for widespread adoption of the moving target defence mechanism in enhancing the security and resilience of large language models against sophisticated adversarial tactics.