The integration of AI-generated content into various applications has highlighted significant concerns regarding the potential for deceptive information, necessitating robust methods to ensure the accuracy and trustworthiness of outputs. Introducing a novel game theory-based framework for identifying deception in language models, this study addresses the critical need for reliable verification mechanisms. By simulating interactions between liar and verifier roles within the same model, the research provides a structured approach to evaluate and enhance the reliability of automated systems. Key findings demonstrate the effectiveness of iterative prompt refinement and strategic analysis in detecting deceptive behaviors, contributing to the development of more trustworthy AI applications. The methodology offers a comprehensive solution for improving the accuracy of AI-generated content, with broader implications for its deployment in sensitive domains such as healthcare and legal services. Future research directions include refining the proposed framework and expanding its application to encompass a wider range of deceptive behaviors, including multimedia content, thereby ensuring the robustness of AI systems in diverse real-world scenarios.