This paper addresses the emerging challenge posed by large language models (LLMs) such as ChatGPT, which can generate solutions to the tasks traditionally used to develop students' analytical and programming skills, particularly in programming education. The widespread availability of AI-generated solutions risks undermining learning and skill acquisition by allowing students to submit generated code instead of practicing themselves. To meet this challenge, our paper outlines a holistic strategy that combines educational initiatives, state-of-the-art plagiarism detection mechanisms, and an innovative steganography-based technique for watermarking AI-produced code. This multifaceted approach aims to equip evaluators with the tools to distinguish between code generated by ChatGPT and code genuinely written by students. With the collective efforts of educators and course administrators, and through partnerships with AI developers, we believe it is feasible to uphold the integrity of programming education in this age of code-producing LLMs.