It remains a formidable challenge to accurately recognize patients’ motion intentions and thereby flexibly control hand exoskeletons. Current methods primarily focus on recognizing a limited set of motion intentions, with the purpose of triggering preconfigured gestures of a hand exoskeleton for grasping objects. These methods fall markedly short in scenarios that are unexpected or not designed in advance, such as non-preprogrammed hand movements and object manipulation tasks. This paper proposes a large language model (LLM)-enabled incremental learning framework for controlling hand exoskeletons. The framework enables patients to perform both predefined and non-predefined operation tasks with hand exoskeletons through a hand exoskeleton controller and LLM-based learners via voice interaction. Specifically, the framework embeds LLMs as expert modules that infer appropriate hand motions and formulate the corresponding control commands, in accordance with human experience, to fulfill tasks that the hand exoskeleton controller does not yet know how to complete. At the same time, the hand exoskeleton controller incrementally learns to perform these non-predefined tasks by updating its control command set. With the framework, the hand exoskeleton can therefore incrementally expand its control command set, enhancing patients’ adaptability to complex activities over daily use until the controller no longer needs to consult the LLM-based expert modules. This study is a pioneering work in the field of hand exoskeletons that could fundamentally change how patients control these devices.
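The control flow outlined above could be sketched, purely as an illustration and not as the authors’ implementation, along the following lines: the controller first checks whether a voice-issued task maps to a known control command sequence; if not, it consults an LLM-based expert module for a command sequence and adds it to its command set for future use. The class names `HandExoController` and `LLMExpert` and all method signatures below are hypothetical placeholders.

```python
# Illustrative sketch of the incremental-learning control loop described above.
# All names are hypothetical placeholders, not the paper's actual implementation.

class LLMExpert:
    """Hypothetical wrapper around an LLM that maps a task description
    to a sequence of hand-exoskeleton control commands."""

    def infer_commands(self, task_description: str) -> list[str]:
        # In practice this would prompt an LLM; here we return a stub command.
        return [f"cmd_for:{task_description}"]


class HandExoController:
    """Hypothetical controller that keeps an expandable control command set."""

    def __init__(self, expert: LLMExpert):
        self.expert = expert
        # Predefined tasks mapped to control command sequences.
        self.command_set: dict[str, list[str]] = {"grasp cup": ["pinch_grip"]}

    def execute(self, commands: list[str]) -> None:
        for cmd in commands:
            print(f"executing {cmd}")  # placeholder for actuator-level control

    def handle_voice_task(self, task: str) -> None:
        if task not in self.command_set:
            # Unknown task: consult the LLM expert module ...
            commands = self.expert.infer_commands(task)
            # ... and incrementally expand the command set so the expert
            # need not be consulted for this task again.
            self.command_set[task] = commands
        self.execute(self.command_set[task])


if __name__ == "__main__":
    controller = HandExoController(LLMExpert())
    controller.handle_voice_task("grasp cup")       # predefined task
    controller.handle_voice_task("turn door knob")  # non-predefined: learned once, reused afterwards
```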