Large Language Models (LLMs) are increasingly leveraged to advance activity recognition and support Activities of Daily Living (ADLs). This systematic review synthesizes findings from 32 studies published between 2022 and 2024, focusing on three primary application domains: LLMs for recognition tasks, including sensor-based ADL classification, multimodal fusion, and zero-shot learning; LLMs in assistive technologies, such as social robotics, exoskeleton control, and speech/gesture interfaces for users with disabilities; and LLM-driven simulation, encompassing automated generation of realistic daily scenarios for smart home testing. Across these domains, LLMs demonstrate marked improvements in recognition accuracy, natural language interaction, and system adaptability. They excel at capturing contextual nuances, enabling personalized recommendations, and reducing reliance on extensive labeled datasets. However, challenges persist, including the computational overhead of large models, the constraints of prompt engineering, privacy concerns in home environments, and the limited explainability of LLM outputs. Despite these limitations, the reviewed literature indicates that combining LLMs with structured machine learning pipelines or neuro-symbolic approaches can yield robust and user-friendly ADL support tools. Such hybrid designs will be crucial for scaling next-generation, intelligent ADL assistance systems and delivering tangible benefits for older adults and individuals with disabilities.
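To make the zero-shot recognition pattern mentioned above concrete, the sketch below shows one common way such systems are prompted: a window of smart-home sensor events is serialized into text and an LLM is asked to choose among candidate ADL labels. This is a minimal illustration, not a method from any reviewed study; the event format, label set, and model name are assumptions, and the call targets the standard OpenAI chat completions client.

```python
# Minimal sketch of zero-shot ADL recognition via LLM prompting.
# Assumptions: the event strings, label set, and model name are
# illustrative only, not drawn from any study in this review.
from openai import OpenAI

ADL_LABELS = ["cooking", "sleeping", "bathing", "watching TV", "leaving home"]

def classify_adl(sensor_events: list[str], model: str = "gpt-4o-mini") -> str:
    """Map a window of smart-home sensor events to a single ADL label."""
    prompt = (
        "You are an activity-recognition assistant for a smart home.\n"
        f"Candidate activities: {', '.join(ADL_LABELS)}.\n"
        "Given the sensor events below, answer with exactly one label.\n\n"
        + "\n".join(sensor_events)
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling, no creative variation
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    window = [
        "07:05 kitchen motion ON",
        "07:06 stove power 1200W",
        "07:12 cabinet door OPEN",
    ]
    print(classify_adl(window))  # a plausible output: "cooking"
```

Because no activity-specific training data is used, this pattern illustrates the reduced reliance on labeled datasets noted above, while also exposing the costs the review identifies: every classification incurs an LLM call, and the output depends on prompt wording.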