Large language models (LLMs) for recommendation, also known as LLM4Rec, have attracted increasing attention due to their extensive knowledge and powerful reasoning abilities. However, the massive parameter counts of LLMs raise concerns about high computational demands, storage costs, and response latency, prompting the development of numerous efficiency-oriented solutions. In this work, we present the first comprehensive and systematic review of efficient LLM4Rec solutions, organizing the literature in a hierarchical taxonomy aligned with the recommender system (RS) pipeline (data processing → model design → learning strategy) and its different phases (training → serving). Specifically, we first categorize existing methods into data-, model-, and strategy-level approaches, and further differentiate each category according to its contributions to the various phases of RSs. Second, we provide an in-depth analysis of their solutions and technical contributions, highlighting both their inherent connections and key differences. Finally, we evaluate the pros and cons of each category of methods, present valuable insights, and suggest future research directions.