Short-term load forecasting (STLF) plays a pivotal role in ensuring the operational efficiency, reliability, and economic viability of power systems. Traditional forecasting models struggle with the nonlinearity and complexity of power consumption patterns, especially as renewable energy sources are increasingly integrated into the grid. To address these limitations, a Transformer-based large language model (LLM), dubbed LFLLM, is proposed for STLF across various voltage levels in power systems. This paper introduces an efficient training method based on Parameter-Efficient Fine-Tuning (PEFT) to address the difficulty of training LLMs with massive numbers of parameters, while preserving the model's strong learning ability. To assess the robustness and reliability of LFLLM on a wider range of load forecasting tasks, its zero-shot learning ability is also evaluated. Extensive experiments indicate that LFLLM achieves superior forecasting accuracy at different voltage levels, fine-grained predictive capability at different frequencies, and remarkable zero-shot performance in diverse scenarios, underscoring its potential for practical applications in smart grids.
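The abstract does not specify which PEFT variant is used. As a rough illustration of the general idea behind such methods, the sketch below shows a minimal LoRA-style adapter in PyTorch: the pretrained weight matrix is frozen and only a small low-rank update is trained, which is what makes fine-tuning a massive-parameter LLM tractable. The class name `LoRALinear` and all hyperparameters (`rank`, `alpha`) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer augmented with a trainable
    low-rank update: y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter is trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: A is small-random, B is zero, so training
        # starts from the unmodified pretrained behavior.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage sketch: wrap one projection of a pretrained layer, then verify
# that only the adapter parameters remain trainable.
layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # a small fraction of the total
```

In practice, adapters like this are typically inserted into the attention projections of the Transformer backbone, so the number of trainable parameters is a small fraction of the full model.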