Large language models (LLMs), leveraging the transformative capabilities of the transformer architecture introduced in 2017, are poised to revolutionize optical networks. Built upon an architecture that excels in natural language processing through attention mechanisms and parallel processing, LLMs can process, generate, and modify text, establishing new standards for AI-based solutions. This paper explores potential applications of LLMs in optical networks, including network design, diagnosis, configuration, and security. It also addresses the complex and specialized nature of optical networks, which poses unique challenges to LLM implementation. By employing strategies such as prompt engineering and retrieval-augmented generation (RAG), LLMs can enhance network intelligence and efficiency. This paper discusses various deployment strategies, including cloud, edge, and on-device approaches, while highlighting the potential of LLMs in cognitive optical networks, disaggregated optical networks, and network optimization. Despite the challenges, LLMs promise significant advancements in communication technology, paving the way for more intelligent and adaptive infrastructures.