Data centers increasingly depend on Operational Data Analytics (ODA) for real-time insights from vast streams of telemetry data. They typically rely on NoSQL databases for scalability and data diversity, which leads to unstructured data representations and presents significant challenges for data interoperability. Indeed, the lack of standardization, combined with schema flexibility and complex data structures, makes it difficult for system administrators to write and execute queries, ultimately complicating the automation of data retrieval tasks. Pre-trained Large Language Models (LLMs), with their latent knowledge, promise a ready-to-use AI-driven data interoperability layer that enables data retrieval through natural language input. However, they often generate inaccurate or hallucinated query code when handling heterogeneous data sources and complex data structures. This manuscript introduces EXASAGE, the first ODA co-pilot to leverage a Knowledge Graph (KG)-based approach, which addresses these LLM limitations and simplifies data retrieval in data center facilities through a prototype implementation of a conversational LLM agent. EXASAGE employs an LLM agent as an interoperability layer that converts natural language into SPARQL queries (native to KGs), which are executed at a graph database endpoint. In an evaluation on 1,000 prompts, EXASAGE achieved 92.5% accuracy in generating correct SPARQL code, significantly outperforming the 25% accuracy of NoSQL/SQLite queries, which frequently exhibited hallucinations. Additionally, the generated SPARQL queries are more concise, execute faster, and require shorter inference times than their NoSQL/SQLite counterparts. The average time overhead of EXASAGE is about 20.36 seconds per prompt, covering the entire pipeline: KG generation, LLM inference, and query execution. The maximum storage overhead for the generated KGs is just 52.62 MiB.
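
To make the natural-language-to-SPARQL pipeline summarized above concrete, the following is a minimal Python sketch, not the EXASAGE implementation itself: the generate_sparql helper standing in for the LLM agent, the example query, and the endpoint URL are hypothetical placeholders, and the SPARQLWrapper library is used only to illustrate executing a generated query at a graph database endpoint.

    # Illustrative sketch of the NL -> SPARQL -> execution flow.
    # generate_sparql, the example triple pattern, and the endpoint URL
    # are hypothetical placeholders, not part of EXASAGE.
    from SPARQLWrapper import SPARQLWrapper, JSON


    def generate_sparql(prompt: str) -> str:
        """Placeholder for the LLM agent that translates a natural-language
        request into a SPARQL query over the telemetry knowledge graph."""
        # A real agent would call the LLM here; a fixed query is returned instead.
        return """
            SELECT ?node ?temperature
            WHERE { ?node <http://example.org/oda#hasTemperature> ?temperature . }
            LIMIT 10
        """


    def run_query(prompt: str, endpoint_url: str = "http://localhost:3030/oda/sparql"):
        """Generate a SPARQL query from a prompt, execute it at a graph
        database endpoint, and return the JSON result bindings."""
        sparql = SPARQLWrapper(endpoint_url)
        sparql.setQuery(generate_sparql(prompt))
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        return results["results"]["bindings"]


    if __name__ == "__main__":
        for row in run_query("List the temperature of each compute node"):
            print(row)

The sketch assumes the knowledge graph has already been generated and loaded into the graph database; in EXASAGE, KG generation, LLM inference, and query execution together account for the reported per-prompt time overhead.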