Abstract-As reinforcement learning agents become more popular in critical applications across both the public and private sectors, the need to reverse engineer, comprehend, and audit their decisions becomes crucial. While many traditional explainable AI methods can aid in this endeavor, most assume direct access to the agent model, which is unavailable in many reverse engineering applications, such as in-house recovery of legacy models or analysis of competitor, adversarial, or other external agents. To address this research gap, this paper presents LfRLD, a framework for reverse engineering reinforcement learning (RL) agents using learning from demonstrations (LfD) approaches. Empirical results demonstrate the proposed framework's potential to generalize, predict, and summarize agent behaviors using only observed demonstrations, and reveal many opportunities for future research. Within the wider scope of AI, these findings have implications for auditable, and therefore trustworthy, AI, benefiting applications in areas such as business and finance, criminal justice, cybersecurity and defense, and the Internet of Things (IoT).

Impact Statement-Reinforcement learning agents are becoming increasingly popular in a variety of applications such as chatbots, economics, healthcare, and autonomous driving. Their ability to learn consequences through trial-and-error interaction enables them to explore potentially optimal routes that yield long-term gains despite possible short-term losses. However, such agents have come under increasing scrutiny because their decisions are seldom interpretable by others. Although recent advances in explainable AI have helped address this challenge, they typically assume access to the agent models, which are often not public. The framework introduced in this paper helps address this research gap.
While the proposed technology has shown promising results for reverse-engineering agent behaviors using only observed demonstrations, there are many opportunities for future development. In addition to enhancing agent auditability and trustworthiness, the proposed work also offers an approach for reverse-engineering agents to gain an intelligence or resource advantage over external competitors or adversaries.