Understandability plays a significant role in the effective deployment of autonomous robots, as it ensures that the robot clearly communicates its decisions, actions, and plans to the humans it interacts with. To advance robot understandability, this work proposes a model that structures levels of explanation (LoE), together with methods to minimize discrepancies between the human's and the robot's states of mind. The applicability of the LoE model is demonstrated in two scenarios: search and rescue operations and physical training robots. Two Markov process models are used to estimate the discrepancies: a hidden Markov model (HMM) and a partially observable Markov decision process (POMDP) model. Based on these estimates, robot utterances are generated via the LoE model to minimize the discrepancies. The results show that both models successfully estimate the discrepancies and generate appropriate communicative actions for the robot, thereby enhancing its understandability.
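To make the HMM-based discrepancy estimation concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical two-state HMM whose hidden state encodes whether the human's state of mind is aligned with the robot's, whose observations are coarse human reactions, and whose probabilities are purely illustrative. The forward algorithm filters a posterior over the discrepancy, and a toy rule maps that posterior to a level of explanation; the state names, observation labels, thresholds, and LoE descriptions below are all assumptions.

```python
import numpy as np

# Hypothetical two-state HMM: hidden state is whether the human's mental
# model is "aligned" (0) or "misaligned" (1) with the robot's plan.
# Observations are coarse human reactions: "confirm" (0), "hesitate" (1),
# "question" (2). All probabilities are illustrative, not from the paper.
pi = np.array([0.8, 0.2])       # initial state distribution
A = np.array([[0.9, 0.1],       # transition probabilities A[i, j] = P(j | i)
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],  # emission probabilities per hidden state
              [0.1, 0.4, 0.5]])

def filter_discrepancy(observations):
    """Forward algorithm: posterior P(state_t | obs_1..t) at each step."""
    belief = pi * B[:, observations[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for obs in observations[1:]:
        belief = (A.T @ belief) * B[:, obs]  # predict, then weight by emission
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

def choose_level_of_explanation(p_misaligned):
    """Map the estimated discrepancy to a (hypothetical) level of explanation."""
    if p_misaligned < 0.3:
        return "LoE 1: brief status update"
    if p_misaligned < 0.7:
        return "LoE 2: explain current action"
    return "LoE 3: explain full plan and rationale"

# Example: the human confirms, then hesitates, then asks a question.
beliefs = filter_discrepancy([0, 1, 2])
for t, b in enumerate(beliefs):
    print(f"t={t}: P(misaligned)={b[1]:.2f} -> {choose_level_of_explanation(b[1])}")
```

As the estimated probability of misalignment grows across the observation sequence, the sketch escalates to a richer explanation, which mirrors, in miniature, the idea of selecting communicative actions from the LoE model to reduce the estimated discrepancy. A POMDP formulation would extend this by also modeling how each utterance changes the belief, choosing the utterance that best reduces the expected discrepancy.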