With the growing use of mobile robots in urban settings, their interactions with people in shared spaces are increasing. It is crucial that these robots navigate in a socially acceptable manner and interact appropriately with their environment. AI guides robot behavior by interpreting the environment and controlling actions, but poor explainability of a robot's decisions can pose mental and physical risks to the people around it. Well-designed robot-to-human interfaces can help mitigate these risks. This paper introduces systems and methods for self-explaining mobile robots in common areas using projected information. We analyze data on people's reactions to the robot, with and without self-explainable features, across various university campus settings such as maker spaces, hallways, cafeterias, and study areas. Initial analysis shows improved acceptance of the robot, as measured by task performance and user questionnaires.