Amid recent advances in robotics and machine learning, unmanned aerial vehicles (UAVs) have proliferated across a wide range of applications. Consequently, the deployment of UAVs in populated environments has become increasingly inevitable, calling for stringent safety and security measures. In this work, we develop a deep reinforcement learning (RL)-based UAV navigation approach that blends decision making with behavioral intelligence. In particular, an RL agent is trained to instruct the UAV on how to accomplish a goal-oriented task while ensuring the safety of the UAV and its surroundings. Upon arriving at the goal position, the agent slows the UAV down in preparation for landing. The safety of the UAV and its environment is ensured through a robust collision avoidance capability that is embedded into the RL-based navigation system and accounts for both static and dynamic obstacles. Training is carried out exclusively in simulation, where a high-fidelity UAV controller model is used to perform the simulated maneuvers. The proposed approach was tested in simulation and then shown to transfer directly to reality without explicit sim-to-real transfer techniques. Experimental results demonstrate the agent's ability to complete the navigation task with a 90% success rate.
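For concreteness, the behavior described above (progress toward the goal, collision avoidance, and slowing down near the goal for landing) could be captured by a shaped reward of the following form. This is a minimal sketch under stated assumptions: the function name, state quantities (distance to goal, speed, obstacle clearance), thresholds, and weights are all illustrative and are not specified in the abstract; the authors' actual reward formulation may differ.

```python
# Hypothetical sketch of a navigation reward consistent with the behavior
# described in the abstract: reward progress toward the goal, penalize
# proximity to obstacles, and encourage slowing down near the goal.
# All names, thresholds, and weights are illustrative assumptions,
# not the paper's formulation.

def navigation_reward(dist_to_goal, prev_dist_to_goal, speed,
                      obstacle_clearance, goal_radius=1.0,
                      safe_clearance=2.0, collided=False):
    if collided:
        return -100.0  # terminal penalty: a collision ends the episode
    reward = 0.0
    # Reward the progress made toward the goal since the previous step.
    reward += 10.0 * (prev_dist_to_goal - dist_to_goal)
    # Penalize flying closer to any obstacle than the safe clearance.
    if obstacle_clearance < safe_clearance:
        reward -= 5.0 * (safe_clearance - obstacle_clearance)
    # Near the goal, penalize residual speed to prepare for landing.
    if dist_to_goal < goal_radius:
        reward -= 2.0 * speed
    return reward
```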