In the evolving domain of autonomous vehicles, the importance of decision-making cannot be overstated. Deep Reinforcement Learning (DRL) has emerged as a pivotal tool in this landscape. However, traditional DRL algorithms suffer from inaccurate Q-value estimates, caused predominantly by system noise and function approximation errors. Such inaccuracies, compounded by unpredictable real-world conditions, can misdirect autonomous vehicles and jeopardize safety. This work introduces a novel DRL algorithm tailored for uncertainty- and noise-aware decision-making in autonomous vehicles. The proposed approach harnesses Bayesian Neural Networks (BNNs) and the skew-geometric Jensen-Shannon divergence to rectify these limitations while also improving exploration. Evaluated in the OpenAI Gymnasium environment, the algorithm shows clear advantages over contemporary methods in cumulative reward and convergence speed.
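For context, one common formulation of the skew-geometric Jensen-Shannon divergence follows Nielsen's skew-geometric construction; the exact variant and skew parameter used in this work are not specified here, so the following is only a reference sketch:
$$
\mathrm{JS}^{G_\alpha}(p \,\|\, q) \;=\; (1-\alpha)\,\mathrm{KL}\!\big(p \,\big\|\, G_\alpha(p,q)\big) \;+\; \alpha\,\mathrm{KL}\!\big(q \,\big\|\, G_\alpha(p,q)\big), \qquad \alpha \in (0,1),
$$
where $G_\alpha(p,q)(x) \propto p(x)^{1-\alpha}\, q(x)^{\alpha}$ is the normalized weighted geometric mean of the two densities. Unlike the standard Jensen-Shannon divergence, this quantity admits a closed form when $p$ and $q$ are Gaussian, which is what makes it convenient to pair with the predictive distributions produced by a Bayesian neural network.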