Deep neural networks (DNNs) have emerged as powerful predictors, demonstrating impressive performance across a spectrum of applications. They efficiently map high-dimensional input data to outputs. However, a notable drawback is the lack of transparency and interpretability in DNN outputs. In critical domains such as healthcare, finance, public safety, and autonomous systems, transparency in DNN output is paramount for informed decision-making and trustworthiness. Recognizing and quantifying uncertainty in outputs can significantly improve decision-making capabilities. In crowd counting, where deep learning models are prevalent, uncertainty estimation is vital for robust decision-making in resource allocation, event planning, and public safety. However, existing work often overlooks the uncertainty associated with crowd count predictions, focusing solely on point estimation. In this work, we propose a method that estimates not only the count but also the uncertainty in the estimated count, accounting for both data uncertainty and model uncertainty. Furthermore, to improve the calibration of model uncertainty, we employ adaptive dropout rates within a Monte Carlo dropout-based Bayesian framework. Using isotonic regression, we then calibrate the overall predictive uncertainty, covering both data and model uncertainties, which yields well-calibrated confidence intervals for the model predictions. Through extensive experiments on widely used crowd counting datasets, our method sets a new benchmark.
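
As a rough illustration of the ingredients named above, the sketch below combines Monte Carlo dropout (repeated stochastic forward passes with dropout kept active) for model uncertainty, a predicted-variance head for data uncertainty, and isotonic regression to map nominal confidence levels to empirical coverage. This is a minimal sketch under simplifying assumptions, not our implementation: the network `CrowdRegressor`, the fixed dropout rate `p_drop`, the number of passes `n_mc`, the Gaussian predictive assumption, and the toy data are illustrative, and the adaptive dropout-rate mechanism is not shown.

```python
# Minimal sketch (illustrative only): MC-dropout predictive uncertainty
# plus isotonic-regression calibration of confidence levels.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression


class CrowdRegressor(nn.Module):
    """Toy regressor with a mean head and a log-variance (data-uncertainty) head."""
    def __init__(self, in_dim=16, hidden=64, p_drop=0.2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p_drop),                  # kept active at test time for MC dropout
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # aleatoric (data) variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)


@torch.no_grad()
def mc_dropout_predict(model, x, n_mc=50):
    """Run n_mc stochastic forward passes; return predictive mean and total variance."""
    model.train()  # keep dropout stochastic (MC dropout)
    means, alea_vars = [], []
    for _ in range(n_mc):
        mu, logvar = model(x)
        means.append(mu)
        alea_vars.append(logvar.exp())
    means = torch.stack(means)           # (n_mc, N, 1)
    alea_vars = torch.stack(alea_vars)
    pred_mean = means.mean(0)
    epistemic_var = means.var(0)         # model uncertainty: spread of MC means
    aleatoric_var = alea_vars.mean(0)    # data uncertainty: average predicted variance
    return pred_mean, epistemic_var + aleatoric_var


def fit_isotonic_calibrator(pred_mean, pred_var, y):
    """Fit an isotonic map from nominal confidence levels to empirical coverage."""
    std = np.sqrt(pred_var)
    # Probability integral transform under a Gaussian predictive distribution
    pit = norm.cdf(y, loc=pred_mean, scale=std)
    nominal = np.sort(pit)
    empirical = np.arange(1, len(pit) + 1) / len(pit)
    calib = IsotonicRegression(out_of_bounds="clip")
    calib.fit(nominal, empirical)
    return calib  # calib.predict(p) gives calibrated coverage for nominal level p


if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(200, 16)                                   # toy features
    y = X.sum(dim=1, keepdim=True) + 0.5 * torch.randn(200, 1)  # toy targets
    model = CrowdRegressor()
    mu, var = mc_dropout_predict(model, X)
    calib = fit_isotonic_calibrator(
        mu.numpy().ravel(), var.numpy().ravel(), y.numpy().ravel()
    )
    print("Calibrated coverage at nominal 0.9:", calib.predict([0.9])[0])
```

In this simplified Gaussian setting, the fitted calibrator maps a nominal confidence level to the coverage actually observed on held-out data, so prediction intervals can be widened or narrowed to become well calibrated.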