Bayesian persuasion is a strategic framework for sequential decision-making in which a principal influences the behaviour of an agent by selectively revealing information through signals. Motivated by its application to intelligent traffic routing, where a traffic management system (principal) sends routing signals to a connected vehicle (agent), this paper considers the problem of synthesizing the agent's optimal policy. The main contributions of the paper are as follows: (i) identification of sufficient conditions on the Bayesian persuasion model under which the agent's optimal policy is monotonic, (ii) a dual sparse policy learning method for faster computation of monotonic policies in discrete state and signal spaces, (iii) characterisation of the optimal policy in continuous state and signal spaces using threshold curves, (iv) a simulation-based structured policy learning algorithm for estimating the threshold curves that characterise monotonic policies, and (v) numerical analysis of a lane-based routing application in a traffic management system, which demonstrates that the simulation-based structured policy learning algorithm is 30% more computationally efficient than an approximate discretised policy computation while requiring less memory. In addition, the proposed method outperforms existing approaches such as myopic and recommendation-based policies. We also illustrate the usefulness of the structural results in designing signalling strategies for the principal.
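To make the notion of a threshold-characterised monotonic policy concrete, the following minimal Python sketch (not the paper's algorithm; the scalar state space, action indices, and threshold values are hypothetical) shows a policy that is nondecreasing in the state and is fully described by a small set of thresholds, which is the kind of structure the contributions above exploit.

```python
# Minimal sketch of a monotonic (threshold-based) policy over a scalar state.
# All numerical values and the state/action parameterisation are illustrative.
import numpy as np

def monotone_policy(thresholds):
    """Return a policy pi(s) that is nondecreasing in the scalar state s.

    `thresholds` is a sorted array t_1 <= ... <= t_K; the policy selects
    action k when s lies between t_k and t_{k+1}, so higher states map to
    (weakly) higher actions.
    """
    thresholds = np.asarray(thresholds)

    def pi(s):
        # Number of thresholds not exceeding s = index of the chosen action.
        return int(np.searchsorted(thresholds, s, side="right"))

    return pi

# Usage with made-up thresholds over states in [0, 1]:
pi = monotone_policy([0.3, 0.7])
print([pi(s) for s in np.linspace(0, 1, 6)])  # -> [0, 0, 1, 1, 2, 2]
```

Because such a policy is determined by a few threshold parameters rather than a full state-to-action table, estimating those parameters (as the structured policy learning algorithm does) can be cheaper in both computation and memory than tabulating a discretised policy.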