Background: Accurate modelling of causal interactions between brain regions, i.e. effective connectivity modelling, would make early diagnosis of many neurodegenerative diseases possible. Recent comparative simulation studies have found Bayesian network-based methods to be more successful than alternatives because of their non-linear and non-deterministic nature.

New method: Dynamic Bayesian networks (DBNs) are particularly well suited to effective connectivity modelling in the brain because they inherently model temporal information and cyclic behavior, yet they have been excluded from comparative studies because of the computational complexity of structure learning. This study achieves reliable modelling with DBNs, which should accelerate scientific developments in neuroscience, and proposes solutions to the computational problems associated with this promising approach.

Results: Discrete DBN structure learning, a data-driven approach, is shown to converge to the globally correct network structure when trained with appropriate data and imaginary (equivalent) sample sizes, both much smaller than the theoretically required amount of data. The quantization method is also critical for convergence. The Hill Climbing (HC) search is shown to converge to the true structure in a reasonable number of iterations when appropriate data and imaginary sample sizes are used.

Comparison with existing methods: The proposed method (Improved-dDBN) is applied to commonly used simulation data and yields better and more consistent performance than existing methods in the literature under realistic scenarios, including varying graph complexity, various input conditions, mixed signal-and-noise cases, and non-stationary connection conditions.
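Since the quantization step is reported to be critical for convergence of discrete DBN learning, a minimal stdlib-only sketch of the two standard discretization schemes, equal-width and equal-frequency binning, may clarify the choice. The function names, bin count, and the skewed toy signal below are illustrative assumptions, not the paper's actual preprocessing:

```python
import random
from collections import Counter

def equal_width_bins(x, k):
    """Assign each sample to one of k equal-width intervals."""
    lo, hi = min(x), max(x)
    w = (hi - lo) / k or 1.0          # guard against a constant signal
    return [min(int((v - lo) / w), k - 1) for v in x]

def equal_frequency_bins(x, k):
    """Quantile-based cut points: each bin receives ~len(x)/k samples."""
    s = sorted(x)
    cuts = [s[len(s) * i // k] for i in range(1, k)]
    return [sum(v >= c for c in cuts) for v in x]

# A skewed toy signal: equal-width binning crowds most samples into the
# lowest bin, while equal-frequency binning keeps the bins balanced, so
# the discrete states carry more information for structure learning.
random.seed(42)
signal = [random.random() ** 3 for _ in range(300)]

width_counts = Counter(equal_width_bins(signal, 3))
freq_counts = Counter(equal_frequency_bins(signal, 3))
```

For skewed signals, equal-frequency binning avoids near-empty states whose conditional distributions cannot be estimated from small samples, which is one plausible reason the quantization method affects convergence.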
Conclusions: Since Improved-dDBN performed best across all scenarios of the simulated dataset, it is a good candidate for effective brain connectivity modelling on real datasets. The sample size of the dataset is critical for convergence and should therefore be checked, and the imaginary sample size should be chosen appropriately for the available sample size. Under these computational constraints, the approach proposed in this study can be applied easily.
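The interplay between sample size and imaginary (equivalent) sample size can be illustrated with a small hill-climbing structure search over discrete variables scored by BDeu, where the `ess` parameter is the imaginary sample size of the Dirichlet prior. This is a stdlib-only sketch on assumed toy data, not the paper's Improved-dDBN implementation:

```python
import math
import random

def bdeu_family_score(data, child, parents, ess=1.0, r=2):
    """BDeu log marginal-likelihood contribution of one node.
    `ess` is the imaginary (equivalent) sample size of the prior."""
    q = r ** len(parents)                 # number of parent configurations
    a_j, a_jk = ess / q, ess / (q * r)    # Dirichlet pseudo-counts
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0] * r)[row[child]] += 1
    score = 0.0
    for n_jk in counts.values():
        score += math.lgamma(a_j) - math.lgamma(a_j + sum(n_jk))
        score += sum(math.lgamma(a_jk + n) - math.lgamma(a_jk) for n in n_jk)
    return score

def bdeu_score(data, parents_of, ess):
    return sum(bdeu_family_score(data, v, ps, ess)
               for v, ps in parents_of.items())

def is_acyclic(parents_of):
    seen, stack = set(), set()
    def visit(v):
        if v in stack:
            return False                  # back edge: cycle found
        if v in seen:
            return True
        stack.add(v)
        ok = all(visit(p) for p in parents_of[v])
        stack.discard(v)
        seen.add(v)
        return ok
    return all(visit(v) for v in parents_of)

def hill_climb(data, n_vars, ess=1.0, max_iter=50):
    """Greedy search over single-edge toggles, first improvement taken."""
    parents_of = {v: () for v in range(n_vars)}
    best = bdeu_score(data, parents_of, ess)
    for _ in range(max_iter):
        improved = False
        for child in range(n_vars):
            for p in range(n_vars):
                if p == child:
                    continue
                cand = dict(parents_of)
                cand[child] = tuple(sorted(set(parents_of[child]) ^ {p}))
                if not is_acyclic(cand):
                    continue
                s = bdeu_score(data, cand, ess)
                if s > best + 1e-9:
                    parents_of, best, improved = cand, s, True
        if not improved:
            break
    return parents_of, best

# Toy data: X0 drives X1 (copied 90% of the time); X2 is independent noise.
random.seed(0)
data = []
for _ in range(500):
    x0 = random.randint(0, 1)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    data.append((x0, x1, random.randint(0, 1)))

learned, score = hill_climb(data, 3, ess=1.0)
```

Because BDeu is score-equivalent, the recovered X0-X1 edge may point in either direction. The `ess` value acts as a prior strength relative to the number of samples, which is why the imaginary size must be matched to the available sample size for the search to converge to the true structure.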