Neuromorphic systems are a viable alternative to conventional systems for real-time tasks with constrained resources: their low power consumption, compact hardware footprint, and low-latency response are key advantages. Furthermore, their event-based signal processing can be exploited to reduce the computational load and avoid data loss, thanks to the inherently sparse representation of sensed data and adaptive sampling. In event-based systems, information is commonly encoded by the number of spikes within a specific temporal window; however, event-based signals may contain temporal information that is difficult to extract with rate coding. In this work, we present a novel digital implementation of the Time Difference Encoder (TDE), a model for temporal encoding of event-based signals that translates the time difference between two consecutive input events into a burst of output events. The number of output events, together with the time between them, encodes the temporal information. The proposed model has been implemented as a digital circuit with a configurable time constant, allowing it to be used in a wide range of sensing tasks that require encoding the time difference between events, such as optical-flow-based obstacle avoidance, sound source localization, and gas source localization. This bio-inspired model offers an alternative to the Jeffress model for Interaural Time Difference estimation, validated here with a sound source lateralization proof of concept. The model has been simulated and implemented on an FPGA, requiring 122 slice registers and consuming less than 1 mW.
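The encoding described above can be illustrated with a minimal behavioral sketch. This is not the digital circuit presented in the work, only an assumed functional model: the first event charges an exponentially decaying trace with a configurable time constant `tau`, and the second event samples that trace to determine the size of the output burst, so shorter time differences yield more output events. The parameters `tau` and `gain` are hypothetical and chosen for illustration.

```python
import math

def tde_burst_size(dt, tau=1.0, gain=10.0):
    """Behavioral sketch of a Time Difference Encoder (assumed model).

    dt   -- time difference between two consecutive input events
    tau  -- configurable time constant of the decaying trace (hypothetical)
    gain -- scales the sampled trace into an output event count (hypothetical)

    The first event sets a trace that decays as exp(-dt / tau); the second
    event samples it, so smaller dt produces a larger output burst.
    """
    trace = math.exp(-dt / tau)
    return int(round(gain * trace))
```

Under this sketch, a pair of events arriving close together produces a larger burst than a widely spaced pair, which is the monotonic mapping from time difference to output event count that the abstract describes.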