Instigated by the plethora of data generated by edge and IoT devices, machine learning has become the de facto choice for solving many tasks. Applications such as intelligent healthcare monitoring systems, smart watches, and autonomous cars require real-time processing of data or images, which machine learning algorithms perform with higher efficiency than humans. There are two possible routes to hardware for artificial intelligence: (1) non-von-Neumann hardware-based implementation of neural networks, and (2) the traditional computer-science approach, i.e., von-Neumann-architecture-based implementation of neural networks. The standard von-Neumann implementation of neural networks, where the memory and computation units are segregated, suffers from severe latency as the number of edge devices grows. Moreover, the multitude of edge devices used in our daily life imposes strict restrictions on latency, device area, and power consumption for hardware. Therefore, we need to take a route beyond the conventional CMOS-based mixed-signal implementation of neural networks, one where the memory bandwidth is not limited by the quintessential von-Neumann architecture. The primary motivation of present-day research on non-von-Neumann computing architectures is to build dedicated hardware modules for implementing low-power, fast-computing units without disrupting the recent trend of scaling.

This chapter focuses on amorphous indium-gallium-zinc-oxide (α-IGZO) based ferroelectric thin-film transistors (FeTFTs) and their system-level applications. Back-end-of-line (BEoL) compatible, IGZO-based multibit one-time programmable (OTP) FeTFTs with lifelong retention capability were fabricated, with the maximum temperature of the entire fabrication process limited to 350 °C. Gate-stack engineering, by varying the thickness ratio of the ferroelectric hafnium zirconium oxide (HZO) layer and the IGZO layer, yielded excellent data-retention capability and the one-time programming property. Further, we evaluated the performance of the IGZO-based FeTFTs as synaptic devices for an inference engine. System-level simulation revealed an inference accuracy loss of only 1.5% after ten years without re-training for Modified National Institute of Standards and Technology (MNIST) hand-written digits in a multi-layer perceptron (MLP) neural network, against a baseline accuracy of 97%. The proposed inference engine also showed a superior energy efficiency of 95.33 TOPS/W (binary) and a compact cell area of 8F².
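The chapter summary above does not give the simulation code; as a rough illustration of how such a retention-aware inference evaluation can be structured, the following Python sketch quantizes MLP weights onto the discrete states of a multibit OTP synaptic cell, applies a small conductance drift as a stand-in for ten-year retention loss, and compares inference accuracy before and after aging. The 2-bit level count, the 784-128-10 layer sizes, the 1% drift magnitude, and all function names are illustrative assumptions, and random weights and synthetic inputs stand in for a network actually trained on MNIST.

```python
import numpy as np

def quantize_weights(w, n_bits=2, w_max=1.0):
    """Map continuous weights onto the discrete conductance states of a
    multibit OTP synaptic cell (2-bit -> 4 levels; level count assumed)."""
    levels = 2 ** n_bits
    step = 2.0 * w_max / (levels - 1)
    q = np.round((np.clip(w, -w_max, w_max) + w_max) / step)
    return q * step - w_max

def apply_retention_drift(w, rel_drift=0.01, rng=None):
    """Perturb stored weights by a small multiplicative conductance drift;
    a crude stand-in for ten-year retention loss (1% magnitude assumed)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return w * (1.0 + rel_drift * rng.standard_normal(w.shape))

def mlp_accuracy(x, y, w1, w2):
    """Forward pass of a one-hidden-layer MLP (784-128-10) and its accuracy."""
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return float(np.mean(np.argmax(h @ w2, axis=1) == y))

# Illustrative run: random weights and synthetic inputs stand in for an MLP
# actually trained on MNIST, so the printed numbers are not meaningful
# accuracies -- the point is the shape of the evaluation pipeline itself.
rng = np.random.default_rng(42)
x = rng.standard_normal((256, 784))
y = rng.integers(0, 10, size=256)
w1 = 0.05 * rng.standard_normal((784, 128))
w2 = 0.05 * rng.standard_normal((128, 10))

w1_q, w2_q = quantize_weights(w1), quantize_weights(w2)                 # as programmed
w1_a, w2_a = apply_retention_drift(w1_q), apply_retention_drift(w2_q)   # after "aging"

print("accuracy as programmed:", mlp_accuracy(x, y, w1_q, w2_q))
print("accuracy after drift  :", mlp_accuracy(x, y, w1_a, w2_a))
```

In the evaluation reported in this chapter, the accuracy of a trained network under the measured retention characteristics dropped by only 1.5% from the 97% baseline; the sketch merely shows where device-level drift enters the system-level simulation loop.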