Muhammad Usman Hadi and 13 more authors

Large Language Models (LLMs) have emerged as a transformative force in computational language processing, capable of capturing intricate linguistic patterns and producing coherent, contextually appropriate responses. LLMs are a class of artificial intelligence (AI) systems that serve as powerful tools for a wide range of tasks, including natural language processing (NLP), machine translation, and question answering. This survey paper provides a comprehensive overview of LLMs, covering their history, architecture, training methods, applications, and challenges. The paper begins by discussing the fundamental concepts of generative AI and the architecture of generative pre-trained transformers (GPT). It then traces the history of LLMs, their evolution over time, and the different methods used to train them. Next, it surveys the wide range of LLM applications, spanning medicine, education, finance, and engineering, and discusses how LLMs are shaping the future of AI and how they can be applied to real-world problems. The paper then examines the challenges of deploying LLMs in real-world scenarios, including ethical considerations, model biases, interpretability, and computational resource requirements, and highlights techniques for enhancing the robustness and controllability of LLMs and for addressing issues of bias, fairness, and generation quality. Finally, the paper concludes by outlining the future of LLM research and the challenges that must be addressed to make LLMs more reliable and useful. This survey is intended to provide researchers, practitioners, and enthusiasts with a comprehensive understanding of LLMs, their evolution, applications, and challenges. By consolidating state-of-the-art knowledge in the field, it serves as a resource for further advances in the development and use of LLMs across a wide range of real-world applications. The GitHub repo for this project is available at https://github.com/anas-zafar/LLM-Survey
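The abstract refers to the architecture of generative pre-trained transformers (GPT), whose core building block is the transformer's attention mechanism. As a minimal illustrative sketch, not code from the survey or its repository, the scaled dot-product attention used in transformer blocks, softmax(QK^T / sqrt(d_k)) V, can be written as follows; the function name and toy shapes are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of value vectors

# Toy usage: 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one updated representation per token
```

In an actual GPT-style model this operation is applied with learned projections, multiple heads, and a causal mask; the sketch above shows only the arithmetic at its core.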

Rizwan Qureshi and 12 more authors

Data generated from sources such as wearable sensors, medical imaging, personal health records, pathology records, and public health organizations have produced a massive increase in information in the medical sciences over the last decade. Advances in computational hardware, such as cloud computing, Graphics Processing Units (GPUs), and Tensor Processing Units (TPUs), provide the means to utilize these data. Consequently, many Artificial Intelligence (AI)-based methods have been developed to draw inferences from large healthcare datasets. Here, we present an overview of recent progress in artificial intelligence and biosensors in the medical and life sciences. We discuss the role of machine learning in medical imaging, precision medicine, and biosensors for the Internet of Things (IoT). We review the most recent advances in wearable biosensing technologies that use AI to assist in monitoring bodily electrophysiological and electrochemical signals and in disease diagnosis, demonstrating the trend towards personalized medicine with highly effective, inexpensive, and precise point-of-care treatment. Furthermore, an overview of advances in computing technologies, such as accelerated artificial intelligence, edge computing, and federated learning for medical data, is also provided. Finally, we examine the challenges of data-driven AI approaches, the potential issues raised by biosensors and IoT-based healthcare, and the distribution shifts that occur among different data modalities, concluding with an overview of future prospects.
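Among the computing technologies this abstract mentions is federated learning for medical data, where institutions train models locally and share only parameters rather than patient records. As a minimal, hedged sketch of the idea, not the authors' implementation, a FedAvg-style aggregation step combines locally trained parameters weighted by each client's dataset size; the function and variable names below are illustrative assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average client model parameters,
    weighted by the number of local samples each client holds."""
    total = sum(client_sizes)
    avg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        avg += (n / total) * w   # larger local datasets contribute more
    return avg

# Toy usage: three hospitals train locally and share only parameter vectors
rng = np.random.default_rng(1)
local_models = [rng.normal(size=5) for _ in range(3)]  # stand-in parameter vectors
local_sizes = [120, 300, 80]                           # local dataset sizes
global_model = federated_average(local_models, local_sizes)
print(global_model)
```

In practice this aggregation runs over many communication rounds, with each round interleaving local training on private data and server-side averaging; only the aggregation arithmetic is shown here.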