Muhammad Usman Hadi

and 13 more

Large language models (LLMs) are a class of artificial intelligence (AI) systems that have emerged as powerful tools for a wide range of tasks, including natural language processing (NLP), machine translation, and question answering, owing to their capacity to comprehend intricate linguistic patterns and generate coherent, contextually appropriate responses. This survey paper provides a comprehensive overview of LLMs, covering their history, architecture, training methods, applications, and challenges. It begins with the fundamental concepts of generative AI and the architecture of generative pre-trained transformers (GPT). It then traces the history of LLMs, their evolution over time, and the different methods that have been used to train them. The survey next reviews the wide range of LLM applications, including medicine, education, finance, and engineering, and discusses how LLMs are shaping the future of AI and how they can be used to solve real-world problems. It then examines the challenges of deploying LLMs in real-world scenarios, including ethical considerations, model biases, interpretability, and computational resource requirements, and highlights techniques for enhancing the robustness and controllability of LLMs and for addressing bias, fairness, and generation-quality issues. Finally, the paper concludes by outlining the future of LLM research and the open problems that must be addressed to make LLMs more reliable and useful. This survey is intended to provide researchers, practitioners, and enthusiasts with a comprehensive understanding of LLMs, their evolution, applications, and challenges. 
By consolidating the state-of-the-art knowledge in the field, this survey serves as a valuable resource for further advancements in the development and utilization of LLMs for a wide range of real-world applications. The GitHub repo for this project is available at https://github.com/anas-zafar/LLM-Survey
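The GPT architecture that the survey discusses is built around the transformer's causal self-attention mechanism. As an illustrative sketch only (a single attention head in NumPy, not code from the survey; the function and weight names are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head causal self-attention, the core GPT building block.
    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])       # scaled pairwise affinities
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                        # causal: no attending to future tokens
    return softmax(scores) @ V                    # weighted mix of value vectors
```

Because of the causal mask, the first token's output depends only on itself, which is what lets a GPT-style model generate text left to right.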

Ahlem Aboud

and 5 more

This paper presents a new Multi-Objective Particle Swarm Optimization algorithm for multifactorial optimization problems, referred to as MOPSO-mfact. The algorithm operates in two steps. First, a pre-search is conducted in a unified search space where all individuals are optimized across the tasks simultaneously; the dominance operator is then applied over the solution-encoding search space to identify the best skill factor for each individual. Second, individuals with the same skill factor are grouped together to form multiple sub-swarms for multitasking optimization. This approach leverages the dominance operator instead of relying on random mating probabilities. MOPSO-mfact was tested on a multifactorial benchmark set called "ETMOF", which includes 36 multi/many-objective optimization problems. A comparative study is conducted using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics, and the Mean Standard Score (MSS) is used to determine the best approach for multitasking optimization. Parameters of MOPSO-mfact are tuned through sensitivity analysis with the Taguchi method, ensuring optimal performance. MOPSO-mfact demonstrated promising results, achieving good MSS results on 28 of the 36 ETMOF problems as assessed by the IGD indicator, and solving 33 out of 36 problems in the ETMOF suite when evaluated with the MIGD metric.
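The skill-factor assignment step described above (dominance-based rather than random mating) can be illustrated with a minimal sketch: each individual is assigned the task on which its objective vector is Pareto-dominated by the fewest others, and individuals sharing a skill factor form a sub-swarm. All names here are hypothetical illustrations, not the paper's implementation:

```python
from collections import defaultdict

def dominates(a, b):
    """Pareto dominance (minimization): a is no worse on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def assign_skill_factors(population, tasks):
    """Assign each individual the task on which it is dominated by the fewest
    other individuals, then group by that skill factor into sub-swarms.
    `tasks` is a list of functions mapping a solution to an objective tuple."""
    evals = [[task(ind) for ind in population] for task in tasks]
    swarms = defaultdict(list)
    for i, ind in enumerate(population):
        # dominated-by counts of individual i on each task
        counts = [sum(dominates(evals[t][j], evals[t][i])
                      for j in range(len(population)) if j != i)
                  for t in range(len(tasks))]
        skill = min(range(len(tasks)), key=counts.__getitem__)
        swarms[skill].append(ind)
    return swarms
```

With two conflicting tasks, individuals naturally split into one sub-swarm per task, which is the grouping the second step of MOPSO-mfact then optimizes in parallel.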

Chnoor M. Rahman

and 5 more

The dragonfly algorithm was developed in 2016. It is one of the metaheuristic algorithms used by researchers to optimize an extensive range of applications in various areas, and at times it offers superior performance compared to the most well-known optimization techniques. However, the algorithm faces several difficulties when applied to complex optimization problems. This work addresses the robustness of the method for solving real-world optimization problems as well as its deficiencies on complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is given. The modifications of the algorithm are then examined, including hybrid forms that merge it with other techniques and the changes made to improve its performance. Additionally, a survey of engineering applications that have used the dragonfly algorithm is offered, covering mechanical engineering problems, electrical engineering problems, optimal parameter selection, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmarks (TF1-TF23) was utilized; the CEC-C2019 benchmarks were used to examine the algorithm's ability on large-scale optimization problems. A comparison between the algorithm and other metaheuristic techniques shows its ability to tackle various problems. 
The outcomes reported in prior works that utilized the dragonfly algorithm, together with the results on the benchmark test functions, show that in comparison with the participating algorithms (GWO, PSO, and GA) the dragonfly algorithm delivers excellent performance, especially for small to intermediate applications. Moreover, the convergence properties of the technique and some directions for future work are presented. The authors conducted this research to help other researchers who want to study the algorithm and utilize it to optimize engineering problems.
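The dragonfly algorithm's position update combines five swarm behaviours: separation, alignment, cohesion, attraction to a food source (best solution), and distraction from an enemy (worst solution). A simplified one-step sketch, treating every dragonfly as a neighbour of every other and using illustrative placeholder coefficients (not tuned values from the review):

```python
import numpy as np

def dragonfly_step(X, dX, food, enemy, w=0.9, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0):
    """One simplified dragonfly update over the whole swarm.
    X, dX: (n, dim) positions and step vectors; food/enemy: (dim,) positions."""
    S = X.mean(axis=0) - X                          # separation term
    A = np.broadcast_to(dX.mean(axis=0), X.shape)   # alignment with neighbours' steps
    C = X.mean(axis=0) - X                          # cohesion toward the swarm centre
    F = food - X                                    # attraction to the food source
    E = enemy + X                                   # distraction outward from the enemy
    dX_new = s * S + a * A + c * C + f * F + e * E + w * dX
    return X + dX_new, dX_new
```

With only the food term active, every dragonfly moves straight toward the current best solution, which is the exploitation behaviour the comparison above measures against PSO and the firefly algorithm.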
