The exponential growth of textual data across domains necessitates efficient and accurate summarization techniques to support rapid comprehension and information retrieval. This work presents a novel automated system for summarizing multiple document abstracts and titles, leveraging large language models built on advanced neural architectures to generate concise, coherent summaries. The methodology comprised data collection, preprocessing, model selection, and summarization, evaluated with a combination of quantitative and qualitative metrics. Results demonstrated high efficacy on shorter documents and strong performance in technical domains such as healthcare and science, although challenges in coherence and readability remained. Domain-specific differences in performance highlighted the need for tailored adaptations, and the study contributed insights into hybrid summarization techniques that combine extractive and abstractive methods. Future research directions include advanced attention mechanisms, domain-specific fine-tuning, and reinforcement learning to optimize summarization quality, alongside ethical considerations to ensure responsible deployment.
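To make the hybrid extractive-abstractive idea concrete, the sketch below illustrates only the extractive half of such a pipeline: sentences are scored by normalized word frequency and the top-ranked ones are kept, after which an abstractive model (e.g. a large language model, as in the system described above) would rewrite them into a fluent summary. This is a minimal, self-contained illustration; the function name `extractive_stage` and the frequency-based scoring are assumptions for exposition, not the system's actual implementation.

```python
import re
from collections import Counter


def extractive_stage(text: str, k: int = 2) -> list[str]:
    """Keep the top-k sentences by normalized word-frequency score.

    In a hybrid pipeline, the returned sentences would next be passed
    to an abstractive model (e.g. an LLM) for rewriting; that second
    stage is omitted here.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    if not freq:
        return []
    top = freq.most_common(1)[0][1]  # highest raw frequency, for normalization

    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        # Average normalized frequency of the sentence's tokens.
        return sum(freq[t] / top for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:k]
    # Emit selected sentences in their original document order.
    return [sentences[i] for i in sorted(ranked)]


doc = ("Transformers dominate summarization. "
       "Transformers use attention. "
       "The weather was nice.")
print(extractive_stage(doc, k=2))
```

Frequency-based extraction is deliberately simple; its role here is only to show where the extractive and abstractive stages meet, which is where the coherence and readability challenges noted above tend to arise.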