Large Language Models Bias Issues Solving Through SDRT
  • Nagesh Somayajula
  • Chinmay Somayajula (Andhra University)

Corresponding Author: [email protected]

Abstract

This paper examines the challenges and ethical concerns raised by large language models (LLMs) such as GPT-3 and GPT-4 in natural language processing (NLP) and artificial intelligence research, and proposes Segmented Discourse Representation Theory (SDRT) as a way to address them. By integrating SDRT into the encoders and decoders of existing transformer models, the approach aims to reduce bias, enhance semantic understanding, and foster more meaningful and transparent conversations. It recognizes the importance of responsible LLM development and the need to mitigate misinformation, biased content, and weak contextual understanding. Through its technical details and architectural improvements, the paper contributes to the ongoing discourse on enhancing the capabilities and ethical use of LLMs in complex NLP environments.
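The abstract stays high-level, so as a rough illustration of the kind of integration it describes, the sketch below (entirely hypothetical; the relation inventory, segmentation rule, and token format are not from the paper) shows one way SDRT-style discourse annotations could be serialized into a transformer's input stream:

```python
# Hypothetical sketch: labeling elementary discourse units (EDUs) with
# SDRT-style rhetorical relations and serializing them as special tokens
# that a transformer encoder could consume. Illustrative only.

import re

# A tiny illustrative inventory of SDRT rhetorical relations,
# keyed by the discourse connective that opens an EDU.
RELATIONS = {"because": "Explanation", "but": "Contrast", "then": "Narration"}

def annotate_edus(text: str) -> list[tuple[str, str]]:
    """Split text into rough EDUs at sentence-like boundaries and label
    each with a relation inferred from its leading connective
    (defaulting to 'Continuation')."""
    edus = [s.strip() for s in re.split(r"[.;]\s*", text) if s.strip()]
    annotated = []
    for edu in edus:
        first_word = edu.split()[0].lower()
        relation = RELATIONS.get(first_word, "Continuation")
        annotated.append((relation, edu))
    return annotated

def to_model_input(annotated: list[tuple[str, str]]) -> str:
    """Serialize annotated EDUs as bracketed special tokens, the kind of
    marker a tokenizer could be extended to treat atomically."""
    return " ".join(f"[{rel}] {edu}" for rel, edu in annotated)

example = "The model answered; but the claim was unsourced; then it was flagged."
print(to_model_input(annotate_edus(example)))
```

A real system would replace the connective lookup with a trained discourse parser and register the bracketed relation markers as special tokens in the model's vocabulary, but the pipeline shape (segment, relate, serialize) is the point of the sketch.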