As Generative AI becomes more prevalent, so does its vulnerability to security threats. This study conducts a thorough exploration of red teaming methods in the domain of Multimodal Large Language Models (MLLMs). Similar to adversarial attacks, red teaming involves tricking the model into generating unexpected outputs, revealing weaknesses that can then be addressed through additional training for improved robustness. Through an extensive review of the existing literature, this research categorizes and analyzes adversarial attacks, providing insights into their methodologies, targets, and potential consequences. It further explores the evolving tactics employed to exploit vulnerabilities in various models, encompassing both traditional and deep learning architectures. The study also investigates the current state of defense mechanisms, examining countermeasures designed to thwart adversarial attacks. In addition, it presents a detailed analysis of red teaming methods with a specific focus on image-related vulnerabilities. By synthesizing insights from a range of studies and experiments, this survey aims to offer a comprehensive understanding of the multifaceted challenges posed by adversarial attacks on MLLMs. Its outcomes serve as a valuable resource for practitioners, researchers, and policymakers seeking to fortify Generative AI systems against emerging security threats.