This review explores advanced methods for prompting Large Language Models (LLMs) into generating objectionable or unintended behaviors through adversarial prompt injection attacks. We examine a series of recent projects, including the attack frameworks HOUYI and Virtual Prompt Injection, which compel LLMs to produce affirmative responses to harmful queries, alongside defenses such as Robustly Aligned LLM (RA-LLM) and StruQ. Additionally, the paper investigates the robustness of these attacks across different models and prompts. Several new benchmarks, such as PromptBench, AdvBench, AttackEval, INJECAGENT, and RobustnessSuite, have been created to evaluate the overall performance and resilience of LLMs against adversarial prompt injection attacks. Results show significant success rates in misleading models such as Vicuna-7B, Llama-2-7B-Chat, GPT-3.5, and GPT-4. The review highlights limitations in existing defense mechanisms and proposes future directions for enhancing LLM alignment and safety mechanisms such as LLM SELF DEFENSE. Unlike previous studies, this paper provides a more comprehensive evaluation by integrating a broader range of attack methods and target models. Our study shows a clear need for improved robustness in LLMs, which will shape the future of AI-driven applications and security protocols.