Increasing model complexity has made it significantly harder to ensure the safety and ethical alignment of autonomous systems that generate language outputs. Human-centric approaches to safety evaluation, while effective in isolated cases, struggle to scale alongside the exponential growth in model parameters and data requirements. A novel automated pipeline for dynamic data creation is introduced, capable of generating adversarial inputs that expose vulnerabilities in model behavior and allow for iterative refinement. This method eliminates the need for human oversight, offering a more scalable and efficient mechanism for ensuring alignment with predefined safety metrics. Experiments with the LLaMA model demonstrate that dynamic data creation can significantly reduce unsafe outputs, improve response consistency, and strengthen the model's robustness through continuous feedback. These findings highlight the transformative potential of automated data generation in overcoming the limitations of human-dependent safety evaluation, making it a valuable tool for model alignment and testing. Through this automated approach, safety testing becomes more efficient, cost-effective, and consistent, enabling broader application across diverse contexts.
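
To make the generate-evaluate-refine loop concrete, the sketch below shows one plausible realization of such a pipeline in Python. It is a minimal illustration only, not the paper's actual implementation: the names `red_team_iteration`, `dynamic_data_pipeline`, `attacker`, `is_unsafe`, and `refine` are hypothetical placeholders standing in for whatever adversarial generator, safety metric, and model-update step the system actually uses.

```python
# A minimal sketch of an automated red-teaming loop, assuming the caller
# supplies the target model, an adversarial prompt generator, a safety
# metric, and a refinement step. All names here are hypothetical.

from typing import Callable, List, Tuple

def red_team_iteration(
    model: Callable[[str], str],            # target LLM: prompt -> response
    attacker: Callable[[int], List[str]],   # generates n adversarial prompts
    is_unsafe: Callable[[str, str], bool],  # safety metric on (prompt, response)
    n_prompts: int = 100,
) -> Tuple[List[Tuple[str, str]], float]:
    """Probe the model once: collect failing (prompt, response) pairs
    and report the fraction of unsafe outputs."""
    prompts = attacker(n_prompts)
    failures = [
        (p, r) for p in prompts
        if is_unsafe(p, (r := model(p)))
    ]
    unsafe_rate = len(failures) / max(len(prompts), 1)
    return failures, unsafe_rate

def dynamic_data_pipeline(
    model,
    attacker,
    is_unsafe,
    refine,            # e.g. fine-tune the model on corrected failure cases
    rounds: int = 5,
    threshold: float = 0.01,
):
    """Iterate adversarial probing and refinement until the unsafe-output
    rate falls below a target threshold or the round budget is exhausted."""
    for _ in range(rounds):
        failures, unsafe_rate = red_team_iteration(model, attacker, is_unsafe)
        if unsafe_rate <= threshold:
            break
        # The discovered failures become new training data; `refine`
        # stands in for whatever update step the pipeline applies.
        model = refine(model, failures)
    return model
```

Under these assumptions, each round uses the attacker to surface unsafe behavior, measures it against the predefined safety metric, and feeds the failures back as training data, so the loop runs without human review at any step.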