Autonomy in vehicles is achieved using AI for control and perception tasks. The visual input from cameras forms the foundation for the control decisions that follow. Existing works have shown adversarial vulnerabilities in AI-based visual tasks. One major threat is adversarial patches, which can impact decision making in autonomous vehicles (AVs). Current evaluation methods often rely on static datasets with unrealistic patch placements. This paper proposes a novel framework, AVATAR, to standardize adversarial patch testing and analysis. AVATAR creates a simulation environment in which the patch is integrated with actors in the scene to enhance realism during testing. The vehicle’s behaviour is captured as a time-series trace for post-simulation quantitative analysis. Furthermore, we introduce an Adversarial Trace Classifier (ATC) that analyzes these traces to predict the potential presence of adversarial patches. The aim is to detect vulnerabilities in object detection algorithms and thereby inform the design of robust perception systems for AVs. Hence, AVATAR paves the way for the safer deployment of autonomous vehicles in the real world.