Over the past decade, neural networks have achieved impressive performance on complex tasks such as image recognition, natural language processing, and game playing. At the same time, these networks have been shown to be susceptible to small changes to their inputs, commonly called adversarial perturbations. For example, an otherwise accurate DNN can be deceived into misclassifying an image through slight alterations that are imperceptible to the human eye. Such adversarial attacks make it challenging to deploy these models in security-sensitive applications such as autonomous vehicles. It is therefore crucial to understand adversarial attacks in order to test the robustness of a model and to develop suitable defense techniques. This paper surveys popular works in this domain. In our discussion, we broadly classify attacks as image-dependent or image-independent, and as targeted or untargeted. We also cover several notable defense techniques, and we discuss physical (real-world) attacks, in which an attacker perturbs the object itself, for instance by painting over a stop sign or pasting stickers on objects. Finally, we present experimental results in which we evaluate two adversarial attacks against a VGG19 model pre-trained on the ImageNet dataset.
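As a concrete illustration of the kind of perturbation discussed above, the sketch below crafts an untargeted adversarial example against a pre-trained VGG19 using the Fast Gradient Sign Method (FGSM). This is a minimal sketch, not the paper's experimental code; the image path, the perturbation budget epsilon, and the choice of FGSM as the attack are illustrative assumptions.

```python
# Minimal FGSM sketch against a torchvision VGG19 pre-trained on ImageNet.
# Assumptions (not from the paper): the image path "input.jpg", the
# perturbation budget epsilon = 8/255, and FGSM as the attack of choice.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg19(weights="IMAGENET1K_V1").eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])  # pixels stay in [0, 1]

x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)  # placeholder image
x.requires_grad_(True)

# Use the model's own prediction as the label for an untargeted attack.
logits = model(normalize(x))
label = logits.argmax(dim=1)

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(logits, label)
loss.backward()

# FGSM step: move each pixel in the direction that increases the loss.
epsilon = 8 / 255
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

adv_label = model(normalize(x_adv)).argmax(dim=1)
print("clean prediction:", label.item(), "adversarial prediction:", adv_label.item())
```

The sign of the input gradient gives, per pixel, the direction that most increases the loss under an L-infinity budget, which is why a single small step of size epsilon is often enough to flip the prediction while leaving the image visually unchanged.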