IoT devices constantly communicate with servers over the Internet, allowing an attacker to extract sensitive information by passively monitoring the network traffic. Recent research has shown that a network attacker with a trained machine learning (ML) model can accurately fingerprint IoT devices from their (encrypted) traffic flows. Such fingerprinting attacks can reveal the make and model of the devices, which can in turn be used to extract detailed user activities. In this work, we propose iPET, a novel adversarial perturbation-based traffic modification system that defends against fingerprinting attacks. iPET's design employs Generative Adversarial Networks (GANs) in a tunable way, allowing users to specify the maximum bandwidth overhead they are willing to tolerate for the defense. A fundamental idea of iPET is to deliberately introduce stochasticity between model instances. This limits counterattacks, as it prevents an attacker from recreating an identical perturbation model and using it for fingerprinting. We evaluate the effectiveness of our defense against state-of-the-art fingerprinting models under three different attacker capabilities. Our evaluations on synthetic and real-world datasets demonstrate that iPET decreases the accuracy of even the most potent attackers. We also show that the traffic perturbations generated by iPET generalize well to different fingerprinting schemes that an attacker may deploy.
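To make the tunable-overhead idea concrete, the sketch below shows a toy PyTorch perturbation generator that maps a traffic trace plus per-instance random noise to non-negative padding, rescaled so that the added traffic stays within a user-chosen bandwidth budget. The class name, layer sizes, and budget-enforcement rule are illustrative assumptions, not iPET's actual architecture, and the adversarial (GAN) training loop against a fingerprinting classifier is omitted.

```python
# Illustrative sketch only, assuming a simple per-time-bin byte-count trace.
import torch
import torch.nn as nn


class PerturbationGenerator(nn.Module):
    """Generates non-negative traffic padding within a bandwidth-overhead budget."""

    def __init__(self, trace_len: int, noise_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(trace_len + noise_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, trace_len),
            nn.Softplus(),  # padding must be non-negative
        )

    def forward(self, trace: torch.Tensor, overhead_budget: float) -> torch.Tensor:
        # trace: (batch, trace_len) bytes per time bin.
        # overhead_budget: max fraction of extra bytes allowed (e.g. 0.2 = 20%).
        noise = torch.randn(trace.size(0), self.noise_dim)  # per-instance stochasticity
        raw_pad = self.net(torch.cat([trace, noise], dim=1))
        # Rescale padding so the added bytes never exceed budget * original volume.
        budget_bytes = overhead_budget * trace.sum(dim=1, keepdim=True)
        scale = torch.clamp(budget_bytes / (raw_pad.sum(dim=1, keepdim=True) + 1e-8), max=1.0)
        return trace + raw_pad * scale


if __name__ == "__main__":
    # Perturb a toy 32-bin trace with at most 20% bandwidth overhead.
    gen = PerturbationGenerator(trace_len=32)
    original = torch.rand(4, 32) * 1500  # fake per-bin byte counts
    perturbed = gen(original, overhead_budget=0.2)
    overhead = (perturbed.sum() - original.sum()) / original.sum()
    print(f"actual overhead: {overhead:.2%}")  # stays at or below 20%
```

In a full GAN setup, this generator would be trained against a fingerprinting discriminator, and the fresh noise drawn per instance is what prevents an attacker from reproducing an identical perturbation model.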