Faster adaptation to open-world novelties by intelligent agents is a necessary step toward the goal of Artificial General Intelligence (AGI). The classical RL framework, however, does not account for unseen changes (novelties) in the environment. In this paper, we therefore propose OODA-RL, a Reinforcement Learning based framework for developing robust RL algorithms that can both handle known environments and adapt to unseen ones. OODA-RL expands the definition of the agent's internal composition, which the classical RL framework leaves abstract, allowing researchers to incorporate novelty adaptation techniques as an add-on feature to existing state-of-the-art (SoTA) as well as yet-to-be-developed RL algorithms.
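To make the add-on character of the framework concrete, the following minimal Python sketch shows one way an Observe-Orient-Decide-Act wrapper could sit around an unmodified base policy. This is an illustrative assumption, not the paper's actual API: the names OODAAgent, NoveltyDetector, and adapt_fn are hypothetical, and the running z-score test stands in for whatever novelty detector a researcher would plug in.

# Hypothetical sketch; class and method names are illustrative, not OODA-RL's API.

class NoveltyDetector:
    """Flags signals that deviate from the distribution seen so far."""
    def __init__(self, threshold=3.0):
        self.mean, self.var, self.count = 0.0, 1.0, 0
        self.threshold = threshold

    def is_novel(self, signal):
        # Simple running z-score test as a stand-in for a real detector.
        self.count += 1
        delta = signal - self.mean
        self.mean += delta / self.count
        self.var += (delta * (signal - self.mean) - self.var) / self.count
        return abs(signal - self.mean) > self.threshold * (self.var ** 0.5 + 1e-8)


class OODAAgent:
    """Wraps any base RL policy with Observe-Orient-Decide-Act stages."""
    def __init__(self, base_policy, detector, adapt_fn):
        self.base_policy = base_policy   # existing SoTA algorithm, unchanged
        self.detector = detector         # Orient: novelty detection add-on
        self.adapt_fn = adapt_fn         # add-on adaptation routine

    def act(self, observation, reward):
        # Observe: ingest raw experience from the environment.
        # Orient: check whether the experience looks novel.
        if self.detector.is_novel(reward):
            # Decide: trigger the add-on adaptation instead of business as usual.
            self.base_policy = self.adapt_fn(self.base_policy, observation)
        # Act: delegate action selection to the (possibly adapted) policy.
        return self.base_policy(observation)

The design point the sketch illustrates is separation of concerns: the base policy remains a black box, while the Orient and Decide stages host the novelty-handling machinery, so adaptation can be retrofitted onto any existing algorithm.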