Human-instructed prompt generation limits scalability and adaptability, often requiring substantial manual intervention to optimize model performance across varied tasks. A novel recursive in-context learning framework addresses this challenge through self-instructed prompt refinement, enabling models to improve their outputs dynamically without external guidance. Across iterative cycles, the framework adjusts each prompt based on the quality of the previous output, yielding significant improvements in lexical precision, semantic relevance, and task-completion accuracy. By refining prompts autonomously, the framework substantially reduces reliance on human intervention while maintaining high diversity and coherence across iterations. Experimental results show consistent gains across domains, including technical writing, conversational agents, and content summarization, reinforcing the potential of recursive learning for building more adaptable and efficient models. Performance was quantified with the automated metrics BLEU, ROUGE, and BERTScore, confirming the effectiveness of the recursive feedback mechanism in producing high-quality prompts.
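
To make the refinement loop concrete, the sketch below shows one plausible shape for the recursive feedback mechanism; it is an illustration under stated assumptions, not the paper's implementation. The callables `generate` and `refine_prompt` are hypothetical stand-ins for model calls, and `overlap_score` is a trivial token-overlap surrogate for the BLEU, ROUGE, and BERTScore metrics named above.

```python
from typing import Callable


def overlap_score(output: str, reference: str) -> float:
    """Crude token-overlap score in [0, 1].

    A zero-dependency stand-in for the real metrics (BLEU, ROUGE,
    BERTScore); it only measures how many reference tokens the
    output covers.
    """
    out_tokens = set(output.lower().split())
    ref_tokens = set(reference.lower().split())
    if not ref_tokens:
        return 0.0
    return len(out_tokens & ref_tokens) / len(ref_tokens)


def recursive_refine(
    prompt: str,
    reference: str,
    generate: Callable[[str], str],  # hypothetical model call: prompt -> output
    refine_prompt: Callable[[str, str, float], str],  # (prompt, output, score) -> new prompt
    max_iters: int = 5,
    min_gain: float = 0.01,
) -> tuple[str, str, float]:
    """Iteratively regenerate and re-prompt until the score plateaus.

    Each cycle scores the previous output and feeds that quality
    signal back to the model so it can rewrite its own prompt,
    mirroring the self-instructed refinement loop described above.
    """
    best_prompt = prompt
    best_output = generate(prompt)
    best_score = overlap_score(best_output, reference)

    for _ in range(max_iters):
        # The model revises its own prompt using the prior output's score.
        candidate_prompt = refine_prompt(best_prompt, best_output, best_score)
        candidate_output = generate(candidate_prompt)
        candidate_score = overlap_score(candidate_output, reference)

        if candidate_score - best_score < min_gain:
            break  # no meaningful improvement; stop recursing

        best_prompt = candidate_prompt
        best_output = candidate_output
        best_score = candidate_score

    return best_prompt, best_output, best_score
```

In a full experimental setup, `overlap_score` would presumably be replaced by a proper metric (e.g., BERTScore for semantic relevance), `generate` and `refine_prompt` would wrap actual model inference, and the loop would be run separately for each task domain.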