This paper extends previous research on the Single-Turn Crescendo Attack (STCA) with new findings on the effectiveness of a four-turn variant (STCA-4) against the OpenAI GPT o1 series (Strawberry) models. Building on our original work with the STCA-3, we demonstrate that the GPT o1 Mini and GPT o1 Preview models are vulnerable to the STCA-4, often generating disallowed content such as hate speech and slurs. In contrast, models such as Gemini 1.5 Flash and LLaMA 3 8B showed robust resistance to the same attack. We hypothesize that the success of the STCA-4 against the GPT o1 models stems either from a familiarity effect, since the attack leverages language generated by the o1 models themselves, or from the slow-thinking (extended reasoning) capability of the o1 series. These results highlight the need for improved safeguards in large language models against adversarial exploits that condense multi-turn escalations into a single prompt.