Many people use online platforms such as Instagram and Facebook on a daily basis. These applications often feature comment sections where users can express their opinions. Unfortunately, this feature is frequently misused, with people posting offensive comments directed at individuals or groups. Defining what constitutes offensive content is difficult, as it varies by context, which makes automated removal challenging. To raise awareness of hateful and abusive conduct on social media, this work focuses on the SemEval-2023 task of binary sexism detection. Our data analysis indicates that adding a new label would better align the dataset with real-world applications. To this end, we present a simple baseline that uses cosine similarity to measure how closely comments align with, or diverge from, the community guidelines of different platforms.
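The sketch below illustrates one way such a cosine-similarity baseline could be implemented; it is not the exact pipeline from this work. It assumes TF-IDF representations of comments and guideline texts, and the guideline snippets, the function name `guideline_similarity`, and the example comment are purely illustrative placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical guideline excerpts; the real platform guideline texts
# would be substituted here.
guidelines = {
    "platform_a": "Do not post content that attacks people based on gender.",
    "platform_b": "Hate speech, harassment, and abusive behaviour are not allowed.",
}

def guideline_similarity(comments, guidelines):
    """Return a (num_comments x num_guidelines) matrix of cosine
    similarities between comment vectors and guideline vectors."""
    texts = list(comments) + list(guidelines.values())
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(texts)
    comment_vecs = matrix[: len(comments)]
    guideline_vecs = matrix[len(comments):]
    return cosine_similarity(comment_vecs, guideline_vecs)

# Example usage with a single illustrative comment.
comments = ["Example comment that may or may not violate a policy."]
scores = guideline_similarity(comments, guidelines)
print(dict(zip(guidelines.keys(), scores[0].round(3))))
```

A low similarity to every guideline vector can then be read as an indication that a comment's language diverges from the platform's stated community standards, which is the signal the baseline is meant to surface.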