1. Introduction

Recent advances in machine learning (ML) have influenced the development of many fields, enabling computers to learn complex patterns in data and make predictions without explicit instructions, particularly in natural language processing (NLP). A crucial phase in NLP is the representation of words [23], which significantly impacts the performance of ML algorithms in tasks such as language translation, sentiment analysis, and text classification. While deep learning techniques have led to remarkable developments in NLP, traditional models remain relevant in specific instances of sentiment analysis, where they provide more interpretable results. This interpretability becomes especially valuable in domains where explainability is crucial. Moreover, in scenarios with limited computational resources or time constraints, traditional models offer simplicity and efficiency, making them a preferred choice. Additionally, when only a limited amount of labeled data is available, traditional models can still achieve reasonable performance, making them suitable for situations where data scarcity is a concern.

The first widely used word representation is the one-hot encoded vector, which represents each word as a sparse vector. This representation, however, fails to capture essential features of the data [7]: one-hot encoding cannot express the semantic similarity between words. In addition, the sparse vector representation is computationally expensive and demands substantial processing resources at scale. More recent word representation techniques have overcome these limitations of one-hot encoding and have demonstrated higher performance across various tasks.
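The inability of one-hot vectors to capture semantic similarity can be illustrated with a short sketch (the toy vocabulary and helper functions below are hypothetical, for illustration only):

```python
# Toy vocabulary for illustration.
vocab = ["good", "great", "terrible"]

def one_hot(word, vocab):
    """Return a sparse one-hot vector: 1 at the word's index, 0 elsewhere."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

# Any two distinct one-hot vectors are orthogonal, so "good" and "great"
# appear no more similar than "good" and "terrible".
print(cosine(one_hot("good", vocab), one_hot("great", vocab)))     # 0.0
print(cosine(one_hot("good", vocab), one_hot("terrible", vocab)))  # 0.0
```

Because every pairwise similarity is zero, no notion of word meaning survives in this representation, and the vector length grows with the vocabulary size, which motivates the dense embeddings discussed next.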
Modern word embeddings capture the semantic similarity of words using dense, low-dimensional vectors that require fewer resources to compute and yield a richer representation. The field of text-based classification with ML has seen significant advancements, with growing techniques, newly introduced models, and enhanced word representations attempting to address various challenges, especially in sentiment analysis, which extracts abstract information from data. However, further research is needed to understand how machine learning classifiers and word representations interact so that model performance can be optimized. We address this in the following research questions:

Q1: Which word representations yield the highest performance in sentiment analysis when used with traditional machine learning and deep learning models?

Q2: Is there a word representation technique that can deliver comparable performance in both traditional machine learning models and deep learning models for sentiment analysis?