Significant advancements have been achieved in Brain-Machine Interfaces (BMI), particularly in electroencephalography (EEG)-based systems, which capture the dynamics of the brain's electrical activity non-invasively. This study focuses on EEG-based BMI for detecting voluntary keystrokes, with the aim of developing a reliable brain-computer interface (BCI) that simulates and anticipates keystrokes, especially for individuals with motor impairments. The dataset, featured in a Nature publication, was band-pass filtered and segmented into 22-electrode data arrays; our work began by excluding the non-significant channels (A1, A2, and X5), and the feature-extracted dataset was then used for model development. Using ERP window-based segmentation, each window was categorized relative to Event 0, with samples preceding Event 0 assigned to one window and samples following Event 0 to another, yielding 19×200 data arrays as the feature set for model development. The methodology includes extensive segmentation, event alignment, ERP plot analysis, and signal analysis. Models were trained to classify the EEG data into three categories: 'resting state' (0), 'd' key press (1), and 'l' key press (2). Real-time keypress simulation based on neural activity is enabled through integration with a tkinter-based graphical user interface. Feature engineering utilized ERP windows, and an SVC model achieved 90.42% accuracy in event classification. In addition, several machine learning and deep learning models were developed for BCI keyboard simulation: MLP (89% accuracy), CatBoost (87.39%), KNN (72.59%), Gaussian Naive Bayes (79.21%), Logistic Regression (90.81%), and a novel Bi-Directional LSTM-GRU hybrid model (89% accuracy). Finally, a GUI was created to predict and simulate keystrokes using the trained MLP model.
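To make the classification setup concrete, the following is a minimal sketch of the Bi-Directional LSTM-GRU hybrid described above, assuming each ERP window is reshaped from a 19×200 array (19 electrodes, 200 time samples) into a (timesteps, channels) sequence and labeled with one of the three classes (0 = resting state, 1 = 'd' key press, 2 = 'l' key press). The specific layer widths, dropout rate, and optimizer settings are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed window shape: 19 electrodes x 200 time samples, 3 target classes.
n_channels, n_samples, n_classes = 19, 200, 3

# Hypothetical stacked bidirectional LSTM -> GRU classifier for ERP windows.
model = models.Sequential([
    layers.Input(shape=(n_samples, n_channels)),          # (timesteps, channels)
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(32)),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",     # integer labels 0, 1, 2
              metrics=["accuracy"])

# Illustrative training call on placeholder data; real ERP windows would be
# transposed from 19x200 arrays to shape (n_windows, 200, 19).
X_dummy = np.random.randn(8, n_samples, n_channels).astype("float32")
y_dummy = np.random.randint(0, n_classes, size=8)
model.fit(X_dummy, y_dummy, epochs=1, verbose=0)
```

In a full pipeline, the predicted class for each incoming window would be forwarded to the tkinter GUI, which maps class 1 to a simulated 'd' keypress and class 2 to an 'l' keypress while ignoring the resting state.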