As a staple food feeding over half of the world's population, rice requires well-defined classification techniques to improve agricultural yields, supply chains, and food safety. To meet these needs, this work presents an Attention-Based Hybrid Model that accurately and efficiently classifies Bangladeshi rice varieties. The research captures complex variations in shape, texture, and color across 20 rice varieties using a comprehensive dataset of 27,000 high-resolution images that reflect real-world agricultural conditions. The core innovation is an Attention-Based CNN built around the Convolutional Block Attention Module (CBAM), which highlights and enhances both spatial and channel-wise features, allowing the model to distinguish morphologically similar rice varieties with high accuracy. The proposed Attention-Based CNN achieved an accuracy of 91.92%, improving both generalization and robustness across different testing conditions. Moreover, extending the proposed framework, deep feature extraction combined with a KNN classifier achieved the top accuracy of 99.35%, demonstrating that modern feature extraction and classical classification algorithms work well together. This combined approach outperforms Random Forest and Support Vector Classifier baselines because it addresses their usual weaknesses with fine-grained features and scaling. Beyond these results, the model advances the current paradigm of automated agriculture, offering a robust, standardized, and flexible solution for rice variety identification. In doing so, this study bridges technological capability and the practical requirements farmers face, furthering sustainability, supporting precision agriculture, and addressing the growing global demand for food quality and security.
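To make the attention mechanism concrete, the following minimal PyTorch sketch shows a CBAM-style block (channel attention followed by spatial attention) of the kind applied to CNN feature maps; the layer sizes, reduction ratio, and kernel size are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of a CBAM-style attention block (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool spatial information (average and max), then learn per-channel weights.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):  # kernel size is an assumption
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Concatenate channel-wise average and max maps, then learn where to attend.
        attn = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(attn))

class CBAMBlock(nn.Module):
    """Channel attention followed by spatial attention, as in CBAM."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: refine a feature map produced by a convolutional backbone.
feats = torch.randn(8, 64, 32, 32)    # batch of CNN feature maps (shape is illustrative)
refined = CBAMBlock(64)(feats)        # same shape, attention-weighted
```

In this arrangement the refined feature maps can either feed the remaining CNN layers for end-to-end classification or be pooled into feature vectors for a classical classifier such as KNN, which is the hybrid route the abstract reports as most accurate.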
Early identification of Autism Spectrum Disorder (ASD) is crucial for facilitating timely therapies that can markedly enhance quality of life. This work presents an innovative EEG-based Brain-Computer Interface (BCI) system aimed at improving the precision and reliability of ASD classification. Utilizing the BCIAUT-P300 dataset, comprising EEG recordings from 15 participants across 105 sessions, we established a novel multimodal framework. The system incorporates a Vision Transformer (ViT) for spatial feature extraction and Long Short-Term Memory (LSTM) networks for temporal analysis, merging the advantages of both architectures. The ViT-LSTM model attained a validation accuracy of 94.75%, while the EfficientNet-LSTM model exhibited superior performance with a validation accuracy of 95.61%. Statistical analyses, including a p-value of 0.0288 and a t-value of 5.76 for the EfficientNet-based design, supported the reliability and robustness of these findings. ViT adeptly captures global spatial relationships through self-attention, whereas EfficientNet improves spatial representation through pretrained feature extraction. To enhance usability in clinical environments, Explainable AI (XAI) methodologies, particularly Local Interpretable Model-Agnostic Explanations (LIME), were applied. LIME offers transparent, feature-specific insights into model predictions, enabling physicians to understand and trust the decision-making process. By merging state-of-the-art accuracy with interpretability, this system sets a new standard for AI-driven brain-computer interfaces, providing a scalable and meaningful solution for ASD diagnosis and treatment.
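As a rough illustration of the hybrid spatial-temporal design, the sketch below pairs an EfficientNet encoder (via torchvision's efficientnet_b0) with an LSTM over a sequence of image-like EEG segments; the input shaping, hidden size, and backbone choice are assumptions for demonstration and not the paper's exact pipeline.

```python
# Minimal sketch of an EfficientNet-LSTM hybrid for sequences of EEG-derived images
# (illustrative assumptions throughout; not the paper's actual configuration).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class EfficientNetLSTM(nn.Module):
    def __init__(self, num_classes=2, hidden_size=128):
        super().__init__()
        backbone = efficientnet_b0(weights=None)   # supply pretrained weights in practice
        backbone.classifier = nn.Identity()        # keep the 1280-d pooled feature vector
        self.encoder = backbone
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, 3, H, W) -- one image-like representation per EEG window
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))      # (b*t, 1280) spatial features per window
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)               # temporal dynamics across windows
        return self.head(h[-1])                    # class logits

# Example: 4 recordings, each split into 6 windows rendered as 224x224 images.
logits = EfficientNetLSTM()(torch.randn(4, 6, 3, 224, 224))
```

A ViT-LSTM variant follows the same pattern with a transformer encoder in place of the CNN backbone, and the per-window feature importances produced by a LIME explainer can then be inspected to interpret individual predictions.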