Abstract
Social networking and e-commerce platforms have grown rapidly alongside the Internet. Users around the world now routinely exchange opinions and thoughts online, and the volume of opinionated text posted on these platforms has increased accordingly. The availability of these diverse viewpoints and emotional expressions makes sentiment analysis possible. However, the scarcity of consistently labeled data in the NLP domain makes sentiment analysis more difficult. This article proposes a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model based on BERT and an attention mechanism, motivated by the shortcomings of existing models in capturing long-term dependencies in natural language. First, the text is encoded with BERT, and the resulting vectors are fed into the CNN-LSTM network. Second, the output of the CNN-LSTM model is passed to an attention layer, which uses weight vectors to extract the most significant features and the most pertinent information from the input. The proposed model is compared against Attention-Based Convolutional Neural Networks (ABCNN), Hierarchical Attention Networks (HAN), and Bi-LSTM-ATT. The findings indicate that the approach achieves notable gains in both accuracy and the macro-average F1 score.
Keywords: Attention Mechanism, BERT, Convolutional Neural Network, LSTM, NLP, Sentiment Analysis.
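The attention step described in the abstract (weight vectors score each timestep of the CNN-LSTM output, and the weighted sum yields the pooled feature) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hidden-state matrix `H` is random data standing in for the BERT-encoded, CNN-LSTM-processed sequence, and `attention_pool` and `w` are hypothetical names.

```python
import numpy as np

def attention_pool(hidden_states, w):
    """Score each timestep, softmax the scores, and return the weighted sum.

    hidden_states: (T, d) array, stand-in for CNN-LSTM output over T timesteps.
    w: (d,) learned attention weight vector (random here for illustration).
    """
    scores = hidden_states @ w                      # (T,) one score per timestep
    scores = scores - scores.max()                  # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
    context = alpha @ hidden_states                 # (d,) attention-weighted sum
    return context, alpha

rng = np.random.default_rng(0)
T, d = 6, 8                       # sequence length, hidden size (toy values)
H = rng.normal(size=(T, d))       # hypothetical CNN-LSTM hidden states
w = rng.normal(size=d)            # hypothetical attention parameters
context, alpha = attention_pool(H, w)
print(alpha)                      # larger weights mark the more pertinent timesteps
```

In the full model, `context` would be fed to a classification layer; the weights `w` are learned jointly with the rest of the network rather than sampled randomly.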