Abstract: |
This work addresses one of the most important tasks in natural language processing (NLP): sentiment analysis, i.e., determining whether a sentence is neutral, positive, or negative. This paper presents an enhanced long short-term memory (LSTM) network for sentiment analysis that uses an additional deep layer to capture sublevel patterns in the word input. Our approach follows a standard pipeline: we cleaned the data, preprocessed it, built the model, trained it, and finally tested it. The novelty lies in the additional layer in the LSTM architecture, added with the intention of improving accuracy and helping the model generalize. The experimental results are evaluated using accuracy, recall, and F1-score, and show that the deep-layered LSTM model outperforms the baseline on all three metrics. Once trained to capture intricate sequences, the deep layer increased forecast accuracy dramatically. However, the improved model overfitted, necessitating additional regularization and hyperparameter tuning. In this paper, we discuss the advantages and disadvantages of using deep layers in LSTM networks and their application to building better-performing deep learning models for sentiment analysis.