How to model and encode the semantics of human-written text, and which type of neural network should process it, remain unsettled questions in sentiment analysis. Accuracy and transferability are critical concerns in machine learning generally, and both are closely tied to the loss estimates of the trained model. I present a computationally efficient and accurate feedforward neural network for sentiment prediction that maintains low losses. Coupled with an effective semantic model of the text, it yields highly accurate models with low losses. Because losses remain low over many epochs of training, the optimal model for given hyperparameters, e.g. one attaining a desired accuracy, can be learned on a particular training dataset and then transferred to another while retaining high transfer accuracy. Experimental results on representative benchmark datasets and comparisons to other methods show the advantages of the new approach.
deep learning, sentiment analysis, machine learning, artificial intelligence