In this post, I will demonstrate how to build a model that, given an emotion and its effect/type (positive or negative), returns the three closest synonymous emotions.
I use an online dictionary from time to time, and that gave me an idea: why not pick some synonyms for "hate" and "love" and build a Deep Learning (DL) model with Keras on a TensorFlow backend that learns embedding vectors for them? (To learn more about entity embeddings and their advantages over one-hot encoded vectors, click here.) Below is the list of synonyms I picked from Dictionary.com.
emotionsList= ['like','antipathy', 'hostility','love','warmth','loathe','abhor','intimacy','dislike','venom','affection', 'tenderness','animosity','attachment','infatuation','fondness','hate']
But to build a good model we need representative data, and that is when I realized I needed one more feature to help the DL model suggest better matches. So I came up with the "emoaffect" feature shown below. For example, if someone says "I like you", it leaves a positive impression/feeling, but if someone says "I hate you", it leaves a negative one.
emoaffect= ['positive','negative', 'negative','positive','positive','negative','negative','positive','negative','negative','positive', 'positive','negative','positive','positive','positive','negative']
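With both lists in hand, the setup can be sketched as a tiny Keras model whose Embedding layer learns a vector for each emotion while being trained to predict the emoaffect label. This is only a minimal sketch under my own assumptions: the layer names, embedding dimension, and training settings here are illustrative choices, not the exact values from the original notebook.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, Flatten, Dense

# Map each emotion word to an integer index for the Embedding layer.
word_to_idx = {word: i for i, word in enumerate(emotionsList)}
X = np.array([word_to_idx[w] for w in emotionsList]).reshape(-1, 1)

# Encode the effect as a binary target: positive -> 1, negative -> 0.
y = np.array([1 if effect == 'positive' else 0 for effect in emoaffect])

embedding_dim = 8  # illustrative choice; the vocabulary is tiny

model = Sequential([
    Input(shape=(1,), dtype='int32'),
    Embedding(input_dim=len(emotionsList), output_dim=embedding_dim,
              name='emotion_embedding'),
    Flatten(),
    Dense(16, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train just long enough for the embeddings of the positive and negative
# groups to drift apart; accuracy itself is not the goal here.
model.fit(X, y, epochs=50, verbose=0)
```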
The objective here is not to build the most accurate model, but to generate good embeddings that we can use to find the closest matches. So we treat this problem as a supervised task; the supervised task is just the vehicle through which we train the network.
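Once the network is trained, its embedding weights can be pulled out and compared directly. The sketch below reuses `model`, `word_to_idx`, and `emotionsList` from the sketch above and ranks neighbours by cosine similarity with scikit-learn; this is one reasonable way to measure closeness, not necessarily the exact approach in the original code.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Weight matrix of the Embedding layer: one row per emotion word.
embeddings = model.get_layer('emotion_embedding').get_weights()[0]

def closest_emotions(word, top_n=3):
    """Return the top_n emotions whose embeddings are most similar to `word`."""
    idx = word_to_idx[word]
    sims = cosine_similarity(embeddings[idx:idx + 1], embeddings)[0]
    ranked = np.argsort(-sims)                 # most similar first
    ranked = [i for i in ranked if i != idx]   # drop the query word itself
    return [emotionsList[i] for i in ranked[:top_n]]

print(closest_emotions('hate'))  # expected to favour the negative group
print(closest_emotions('love'))  # expected to favour the positive group
```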
The complete code with explanation is available on GitHub at: