I would suggest the following pipeline:
- Construct a training set from an existing dataset of synonyms and antonyms (taken e.g. from the WordNet thesaurus). You'll need to craft the negative examples carefully; see the extraction sketch after this list.
- Take a pretrained model such as BERT and fine-tune it on your task. If you choose BERT, it should probably be `BertForNextSentencePrediction`, where you use your words/phrases instead of sentences and predict 1 if they are synonyms and 0 if they are not; do the same for antonyms. A fine-tuning sketch follows below.
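A minimal sketch of the first step, assuming NLTK's WordNet interface; the negative sampling here is deliberately naive and only illustrative:

```python
# Extract synonym and antonym pairs from WordNet via NLTK
# (requires nltk.download("wordnet") beforehand).
import random
from nltk.corpus import wordnet as wn

synonym_pairs, antonym_pairs = set(), set()
for synset in wn.all_synsets():
    lemmas = synset.lemmas()
    # Lemmas sharing a synset are treated as synonyms.
    for i, a in enumerate(lemmas):
        for b in lemmas[i + 1:]:
            if a.name() != b.name():
                synonym_pairs.add((a.name(), b.name()))
        # WordNet stores antonymy as a lemma-level relation.
        for ant in a.antonyms():
            antonym_pairs.add((a.name(), ant.name()))

# Naive negatives: random pairs that are neither synonyms nor antonyms.
# In practice you will likely want harder negatives as well
# (e.g. related but non-synonymous words), per the point above.
vocab = list({w for pair in synonym_pairs for w in pair})
negatives = set()
while len(negatives) < len(synonym_pairs):
    a, b = random.sample(vocab, 2)
    if (a, b) not in synonym_pairs and (a, b) not in antonym_pairs:
        negatives.add((a, b))
```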
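And a minimal sketch of the fine-tuning step, assuming Hugging Face transformers and PyTorch; the toy word pairs and hyperparameters are purely illustrative. Since the next-sentence-prediction head is simply re-purposed as a binary pair classifier, the labels follow the convention above (1 = synonyms, 0 = not), and an antonym classifier would be trained the same way on antonym/non-antonym pairs.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

# Toy labelled pairs; replace with the WordNet-derived training set.
pairs = [("happy", "glad", 1), ("happy", "table", 0),
         ("big", "large", 1), ("big", "quiet", 0)]

def collate(batch):
    first, second, labels = zip(*batch)
    # Encode each word/phrase pair as a sentence pair for BERT.
    enc = tokenizer(list(first), list(second),
                    padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)  # binary pair labels
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        out = model(**batch)   # cross-entropy loss over the two classes
        out.loss.backward()
        optimizer.step()

# After fine-tuning: probability that a new pair are synonyms.
model.eval()
with torch.no_grad():
    enc = tokenizer("quick", "fast", return_tensors="pt")
    probs = torch.softmax(model(**enc).logits, dim=-1)
    print(probs[0, 1].item())
```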