Advances in Semantic Textual Similarity
from Google Research
Posted by Yinfei Yang, Software Engineer and Chris Tar, Engineering Manager, Google AI
The recent rapid progress of neural network-based natural language understanding research, especially on learning semantic text representations, can enable truly novel products such as Smart Compose and Talk to Books. It can also help improve performance on a variety of natural language tasks that have limited amounts of training data, such as building strong text classifiers from as few as 100 labeled examples.
Below, we discuss two papers reporting recent progress on semantic representation research at Google, as well as two new models available for download on TensorFlow Hub that we hope developers will use to build new and exciting applications.
Semantic Textual Similarity
In “Learning Semantic Textual Similarity from Conversations”, we introduce a new way to learn sentence representations for semantic textual similarity. The intuition is that sentences are semantically similar if they have a similar distribution of responses. For example, “How old are you?” and “What is your age?” are both questions about age, which can be answered by similar responses such as “I am 20 years old”. In contrast, while “How are you?” and “How old are you?” contain almost identical words, they have very different meanings and lead to different responses.
|Sentences are semantically similar if they can be answered by the same responses. Otherwise, they are semantically different.|
In this work, we aim to learn semantic similarity by way of a response classification task: given a conversational input, we wish to classify the correct response from a batch of randomly selected responses. But the ultimate goal is to learn a model that can return encodings representing a variety of natural language relationships, including similarity and relatedness. By adding another prediction task (in this case, the SNLI entailment dataset) and forcing both through shared encoding layers, we get even better performance on similarity measures such as the STS Benchmark (a sentence similarity benchmark) and CQA task B (a question/question similarity task). This is because logical entailment is quite different from simple equivalence and provides more signal for learning complex semantic representations.
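The response classification objective can be sketched as an in-batch ranking task: each input should score its own response highest among all responses in the batch. Below is a minimal numpy illustration; the `encode` function is a hypothetical stand-in (the actual models in the paper are learned neural encoders).

```python
import numpy as np

def encode(texts, dim=8):
    # Hypothetical stand-in encoder: a deterministic random vector per text.
    # In the paper, this is a trained neural network.
    return np.stack([
        np.random.default_rng(abs(hash(t)) % (2**32)).standard_normal(dim)
        for t in texts
    ])

inputs = ["How old are you?", "Where do you live?"]
responses = ["I am 20 years old.", "I live in Paris."]

U = encode(inputs)      # input encodings, shape (batch, dim)
V = encode(responses)   # response encodings, shape (batch, dim)

scores = U @ V.T        # dot-product score of every input against every response
# Softmax over each row gives a distribution over candidate responses.
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
# Training would maximize the diagonal entries: each input's correct response.
loss = -np.log(np.diag(probs)).mean()
```

Because the negatives come from the same batch, no explicit negative mining is needed; every other response in the batch serves as a distractor.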
|For a given input, classification is considered a ranking problem against potential candidates.|
Universal Sentence Encoder
In “Universal Sentence Encoder”, we introduce a model that extends the multitask training described above by adding more tasks, jointly training them with a skip-thought-like model that predicts sentences surrounding a given selection of text. However, instead of the encoder-decoder architecture in the original skip-thought model, we make use of an encode-only architecture by way of a shared encoder to drive the prediction tasks. In this way, training time is greatly reduced while preserving the performance on a variety of transfer tasks including sentiment and semantic similarity classification. The aim is to provide a single encoder that can support as wide a variety of applications as possible, including paraphrase detection, relatedness, clustering and custom text classification.
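The encode-only multitask setup amounts to one shared encoder feeding several lightweight task heads. Here is a toy numpy sketch of that wiring (the weights, head names, and dimensions are illustrative, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_classes = 16, 3

W_enc = rng.standard_normal((dim, dim))        # shared encoder weights
W_skip = rng.standard_normal((dim, dim))       # skip-thought-style prediction head
W_nli = rng.standard_normal((dim, n_classes))  # entailment classification head

def shared_encode(x):
    # A single encoder is reused by every task, so gradients from all
    # tasks update the same representation.
    return np.tanh(x @ W_enc)

x = rng.standard_normal((4, dim))  # a batch of 4 sentence feature vectors
h = shared_encode(x)

neighbor_pred = h @ W_skip  # predict the encoding of a surrounding sentence
nli_logits = h @ W_nli      # classify entailment / contradiction / neutral
```

Dropping the decoder means the skip-thought-style task only has to predict a sentence encoding, not generate text token by token, which is where the training-time savings come from.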
|Pairwise semantic similarity comparison via outputs from TensorFlow Hub Universal Sentence Encoder.|
As described in our paper, one version of the Universal Sentence Encoder model uses a deep average network (DAN) encoder, while a second version uses a more complicated self-attended network architecture, the Transformer.
|Multi-task training as described in “Universal Sentence Encoder”. A variety of tasks and task structures are joined by shared encoder layers/parameters (grey boxes).|
With the more complicated architecture, the model performs better than the simpler DAN model on a variety of sentiment and similarity classification tasks, and for short sentences is only moderately slower. However, compute time for the Transformer model increases noticeably as sentence length increases, whereas the compute time for the DAN model stays nearly constant as sentence length increases.
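This scaling difference can be illustrated with a back-of-envelope operation count (our simplification, not measured numbers): self-attention touches every pair of tokens, so its cost grows quadratically with sentence length n, while DAN averages the n token embeddings, so its cost grows only linearly.

```python
def transformer_ops(n, d):
    # Self-attention computes interactions between all n*n token pairs,
    # each over a d-dimensional representation.
    return n * n * d

def dan_ops(n, d):
    # DAN averages n embeddings of dimension d, then applies a
    # fixed-cost feed-forward network (constant term omitted).
    return n * d

d = 512
for n in (8, 64):
    ratio = transformer_ops(n, d) / dan_ops(n, d)
    print(n, ratio)  # the ratio is exactly n: 8.0, then 64.0
```

So an 8x longer sentence costs roughly 8x more relative to DAN under this simple model, which matches the qualitative behavior described above.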
In addition to the Universal Sentence Encoder model described above, we are also sharing two new models on TensorFlow Hub: the Universal Sentence Encoder – Large and Universal Sentence Encoder – Lite. These are pretrained TensorFlow models that return a semantic encoding for variable-length text inputs. The encodings can be used for semantic similarity measurement, relatedness, classification, or clustering of natural language text.
- The Large model is trained with the Transformer encoder described in our second paper. It targets scenarios requiring high-precision semantic representations and the best model performance, at the cost of speed and size.
- The Lite model is trained on a SentencePiece vocabulary instead of words in order to significantly reduce the vocabulary size, which is a major contributor to model size. It targets scenarios where resources like memory and CPU are limited, such as on-device or browser-based implementations.
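A typical use of the returned encodings is pairwise similarity: embed two sentences and take the cosine of their vectors. The sketch below uses a random stand-in for `embed`; in practice that function would be the pretrained module loaded from TensorFlow Hub.

```python
import numpy as np

def embed(texts, dim=128):
    # Stand-in encoder producing unit-norm vectors; a real application
    # would call the TensorFlow Hub Universal Sentence Encoder instead.
    out = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        v = rng.standard_normal(dim)
        out.append(v / np.linalg.norm(v))
    return np.stack(out)

def similarity(a, b):
    # Cosine similarity; for unit vectors this is just the dot product.
    e = embed([a, b])
    return float(e[0] @ e[1])

s = similarity("How old are you?", "What is your age?")
```

Because the encodings are ordinary fixed-length vectors, the same outputs also feed directly into clustering algorithms or downstream classifiers.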
We’re excited to share this research, and these models, with the community. We believe that what we’re showing here is just the beginning, and that there remain important research problems to be addressed, such as extending the techniques to more languages (the models discussed above currently support English). We also hope to further develop this technology so it can understand text at the paragraph or even document level. In achieving these tasks, it may be possible to make an encoder that is truly “universal”.
Acknowledgements
We thank Daniel Cer, Mario Guajardo-Cespedes, Sheng-Yi Kong, and Noah Constant for training the models; Nan Hua, Nicole Limtiaco, and Rhomni St. John for work on the transfer tasks; and Steve Yuan, Yunhsuan Sung, Brian Strope, and Ray Kurzweil for discussions of the model architecture. Special thanks to Sheng-Yi Kong and Noah Constant for training the Lite model.