Amazon to use NLP to improve Alexa’s answers
Posted on 27 November 2019

Giants like Google, Apple, Microsoft and now Amazon have started using NLP to improve the quality of their voice assistants. Amazon has come up with a new approach named TANDA (Transfer and Adapt), which builds on Google’s Transformer architecture and can be effectively adapted to new domains with a small amount of training data while achieving higher accuracy than traditional techniques. By way of a refresher, Transformers are a type of neural architecture introduced in a paper co-authored by researchers at Google Brain, Google’s AI research division.
TANDA is a two-part training methodology that (1) adapts the Transformer model to the general question-answering task and (2) tailors it to specific types of questions and answers. To validate their approach, the Amazon researchers first tapped two popular NLP frameworks, Google’s BERT and Facebook’s RoBERTa, and measured accuracy with mean average precision (MAP) and mean reciprocal rank (MRR), using the entire set of candidates for each question.
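The two passes can be pictured as two successive fine-tuning runs of the same sequence-pair classifier. The sketch below is only an illustration of that idea, assuming the Hugging Face transformers library; the datasets, output paths and hyperparameters are made-up placeholders, not values from the paper.

```python
# Minimal sketch of TANDA-style two-step fine-tuning, assuming the
# Hugging Face "transformers" library. Data, paths and hyperparameters
# below are illustrative placeholders, not values from the paper.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "roberta-base"  # BERT or RoBERTa, as in the experiments
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The model scores (question, candidate sentence) pairs:
# label 1 if the sentence answers the question, 0 otherwise.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

class PairDataset(torch.utils.data.Dataset):
    """Toy dataset of (question, candidate sentence, label) triples."""
    def __init__(self, triples):
        self.enc = tokenizer([q for q, _, _ in triples],
                             [s for _, s, _ in triples],
                             truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor([y for _, _, y in triples])
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in self.enc.items()}, "labels": self.labels[i]}

def fine_tune(model, dataset, output_dir):
    """One fine-tuning pass over a question/answer-sentence dataset."""
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=2,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

# Step 1 ("transfer"): adapt the pretrained Transformer to the general
# answer-sentence-selection task on a large, generic QA corpus.
general_qa = PairDataset([("Who wrote Hamlet?", "Hamlet was written by Shakespeare.", 1),
                          ("Who wrote Hamlet?", "Hamlet is set in Denmark.", 0)])
model = fine_tune(model, general_qa, "out/transfer")

# Step 2 ("adapt"): specialise the transferred model on a small corpus
# drawn from the target domain (again, made-up examples).
domain_qa = PairDataset([("How do I reset my device?", "Hold the power button for ten seconds.", 1),
                         ("How do I reset my device?", "The device comes in two colours.", 0)])
model = fine_tune(model, domain_qa, "out/adapt")
```

The key point is that the same model is fine-tuned twice: the first pass teaches it what an answer sentence looks like in general, so the second pass needs only a small amount of domain-specific data.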
In a second experiment, the team built four different corpora with questions sampled from Alexa customers’ interactions. They say that using TANDA with the aforementioned RoBERTa produces an “even higher” improvement than with BERT, and that TANDA remains robust against noise.
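For reference, the two ranking metrics behind these comparisons can be computed as in the generic sketch below; this is not the evaluation code from the paper. Each question’s candidates are assumed to be sorted by the model’s score, with label 1 marking a correct answer and 0 an incorrect one.

```python
def average_precision(labels_ranked):
    """Average precision for one question, given candidate labels sorted
    by model score (1 = correct answer, 0 = incorrect)."""
    hits, precisions = 0, []
    for rank, label in enumerate(labels_ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(labels_ranked):
    """Reciprocal rank of the first correct answer (0 if none)."""
    for rank, label in enumerate(labels_ranked, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0

def evaluate(per_question_labels):
    """Mean average precision (MAP) and mean reciprocal rank (MRR)
    averaged over all questions."""
    n = len(per_question_labels)
    map_score = sum(average_precision(l) for l in per_question_labels) / n
    mrr_score = sum(reciprocal_rank(l) for l in per_question_labels) / n
    return map_score, mrr_score

# Toy example: two questions, candidates already sorted by model score.
print(evaluate([[0, 1, 0, 1], [1, 0, 0, 0]]))  # -> (0.75, 0.75)
```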
One of the study’s co-authors wrote: “Interesting future work can be devoted to address the question about the applicability and generalization of the TANDA approach to other NLP tasks.”