Microsoft Turing Team
The Microsoft Turing team develops state-of-the-art, large-scale models to solve challenging business problems across Microsoft, from Bing to Office to Xbox to Dynamics. Our mission is to push the boundaries of natural language understanding, machine reading comprehension, question answering, transfer learning, reinforcement learning, computer vision, and interpretable models.

NLP
Turing NLP models are fine-tuned across Microsoft for downstream tasks such as SmartFind in Microsoft Word and Question Matching in Xbox.
The models we develop include:
- Language understanding
  - Monolingual and universal language representation
  - Question answering
- Language generation
  - Text prediction (e.g. Smart Compose)
  - Summarization
  - Dialog generation (e.g. bots)
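As a rough illustration of the text-prediction idea (not the neural models the team actually ships), a toy bigram model can suggest a likely next word from counted word pairs; all names and the corpus below are hypothetical:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies: a toy stand-in for neural
    text-prediction models such as Smart Compose."""
    nxt = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            nxt[a][b] += 1
    return nxt

def predict(nxt, word):
    """Suggest the most frequent continuation, or None if unseen."""
    counts = nxt.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "thank you for your help",
    "thank you for the update",
    "thank you for your time",
]
model = train_bigram(corpus)
print(predict(model, "for"))  # → "your" ("your" follows "for" twice, "the" once)
```

Real systems replace the count table with a learned language model, but the interface (context in, ranked continuations out) is the same.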

Bing
Turing models power a variety of features in Bing: from search rankings to autosuggest.
- Ranking and retrieval
- QnA & Captions
- Answer Relevance
- Image QnA
- Autosuggest
- Direct Answers
- “People Also Ask” feature
- Ads relevance and generation
- Fast nearest neighbor search at scale
We use deep learning and machine reading comprehension (MRC) to enhance the search experience: providing direct answers to search queries (QnA), identifying and showcasing the most relevant search results, and suggesting related content ("People Also Ask"), all at lightning-fast speed.
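To make the MRC idea concrete, here is a minimal sketch of one common step in extractive question answering: given per-token start and end scores that a reader model might emit over a passage, pick the highest-scoring answer span. The function name, scores, and passage are all illustrative, not the team's actual system:

```python
import numpy as np

def best_answer_span(start_logits, end_logits, max_len=15):
    """Pick the highest-scoring (start, end) span with end >= start.

    start_logits/end_logits are per-token scores a hypothetical MRC
    reader might produce; the span score is the sum of the two.
    """
    best_score, best_span = -np.inf, (0, 0)
    n = len(start_logits)
    for i in range(n):
        for j in range(i, min(i + max_len, n)):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    return best_span

# Toy passage: the scores favor token 2 as the answer start
# and token 4 as the answer end.
tokens = ["Turing", "models", "power", "Bing", "answers", "today"]
start = np.array([0.1, 0.2, 3.0, 0.4, 0.1, 0.0])
end   = np.array([0.0, 0.1, 0.2, 0.5, 2.5, 0.3])
s, e = best_answer_span(start, end)
print(" ".join(tokens[s:e + 1]))  # → "power Bing answers"
```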

Office
This line of work brings state-of-the-art Turing language understanding and generation models to the Office suite, including Word, Teams, and Outlook. Features include:
- Smart Reply
- Smart Compose
- At a Glance in SharePoint
- Smart Find + QnA
- Question Generation

Azure
Turing models work seamlessly with Azure technologies:

Enterprise Semantic Search
The team develops text-based precision rankers using learned query and document encodings, allowing fast and effective search of information at the enterprise scale.
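The core mechanic behind this kind of ranker can be sketched as a dual-encoder retrieval step: document encodings are precomputed once, so serving a query reduces to one matrix-vector product. The vectors and function below are illustrative assumptions, not the team's actual encodings:

```python
import numpy as np

# Hypothetical precomputed document encodings (one L2-normalized
# row per document, as a learned document encoder might emit).
docs = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.2, 0.9],
], dtype=np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

def top_k(query_vec, doc_matrix, k=2):
    """Rank documents by cosine similarity to the query encoding.

    With normalized vectors this is a single matrix-vector product,
    which is what makes precomputed encodings fast at enterprise scale.
    """
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_matrix @ q
    order = np.argsort(-scores)[:k]
    return list(order), scores[order]

ids, scores = top_k(np.array([0.85, 0.15, 0.05], dtype=np.float32), docs)
print(ids)  # document 0 ranks first for this query
```

At real scale, the exact scan over `doc_matrix` is typically replaced by an approximate nearest-neighbor index, but the scoring model is the same.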

Multimodal Representations
The Turing team is also developing multimodal representation learning frameworks to learn universal language-vision representations for tasks such as text-image QnA.

Training Optimization
We work closely with the platform team to optimize training for large-scale models, and we use a variety of techniques to reduce the inference latency of our models, for instance in collaboration with the ONNX team.
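One widely used latency- and memory-reduction technique in this space (offered as an illustration, not necessarily the team's specific method) is post-training int8 quantization, where float weights are stored as 8-bit integers plus a single scale factor:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store weights as int8
    plus one float scale, roughly quartering memory and enabling
    faster integer matrix kernels at inference time."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))
print(f"max abs error: {err:.4f}")  # bounded by half the scale step
```

Production toolchains (e.g. ONNX Runtime) add per-channel scales and calibrated activation quantization on top of this basic scheme.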