
Microsoft Turing Team

The Microsoft Turing team develops state-of-the-art, large-scale models to solve challenging business problems across Microsoft, from Bing to Office to Xbox to Dynamics. Our mission is to expand the boundaries of natural language understanding, machine reading comprehension, question answering, transfer learning, reinforcement learning, computer vision, and interpretable modeling.

NLP

Turing NLP models are fine-tuned across Microsoft for downstream tasks, such as SmartFind in Microsoft Word or Question Matching in Xbox; a minimal fine-tuning sketch follows the list below.

For instance, the models we develop include:

  • Language understanding
    1. Monolingual and universal language representation
    2. Question answering

  • Language generation
    1. Text prediction (e.g. Smart Compose)
    2. Summarization
    3. Dialog generation (e.g. bots)
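The sketch below shows the general fine-tuning pattern these models enable: a pretrained language representation model is adapted to a downstream classification task. The Turing checkpoints themselves are internal to Microsoft, so the public BERT checkpoint, the toy two-example dataset, and the hyperparameters here are all stand-in assumptions.

```python
# A minimal sketch of fine-tuning a pretrained language model for a
# downstream classification task (e.g. question matching). A public BERT
# checkpoint stands in for the internal Turing models.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy downstream data: (text, label) pairs standing in for real task data.
texts = ["how do I reset my password", "what is the weather today"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning steps on the toy batch
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # forward pass computes the loss
    out.loss.backward()
    optimizer.step()
```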

Bing

Turing models power a variety of features in Bing: from search rankings to autosuggest.

  • Ranking and retrieval
  • QnA & Captions
  • Answer Relevance
  • Image QnA
  • Autosuggest
  • Direct Answers
  • “People Also Ask” feature
  • Ads relevance and generation
  • Fast nearest neighbor search at scale

We use deep learning and machine reading comprehension (MRC) to enhance the search experience: providing direct answers to your queries (QnA), identifying and showcasing the most relevant results, and suggesting related content (“People Also Ask”), all at lightning-fast speed. A sketch of the embedding-based retrieval behind fast nearest neighbor search follows.
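As a concrete illustration of the nearest-neighbor bullet above, here is a minimal sketch of similarity search over document embeddings. Bing's production vector-search stack is not public; FAISS, the random embeddings, and the flat index are stand-in assumptions used only to show the pattern.

```python
# A minimal sketch of embedding-based retrieval via nearest neighbor search.
# FAISS stands in for Bing's internal vector-search infrastructure.
import numpy as np
import faiss

dim = 128
rng = np.random.default_rng(0)

# Pretend these are L2-normalized document embeddings from a text encoder.
doc_vecs = rng.standard_normal((10_000, dim)).astype("float32")
faiss.normalize_L2(doc_vecs)

index = faiss.IndexFlatIP(dim)  # inner product equals cosine after normalization
index.add(doc_vecs)

# Embed the query the same way, then fetch the top-5 nearest documents.
query = rng.standard_normal((1, dim)).astype("float32")
faiss.normalize_L2(query)
scores, doc_ids = index.search(query, 5)
print(doc_ids[0], scores[0])
```

At production scale a flat index would be replaced by an approximate structure, trading a little recall for much lower latency.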

Office

This line of work brings state-of-the-art Turing language understanding and generation models to the Office suite, including Word, Teams, and Outlook.

Azure

Turing models work seamlessly with Azure technologies across the areas described below.

Enterprise Semantic Search

The team develops text-based precision rankers built on learned query and document encodings, enabling fast and effective search over information at enterprise scale. A dual-encoder sketch of this pattern follows.
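In the sketch below, queries and documents are embedded independently and ranked by cosine similarity. The team's precision rankers and encoders are internal, so the public sentence encoder and toy documents here are stand-in assumptions.

```python
# A minimal sketch of dual-encoder semantic search with learned query and
# document encodings. A public sentence encoder stands in for the internal one.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

docs = [
    "Expense reports are due by the fifth business day of each month.",
    "The VPN client can be installed from the IT self-service portal.",
    "Conference rooms are booked through the facilities calendar.",
]
doc_vecs = encoder.encode(docs, convert_to_tensor=True, normalize_embeddings=True)

query_vec = encoder.encode(
    "how do I set up remote access", convert_to_tensor=True, normalize_embeddings=True
)

# Cosine similarity between the query and every document; highest score wins.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for rank, idx in enumerate(scores.argsort(descending=True).tolist()):
    print(rank + 1, round(float(scores[idx]), 3), docs[idx])
```

Because document encodings can be precomputed and indexed offline, only the query needs to be embedded at search time, which is what keeps enterprise-scale retrieval fast.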

Multimodal Representations

The Turing team is also developing multimodal representation learning frameworks that learn universal language-vision representations for tasks such as text-image QnA.
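As a rough illustration of a shared language-vision space, the sketch below scores candidate captions against an image. The Turing multimodal framework is internal; OpenAI's public CLIP checkpoint and the local image path are stand-in assumptions.

```python
# A minimal sketch of a joint language-vision embedding space: text and image
# are encoded into one space and compared. Public CLIP stands in here.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # hypothetical local image file
candidates = ["a dog on a beach", "a city skyline at night", "a bowl of fruit"]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidates, probs[0].tolist())))
```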

Training Optimization

We work closely with the platform team to optimize training for large-scale models, and we use a variety of techniques to reduce their inference latency, for instance by working with the ONNX team.
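One common latency-reduction pattern the ONNX collaboration points at is exporting a trained model to the ONNX format and serving it with ONNX Runtime's optimized executor. The toy model, shapes, and file name below are assumptions; the sketch only shows the export-and-serve pattern, not the Turing production pipeline.

```python
# A minimal sketch of exporting a PyTorch model to ONNX and running it with
# ONNX Runtime to reduce inference latency. The model is a toy stand-in.
import numpy as np
import torch
import onnxruntime as ort

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 2)
).eval()
example = torch.randn(1, 128)

# Export to the ONNX format with a dynamic batch dimension.
torch.onnx.export(
    model, example, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# Run inference through ONNX Runtime's optimized graph executor.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"input": np.random.randn(4, 128).astype("float32")})[0]
print(logits.shape)  # (4, 2)
```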