Microsoft Megatron-Turing NLG 530B
The World’s Largest and Most Powerful Generative Language Model
Microsoft Turing Academic Program (MS-TAP)
Share in Microsoft’s advances with Microsoft’s Turing family of natural language models in a responsible manner
Microsoft Turing Universal Language Representation Model
Microsoft T-ULRv2 tops XTREME leaderboard
Microsoft Turing Universal Language Representation model, T-ULRv5, tops XTREME leaderboard and trains 100x faster
Our latest Turing universal language representation model (T-ULRv5), created by Microsoft, is once again state of the art and at the top of the Google XTREME public leaderboard.
Inside Microsoft's Project Turing, the team that's quietly reinventing how it develops advanced AI to move faster and take on rivals like Google
Since 2017, Microsoft has pursued this goal under the name Project Turing, a team that's tasked with building these large language models and figuring out how they can be used in the company's vast suite of products.
Microsoft Turing-NLG: A 17-billion-parameter language model by Microsoft
Turing Natural Language Generation (T-NLG) is a 17-billion-parameter language model by Microsoft that outperforms the state of the art on many downstream NLP tasks. We present a demo of the model, including its freeform generation, question answering, and summarization capabilities, to academics for feedback and research purposes.
Generate Chatbot training data with QBox — powered by Microsoft Turing NLG
One of the primary challenges when building any kind of chatbot is producing or obtaining high-quality, diversified training data. The training data that you use across your model’s intents will determine how readily your model picks up on a real user’s true intent when exposed to queries it’s never seen before. So no matter what chatbot framework you’re using (e.g. Microsoft LUIS, IBM Watson, etc.), having high-quality training data is a must.
Microsoft trains world’s largest Transformer language model
Microsoft AI & Research today shared what it calls the largest Transformer-based language generation model ever and open-sourced a deep learning library named DeepSpeed to make distributed training of large models easier.
Assistive AI Makes Replying Easier – Microsoft Research
Microsoft’s mission is to empower every person and organization to achieve more. So, we are constantly looking for opportunities to simplify workflows and save people time and effort. Sending replies to email or chat messages is a common activity, and people spend a considerable amount of time on it.
Microsoft details how it improved Bing’s autosuggest recommendations with AI
Earlier in the year, Microsoft detailed the ways Bing has benefited from AI at Scale, an initiative to apply large-scale AI and supercomputing to language processing across Microsoft’s apps, services, and managed products. AI at Scale chiefly bolstered the search engine’s ability to directly answer questions and generate image captions, but in a blog post today, Microsoft says it has led to Bing improvements in things like autocomplete suggestions.
Better Document Previews using the Microsoft Turing Model for Natural Language Representations
Knowledge workers spend close to 20% of their time searching for and gathering information. When using document management systems such as Microsoft OneDrive and SharePoint, people find themselves looking at directories full of documents. Interacting with such a list of documents can be time-consuming without a mechanism for previewing them.
Here's how Microsoft is looking to make search smarter and more natural
Microsoft is continuing to evolve its unified Microsoft Search service. The latest pieces it is integrating into Microsoft Search involve its 'Project Turing' deep-learning work, as well as advances it is making around semantic meaning and intent.
Microsoft Turing Academic Program Workshop
The MS-TAP program has given academics a special opportunity to conduct research on the Turing language model family with Microsoft. The workshop brings together the research institutions that participated in the program over the last two years to exchange the findings and learnings from their research on large language models. We will also discuss the best ways to make Microsoft’s AI models more accessible to academics in an accountable and responsible manner, so that together we can build the foundational research community that drives the state of the art in large language model research and innovation.
Turing Bletchley: A Universal Image Language Representation model by Microsoft
Today, the Microsoft Turing team is thrilled to introduce Turing Bletchley, a 2.5-billion-parameter Universal Image Language Representation model (T-UILR) that can perform image-language tasks in 94 languages. T-Bletchley has an image encoder and a universal language encoder that vectorize input images and text respectively, so that semantically similar images and texts align with each other. This model shows uniquely powerful capabilities and represents a groundbreaking advance in image-language understanding.
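T-Bletchley’s internals are not public, but the dual-encoder idea described above can be sketched in miniature: two separate encoders project each modality into a shared embedding space, embeddings are L2-normalized, and dot products then act as cosine similarities for matching images to text. In this hypothetical NumPy sketch, random linear projections stand in for the real transformer encoders, and the dimensions are made up for illustration.

```python
import numpy as np

# Toy stand-ins for the two encoders. In T-Bletchley these are large
# transformer networks; here, random linear projections (hypothetical)
# map each modality into a shared 8-dimensional embedding space.
rng = np.random.default_rng(0)
IMG_DIM, TXT_DIM, EMB_DIM = 16, 12, 8
W_img = rng.standard_normal((IMG_DIM, EMB_DIM))
W_txt = rng.standard_normal((TXT_DIM, EMB_DIM))

def encode(x, W):
    """Project input features and L2-normalize, so that dot products
    between embeddings are cosine similarities."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A batch of 4 image feature vectors and their 4 paired text feature vectors.
images = rng.standard_normal((4, IMG_DIM))
texts = rng.standard_normal((4, TXT_DIM))

img_emb = encode(images, W_img)
txt_emb = encode(texts, W_txt)

# Similarity matrix: entry (i, j) scores image i against text j.
# Contrastive training would push the diagonal (true pairs) above
# the off-diagonal entries; here the projections are untrained.
sim = img_emb @ txt_emb.T
best_text_for_each_image = sim.argmax(axis=1)
print(sim.shape, best_text_for_each_image)
```

With trained encoders, retrieving the caption for an image reduces to this same `argmax` over cosine similarities, which is what makes the shared embedding space useful across 94 languages: any language encoder output lands in the same space as the image embeddings.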
Microsoft Turing Universal Language Representation model, T-ULRv6, tops both XTREME and GLUE leaderboards with a single model
Today, we are thrilled to announce that the most recent addition to our Turing Universal Language Representation Model family (T-ULRv6) has achieved first position on both the Google XTREME and GLUE leaderboards, demonstrating that a single multilingual model can achieve state-of-the-art performance on both English and multilingual understanding tasks.
Introducing Turing Image Super Resolution: AI powered image enhancements for Microsoft Edge and Bing Maps
We can all probably think of a time when we had the perfect image (a prized portrait of a family member to be framed, or the best screenshot to illustrate a point in a presentation) but could not use it because the quality was too low. Using the power of deep learning, the Microsoft Turing team has built a new model to help in these scenarios.
Scaling Up Multilingual Evaluation Workshop
Massively Multilingual Language Models (MMLMs) are trained on around 100 of the world’s languages; however, most existing multilingual NLP benchmarks provide evaluation data in only a handful of them. The languages present in evaluation benchmarks are usually high-resource and largely belong to the Indo-European language family. This makes current multilingual evaluation unreliable and fails to provide a full picture of MMLM performance across the linguistic landscape.