
How Microsoft 365 Copilot works



Get an inside look at how large language models (LLMs) work when you use them with your data in Microsoft 365. See what makes this possible and how the process respects your privacy and keeps your data safe with Microsoft 365 Copilot. The LLMs used for Microsoft 365 Copilot are hosted in the Microsoft Cloud and are not trained on your organizational data. Copilot automatically inherits your organization’s security, compliance, and privacy policies set in Microsoft 365.

 

 

 


 

Join Mary David Pasch to go inside the mechanics of AI-powered Microsoft 365 Copilot capabilities, what they do, and how they work.

 

 

Generate relevant responses.

 

 


 

Large language models work with the information provided in prompts through Microsoft 365 Copilot orchestration, along with the information you have access to via Microsoft Search. With each new conversation, chat history is wiped clean, and your interactions do not train the large language model.

 

 

See what you missed.

 

 


 

An example of Business Chat shows how Copilot works with Microsoft 365 apps, using only the information you have access to via Microsoft Search and the Microsoft Graph to quickly catch you up and save you time.

 

 

Generate new content fast.

 

 


 

Example of how Microsoft 365 Copilot orchestration scans relevant inputs from OneNote, Word, or PowerPoint files to help generate a great starting point.

 

 

Watch our video here.

 

 

 

 

 

QUICK LINKS:

 

 

— Introduction
— Large Language Models (LLMs)
— Write prompts to include additional info
— Core components of Copilot
— Default prompt
— Microsoft 365 Copilot orchestrator
— Generate content
— Wrap up

 

 

Link References:

 

 

For more on how Microsoft operates its AI services, check out the Responsible AI principles from Microsoft.

 

 

Unfamiliar with Microsoft Mechanics?

 

 

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

 

To keep getting this insider knowledge, join us on social media.

 

Video Transcript:

 

 

-Have you ever wanted to know how large language models work when you connect them to the data in your organization? At Microsoft, we recently demonstrated Microsoft 365 Copilot, which transforms how we work by leveraging large language models that interact with your organizational data. Copilot works alongside you. For example, in Word, Copilot can easily write an entirely new document, like a business proposal, using content from your existing files. Or in Outlook, based on the content you select, Copilot can compose your email replies for you. In PowerPoint, you can transform your written content into a visually beautiful presentation with the click of a button. In Teams, Copilot can generate meeting summaries along with the follow-up actions that were discussed. Or while using Business Chat in Microsoft Teams, it can help you catch up on something you may have missed, pulling together information from multiple sources to bring you up to speed. If you’re wondering how large language models know what they know in these scenarios, let me break down the mechanics of what makes this possible, how the process respects your privacy, and how it keeps your data safe with Microsoft 365 Copilot.

 

 

 

-First, let’s look at where large language models, or LLMs, get their knowledge. LLMs are trained on massive amounts of public data, including books, articles, and websites to learn language, context and meaning. You can interact with large language models using natural language with what’s called a prompt. A prompt is typically a statement or question. When you ask a question in the prompt, the LLM generates a response based on its public data training and understanding of context, which can come in part from how you phrase your prompt. For example, you might give it more details to generate a response. As you continue to ask questions and get responses, the large language model is temporarily getting more context. Your full conversation gets sent with each subsequent prompt, so the LLM can generate relevant responses as you chat with it. It’s processing natural language and referring to its knowledge like we would in conversation. A key difference is that it only remembers the conversation while it’s in that conversation. The chat history is wiped clean with each new conversation. And it won’t use the knowledge from your conversations and interactions to train the model.
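
To make that concrete, here is a minimal sketch, not Copilot’s actual code, of how a chat session typically behaves: the full conversation is resent with every prompt, and a new conversation starts with no memory. The call_llm function below is a hypothetical placeholder for an LLM endpoint.

def call_llm(messages):
    # Hypothetical stand-in for an LLM endpoint; it only reports how much
    # conversational context it was sent.
    return f"(response generated from {len(messages)} messages of context)"

class Conversation:
    def __init__(self):
        self.history = []  # exists only for the lifetime of this conversation

    def ask(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        reply = call_llm(self.history)  # the whole history goes with each turn
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation()
chat.ask("Summarize what a prompt is.")
chat.ask("Now give me an example.")      # builds on this session's context

new_chat = Conversation()                # chat history wiped clean
new_chat.ask("Now give me an example.")  # no earlier context to draw on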

 

 

 

-That said, you can also write your prompt to include additional information, which the large language model will refer to as it generates its response. This is how you can give the LLM a little more knowledge it might need to answer your question. I’ll show you how this works using Microsoft Bing Chat, a GPT-enabled public service that has no connection to your organization’s data. First, I’ll ask it a completely random question that it can’t answer: "What color shirt am I wearing today?" And it responds intelligently. It knows what a shirt is, but it can’t see me, so it responds accordingly, which is an accurate response.

 

 

 

-Let me ask the question again, this time including some additional information in my prompt. I’ll describe my outfit. Now you can see it responds using the information I gave it, which is more in line with what I was looking for. And now that it has the context, I can keep asking it related questions like, “What color shoes?” Again, that’s because the prompt builds with each interaction. And to prove that the large language model doesn’t retain the information, I’ll start a new chat session and ask it again, “What color shirt am I wearing today?” And now it again says, “I can’t see you, so I don’t know.” It knew what shirt I was wearing before only because I temporarily provided that additional limited information.
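
As a rough sketch of what happened in that demo, the extra detail lives only inside the prompt text itself. The ask helper below is hypothetical and stands in for any public chat service.

def ask(prompt):
    # Hypothetical stand-in for a public chat service; echoes what it was given.
    return f"(answer based only on: {prompt!r})"

question = "What color shirt am I wearing today?"

ask(question)  # no context, so the honest answer is "I can't see you"

# Supplying the detail in the prompt is what lets it answer; that detail is
# not retained once this conversation ends.
outfit = "I'm wearing a blue shirt, dark jeans, and brown shoes."
ask(f"{outfit}\n\n{question}")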

 

 

 

-In this new session, it no longer has access to what I said before, and I never told it my shirt color, so it doesn’t know. So how does this work in the context of Microsoft 365 Copilot? In my previous example using Bing Chat, I provided more information and context in the prompt to give the LLM what it needed to generate the right response. This is what the Microsoft 365 Copilot system does automatically for you as you interact across different app experiences. To do this, Copilot has several core components.

 

 

 

-First off are the large language models, which are hosted in the Microsoft Cloud via the Azure OpenAI Service. To be clear, Copilot is not calling the public OpenAI service that powers ChatGPT. Microsoft 365 Copilot uses its own private instances of the large language models. Next, Microsoft 365 Copilot has a powerful orchestration engine that I’ll explain in a moment. Copilot capabilities are surfaced in and work with Microsoft 365 apps. Microsoft Search is used for information retrieval to feed prompts, like I did in the example before where information I provided in my prompt was used to help generate an answer. Then there’s the Microsoft Graph, which has long been foundational to Microsoft 365 and includes additional information about the relationships and activities across your organization’s data. The Copilot system respects per-user access permissions to any content and Graph information it retrieves. This is important because Microsoft 365 Copilot will only generate responses based on information that you have explicit permission to access.
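
Here is a conceptual sketch of that permission point. The function and data below are hypothetical placeholders, not real Microsoft Search or Microsoft Graph APIs; the idea is simply that retrieval runs as the signed-in user, so only content that user can already open ever reaches the prompt.

def search_organization(query, user):
    # Hypothetical permission-trimmed search over organizational content.
    all_hits = [
        {"title": "March planning.pptx", "allowed_readers": {"kat", "mona"}},
        {"title": "Executive comp review.xlsx", "allowed_readers": {"hr-team"}},
    ]
    # Security trimming: drop anything the requesting user cannot access.
    return [hit for hit in all_hits if user in hit["allowed_readers"]]

results = search_organization("Fabrikam", user="kat")
# Only documents Kat already has permission to read come back, so the LLM
# can only be grounded on content she is allowed to see.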

 

 

 

-Additionally, Microsoft 365 Copilot has its own default prompt with responsible rules for interaction. This includes things like where to search to find the right information for a prompt, for example, calling the Microsoft Graph to gain more context, like your recent documents or messages. It also covers the style and tone of the response, such as being informative in style and positive in tone, as well as the different approaches Copilot can take to gather information for the prompt. For example, it can choose to iterate on a few different searches until there is enough information to generate a good response. Copilot knows how it should cite its sources and, of course, follows responsible AI practices, such as ensuring harmful content is not included in generated responses. Importantly, this default prompt gets appended to your prompt whenever you interact with Copilot.
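
To illustrate, here is a simplified sketch of a default prompt riding along with a user prompt. The rule text paraphrases the behaviors described above and is not Microsoft’s actual default prompt; build_prompt is a hypothetical helper.

DEFAULT_PROMPT = (
    "Ground your answer in the retrieved organizational content. "
    "Be informative in style and positive in tone. "
    "Cite the source of each piece of information you use. "
    "Do not include harmful content in the response."
)

def build_prompt(user_prompt, retrieved_context):
    # The default rules are appended on every interaction.
    return f"{DEFAULT_PROMPT}\n\nContext:\n{retrieved_context}\n\nUser: {user_prompt}"

print(build_prompt("Did anything happen yesterday with Fabrikam?",
                   "Email from Mona: the review date moved."))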

 

 

 

-The Microsoft 365 Copilot orchestrator combines the default prompt with your prompt, plus the additional information it has gathered, to form one long prompt that it presents to the LLM in order to generate a response. Now let’s go back to the example you saw earlier in Microsoft Teams where a user asked, "Did anything happen yesterday with Fabrikam?" Copilot didn’t just send that question or prompt directly to the large language model. Instead, Copilot knew that it needed more knowledge and context, so using clues from the user’s question, like Fabrikam, it inferred that it needed to search content sources private to the organization. The Copilot orchestrator searched the Microsoft Graph for activities, ensuring it respected the user’s permissions and access to information, in this case for the user Kat. It found the email thread from Mona that Kat received, activities in the Project Checklist and March planning presentation, which are files that Kat had access to, as well as the sharing action where the final contract was sent to Fabrikam for review, again where Kat was part of the share activity. And Copilot cited each source of information so Kat could easily validate the response.
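
Putting those pieces together, the flow looks roughly like the sketch below. Every function is a hypothetical placeholder rather than a real Copilot or Graph API; it only illustrates the order of operations: retrieve with permissions, assemble one long prompt, generate, and cite.

def search_graph(query, user):
    # Hypothetical permission-trimmed retrieval over mail, files, and sharing
    # activity relevant to the query, run as the signed-in user.
    return [
        {"source": "Email from Mona", "text": "Fabrikam asked to move the review."},
        {"source": "March planning.pptx", "text": "Fabrikam launch slipped to April."},
        {"source": "Contract sharing activity", "text": "Final contract sent to Fabrikam."},
    ]

def call_llm(full_prompt):
    # Hypothetical stand-in for Copilot's private LLM instance.
    return "Yesterday Mona flagged a date change and the final contract went out. [1][2][3]"

def orchestrate(user_prompt, user, default_prompt):
    # 1. Use clues in the prompt (e.g. "Fabrikam") to gather grounding data.
    hits = search_graph(user_prompt, user)
    context = "\n".join(f"[{i + 1}] {h['source']}: {h['text']}" for i, h in enumerate(hits))
    # 2. Form one long prompt: default rules + retrieved context + the question.
    full_prompt = f"{default_prompt}\n\nContext:\n{context}\n\nUser: {user_prompt}"
    # 3. Generate the response and hand back the sources for citation.
    return call_llm(full_prompt), [h["source"] for h in hits]

answer, citations = orchestrate(
    "Did anything happen yesterday with Fabrikam?",
    user="kat",
    default_prompt="Ground answers in the provided context and cite sources.",
)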

 

 

 

-These are all individual steps that Kat could have done manually, like searching her inbox for emails from Mona, looking at recent project file activities in SharePoint, or reading the sharing notifications sent to Fabrikam for the contract. Copilot removed the tedium of performing these steps manually and formulated a natural, easy-to-follow, and concise response in a single step. So that’s how Business Chat with Copilot works.

 

 

 

-Now, in the examples I showed you earlier, you also saw how Microsoft 365 Copilot can help save you time in the apps you’re working in by generating content. In fact, let’s go back to the Copilot in Word example to explain how that worked. Microsoft 365 Copilot can help generate a draft proposal by using content you’ve been working on, for example, in OneNote or other documents that you have access to, like Word or PowerPoint files. Here we combine the large language model’s training on how a proposal document is structured and written with Microsoft 365 Copilot orchestration, which scans and takes relevant inputs from the additional documents you’ve selected, adding that information to the prompt.
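
As a rough sketch of that grounding step, assume the selected documents have already been read into plain text. The helper below is hypothetical; it only shows how relevant excerpts get folded into the prompt before generation.

def draft_proposal_prompt(topic, source_texts):
    # Hypothetical: pull lines related to the topic from the selected
    # documents and fold them into one grounded prompt.
    excerpts = [
        line for text in source_texts
        for line in text.splitlines()
        if topic.lower() in line.lower()
    ]
    return (
        f"Write a first-draft business proposal about {topic}, "
        "grounded only in these notes:\n" + "\n".join(excerpts)
    )

notes = "Fabrikam rollout notes\nFabrikam wants a Q3 pilot\nBudget still TBD"
prompt = draft_proposal_prompt("Fabrikam", [notes])
# This grounded prompt, not your files themselves, is what the LLM sees,
# and it is not retained or used to train the model.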

 

 

 

-The LLM is then able to generate an entirely new proposal document with the additional information from those files, providing a first draft that you can use to save time and quickly get started. And just like with the Business Chat example, the important thing to remember here is that the enterprise data used to generate informed responses is only present as part of a prompt to the large language model. These prompts are not retained by the large language models nor used to train them, and all retrieved information is based on the individual data access and permissions you have while using Copilot.

 

 

 

-So hopefully that explains how Copilot capabilities in Microsoft 365 work. For more information on how Microsoft operates its AI services, check out aka.ms/MicrosoftResponsibleAI. Please keep checking back to Microsoft Mechanics for the latest in tech updates, and thanks for watching.

 
