This week Microsoft launched the product that had already been expected by those who’ve been reading my newsletter. Yes, Microsoft Agent 365 is now officially a thing!
But what is it exactly? You could of course read the MS product marketing stories. Or the extensive documentation pages. Me, I decided to try something different. At the same time as Ignite 2025 keynote started, I gave Claude Code a task to build me an Agent 365 FAQ site. Using language that addresses the customer’s perspective, rather than that of the service provider.
It took 2 hours for Claude to plow through the MS websites I gave it and to turn that information into a categorized Frequently Asked Questions website. The result was pretty good, actually. Good enough that I will surely reference this myself when trying to figure out Agent 365. Have a look at the A365 FAQ:
Let’s put the short description here in the newsletter, too:
Microsoft Agent 365 is described as Microsoft’s control plane for AI agents. It acts as a centralized platform to register, monitor, secure, and govern all AI agents across an organization—whether they’re built by Microsoft, third parties, or internally. The idea is to give IT and security teams the same level of oversight for AI agents as they have for human users, since these agents can access data, perform actions, and make decisions. Agent 365 integrates with services like Entra (identity), Defender (security), and Purview (compliance) to provide identity management, threat protection, and data governance for agents.
Rather than being yet another Copilot agent, or a collection of those, Agent 365 is actually the infrastructure for eventually using AI agents for real work at scale. “Eventually”, because the journey is longer than the vendors would like to admit. Yet I see this as a positive step along that journey. Much better than some of the earlier missteps.
Welcome to per-agent licensing
The reason why Agent 365 is such a big deal for Microsoft is of course in the business opportunity it aims to capture. Last week, Satya Nadella appeared in the Dwarkesh podcast and talked about all things AI. That episode in its 1.5h glory is worth listening/watching if you want to understand the strategic thinking behind Agent 365 and how Microsoft positions itself in the (currently massively overvalued) AI market.
No matter what customers, employees or reporters think about the direction Microsoft has been going in for the past 2 years, the perspectives Satya shared actually made me believe that it will work out in the end. Even when the current GenAI 1.0 bubble blows, I bet MS will be in a strong position. In fact, it looks to me like they’re already positioning themselves for the post-bubble era. With full visibility into the horrific financials of OpenAI, MS has the data points needed to plan for the long-term success. Even if short-term stock valuation cannot be ignored.
One specific thing that Satya talked about in the podcast is worth illustrating with a picture: the future of Microsoft’s business model is going to be per-user and per-agent licensing. This is what the “post-SaaS” world looks like from Redmond’s point of view.

How do you then get to actually charging for agents, which today seem to be just popping up everywhere with no one explicitly buying them? You need to follow similar principles as with apps. This slide from “Best practices to secure and govern low code agents, apps, and flows” session at Ignite 2025 tells the exact same story we heard before regarding Power Apps:
Some Power Apps are merely a quick template splashed on top of a SharePoint List, used for a limited time for some specific need. Others are highly critical line-of-business apps that run customer-facing processes, like with many apps that have the Dynamics 365 icon on them.
Some apps are worth paying more attention to - and paying more for the services associated with them. The needs for services around “proper” AI agents that perform similar tasks as what human workers do today with D365 apps are where the enterprise money can be found. In Microsoft’s vision, these are even more deeply intertwined because they think AI agents will also use apps.
Satya said how the digital workers will have very similar needs as human workers: identity, computer, tools. Instead of simply being ephemeral chats with an AI bot, the agents will need to do much more than produce text responses with their tokens. AI labs have the LLMs, sure — but a model is not enough. To handle more complex tasks and to securely collaborate with other agents, MS envisions the agents will use Windows 365 computers, authenticate with their Entra Agent ID, and use an A365 edition of Office tools like Teams and Outlook. Oh, and IT needs to keep track of what the agents are doing.
A future like this will change how organizations budget for their IT spend in a profound way. Instead of simply counting the employees that need a Microsoft 365 license, the new variable will be “digital headcount”. You can find a deeper dive into the commercial impact in my brand new blog, The Licensing Guide. It’s a site where I’ll be posting all the wonderful Microsoft licensing related material that couldn’t fit into a newsletter like this.
Okay, so what’s the cost of these A365 licenses for agents then? Sorry, we don’t know yet. We only know that the licenses are real and that you can already get them in trial for a Frontier-enabled tenant. When and how you need to assign them to AI agents remains to be seen.

Terms of Service prompt when signing up for Agent 365 trial: 25 licenses for “agent instances only”.
It’s a good start
Looking at what has been shared about Agent 365 during the Ignite 2025 week, Microsoft is obviously doubling down on its existing strengths. Instead of being something futuristic and “AGI” style, A365 is all about convincing customers that they can count on MS services for agents — just like they did with human users.
The “Introducing Microsoft Agent 365” video couldn’t be more safety-oriented:
You've built systems that empower, protect, and unify.
Now, manage agents the same way.
No need to reinvent. No need to rebuild.
Extend what works.
Your infrastructure, your apps, your protections.
Familiar, tailored, unified.
Agents under control.
Innovation unleashed.
What will you empower?
“In a sea of constant AI change, you are safe with Microsoft.” That’s not from the video, but it might as well be. And I totally get why MS have chosen to position themselves like this. Because they ARE like this. A 50-year-old corporation, sometimes described as the most resilient IT company in the world. There’s a reason why hardly any startups choose Microsoft as their platform, and why most bigger customers do.
In the aforementioned podcast, on many occasions Satya had to explain to the younger techbros interviewing him why a frontier model alone won’t change the world. And even if it eventually does, it won’t happen overnight. Microsoft has been through the on-prem → cloud transformation and succeeded brilliantly in it. They know the hybrid reality of business customers out there, even though it may not sound like it when watching the keynotes that focus mostly on the latest smoke + mirrors tech. This is just the strange duality of the Microsoft ecosystem.
Agent 365, as it is described today during the first launch, is a masterful play in combining all the strengths of established MS services and serving them to the AI agent audience. Not only are they selling the idea to end-customers that have to buy the A365 licenses. An equally important audience is the agent developers out there who must find a way to address the security and governance concerns of enterprise customers, if they ever want to get beyond the PoC stage. When (not if) the VC funding for AI startups runs out, the ones left standing will look for an established delivery channel through which they can try to ship their IPR in the form of agent products.

“All your agents can be enabled for Agent 365” slide from the A365 pitch.
By demonstrating their readiness for offering a control plane for AI agents not just from Microsoft’s own platform but any product that supports their SDK, Microsoft is positioning themselves as the guardians of the enterprise. “Agents are coming, and here’s how you can be ready for it.”
The inconvenient truth about agents
Once the dust from Ignite 2025 settles, customers and partners will be back at their offices, asking “okay, now what?” They’ll have a vague idea about A365 offering all sorts of security and compliance tools for agents. If they’re knowledgeable enough about what LLMs are and what challenges remain in products built on them, they’ll hopefully still have plenty of concerns about deploying AI agents in the real world.
While Microsoft is promising a control plane for agents, they are not promising that anything running within Agent 365 would be secure. It’s up to the customer to do something with the technical guardrails and analytics that the toolkit provides. Yes, putting on a helmet before jumping on a motorcycle lessens the risk of fatal injury. No, it still doesn’t make it as safe as a bus ride.
One key element of what Satya talked about and what wasn’t yet part of the Agent 365 offering described at Ignite 2025 was computer use for any agent. Instead, MS launched Project Opal that sounds to me like an early version of what should eventually become a part of the A365 package. In short, it’s a way for AI to open a Windows 365 cloud PC for agents and click on things inside that PC, like browser and Office tools.
Upon seeing what the preview for Frontier customers looks like, I was greeted with this warning:

Computer Use comes with significant security and privacy risks. Both errors in judgment by the AI and the presence of malicious or confusing instructions on web pages, desktops, or other operating environments which the AI encounters may cause it to execute commands you or others do not intend, which could compromise the security of your or other users' browsers, computers, and any accounts to which AI has access, including personal, financial, or enterprise systems. Taking appropriate measures to address these risks is recommended. Ensure that you fully understand what resources Opal may access.
What MS is talking about here is prompt injection. Something I have been also talking about in this newsletter. Because it’s an unsolved problem that’s unlike anything we’ve encountered before in the realm of business applications.
You might ask “how is this different from RPA bots?” It basically comes down to why Copilot Studio agents are different from Power Automate cloud flows: the AI version is non-deterministic. It’s up to the LLM to figure out what it does with the task given to it and all the tools + data it has access to. Therein lies the massive risk. Not just that the agent couldn’t complete the task — it’s about all the possible things that it could do with those privileges.
LLMs cannot distinguish between instructions and data. Everyone would love that they’d become smart enough to do it, but it’s not in their DNA. It’s a bit like hoping humans could fly, but nature didn’t design us that way. Sure, our DNA has enabled humans to build airplanes and autopilots for them. But that airplane in turn is not able to go and build new things. It’s not a higher form of intelligence, let alone anything generic that could be applied outside its natural domain. LLMs are highly unlikely to directly evolve into AGI, just like humans are unlikely to suddenly grow wings.
This gets us to the problem of AI agents. Less than half a year after ChatGPT was launched, I blogged about how developers were plugging GPT-4 into a web browser for automating the computer. My speculation about this being what the tech giants would be quickly pursuing was pretty accurate. What I didn’t know back then was how big of a challenge it would be to secure such agents.
Simon Willison has described this as the lethal trifecta for AI agents. If your AI agent has 1) access to private data, 2) ability to externally communicate, and 3) exposure to untrusted content — you are screwed. You cannot protect yourself from malicious actors taking over your AI agent and doing bad things with it. The problem of course is that most business scenarios for useful Copilot agents tick all three boxes.
There’s a great set of slides on this topic to give you a better understanding of the risks. I’ll end this issue with what Simon says, as I think that emphasizes the point I made in the subject line:
As a user of these systems you need to understand this issue. The LLM vendors are not going to save us! We need to avoid the lethal trifecta combination of tools ourselves to stay safe.






