A major difference between vibe coding an app on your own vs. building a solution on top of Microsoft Power Platform comes down to the question of “how do you manage it?” This could be about app deployment, identity and access management, data connectivity - or monitoring the app.
The power of the low-code platform - the major factor that has set MS apart from smaller players in the market - has never been about what Power Apps specifically do. Let’s face it: they are not the prettiest, not the most enjoyable to build, not the most flexible to tweak and extend with code, not even the most AI-ready low-code apps.
No, it’s always been about the platform + the ecosystem. For customers who are already running production workloads in the Microsoft cloud (be it M365 productivity apps or Azure infra/serverless solutions), adding Power Apps and Power Automate into the mix has been the easy and logical choice. Because of the compatibility of admin/governance tools and familiarity of the experiences.
How does the assumption of easy administration then hold true in the real world? It’s tough for me to say what the low-code platform experience outside MS products would be like, given that I’ve mostly spent my time with customers who primarily use the Power Platform. Still, I can point out some gotchas in areas that most likely aren’t meeting the expectations set by MS product marketing materials. One of them is monitoring.
Hello? Is this thing on?
Apps do need monitoring, to identify when they are failing to complete the tasks they’ve been designed for, or when their performance drops in a significant, user-impacting way. Thanks to the evergreen platform that continuously receives updates from the vendor, things can break even when no one touched anything on the customer or app developer side.
A much bigger area of monitoring concern is automations, though. These are things that run in the background, with no user-facing surface that would immediately signal that something is off. If you just assume that a Power Automate cloud flow you activated will keep running flawlessly every day from here to infinity, the chances of you being wrong one day are quite high.
Just last week, a customer asked me why certain automated data creation for a manufacturing project work management system wasn’t happening anymore. No solution updates had been pushed, but the automation had simply stopped. In the end, the only fix was to switch the cloud flow off and on again. And like magic, things started to work again. Something in the MS cloud had simply messed up the trigger of the underlying Azure Logic App that cloud flows are built on. While it shouldn’t happen - yes, it does.
“Maybe AI could help us here?” That would be lovely, but I haven’t managed to get many benefits from Copilot in the Power Automate context yet. Back when I was testing the Copilot sidecar that opened automatically when I visited the Automation Center, the responses weren’t great. The way Copilot chat presented the flow run telemetry data ranged from useless to misleading.
Luckily, we don’t have to rely on an AI chat UI for this. Classic dashboards are a better fit for showing such data and making it easy for humans to spot anomalies that require drilling deeper into the metrics. Power Platform Monitor is the umbrella feature name that is now presented prominently in the new Power Platform Admin Center (PPAC).

“Track overall performance, and follow alerts you’ve set” with the Monitor tab in Power Platform Admin Center
Monitor also exists as a tab in the Power Apps Maker portal, while on the Power Automate Maker portal the data is under Automation Center. Today, I’ll focus on the tenant-level Power Platform Admin Center.
“Real-time” data reimagined
When the Power Platform Monitor feature went GA in August 2025, here’s how Microsoft’s blog post advertised the capabilities Monitor offers:
Ever had a critical app crash at the worst possible moment, or a vital flow suddenly stop sending emails? With Monitor, you don’t have to wait for end-users to complain. Now generally available and enabled by default, no setup required, Monitor gives makers and admins real-time visibility, powerful metrics, and actionable recommendations to keep apps and automations running smoothly.
Wow, sounds amazing! Are you saying that I could open the Monitor dashboards and immediately see what’s happening with my apps and flows in the tenant right now?
That’s indeed what it sounds like. But it’s not how Monitor works. In reality, the data we get in the dashboards, charts, and tiles of the Monitor UI in PPAC is aggregated at a daily level. Before the day ends, you’ll have no visibility at all into what has happened. The charts show historical data from the last 30 days, aggregated per day.

“Real-time” data - once per day, aggregated.
In practice, it looks like the delay isn’t merely 24 hours. I’ve been keeping an eye on the data that my own tenant’s PPAC provides me. On the morning of writing this, I came to the office, looked at the Monitor screens at 09:00, and found that the timestamp of the last data update was over 54 hours old. Instead of knowing how my flows are doing this Friday morning, I see aggregate data from Wednesday.

My tenant’s Monitor data for a specific cloud flow, with data last updated 54h ago.
In practice, what’s happening in the UI is possibly time zone related confusion. Counting back 54 hours from Friday 09:00 lands on Wednesday 03:00, and since I’m in UTC+3, that 3 AM timestamp may simply be a UTC offset that the charts handle in a funny way. Maybe the Wednesday 27th data has actually been collected for the entire day. Still, that’s over a day’s worth of delay for my morning status check.
The promise of “you don’t have to wait for the end-users to complain” feels downright false here. If there are indeed systems that the daily business operations of the organization rely on, learning about their issues more than a day after the fact gives plenty of time for end-users to notice before the Power Platform admin does.
Let the computer alert YOU!
In the end, it shouldn’t be up to any individual like me to remember to open the PPAC dashboards on a regular basis. Luckily, the Microsoft product team also understands this. Which is why immediately after the Monitor GA announcement there was a preview launched for Monitor Alerts:
Stop chasing problems and start preventing them. Monitor Alerts flips the script for Power Platform admins. Instead of manually checking dashboards, you can define custom health thresholds and get notifications when apps or flows start slipping. No guesswork, no endless refresh marathons—just proactive control.
Very well, let’s be proactive and take control of our flows! Now, since we already know that the Alerts work on data aggregated to a daily level, we need to immediately disqualify any alert scenario where an individual flow run should trigger a notification. We need to be comfortable with an outage of at least one day and the business impact resulting from that. Sometimes, that’s a reasonable SLA expectation for low-code automations, though.
The available metrics for Power Automate in Monitor Alerts are duration (in seconds) and success rate. (Run count seems to be missing, even though it’s in the docs.) Of these, success rate is the closest to telling admins that an operation the business expected to complete did not do so. Let’s therefore use it to configure a strict rule where anything below a 98% success rate for flow runs in this environment gives us an alert:

Adding a rule for Monitor Alert, to be triggered when flow success rate is under 98%.
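To make the threshold concrete, here’s a minimal Python sketch of the kind of daily evaluation such a rule implies. The run records and field names are hypothetical and only illustrate the aggregation logic, not how the Monitor service itself is implemented.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical run record, purely for illustrating the daily aggregation logic.
@dataclass
class FlowRun:
    run_date: date
    succeeded: bool

SUCCESS_RATE_THRESHOLD = 0.98  # the 98% rule configured above

def daily_success_rate(runs: list[FlowRun], day: date) -> float | None:
    """Success rate for one day's runs; None if nothing ran that day."""
    day_runs = [r for r in runs if r.run_date == day]
    if not day_runs:
        return None
    return sum(r.succeeded for r in day_runs) / len(day_runs)

def should_alert(runs: list[FlowRun], day: date) -> bool:
    # With daily aggregation, a failure at 00:05 can only surface once the
    # whole day has been rolled up - hence the one-day-plus delay.
    rate = daily_success_rate(runs, day)
    return rate is not None and rate < SUCCESS_RATE_THRESHOLD

# Example: one success and one failure on the same day -> 50% -> alert.
runs = [FlowRun(date(2025, 9, 24), True), FlowRun(date(2025, 9, 24), False)]
print(should_alert(runs, date(2025, 9, 24)))  # True
```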
Since the Alert feature is configured at the environment level, any flow failing in that environment will notify the recipients by email. It would be nice to be able to limit this to specific flows only, but that isn’t what Monitor currently offers. Nor do we have any channel available apart from the trusty ol’ email messages for the time being.
What do these emails then look like? They are restricted to the template shown below, so any instructions you wish to provide to the recipients will have to go into the alert name field. Again, I hope more configurability options are on the roadmap.

Every single Power Platform Monitor alert message in my mailbox so far.
Great, the alerts are working, so we’re all good and proactive now! Well, actually, no. You’re missing out on most of the failed flow runs. That’s what my own tests have shown, at least.
The above image from my Outlook inbox shows all the Monitor messages I have received. I configured the Monitor Alert rule on September 19th and got the first notification email immediately after saving the rule. Since then, there have been three alert emails. Which is not great, considering that one specific flow (seen in the Monitor chart screenshot above) has been below the success rate threshold every day.
I should have received ten alert emails. I have received three. So, the success rate of the Monitor Alert feature is currently 30%. That’s not exactly what you want to see. Either the system should fail every time due to a persistent issue, or it should send an alert every time the threshold is breached (perhaps even when it isn’t). Because reliability is what monitoring is all about.
The cloud runs on trust
I get it, the Alerts are a preview feature - unlike the main Monitor that is now Generally Available. It isn’t production ready yet; rather, the feature is out now so it can get exposure to real-world use cases and data. Still, it is already gated behind Managed Environments, meaning it is intended for customers who are paying for Power Apps & Power Automate premium licenses.
NOTE: while I was finalizing this newsletter issue, the whole Monitoring system for flows stopped working. As of right now, I haven’t received new data for 6 days in my tenant. Searching the PPAC’s known issues page, I discovered “Issue ID 5544878: Possible missing data in flow analytics reports”. The issue remains active and was opened on Aug 25th. However, I did still receive data on Aug 29th, so I’m not entirely sure whether this is the root cause of everything I’ve written about here.
The problem I have is with how all of this is marketed. “When incidents hit, every minute counts.” That kind of text, together with the misleading use of the term “real-time”, gives customers reason to expect more. When the vendor claims that their platform has built-in monitoring capabilities that meet real-life business requirements - that impacts everyone. Customers assume everything is under control, partners don’t invest time in additional monitoring, community members refer everyone to MS docs / marketing materials, and no room is left in the market for monitoring solutions from ISVs.
Yeah, I know all those posts in the Power Platform blog are massaged by Copilot these days. After all, using AI is no longer optional for MS employees. What happens in practice is that the LLM beefs up the marketing message and presents claims that people who know the real product capabilities might not have written themselves. As a result, we now need to treat MS product team blogs as optimized for persuasion rather than truth, whereas earlier they used to be more factual and technically oriented.
To understand why the newer, bigger LLMs continue to hallucinate this way, I recommend checking out this study: Machine Bullshit: Characterizing the Emergent Disregard for Truth in Large Language Models. This tendency of sweet-talking AI tools to ignore any negative signals and emphasize the positive qualities can be as big an issue as pure hallucinations.
Combine that with advertising business models, and the future of online content seems pretty dim for customers who want to know what products are actually capable of. I strongly recommend checking out the Beware of the Google AI salesman and its cronies article if you still think a data giant like Google would be able to summarize its search results accurately.
Why not a DIY Monitor on Azure?
Okay, if the productized monitoring features for low-code solutions aren’t meeting your demands, why not just leverage pro-code tooling? Technically, the Azure Application Insights integration for Power Automate has been available for two years already. So, all of us Power Platform professionals surely are using it all the time, right? Right…?

Just because logging is technically possible doesn’t mean it is available when you need it…
Yeah, I have activated the data export to Application Insights a few times over the years. I have even written a blog post about model-driven Power Apps telemetry data collection back in 2021. Am I comfortable diving into the data and tools on the Azure side? No - unfortunately, there haven’t yet been situations where customers were interested in paying me for hands-on monitoring work.
Microsoft has documented steps on how to set up Application Insights with Power Automate. Getting the data flowing into Azure is quite simple. What happens next is the part where considerably more cognitive effort is needed. Because of the generic nature of the service, we don’t have a way to just say “show me my cloud flows”. Instead, the documentation guides you to filter by flow GUID in the Operation Name property.

Trying to filter Azure Application Insights metrics data to Power Automate cloud flows… by GUIDs.
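For those who do want to go down this route, here’s a minimal sketch of what querying that exported telemetry programmatically could look like, using the azure-monitor-query Python SDK against a workspace-based Application Insights resource. The workspace ID and flow GUID are placeholders, and the assumption that flow runs land in the AppRequests table with the flow’s GUID in the OperationName column is mine - verify the table and column names against your own data before relying on this.

```python
# A sketch, not a production monitor: pull cloud flow run telemetry that
# Power Automate exports to Application Insights and compute a daily
# success rate for one flow. Assumes a workspace-based App Insights resource,
# flow runs in the AppRequests table, and the flow GUID in OperationName.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder
FLOW_GUID = "<cloud-flow-guid>"                  # placeholder, e.g. from the flow URL

KQL = f"""
AppRequests
| where OperationName contains "{FLOW_GUID}"
| summarize runs = count(), successes = countif(Success == true)
    by bin(TimeGenerated, 1d)
| extend success_rate = todouble(successes) / runs
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(days=7))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        # Column order follows the query: TimeGenerated, runs, successes, success_rate
        for day, runs, successes, rate in table.rows:
            flag = "ALERT" if rate < 0.98 else "ok"
            print(f"{day:%Y-%m-%d}: {successes}/{runs} succeeded ({rate:.1%}) {flag}")
```

From there, the same query could feed an Azure Monitor log search alert rule - which is where the ARM schema wrangling mentioned below comes into play.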
And so it goes. Azure Application Insights would surely be a powerful tool for those who are willing to invest some time (and Azure credits) in making it work for them. You could configure metric alert rules that trigger in the specific way your solution requires. Twisting the ARM JSON schema to produce the correct filter criteria probably wouldn’t be too difficult with GenAI tools to help you.
You just need to build it. Which is very different from the effortless visibility that the Monitor feature in Power Platform promises to deliver. That gets us to the core dilemma of choosing between a pro-code and a low-code approach. It’s never about what could be built but rather what will get done.
Treating every canvas app and cloud flow the same way you’d treat custom software developed by a team of professional engineers wouldn’t ever make sense. The lower the barrier for building something, the more things will be built. Then you’re left with the question: how are you going to govern a billion apps?
You need to find a balance between building things with Azure and consuming services from Power Platform. In a Microsoft cloud customer organization, there will typically be a huge number of flows for personal or team productivity that can’t realistically be maintained by anyone but the original maker. Even if some IT admin saw things failing in a tenant-level dashboard, who you gonna call? In most cases, no one.
Stepping up from that level, you often end up in the uncanny valley where the solution in question is neither a citizen-managed tool nor an IT-managed critical system. You need to ensure you’re not under-governing such apps and flows by leaving it all to a single individual. At the same time, over-engineering the requirements for giving important low-code solutions a green light in the tenant will guarantee that you won’t see many such solutions in use. And no business value from your licensing and other investments.
I’m known for openly pointing out the problems that customers may run into with Microsoft business apps technology. Yet that doesn’t mean I wouldn’t recommend using it. Quite the contrary. I write about these issues because I care about what people can achieve via low-code tools in the MS cloud. Even with all the magical vibe coding agents out there these days, I believe organizations need a level of control, visibility, security, reliability, and repeatability with their digital business tools that today’s AI cannot yet offer.
I hope that MS will keep investing in the productized admin tooling that makes it easy for customers to trust Power Platform. Even though Monitor today is not quite where it is advertised to be, I hope it will evolve into a layer that adds value on top of the raw telemetry data and Azure services. So that both makers and admins can focus their time and effort elsewhere.