Microsoft Fabric has revolutionized data management, processing, and use. Here are the top picks from the FabCon event.
Copilot was heavily promoted in almost every session last year, but at that time it still lacked functions and clear use cases. At this year’s conference, the main theme was clearly AI and Fabric data agents: almost every session demonstrated that Fabric is integrating agents into virtually all of its services.
Fabric’s data agents can be defined as AI-driven software with real-time access to data, able to operate independently based on that data. They are fast to deploy and require no specific configuration. For more precise control, consider switching to the codable AI agents found in AI Foundry and Visual Studio Code.
A highlight of the conference was the importance of providing instructions to the agents. In response, Microsoft recently updated Fabric’s data agent instruction view to support long sets of instructions of up to 15,000 characters. The product is now ready for organizations to start their PoC projects. If the knowledge base is in order, deploying the Fabric data agent is quick, and it is surprisingly good at retrieving and analyzing data with the right instructions! While data agents are not a replacement for daily reporting, they will most likely be a useful tool for ad hoc queries and for guiding users to the right report.
Are agents the future? Based on Microsoft’s efforts, this might well be the case. Fabric is gradually shaping up to be the foundation of organizations’ data platforms. The most common data platforms are supported through shortcuts or mirrors, which means existing data assets are usable in near real time in Fabric. The combination of data agents and mirrors is interesting: what about a near real-time mirrored shadow copy of your application database that provides instant access to Fabric data agent analyses without coding a separate agent?
Have you ever needed to update data directly from a Power BI report? This fantastic feature is now available. A more in-depth presentation of the Translytical Task Flow was given at FabCon. The aptly named feature combines transactions and analytics in one place: a value entered into the report is saved in the database and then returned to the report. This pattern can be used to add comments to a report, acknowledge alarms or build a simple forecasting system, among other things.
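The save-then-return pattern behind this can be sketched in plain Python. The sketch below uses sqlite3 as a stand-in for the warehouse; the table and function names are illustrative, not the Fabric or Power BI API:

```python
import sqlite3

def save_comment(conn, report_row_id, comment):
    """Persist a value entered in the report, then read it back for display."""
    conn.execute(
        "INSERT INTO report_comments (row_id, comment) VALUES (?, ?)",
        (report_row_id, comment),
    )
    conn.commit()
    # Read the stored value back so the report can refresh with it.
    cur = conn.execute(
        "SELECT comment FROM report_comments WHERE row_id = ?",
        (report_row_id,),
    )
    return cur.fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report_comments (row_id INTEGER, comment TEXT)")
print(save_comment(conn, 42, "Alarm acknowledged"))  # prints "Alarm acknowledged"
```

The key point is the round trip: the transactional write and the analytical read happen against the same store, so the report immediately reflects what the user entered.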
Report authors will receive new features as well. A browser-based Power BI development experience is coming soon. Developers can opt to stop using Power BI Desktop entirely and carry out the entire semantic model development process in a browser, including retrieving data sources with Power Query, defining relationships between tables and creating measures. This also allows Power BI solutions to be developed on non-Windows computers.
Developers can also take advantage of a feature that lets them reuse previously written DAX formulas. DAX user-defined functions (DAX UDFs) enable the creation of a measure library for an organization. Compared to calculation groups, DAX UDFs support parametrization, enabling more comprehensive, reusable functions. The feature is currently in preview and does not yet offer a centralized library from which functions can be selected into a semantic model, but it is worth investigating.
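The reuse that parametrization enables can be illustrated with an ordinary parametrized function. The following is an analogy in Python only, not DAX syntax: a calculation group resembles a fixed set of transformations, whereas a UDF-style function accepts parameters and covers many variants at once:

```python
# Illustrative analogy only - not DAX. One parametrized function replaces
# several fixed period-over-period calculation items.
def period_over_period(values, offset=1):
    """Return the change versus the value `offset` periods earlier."""
    return [None] * offset + [
        current - previous
        for previous, current in zip(values, values[offset:])
    ]

sales = [100, 110, 125, 120]
print(period_over_period(sales))     # [None, 10, 15, -5]
print(period_over_period(sales, 2))  # [None, None, 25, 10]
```

With the offset as a parameter, the same function serves month-over-month, year-over-year and any other shift, which is the kind of reuse a fixed calculation item cannot provide.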
Many new features, some small and some large, were presented at FabCon, some of them introduced in the sessions without much fanfare. One of these is soft delete, a sort of recycle bin for Fabric where all deleted items end up. Items can be restored within a specific time frame (7–30 days). From now on, you are safe even if you happen to delete a whole workspace.
Two interesting new apps were also presented at FabCon: Map and Graph Analytics. The Map application is used to create advanced maps, and Graph Analytics enables modeling and analyzing networks and relationships. Neither is a Power BI feature; rather, both are separate applications that expand Fabric.
During the Map presentation, delivery trucks moved on a map in real time, and bottlenecks were highlighted in red as they occurred. The Map application can use Lakehouse files or real-time data and supports large amounts of data.
Graph Analytics is queried with a new language called GQL (Graph Query Language). One use case is recommending products to customers based on the buying habits of similar customers. In banking, the application can be used to identify fraud by revealing networks of suspicious transactions. You can surely come up with your own use cases for Map and Graph Analytics.
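The recommendation idea can be illustrated without any graph engine. The sketch below uses plain Python sets; it is illustrative only, not GQL or the Graph Analytics API, and the customer and product data are invented:

```python
# Toy purchase graph: customer -> set of purchased products.
purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "keyboard"},
    "carol": {"monitor"},
}

def recommend(customer):
    """Suggest products bought by customers who share a purchase with `customer`."""
    mine = purchases[customer]
    similar = [c for c, items in purchases.items() if c != customer and items & mine]
    suggestions = set()
    for c in similar:
        suggestions |= purchases[c] - mine
    return suggestions

print(recommend("alice"))  # bob shares "laptop", so suggest {"keyboard"}
```

A graph query language expresses the same traversal declaratively (customer, through shared products, to other customers, to their products), which is what makes it a natural fit for recommendations and for tracing networks of suspicious transactions.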
Fabric offers even broader possibilities for data management and use. In particular, agents’ ability to process data without complex coding opens new doors for business users and developers. Power BI reports become more comprehensive and easier to develop. Microsoft’s focus on Fabric’s components demonstrates their intention to develop a more unified and flexible data platform.
Are you considering transitioning to Fabric and interested in discussing it further? We have taken several Fabric deployments into production and are happy to help.
What is Microsoft Fabric? Part 1: From the past to the present
What is Microsoft Fabric? Part 2: Technology
What is Microsoft Fabric? Part 3: Performance
What is Microsoft Fabric? Part 4: Licenses
What is Microsoft Fabric? Part 5: Fabric is Microsoft’s favorite child and it is developing rapidly
What is Microsoft Fabric? Part 6: Lakehouse integrates artificial intelligence and combines disparate data
A data expert building the future – Pinja’s road to the core of Microsoft Fabric
Pinja’s knowledge management and business intelligence services