In today’s data-driven world, organizations often struggle to balance agility and governance in their analytics workflows. When business teams rely on centralized resources, they frequently run into an “IT bottleneck”: a gap between the analysts who know the business and the engineers who manage the datasets. This challenge can lead to duplicated effort, inconsistent metrics, and frustrating delays.
In this post, we’ll explore a fictional case study of a manufacturing company that experiences this pain point. You’ll see how their reliance on a small, centralized data engineering team created inefficiencies, and how introducing the “Analytics Engineer” role helped bridge the gap. By empowering analysts with intermediate data engineering skills, they unlocked faster insights, reduced silos, and improved collaboration across teams.
Manufacturing Company Example
To start, consider a fictional manufacturing company that produces digital billboards. They employ a large community of Power BI developers throughout the organization (with “Data Analyst” titles) to create Power BI reports for their specific areas of the business.
In the IT department, a centralized team of data engineers creates and manages enterprise datasets, handling requests from analysts for new datasets as needed.
Here are some sample positions in the organizational structure:
- Lisa – Data Analyst on the Sales team
- Uses sales data to analyze product performance and identify customer trends
- Needs to aggregate raw sales order data to create customer profiles based on purchasing behavior
- Raj – Data Analyst on the Warranty team
- Uses claims data to analyze warranty trends and reduce warranty costs
- Needs to aggregate raw warranty claims with specific business logic to measure profitability metrics
- Maria – Data Analyst on the Engineering team
- Uses telemetry data from the field to analyze product reliability and improve product quality
- Needs to clean and standardize raw telemetry data to prepare it for analysis
- Tom – Data Analyst on the Manufacturing Operations team
- Uses production floor data to analyze the manufacturing process and improve efficiency
- Needs to transform raw work order data to create productivity metrics for different stations
- Alan – Data Engineer on the IT Data Engineering team
- Serves as a dedicated data engineer with specialized skills in data preparation/transformation
- Takes requests for new enterprise datasets from analysts across the organization

Note that the organization uses Microsoft Fabric for their data preparation and analytics, and they prepare each role through the following Microsoft certifications:
- PL-300 Power BI Data Analyst
- Data analysts take this to learn how to build semantic models and visualize data in Power BI
- DP-700 Fabric Data Engineer
- Data engineers take this to learn how to load and transform data with advanced cloud tools
Since the dedicated data engineers are the only ones with the advanced technical skills for data preparation, the Power BI developers are limited to creating Power BI reports and semantic models. This creates an issue known as the “IT bottleneck”, which we’ll explore next.
The IT Bottleneck
Since data engineers are in high demand (and expensive), the centralized data engineering team is relatively small compared to the vast network of Power BI developers. Thus, they are overwhelmed with a high volume of requests from the business, resulting in long lead times.

Rather than wait for their requests to be turned into enterprise datasets, many Power BI developers choose to implement their transformations and logic themselves in the Power Query Editor. This results in two main issues:
- Power Query transformations are less efficient than other tools (such as Spark notebooks and cloud pipelines), resulting in long refresh times and limited transformation capabilities
- Semantic models have limited capabilities for sharing data, resulting in data silos and duplicated efforts
We’ll dive into the second issue further with a little example.
Semantic Model Data Silos
When transformation logic is implemented in Power Query, it’s siloed inside the final semantic model. While it is possible to reference an existing semantic model from a new report, the reference provides only a read-only view of the model and its tables.
Suppose there’s a table in an existing semantic model with specific logic applied, and you want to take that final table and combine it with other datasets for another use case. You would either have to add your extra logic to the existing semantic model (which is tough if someone else owns it) or re-create the logic in your own semantic model.
In our organization example, most analysts have to re-create the logic in a new semantic model, which has the following issues:
- Duplicated effort is used to maintain the same logic in multiple places
- Wasted resources are used to perform the same transformations in multiple places
- Inconsistencies can arise between the different sources, resulting in metrics that contradict when they should match
We’ll demonstrate this with a quick example from our fictional organization.
Example: Telemetry Data Silos
In our fictional organization, Raj (from the Warranty team) wants to utilize the telemetry data to perform preventative maintenance on specific billboards in the field (to reduce warranty costs). Before he can use it though, there’s cleaning and standardization logic that needs to be applied to the data, which Maria already did for her engineering use cases.
Maria shared her semantic model with the cleaned telemetry data, but Raj needs to combine it with enterprise customer data for his use case (which can’t be done when referencing the existing semantic model). Instead, Raj has to create a new semantic model where he re-creates all the cleaning logic from Maria’s semantic model.

With this setup, Raj spends a lot of time re-creating the logic from Maria’s semantic model, and over time, users notice inconsistencies between Raj’s data and Maria’s data.
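The drift between Raj’s and Maria’s datasets can be illustrated with a minimal, hypothetical sketch. The cleaning rules and numbers below are illustrative, not from a real telemetry schema: each team maintains its own copy of the “same” cleaning logic, and a rule change applied in only one place makes the results disagree.

```python
# Hypothetical sketch of duplicated cleaning logic drifting apart.
# The rules (drop nulls, clamp outliers) are illustrative only.

def maria_clean(readings):
    """Engineering team's cleaning: drop null readings, clamp outliers at 120."""
    return [min(r, 120) for r in readings if r is not None]

def raj_clean(readings):
    """Warranty team's re-creation: same intent, but a later rule change
    (clamp at 125) was only applied to this copy."""
    return [min(r, 125) for r in readings if r is not None]

raw = [98, None, 130, 101]

print(maria_clean(raw))  # [98, 120, 101]
print(raj_clean(raw))    # [98, 125, 101]
# The two "cleaned" datasets now disagree on the same raw input,
# which is exactly the inconsistency users start to notice.
```

The problem isn’t that either function is wrong in isolation; it’s that the same business rule lives in two places with two owners.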
To alleviate these issues, we suggest a different approach to the environment setup, built around a new concept called the “Analytics Engineer”.
The Solution: Analytics Engineer
To address the gap between the data analyst and data engineer roles, Microsoft created the Fabric Analytics Engineer certification (DP-600) as a middle ground between the other two. The training introduces some of the data preparation tools at a less advanced level than the Fabric Data Engineer certification, allowing data analysts to learn the features without becoming fully dedicated data engineers.

In our fictional organization, some of the analysts could take Fabric Analytics Engineer training, preparing them to create their own datasets for their specialized use cases. The core enterprise datasets would still be maintained through the IT team, but the more specialized datasets could be maintained by individual data analysts (thus solving the IT bottleneck issue).
Example: Sharing Telemetry Data
Let’s revisit our example where Raj wanted to utilize the telemetry data. With the new structure, Maria implemented the data cleaning logic in a Fabric data warehouse instead of a semantic model. Now, Raj can copy the final tables into his own data warehouse and join in the customer data without duplicating the cleaning logic.
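Conceptually, Raj’s new workflow is a simple key-based join against the one shared cleaned table, analogous to a SQL JOIN in the warehouse. A minimal sketch follows; the table and column names (`billboard_id`, `avg_temp`, `customer`) are assumptions for illustration, not a real schema.

```python
# Hypothetical sketch: with a single shared cleaned table, Raj joins in
# customer data instead of re-implementing the cleaning logic himself.

cleaned_telemetry = [  # maintained once by Maria (e.g., a warehouse table)
    {"billboard_id": 1, "avg_temp": 98},
    {"billboard_id": 2, "avg_temp": 120},
]

customers = [  # enterprise customer dataset
    {"billboard_id": 1, "customer": "Acme"},
    {"billboard_id": 2, "customer": "Globex"},
]

# Key-based inner join on billboard_id
cust_by_id = {c["billboard_id"]: c["customer"] for c in customers}
joined = [
    {**t, "customer": cust_by_id[t["billboard_id"]]}
    for t in cleaned_telemetry
    if t["billboard_id"] in cust_by_id
]

print(joined)
```

Because the cleaning happens once upstream, any future rule change propagates to every consumer of the table automatically.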

The new setup not only makes it much easier for Raj to set up his analysis, it also ensures consistency in the data cleaning logic.
Conclusion
The IT bottleneck isn’t just an inconvenience—it’s a structural challenge that slows down decision-making and creates inefficiencies across the organization. By introducing the “Analytics Engineer” role, companies can empower analysts to handle specialized data preparation without overburdening the central data engineering team. This hybrid approach preserves governance, reduces duplication, and accelerates insights.
If your team struggles with long lead times and fragmented data processes, consider investing in the Fabric Analytics Engineer training. It’s not just a certification—it’s a strategy for bridging the gap between business and IT, unlocking agility without sacrificing control.
