
- Yes, You Can Use Fabric to Report on Fabric
By Luisa Torres Whether you're building data products or using them to power decisions, knowing how your data is structured makes all the difference. Microsoft Fabric makes it easy to build and scale powerful data environments. Keeping track of all those environments? Not always as simple. That’s why we started using Fabric to report on itself—unlocking clear, intuitive visibility into everything we’ve built. Reporting on Fabric is all about turning its metadata tools inward: surfacing the structure, flow, and relationships behind your data products in one centralized view. It’s a simple idea with big impact—and one that’s helped our team move faster, collaborate smarter, and stay organized at scale. Why Report on Microsoft Fabric? See How Data Flows, Visually: Tracking data from source through each transformation layer makes it easier to spot logic, dependencies, and opportunities for optimization. Stay Organized as You Scale: As your data footprint grows, this approach helps you quickly understand what you’ve built—without jumping between workspaces or asking around. Accelerate Team Onboarding: New and existing team members can more easily navigate the data landscape, reducing ramp-up time and improving collaboration. Make Smarter Decisions, Faster: Identify the most relevant tables and columns for any analysis with ease—boosting both speed and accuracy in data-driven work. This visibility becomes especially valuable across larger or cross-functional teams, where shared context leads to faster alignment, fewer miscommunications, and better business outcomes. How We Do It We tap into Fabric’s APIs using PySpark notebooks to dynamically pull detailed metadata from our workspaces and Lakehouses—everything from tables and schemas to columns and their characteristics. To make that metadata truly usable, we assign a unique key to each asset. Then, we visualize the data in Power BI. The result? 
A clean, interactive experience that helps users of all backgrounds navigate the system and understand how everything connects. Solving a Real-World Challenge As our own data ecosystem expanded, so did the complexity. Manually checking each Lakehouse to understand what tables existed—and how they were structured—became a time sink. We’d already built visualizations to help make sense of specific sources. So we thought: why not apply that same principle to the platform itself? That idea led us to develop a custom Fabric Data Explorer—a solution built on Fabric’s metadata that helps both technical and business users quickly see the shape and flow of our data environment. It’s helped us eliminate guesswork, avoid confusion, and collaborate with more clarity across every team. Final Thoughts Using Fabric to report on Fabric isn’t just an internal organization hack—it’s a strategic way to see your systems clearly, work smarter, and build with confidence. A clean, well-mapped foundation pays off. It makes maintenance easier, onboarding smoother, and evolution faster as your needs shift. Want to explore how Fabric can help your team gain visibility and scale smarter? 🛰️ Get looped in today.
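The "How We Do It" step above can be sketched in miniature. This is a hypothetical illustration: in a real PySpark notebook the rows would come back from Fabric's metadata APIs, while here they are hard-coded, and the `asset_key` helper is our own naming, not part of any Fabric SDK.

```python
# Sketch of the "unique key per asset" step. Records are hard-coded stand-ins
# for metadata that a Fabric notebook would pull via the REST APIs.
import hashlib

def asset_key(workspace: str, lakehouse: str, table: str, column: str = "") -> str:
    """Build a stable, unique key for an asset from its full path."""
    path = "/".join(p for p in (workspace, lakehouse, table, column) if p)
    return hashlib.sha1(path.encode("utf-8")).hexdigest()[:12]

# Example metadata rows, as they might look after the metadata pull.
rows = [
    {"workspace": "Sales", "lakehouse": "Bronze", "table": "orders"},
    {"workspace": "Sales", "lakehouse": "Bronze", "table": "orders", "column": "order_id"},
]

# Attach a key to every asset so tables and columns can be joined in Power BI.
keyed = [
    dict(r, key=asset_key(r["workspace"], r["lakehouse"], r["table"], r.get("column", "")))
    for r in rows
]
```

Because the key is derived from the asset's full path, re-running the pull yields the same keys, which keeps the Power BI model stable across refreshes.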
- Exciting News: Fivetran Acquires Census – What This Means for Fabric Customers
Delivering Unparalleled Value in Data Integration and Operational Syncing What’s better than having partners whose tools complement your services and help customers get more from their data? Having those partners join forces. Our longtime collaborators, Fivetran and Census, are officially becoming one: Fivetran has announced an agreement to acquire Census. Why This Acquisition is Transformative With the announcement that Fivetran—a leader in automated data pipelines—has agreed to acquire Census—a pioneer in reverse ETL technology—the future of data integration and operational syncing just got a major boost. Here’s why we’re excited: Unified Expertise: This acquisition combines Fivetran’s robust data ingestion capabilities with Census’s innovative operational syncing, creating a truly end-to-end data movement ecosystem. Enhanced Customer Value: Customers can now benefit from a more streamlined approach to transforming raw data into actionable insights—delivered directly within their operational tools. Stronger Partnerships: At Interloop, we’ve seen firsthand the value both platforms bring to the Microsoft Fabric ecosystem. Together, their combined potential is greater than the sum of their parts. Innovation in Data Usage: This collaboration lays the groundwork for the next wave of advancements in how businesses activate data across analytics and operations. Real-Life Impact: A Customer Success Story We’ve already supported several Fabric customers using both Fivetran and Census—and the results speak for themselves. One example: a complex manufacturing organization faced challenges integrating their ERP, marketing platform (HubSpot), and sales CRM (SugarCRM). Using Microsoft Fabric, Fivetran, and Census, we designed a unified solution. Fivetran automated the ingestion of all three systems into Fabric, and we built a cross-system Power BI dashboard. 
Census then synced that data back into their CRM, enabling the sales team to view both order details and marketing engagement scores directly on account records. The result? Sales reps are now fully informed at every touchpoint, and the dashboard reflects real-time data across all three systems. Now, imagine that capability—with the tools combined under one roof. Impact on Industry Trends and Market Growth This acquisition marks a major shift in how companies think about and implement data movement. Here’s what we see on the horizon: Accelerated Digital Transformation: As data syncing becomes simpler, more businesses will adopt agile, data-driven strategies. More Cohesive SaaS Ecosystems: The alignment between Fivetran and Census will encourage tighter integrations across tools and platforms—especially in Microsoft-centric environments. Reverse ETL Goes Mainstream: This deal validates the critical role of reverse ETL in modern data stacks, pushing adoption forward. Broader Market Accessibility: With more accessible, powerful tooling, companies of all sizes—not just enterprise—stand to benefit. Final Thoughts This partnership between Fivetran and Census is more than a business transaction—it’s a leap forward for the data industry. At Interloop, we’re excited by what this unlocks for our customers and for the broader Microsoft Fabric ecosystem. With best-in-class tools now united, and Interloop’s expertise layered on top, we’re more equipped than ever to help mid-market businesses harness the full power of their data. Want to understand how we work together to make your data system soar? Get looped in today .
- From Manual to Automated: How Census + SendGrid Streamlined Product Return Alerts
By Mclain Reese In today’s fast-paced business landscape, automation isn’t just nice to have — it’s expected. One of our clients needed a better way to keep customers in the loop when product returns were received. But the data required to trigger those notifications? It lived across two separate systems, each with its own structure and quirks. Our task was clear: automate email alerts with precision, pulling from disparate data sources and delivering accurate, real-time updates to both customers and internal teams. By combining Microsoft Fabric, Census and SendGrid, we built a streamlined, scalable solution that eliminated manual effort and improved the customer experience. The Challenge The client needed to send real-time email notifications when product returns were received. But the required information was fragmented across two different systems. To make this work, we needed to: Extract and align product return data from both sources Identify the correct customer recipient for each return CC appropriate internal stakeholders Ensure accuracy, reliability and scalability This wasn’t just a sync-and-send — it required precise data matching, transformation and a smart automation layer to ensure each alert fired correctly. The Solution: Powered by Fabric, Census and SendGrid We approached the project with a modern data activation mindset, leveraging the strength of the Microsoft Fabric ecosystem for data integration and curation, and using Census and SendGrid to drive activation and delivery. Data Integration + Curation with Fabric We started by extracting return data from both source systems into OneLake. Each system required its own extraction method. Once ingested, we cleaned and organized the data, then used Notebooks in Fabric to transform it into a curated Product Returns table — structured specifically for downstream use in Census. Triggering Alerts with Census Census enabled us to activate the curated data. 
We defined a sync-key that identified new product return records. When Census detected a new sync-key, it triggered the email alert workflow — no manual intervention required. Delivering Dynamic Emails with SendGrid With the sync event in motion, SendGrid took over. Using dynamic templates, we populated each message with personalized return details— including product info, recipient contact and internal CCs. SendGrid’s robust API allowed us to ensure reliable delivery and easy testing. The Outcome What was once a manual, error-prone task is now a seamless, automated workflow. Customers receive timely updates about their returns. The internal team is looped in automatically. And our client has reclaimed valuable time to focus on higher-impact work. This use case highlights how data activation tools like Census, paired with modern delivery platforms like SendGrid, can bring real-time visibility and operational efficiency to life — with the help of a strong integration layer like Fabric. Want to see more in action? Learn how we connect data to outcomes on the Interloop Resource Hub and get looped in today.
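The sync-key mechanics described above can be sketched in plain Python. This is an illustrative sketch only, not Census or SendGrid code; the field names, the `new_returns` helper, and the payload shape are our own assumptions.

```python
# Hypothetical sketch of the workflow: only rows with an unseen sync-key
# trigger an alert, and each alert is shaped for a dynamic email template.

def new_returns(curated_rows, already_synced_keys):
    """Return only the product-return rows whose sync-key is new."""
    return [r for r in curated_rows if r["sync_key"] not in already_synced_keys]

def build_alert(row):
    """Shape one return record into a dynamic-template payload (SendGrid-style)."""
    return {
        "to": row["customer_email"],
        "cc": row["internal_cc"],  # internal stakeholders looped in automatically
        "template_data": {"product": row["product"], "return_id": row["sync_key"]},
    }

# Illustrative rows from the curated Product Returns table.
rows = [
    {"sync_key": "R-100", "customer_email": "a@example.com",
     "internal_cc": ["ops@example.com"], "product": "Widget"},
    {"sync_key": "R-101", "customer_email": "b@example.com",
     "internal_cc": ["ops@example.com"], "product": "Gadget"},
]

# R-100 was already synced, so only R-101 fires an alert.
alerts = [build_alert(r) for r in new_returns(rows, already_synced_keys={"R-100"})]
```

In the real solution, Census performs the new-key detection against the curated table and SendGrid renders the template; the sketch just shows why a well-chosen sync-key means no manual intervention is needed.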
- Seamless Insights, Smarter Decisions: Unlock Embedded Power BI Dashboards
By Meaghan Frost Your Power BI dashboards are only as powerful as the people who use them. But if accessing insights means bouncing between platforms, logging into separate tools, or dealing with disconnected data, adoption suffers—and so does decision-making. Embedding Power BI reports directly within operational platforms like Salesforce or Sugar CRM removes those barriers, delivering analytics exactly where users need them. The result? Faster insights, fewer disruptions, and better business outcomes. Bring Insights to the Right Place Data is most valuable when it’s available at the right moment. With embedded Power BI dashboards, your teams can access critical insights without switching between tools or disrupting their workflow. Whether it’s a sales team reviewing real-time customer data inside a CRM or a finance team analyzing trends within an ERP, keeping insights in-platform eliminates unnecessary friction. Cost-Effective, No Extra Tools Needed Advanced analytics shouldn’t require an expanding tech stack. By embedding Power BI reports directly into the platforms your team already uses, you eliminate the need for additional analytics tools or redundant reporting systems. Instead of paying for separate visualization tools, teams can leverage Power BI’s robust capabilities exactly where they work—without added cost or complexity. One Dashboard, Multiple Destinations A common frustration for business analysts and data engineers is building dashboards only to be asked to recreate the same insights elsewhere. Embedding removes this redundancy by allowing teams to create one intelligence solution that seamlessly integrates into multiple environments. This ensures data consistency across applications while freeing up technical resources for higher-value work. Boost Productivity, Cut the Noise The less time employees spend searching for insights, the more time they have to act on them. 
Embedding Power BI reports eliminates context switching, reducing wasted effort and improving focus. Instead of logging into multiple platforms to piece together information, teams get a centralized, real-time view of the data they need—without distraction. Security That Adapts to Your Needs Embedding Power BI isn’t just about accessibility—it’s about smart security. With built-in filtering and row-level security, reports dynamically adjust based on user roles, permissions, and application context. That means employees, customers, and partners see only the data relevant to them—no custom-built dashboards, no unnecessary duplication, just efficient, secure data access. The Impact in Action One of our clients, a leading material handling equipment dealer, embedded Power BI reports directly within their Sugar CRM. This allowed their sales team to access real-time ERP data and customer insights inside Sugar CRM, eliminating the need to jump between platforms. The impact? Time savings for the sales team when preparing for customer meetings. Centralized data access that enabled management to make faster, more proactive decisions. Stronger adoption and engagement with analytics, since insights were available where the team already worked. Let’s Make Your Data Work for You Your teams shouldn’t have to chase insights—insights should come to them. Embedded Power BI dashboards bring real-time, actionable data directly into the platforms your business already relies on, transforming decision-making and operational efficiency. Want to explore what this could look like for your business? Get looped in today.
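For readers curious what sits behind an embedded report, here is a minimal sketch of the server-side embed-token step against the Power BI REST API's GenerateToken endpoint. The workspace and report IDs are placeholders, and a real application would first acquire an Azure AD access token before posting the request.

```python
# Sketch: build the GenerateToken request used to embed a report for app users.
# IDs are placeholders; authentication is omitted.

def embed_token_request(group_id: str, report_id: str):
    """Return the URL and body for a Power BI GenerateToken call."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/reports/{report_id}/GenerateToken")
    body = {"accessLevel": "View"}  # read-only embed for end users
    return url, body

url, body = embed_token_request("ws-123", "rpt-456")
# A real app would then POST this with an Azure AD bearer token, e.g.:
# requests.post(url, json=body, headers={"Authorization": f"Bearer {aad_token}"})
```

The returned embed token, scoped to a single report and access level, is what lets the CRM page render the report without giving users direct access to the Power BI workspace.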
- Boosting Your Data Strategy with Orbit Architecture: A Unified Approach to Seamless Solutions
In the rapidly advancing world of data operations, the ability to manage complex, outcome-driven data solutions is key. Interloop’s Orbit Architecture offers a flexible, resilient framework that helps DataOps engineers structure data to meet specific business needs without causing unwanted disruptions. Designed to work within the gold layer of the medallion architecture, Orbit Architecture enables each solution to stay on track, reducing risk and enhancing security. What is Orbit Architecture? Orbit Architecture is Interloop’s unique data design pattern, crafted to organize data within the "gold" layer of the medallion architecture. This layer serves as the trusted source of curated data that drives outcome-focused solutions. Much like the “separation of concerns” concept in computer science, Orbit Architecture establishes independent data structures and models within the gold layer, tailored to each specific business outcome. Each "orbit" within the gold layer becomes a specialized mission module, delivering consumption-ready, outcome-specific data to keep systems aligned and resilient. How It Works: A Use Case Imagine a scenario where your team manages two key solutions: a customer health dashboard and an operational Copilot. Both rely on data tables from the gold lakehouse. Now, your engineering team receives a request to adjust an ERP data table to better support the Copilot, and once the update is completed, the Copilot runs seamlessly. However, this update causes unintended issues on the customer health dashboard, which also uses the ERP table to link customer order data to sales opportunities. With Orbit Architecture, these disruptions are prevented. Each solution is contained within its own orbit, all connected to the gold data but insulated from one another. 
This structure allows each outcome to use the data model that best suits its needs—such as a dimensional model for Power BI or a graph model for an AI tool—without interfering with other data solutions. Key Benefits of Orbit Architecture 🚀 Flexibility Each orbit can be tailored to specific business requirements, giving DataOps engineers the freedom to use the data model (graph, dimensional, relational) that best supports each outcome, rather than forcing every solution into one mold. 🚀 Mitigated Downstream Effects Isolated data structures mean updates in one orbit don’t impact others, reducing the risk of unintended disruptions and data inconsistencies. 🚀 Supports OneLake Ideology Aligned with the OneLake approach, Orbit Architecture maximizes the value of a single golden version of data, allowing for analysis without duplication or data movement. 🚀 Metadata Management Orbit Architecture supports comprehensive metadata for data lineage, quality, and governance, helping to keep all systems organized and compliant. 🚀 Security and Access Control Robust security and access controls are available at each orbit level, allowing sensitive data to be protected without sacrificing accessibility for authorized users. The Wrap Up: Why Orbit Architecture Matters Orbit Architecture equips DataOps engineers with a powerful, structured framework for building flexible, outcome-specific solutions that are efficient and secure. By providing isolated data structures within the gold layer, Orbit Architecture allows engineers to mitigate downstream effects, ensure seamless collaboration, and make informed, data-driven decisions without duplication or interference. Get Looped In Looking to achieve more with your data? Get looped in with one of our data experts today to explore how Orbit Architecture can streamline your data systems and elevate your outcomes.
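The isolation idea can be shown with a toy sketch (all names here are hypothetical, not part of Fabric): two orbits consume the same gold table but keep independent, outcome-specific shapes, so reshaping one cannot break the other.

```python
# One shared gold table; each orbit derives its own outcome-specific model.

GOLD_ORDERS = [
    {"order_id": 1, "customer": "Acme", "amount": 250.0},
    {"order_id": 2, "customer": "Globex", "amount": 90.0},
]

def dashboard_orbit(gold_rows):
    """Customer-health orbit: flat, dimensional-style rows for Power BI."""
    return [{"customer": r["customer"], "order_total": r["amount"]} for r in gold_rows]

def copilot_orbit(gold_rows):
    """Copilot orbit: a lookup keyed by order id, shaped for an AI tool."""
    return {r["order_id"]: r for r in gold_rows}

# Both read the same golden data, but changing copilot_orbit's shape
# would not ripple into dashboard_orbit, and vice versa.
dash = dashboard_orbit(GOLD_ORDERS)
bot = copilot_orbit(GOLD_ORDERS)
```

The single `GOLD_ORDERS` source mirrors the OneLake "one golden copy" ideology: no duplication of the data itself, only independent views of it per outcome.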
- Copilot, Azure Studio, and Bot Framework: Navigating Microsoft's AI Capabilities
By Meaghan Frost Artificial Intelligence is everywhere. This is leading to new feature announcements, new capabilities, and... sometimes leading to confusion. There are so many terms and tools to know, after all! This blog is intended to help explain some of Microsoft's key AI platforms and tools, noting what's what and supporting you on your AI learning journey. Let's dive in... Copilot Studio Copilot Studio is a platform designed to extend and customize the capabilities of Microsoft 365 Copilot. It allows developers to create custom copilots tailored to specific business needs by integrating various data sources and actions. Key features include the ability to add knowledge from Dataverse tables, create topics with generative answers, and extend functionalities using plugins and connectors. Azure Studio Azure Studio is a comprehensive platform for developing, deploying, and managing AI applications. It brings together models, tools, services, and integrations necessary for AI development. Key features include drag-and-drop functionality, visual programming environments, prebuilt templates, and tools for advanced data integration and workflow orchestration. Bot Framework The Bot Framework is a set of tools and services for building conversational AI experiences. It includes Bot Framework Composer for designing bots, Bot Framework Skills for adding capabilities, and Power Automate cloud flows for integrating with other services. Key features include the ability to create and manage actions, define business rules, and integrate with various APIs. Key Features and Use Cases Copilot Studio : Key Features : Customizable copilots, integration with Dataverse , generative answers, plugins, and connectors. Use Cases : Enhancing productivity by creating domain-specific copilots, automating repetitive tasks, and providing contextual information to users. 
Azure Studio : Key Features : Drag-and-drop functionality, visual programming, prebuilt templates, advanced data integration, and workflow orchestration. Use Cases : Rapid prototyping, building and refining AI applications, deploying scalable AI solutions, and managing AI workflows. Bot Framework : Key Features : Bot design with Composer, adding skills, integrating with Power Automate, defining business rules, and API integration. Use Cases : Creating conversational AI experiences, automating customer support, integrating with enterprise systems, and enhancing user interactions. Empowering Developers and Data Engineers These tools empower developers and data engineers by simplifying the process of creating and deploying AI-driven applications. Copilot Studio allows developers to create custom copilots without deep technical knowledge, enabling them to focus on business-specific needs and integrate various data sources seamlessly. Azure Studio provides a comprehensive platform that supports the entire AI lifecycle, from model selection to deployment. Its user-friendly interface and prebuilt capabilities accelerate development and reduce the need for extensive coding. Bot Framework offers a robust set of tools for building conversational AI, allowing developers to create sophisticated bots with minimal effort. Its integration with Power Automate and other services streamlines the development process and enhances functionality. Supporting the Future of AI and Machine Learning These platforms are at the forefront of AI and machine learning innovation. In the next year, we can expect several advancements: Enhanced Integration : Improved integration between Copilot Studio, Azure Studio, and Bot Framework, allowing for more seamless workflows and data sharing. Advanced AI Capabilities : New AI models and tools that provide more accurate and context-aware responses, enhancing the overall user experience. 
Increased Automation : More automation features that reduce manual intervention and streamline processes, making it easier to deploy and manage AI applications. Preparing for the Future Businesses should start preparing by: Investing in Training : Ensuring that their teams are well-versed in using these platforms and understanding their capabilities. Exploring Use Cases : Identifying areas where AI can add value and experimenting with pilot projects to understand the potential benefits. Building a Data Strategy : Developing a robust data strategy to ensure that the necessary data is available and accessible for AI applications. By leveraging these tools and preparing for the future, businesses can stay ahead of the curve and harness the full potential of AI and machine learning. Get Looped In Trying to understand how to set your organization up for the best possible AI foundation? We have a team of experts to support with that. Let us know you'd like to connect, and we'll happily support you on anything Microsoft, data, or artificial intelligence. Get Looped In today.
- Plotting the Course: A Strategic Guide to DataOps Tools and Optimization
By Jordan Berry & Matt Poisson Modern businesses rely on data to drive decisions, optimize operations, and power AI initiatives. Whether it’s advanced analytics shaping strategic insights or Retrieval-Augmented Generation (RAG) AI unlocking new efficiencies, the foundation of success lies in unifying and managing data effectively. With a growing web of data sources, evolving formats, and shifting business requirements, staying ahead can feel overwhelming. Data Operations (DataOps) provides the framework and toolset to keep up with business demands, ensuring data products remain accurate, efficient, and scalable. This guide maps out key DataOps tools to help your team build a high-performance, AI-ready data strategy. Defining Data Operations (DataOps) When organizations first engage with data and AI, the initial steps are often straightforward: Build data pipelines Generate dashboards Develop AI models Deliver insights to stakeholders At first, these efforts seem successful—until operational realities set in. Data teams quickly find themselves in a cycle of maintaining, troubleshooting, and firefighting issues instead of driving innovation. Common challenges include: Broken pipelines disrupting analytics and AI models Stakeholders questioning data accuracy and freshness Scaling issues as data environments become more complex This is where DataOps tools become mission-critical. Key Capabilities of DataOps Tools An effective DataOps strategy enhances four critical areas : Productivity – Reduces time spent on repetitive tasks, allowing teams to focus on high-value, strategic initiatives. Monitoring – Provides real-time observability, ensuring data is fresh, accurate, and reliable—before stakeholders notice an issue. Orchestration – Ensures data workflows are executed in the correct order, preventing data mismatches, stale reports, and AI model failures. 
Optimization – Tracks query performance, system reliability, and cost efficiency, providing actionable insights to improve operations over time. When integrated effectively, these capabilities create a robust, AI-ready data ecosystem. Top DataOps Tools to Consider Selecting the right DataOps tool depends on your existing data infrastructure (Microsoft Fabric, Snowflake, Databricks, etc.), team size, and specific business needs. Consider factors like automation, monitoring capabilities, and scalability when evaluating solutions. Below are some of the leading DataOps tools that can enhance data management and analytics performance: Unravel Best for: Big Data Observability & Performance Optimization Unravel provides AI-driven monitoring and optimization for big data pipelines, helping teams improve performance and cost efficiency. ✅ Strong observability features ✅ AI-powered issue detection and resolution ➖ Initial setup and configuration may require time IBM Databand Best for: Enterprise Data Observability & Anomaly Detection IBM Databand automatically builds historical data baselines, detects anomalies, and streamlines data quality issue resolution. ✅ Comprehensive data observability ✅ Strong IBM ecosystem integration ➖ Can be complex to implement, with higher costs for smaller teams Monte Carlo Data Best for: Data Reliability & Automated Lineage Tracking Monte Carlo enhances data visibility, lineage tracking, and root-cause analysis, reducing downtime and errors. ✅ Automated data lineage tracking ✅ Effective root-cause analysis tools ➖ Customization may be needed for unique use cases Bigeye Best for: Real-Time Data Monitoring & Quality Control Bigeye offers real-time monitoring and anomaly detection to maintain high data accuracy. 
✅ Easy to use with strong automation features ✅ Proactive data health monitoring ➖ May require additional integrations for broader coverage Anomalo Best for: AI-Powered Data Quality Assurance Anomalo’s AI-driven monitoring ensures data integrity, compliance, and security. ✅ Automated quality assurance with proactive alerting ✅ Seamless integration with multiple data platforms ➖ May require fine-tuning for optimal performance Composable DataOps Best for: End-to-End Data Management & Integration Composable DataOps provides a full suite of data integration, discovery, and analytics tools. ✅ Strong integration capabilities ✅ Comprehensive data management features ➖ Initial setup complexity may be a consideration Interloop Mission Control Best for: Microsoft Fabric DataOps & AI-Ready Workflows Interloop’s Mission Control is purpose-built for Microsoft Fabric, enabling data teams to: ✅ Connect to 500+ sources ✅ Orchestrate and optimize entire data estates ✅ Monitor and ensure always-fresh, high-quality data ➖ Currently exclusive to Microsoft Fabric (not yet available for Snowflake or Databricks) Final Approach: Choosing the Right DataOps Tool The right DataOps strategy can transform your data management and AI workflows, making them more scalable, efficient, and resilient. Whether you’re optimizing for data reliability, automation, or workflow orchestration, these tools provide the foundation needed to navigate a data-driven future. Ready to chart your course? Get looped in today.
- 6 Ways to Ingest Data into Microsoft Fabric (And How to Choose)
By Tony Berry Data ingestion is a foundational component of any modern data architecture, enabling raw data to be collected, imported, and processed from a variety of source systems into a centralized lake or warehouse. Microsoft Fabric provides several robust options for data ingestion - each with its own strengths, limitations, and ideal use cases. Whether your team is focused on building pipelines, accelerating ETL, or supporting analytics and AI workflows, choosing the right approach is critical. Below is an overview of the six key ingestion methods available in Microsoft Fabric, with insights on when and why to use each. 1. Data Pipelines Data Pipelines in Microsoft Fabric offer a code-free or low-code experience for orchestrating ETL workflows. They enable users to copy data from source to destination while also incorporating additional steps, such as preparing environments, executing T-SQL, and running notebooks. Best for: Teams looking to automate and scale standard ETL processes with limited code requirements. ✅ Pros Code-Free or Low-Code – Accessible to broader teams. Workflow Automation – Supports scheduling and orchestration. High Scalability – Capable of managing large volumes of data. ⚠️ Cons Initial Setup – Requires some configuration. Performance Ceiling – May not match code-rich options for extremely high-throughput. Transformation Flexibility – More limited for advanced data shaping or normalization. Learn more about Data Pipelines 2. Dataflows Gen2 Dataflows Gen2 provides a visual, Power Query–based environment for data prep and transformation before ingestion. It’s designed for users who need custom, column-level transformations without writing code. Best for: Analysts and data engineers who need an easy way to prep and shape source data visually. ✅ Pros No-Code Interface – Great for data prep without engineering support. Custom Transformations – Modify schemas, create calculated fields, and shape datasets. 
Fabric Native – Fully integrated into the Fabric ecosystem. ⚠️ Cons Source Limitations – Bound to supported connectors. Less Suitable for Scale – Not optimized for massive or highly complex pipelines. Flexibility Constraints – May not support advanced ingestion logic. Learn more about Dataflows Gen2 3. PySpark and Python Notebooks For technically advanced teams, PySpark and Python notebooks offer unmatched flexibility and distributed processing capabilities. These notebooks are ideal for complex transformation pipelines, large datasets, and Spark-native workloads. Best for: Teams with Spark/Python expertise working on custom, high-scale data processing tasks. ✅ Pros High Performance – Leverages Spark’s distributed compute engine. Custom Logic – Supports complex ingestion and transformation workflows. Seamless Integration – Connects to other Fabric components for end-to-end pipelines. ⚠️ Cons High Complexity – Requires PySpark or Python expertise. Manual Management – Error handling, logging, and retries must be coded explicitly. Setup Overhead – More effort required than GUI-based tools. Learn more about Notebooks in Fabric 4. Copy Job (New) The new Copy Job tool uses a visual assistant to move data between cloud-based sources and sinks. It’s a simplified option for users who want to ingest data quickly without building a full pipeline. Best for: Users who need a fast, lightweight ingestion option with minimal setup. ✅ Pros User-Friendly Setup – Copy Assistant simplifies configuration. Connector Support – Works with a growing list of cloud sources. Composable – Can be included in broader pipeline workflows. ⚠️ Cons Gateway Restriction – On-premises to on-premises transfers require a shared gateway. Throughput Limitations – May not match dedicated tools like the COPY statement. Limited Connectors – Support for sources is still expanding. Learn more about Copy Job 5. 
COPY (Transact-SQL) The COPY statement is a high-throughput, T-SQL–driven method for ingesting data from Azure storage into Fabric. It’s best suited for engineering teams who need full control over ingestion behavior via SQL. Best for: Data teams already operating in a Transact-SQL environment and needing maximum performance. ✅ Pros Top-Tier Performance – Delivers the highest available ingestion throughput. Granular Control – Tune performance, map columns, and control ingestion behavior. ETL/ELT Integration – Works seamlessly with existing T-SQL logic. ⚠️ Cons Azure-Only Source Support – Currently limited to Azure storage accounts. Code Requirement – Requires SQL fluency; not ideal for all users. Learn more about the COPY Statement 6. External Tools (e.g., Fivetran) Fivetran offers a Managed Data Lake Service (MDLS) that automates ingestion and normalization into Fabric and OneLake. With 700+ connectors and prebuilt logic, it’s a strong option for teams that prioritize automation and governance. Best for: Organizations seeking fast, governed ingestion from a wide variety of data sources—without building it all themselves. ✅ Pros Fully Managed – Automates ingestion, normalization, compaction, and deduplication. Extensive Connectors – 700+ prebuilt source integrations. Fabric-Native – Supports OneLake and AI/analytics workloads. Governance Ready – Converts raw data into optimized formats (Delta Lake or Apache Iceberg). ⚠️ Cons Cost Consideration – Fivetran licensing adds to overall project cost. Learn more about Fivetran’s MDLS Final Thoughts Microsoft Fabric gives teams the flexibility to choose the right ingestion strategy based on technical maturity, scale, and existing architecture. Whether you're looking for no-code setup, full control via SQL or Spark, or fully managed ingestion, there’s an option designed to meet your needs. 
Understanding the trade-offs of each method - and aligning them with your team’s strengths - sets the foundation for scalable, insight-ready data infrastructure.

Need help choosing the right data ingestion path? At Interloop, we specialize in helping mid-market teams activate their data with clarity and confidence. Whether you're evaluating COPY vs. Pipelines, rolling out Fivetran, or just getting started with Microsoft Fabric - we can help you move from disconnected data to real-time insight.

Let’s get you from ingestion to action. Get looped in today.
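To make option 5 above a bit more concrete, here is a hedged sketch of composing a COPY statement from Python. The table name, storage URL, and SAS token are placeholders, and the exact `WITH` options available should be verified against the COPY (Transact-SQL) documentation for your warehouse before use.

```python
def build_copy_statement(table, source_url, sas_token, first_row=2):
    """Compose a T-SQL COPY INTO statement for a warehouse ingestion step.

    Illustrative only: all identifiers are placeholders, and option names
    should be checked against the current COPY (Transact-SQL) reference.
    """
    return (
        f"COPY INTO {table}\n"
        f"FROM '{source_url}'\n"
        "WITH (\n"
        "    FILE_TYPE = 'CSV',\n"
        f"    FIRSTROW = {first_row},  -- skip the header row\n"
        "    CREDENTIAL = (IDENTITY = 'Shared Access Signature', "
        f"SECRET = '{sas_token}')\n"
        ");"
    )

stmt = build_copy_statement(
    "dbo.Sales",
    "https://myaccount.blob.core.windows.net/ingest/sales/*.csv",
    "<sas-token>",
)
print(stmt)
# The resulting string would then be run through any T-SQL client
# connected to the warehouse, e.g. cursor.execute(stmt) via pyodbc.
```

Keeping the statement in code like this is one way to fold the COPY path into existing T-SQL-based ETL/ELT logic, which is exactly where the article positions it.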
- Now Boarding: Copilot in Microsoft Fabric Opens Access Across All SKUs
By Tony Berry

Microsoft just launched a game-changing update, and this one’s worth paying attention to. As of April 30, 2025, Copilot in Microsoft Fabric is now available across all paid Fabric Capacity SKUs, including the entry-level F2. In plain terms: AI is no longer a premium feature. It's now standard issue, and that shift is poised to reshape how teams across the business spectrum use Fabric.

This expansion marks a new era of accessibility and impact, democratizing AI across the platform and enabling more users - regardless of capacity tier - to optimize operations, drive smarter decisions, and move at the speed of insight.

Why This Matters
Until now, Copilot was available only at higher capacity tiers - creating a gap between what was possible for large enterprise users and everyone else. With Copilot now included in all Microsoft Fabric capacities, mid-market organizations, department-level teams, and data-driven leaders gain access to a powerful AI toolkit that was once out of reach.

This isn’t just a nice-to-have; it’s a performance multiplier. Whether you're integrating data, building models, or visualizing trends, Copilot accelerates your workflow through natural language prompts and intelligent automation.

Copilot in Action: Fabric Experiences That Are Now Open to All
Let’s explore what’s included. These Copilot experiences in Microsoft Fabric are now available to any user with an F2 capacity or higher:

- Data Factory – Automate integration and transformation with natural language input.
- Data Science – Build and deploy machine learning models more intuitively.
- Data Warehouse – Generate insights and manage warehousing tasks with AI support.
- Power BI – Create visuals and reports by simply describing what you need.
- Synapse – Seamlessly migrate from Azure Synapse with Copilot’s AI-powered assistance.
- Real-Time Analytics – Analyze streaming data instantly and make faster decisions.
- OneLake – Discover and manage data across the organization, unified under one lake.

Whether you're deep in data science or building dashboards in Power BI, Copilot meets you where you are and takes you further, faster.

Final Thoughts: More Than an Upgrade - A Strategic Advantage
At Interloop, we see this evolution as more than just a product update - it's a strategic move from Microsoft that closes the accessibility gap and levels the playing field for AI adoption. Copilot in Microsoft Fabric enables leaner teams to work smarter, respond faster, and extract more value from their data ecosystems.

In a landscape where every insight counts and every second matters, AI shouldn’t be a luxury. Now, it’s not.

Curious how Copilot could fit into your data strategy? Schedule a free Fabric briefing with one of our data experts at Interloop - we’ll help you explore what’s possible. Get looped in.
- From Insights to Action: How Data Activation Powers Copilots
In today’s fast-evolving AI landscape, data isn’t just something you analyze - it’s something you act on. That was the core message behind Interloop’s recent webinar, From Insights to AI: How Data Activation Powers Copilots, hosted alongside our industry partner, Census. Whether you’re experimenting with Microsoft Copilot or still exploring what’s possible, this session broke down the real-world mechanics of how to unlock AI’s full potential - by starting with the right data foundation.

Why It Matters
Many organizations are eager to embrace AI, but few have the infrastructure in place to truly capitalize on it. Without trusted, well-organized data, even the most advanced copilots will struggle to deliver useful results. This webinar focused on bridging that gap - demystifying what data activation really means and why it’s essential for powering AI that’s not just responsive, but operationally impactful.

Big Takeaways
Here are a few key insights that stood out:
- Generative AI is only as good as your data. You can’t build meaningful copilots on disjointed or siloed data. A modern, unified data platform - like Microsoft Fabric - is the launchpad for everything that follows.
- Reverse ETL (aka data activation) moves data from insight to action. Instead of housing insights in dashboards or BI tools, reverse ETL pushes data into tools where your teams actually work - CRMs, marketing platforms, support systems, and more.
- Copilots need business context, not just raw data. With Census, Interloop demonstrated how copilots can be fed curated, business-specific context that enables real automation - from triggering smart syncs to updating targets in Salesforce.
- Start small. Scale smart. Don’t wait for a moonshot AI use case. Start with a focused workflow - like automating sales target adjustments or enabling faster customer success responses - and grow from there.
The Demo That Brought It Together
A highlight of the session was a live walkthrough showing how data flows through Fabric into Census and finally into Copilot Studio, where natural language queries can drive real-time updates across systems. From increasing sales targets by 5% to syncing those changes back into a CRM, it’s no longer theory - it’s a repeatable pattern that mid-market companies can adopt right now.

Final Thought
AI isn’t just for tech giants. With the right foundation and partners, even lean teams can move fast, stay competitive, and let data power more than just dashboards. They can power decisions, systems - and yes, copilots.

Watch the Full Webinar Here: