Big Data

Demystifying data fabrics – bridging the gap between data sources and workloads

By Admin | November 19, 2025


    The term “data fabric” is used across the tech industry, yet its definition and implementation can vary. I have seen this across vendors: in autumn last year, British Telecom (BT) talked about their data fabric at an analyst event; meanwhile, in storage, NetApp has been re-orienting their brand to intelligent infrastructure but was previously using the term. Application platform vendor Appian has a data fabric product, and database provider MongoDB has also been talking about data fabrics and similar ideas. 

At its core, a data fabric is a unified architecture that abstracts and integrates disparate data sources to create a seamless data layer. The idea is to put a single, synchronized layer between those sources and everything that needs access to the data: your applications, your workloads and, increasingly, your AI algorithms or learning engines.

There are plenty of reasons to want such an overlay. The data fabric acts as a generalized integration layer: it plugs into different data sources and adds capabilities on top, such as keeping those sources synchronized, so that applications, workloads, and models can access them more easily.
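
As a rough illustration of that integration-layer idea, here is a minimal Python sketch; the names are hypothetical and not any vendor's API. A fabric object catalogs where each logical dataset lives and resolves reads on behalf of the workload, with in-memory dictionaries standing in for real connectors so the example runs on its own.

    # Minimal sketch of a data-fabric-style access layer (hypothetical names).
    # Each "source" is an in-memory dict; in practice these would be connectors
    # to databases, object stores, APIs, and so on.

    class DataFabric:
        def __init__(self):
            self._sources = {}   # source name -> connector (here: a dict)
            self._catalog = {}   # logical dataset -> (source name, key)

        def register_source(self, name, connector):
            """Plug a back-end data source into the fabric."""
            self._sources[name] = connector

        def map_dataset(self, dataset, source, key):
            """Record where a logical dataset actually lives."""
            self._catalog[dataset] = (source, key)

        def read(self, dataset):
            """Workloads ask for a dataset; the fabric resolves the source."""
            source, key = self._catalog[dataset]
            return self._sources[source][key]


    # Two disparate "sources" unified behind one layer.
    crm_db = {"customers": [{"id": 1, "name": "Acme"}]}
    lake = {"telemetry/2025-11": [{"device": "d1", "reading": 42}]}

    fabric = DataFabric()
    fabric.register_source("crm", crm_db)
    fabric.register_source("lake", lake)
    fabric.map_dataset("customers", "crm", "customers")
    fabric.map_dataset("telemetry", "lake", "telemetry/2025-11")

    print(fabric.read("customers"))   # workload never touches crm_db directly
    print(fabric.read("telemetry"))

The point of the sketch is the indirection: the workload depends on the catalog rather than on where each dataset happens to live, which is what leaves room for synchronization, caching, or governance to sit behind the same interface.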

    So far, so good. The challenge, however, is that we have a gap between the principle of a data fabric and its actual implementation. People are using the term to represent different things. To return to our four examples:

    • BT defines data fabric as a network-level overlay designed to optimize data transmission across long distances.
    • NetApp’s interpretation (even with the term intelligent data infrastructure) emphasizes storage efficiency and centralized management.
    • Appian positions its data fabric product as a tool for unifying data at the application layer, enabling faster development and customization of user-facing tools. 
    • MongoDB, like other structured data providers, considers data fabric principles in the context of data management infrastructure.

    How do we cut through all of this? One answer is to accept that we can approach it from multiple angles. You can talk about data fabric conceptually—recognizing the need to bring together data sources—but without overreaching. You don’t need a universal “uber-fabric” that covers absolutely everything. Instead, focus on the specific data you need to manage.

    If we rewind a couple of decades, we can see similarities with the principles of service-oriented architecture, which looked to decouple service provision from database systems. Back then, we discussed the difference between services, processes, and data. The same applies now: you can request a service or request data as a service, focusing on what’s needed for your workload. Create, read, update and delete remain the most straightforward of data services!
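
To make the "data as a service" point concrete, here is a minimal sketch of a CRUD-style data service in Python, with an in-memory dictionary standing in for the underlying store; the class and field names are illustrative only, not tied to any particular framework.

    # Minimal sketch of "data as a service": the four CRUD operations exposed
    # behind a service boundary. An in-memory dict stands in for the store;
    # a real service would add persistence, validation, and access control.

    class CustomerDataService:
        def __init__(self):
            self._store = {}
            self._next_id = 1

        def create(self, record):
            record_id = self._next_id
            self._store[record_id] = dict(record)
            self._next_id += 1
            return record_id

        def read(self, record_id):
            return self._store.get(record_id)

        def update(self, record_id, changes):
            if record_id in self._store:
                self._store[record_id].update(changes)
                return True
            return False

        def delete(self, record_id):
            return self._store.pop(record_id, None) is not None


    service = CustomerDataService()
    cid = service.create({"name": "Acme", "region": "UK"})
    service.update(cid, {"region": "EU"})
    print(service.read(cid))   # {'name': 'Acme', 'region': 'EU'}
    service.delete(cid)

The consuming workload only sees the service boundary; whether the data sits in one database or several is the service's problem, which is exactly the separation a data fabric generalizes.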

I am also reminded of the origins of network acceleration, which used caching to speed up data transfers by holding copies of data locally rather than repeatedly going back to the source. Akamai built its business on transferring unstructured content such as music and films efficiently over long distances.

    That’s not to suggest data fabrics are reinventing the wheel. We are in a different (cloud-based) world technologically; plus, they bring new aspects, not least around metadata management, lineage tracking, compliance and security features. These are especially critical for AI workloads, where data governance, quality and provenance directly impact model performance and trustworthiness.
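
As a small, hypothetical sketch of why that metadata matters for AI workloads, the snippet below attaches lineage and governance fields to a dataset so a training pipeline can check provenance before using it; the field names are assumptions, not any standard.

    # Hypothetical sketch: carry lineage and governance metadata alongside the
    # data itself, so a training pipeline can check provenance before using it.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DatasetWithLineage:
        name: str
        records: list
        source_system: str
        extracted_at: datetime
        transformations: list = field(default_factory=list)
        approved_for_training: bool = False

        def add_transformation(self, description):
            self.transformations.append(description)


    ds = DatasetWithLineage(
        name="churn_features",
        records=[{"customer": 1, "tenure_months": 18}],
        source_system="crm",
        extracted_at=datetime.now(timezone.utc),
    )
    ds.add_transformation("dropped rows with missing tenure")

    # A training job can refuse data whose provenance is unknown or unapproved.
    if not ds.approved_for_training:
        print(f"{ds.name}: lineage recorded from {ds.source_system}, "
              f"awaiting governance sign-off")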

    If you are considering deploying a data fabric, the best starting point is to think about what you want the data for. Not only will this help orient you towards what kind of data fabric might be the most appropriate, but this approach also helps avoid the trap of trying to manage all the data in the world. Instead, you can prioritize the most valuable subset of data and consider what level of data fabric works best for your needs:

    1. Network level: To integrate data across multi-cloud, on-premises, and edge environments.
    2. Infrastructure level: If your data is centralized with one storage vendor, focus on the storage layer to serve coherent data pools.
    3. Application level: To pull together disparate datasets for specific applications or platforms.

For example, in BT’s case, they’ve found internal value in using their data fabric to consolidate data from multiple sources. This reduces duplication and helps streamline operations, making data management more efficient. It’s clearly a useful tool for consolidating silos and supporting application rationalization.

In the end, data fabric isn’t a monolithic, one-size-fits-all solution. It’s a strategic conceptual layer, backed up by products and features, that you can apply where it makes the most sense to add flexibility and improve data delivery. Deploying a data fabric isn’t a “set it and forget it” exercise: it requires ongoing effort to scope, deploy, and maintain—not only the software itself but also the configuration and integration of data sources.

    While a data fabric can exist conceptually in multiple places, it’s important not to replicate delivery efforts unnecessarily. So, whether you’re pulling data together across the network, within infrastructure, or at the application level, the principles remain the same: use it where it’s most appropriate for your needs, and enable it to evolve with the data it serves.

    The post Demystifying data fabrics – bridging the gap between data sources and workloads appeared first on Gigaom.


