Caution, Data Leaders: Plan Carefully Before Rushing to Data Mesh

ABSTRACT: The data mesh is a new architectural paradigm in the modern data stack that promotes decentralized data management, development, and governance.

The rush to the data mesh has begun. Before diving in, data leaders should carefully evaluate the benefits, pitfalls, challenges, patterns, and anti-patterns. As of now, data mesh lacks a canonical reference implementation, which can result in enterprises adopting only selected elements of the data mesh principles. Because data mesh is a socio-technical paradigm, enterprises will inevitably face organizational and technical issues adopting it, and they should plan to evolve their data platforms through iterations and adapt accordingly. It is easy to be swayed by vendors promising data mesh capabilities; vendors are adept at promoting their products by adding trending buzzwords to offerings they developed for another market. This blog explores the challenges, best practices, and practical guidance for organizations looking to dip their toes into data mesh.

Challenges

The data mesh is a relatively new concept that is gaining traction and mindshare, with practical implementations still in the early stages. Adopting data mesh and moving away from a monolithic platform requires a mental shift: rewiring and rethinking the way data is organized and managed. Data mesh calls for far more coordination and communication among teams and requires a change in data management approaches for data integration, data processing, data governance, and data access.

Most enterprises have multiple tech stacks, and a large number of their existing data assets cannot be accessed or modified easily due to strict compliance, isolation, and organizational boundaries. Combined, these factors can make data mesh adoption difficult, more from an organizational perspective than a technical one.

The lack of purpose-built tools, libraries, and frameworks may require more manual development, testing, and deployment than traditional architectures. Data contracts are a very new concept with a learning curve for most data personas, especially those outside the software development world, and these contracts need to be integrated with data processing, testing, and validation tools.
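
To make the idea concrete, here is a minimal sketch of what a data contract might look like when expressed in plain Python, along with a simple batch validation check. The product name, fields, versioning scheme, and checks are illustrative assumptions, not an established standard or any specific vendor's format.

```python
# Hypothetical sketch: a data contract expressed as a plain Python structure,
# plus a simple validation pass over a batch of records.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DataContract:
    product: str                     # data product exposing this dataset
    version: str                     # bumped on breaking schema changes
    schema: Dict[str, type]          # expected column name -> Python type
    required: List[str] = field(default_factory=list)  # columns that must not be null


orders_contract = DataContract(
    product="sales.orders",
    version="1.2.0",
    schema={"order_id": str, "customer_id": str, "amount": float},
    required=["order_id", "customer_id"],
)


def validate_batch(rows: List[dict], contract: DataContract) -> List[str]:
    """Return a list of violations rather than letting bad data flow downstream."""
    violations = []
    for i, row in enumerate(rows):
        for column, expected_type in contract.schema.items():
            if column not in row:
                violations.append(f"row {i}: missing column '{column}'")
            elif row[column] is not None and not isinstance(row[column], expected_type):
                violations.append(f"row {i}: '{column}' is not {expected_type.__name__}")
        for column in contract.required:
            if row.get(column) is None:
                violations.append(f"row {i}: required column '{column}' is null")
    return violations


print(validate_batch([{"order_id": "o-1", "customer_id": None, "amount": 9.99}], orders_contract))
```

In practice, a contract like this would be version-controlled alongside the producing pipeline and enforced in CI as well as at runtime, so consumers can rely on the published schema.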

Coordinating data sharing and data governance across multiple domains while avoiding redundant data copies can be challenging. Data mesh also leaves many questions unanswered in an enterprise setting, which will likely be resolved as the paradigm matures. Areas of fuzziness include the following:

  • How to choose the granularity of data products and ensure consistency and integrity across them. Finer-grained data products can dramatically increase the number of infrastructure resources.

  • How data contracts and data cards integrate with existing tools in the data stack.

  • How to manage and govern master data (MDM) and reference data when teams operate within their domains. Multiple patterns exist, but there are no guidelines on which approach works best.

  • How to ensure domains do not ingest the same data, create duplicate copies, or perform redundant processing.

  • Who governs data ownership when multiple data products require the same dataset.

  • How to decide when to promote a capability into a centralized platform component versus giving business domains the freedom to implement their own pieces.

Guidance

Data mesh principles provide guidelines but are not a recipe for success. Jumping into a data mesh-based approach before reaching a level of maturity and critical mass can be counterproductive. Organizations need enough data experts before dispersing them across domains, and adopting federated governance requires that an organization already has mature underlying policies and controls to avoid data chaos. It is important to leverage a data sharing platform to avoid data duplication for shared usage across domains; this ties closely with the data sharing platforms being developed by most vendors, including Databricks Delta Sharing, AWS, and the Snowflake Marketplace. Train domain teams to use shared, self-service infrastructure access. Organizations should also have DataOps in place before venturing into data mesh.
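
As a small illustration of consuming shared data without copying it across domains, the open-source delta-sharing Python client can load a table published through Delta Sharing directly into a DataFrame. The profile file and share/schema/table names below are hypothetical placeholders.

```python
# Illustrative sketch using the open-source delta-sharing client (pip install delta-sharing).
# The profile file and share/schema/table coordinates are hypothetical placeholders.
import delta_sharing

# A profile file issued by the data provider describes the sharing server endpoint and credentials.
profile_file = "config.share"

# Fully qualified table URL: <profile-file>#<share>.<schema>.<table>
table_url = f"{profile_file}#sales_share.curated.orders"

# Read the shared table into a pandas DataFrame without copying it into the consumer's own platform.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```

Snowflake and AWS offer analogous sharing mechanisms; the common point is that consuming domains query a governed, shared dataset rather than maintaining their own copies.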

When is Data Mesh a Good Fit?

Consider moving to a data mesh-based approach under the following situations:

  • Autonomous units: When an organization has global deployments across multiple domains and regions, and the company culture supports it, a data mesh makes sense; for example, where independent business units (BUs) work autonomously and heterogeneously on their data and analytics requirements.

  • Centralized bottleneck: When a central data team onboards new data sources for the entire organization, causing bottlenecks, delays, and long iteration cycles.

  • Analytics agility: Where analytics agility is critical to the success and efficacy of analytics products.

  • Localized data: Where data is nuanced, business rules and semantics are complex, and data preparation requires domain specialists more than engineering specialists.

When is Data Mesh NOT a Good Fit?

Avoid adopting a data mesh when:

  • Analytics users are centralized and their needs across domains are homogeneous (for example, only BI/reporting but not ad hoc analytics, data science, or machine learning).

  • The organization lacks a robust culture of change management. Data mesh requires restructuring and a mindset change; enterprises and data personas must overcome resistance to taking on additional responsibilities, owning their domains, and treating the creation of data products as a core responsibility and artifact.

  • The organization is not operating at a scale where decentralization makes sense. Data mesh is not a good fit within a single project or region, or if the organization has only a handful of domains and does not envisage growth in the number of domains.

  • There is not enough buy-in from teams for cross-domain collaboration or data product interoperability; for example, there may not be a strong enough business case for why adopting data mesh will deliver business value.

  • Data mesh is perceived as a technical solution, and the organization expects to find off-the-shelf software to adopt it.

  • Organizational culture does not empower bottom-up decision-making.

  • There are no established roles, responsibilities, or incentive structures for distributed teams.

  • There is no critical mass of data talent, and data teams have low engineering maturity.

  • The coordination overhead may not be justifiable for small to medium-sized organizations.

Best Practices

Adoption of data mesh is a journey, not a destination. Best practices are continuously learnt and unlearnt. Some of the best practices include the following.

  • Define and empower domains to thrive independently.

  • Focus on building trustworthy data products, with data quality, data validation, and testing implemented at each stage from conception to consumption; a minimal validation sketch follows this list.

  • Focus on communication and collaboration across cross-functional teams, with a high degree of awareness of changes and updates to data services.

  • Educate and upskill data teams on concepts, fundamentals, and principles of data mesh.

  • Implement in increments, starting with a POC/POV; a full, big-bang transition to this paradigm is not recommended.

  • Adopt the data mesh model domain by domain and pillar by pillar, in collaboration with a centralized infrastructure team. Start with one or two domains, then collect feedback and improve through feedback loops.

  • Continuously evaluate, iterate, learn, unlearn, and document what works and what does not.

  • Enable self-service, preferably for all tasks from data ingestion and infrastructure provisioning to access controls and governance, with hooks for injecting domain-specific functionality. This goes a long way toward minimizing the time from ideation to data product.

  • Make sure each service supports load balancing, horizontal scaling, and performance testing. Make it easy to access, use, and publish data products.
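
The following is a minimal, hypothetical illustration of the pre-publication data quality checks mentioned above, run over a pandas DataFrame before a data product is released to consumers. The column names and thresholds are assumptions chosen for the example.

```python
# Hypothetical sketch of pre-publication data quality checks for a data product.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

df = pd.DataFrame(
    {
        "order_id": ["o-1", "o-2", "o-2"],
        "amount": [10.0, None, 25.0],
        "order_date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02"]),
    }
)

checks = {
    # The primary key must be unique before the product is published.
    "order_id is unique": df["order_id"].is_unique,
    # Tolerate at most 1% missing amounts.
    "amount mostly populated": df["amount"].isna().mean() <= 0.01,
    # Data should be reasonably fresh (here: newest record within 30 days).
    "data is fresh": (pd.Timestamp.now() - df["order_date"].max()).days <= 30,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would block publication and alert the owning domain team.
    raise ValueError(f"Data product failed quality checks: {failed}")
```

Tools such as Great Expectations or dbt tests cover the same ground more thoroughly; the point is that the checks run as part of the product's own pipeline, owned by the domain team.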

Conclusion

The road to data mesh adoption is challenging, with uncertainties, potholes, and hairpin curves. Early adopters are pioneers and are still learning from the community and peer organizations. Data mesh requires a mindset and organizational shift, as well as a level of data maturity that doesn’t happen overnight. Adoption of data mesh does not need to be a purely binary decision. Learn from case studies of organizations such as Zalando, ABN AMRO, JPMorgan Chase, and Intuit on their journeys toward adopting data mesh principles. Remember that each organization is different, with its own culture and org structure, and is at its own stage of data maturity.

Author bio: 

Sumit is an ex-Gartner VP Analyst in the data management and analytics space. He has more than 30 years of experience in the data and software industry, in roles spanning startups to enterprise organizations, building, managing, and guiding teams and building scalable software systems across the stack, from the middle tier and data layer to analytics, using big data, NoSQL, database internals, data warehousing, data modeling, data science, and data engineering.