Trends in DataOps: Bringing Scale and Rigor to Data and Analytics
Most organizations are plagued by data silos, poor data quality, slow development processes, and a huge gulf between business and IT. DataOps promises to address these challenges and enable data teams to develop data pipelines “faster, better, cheaper.” DataOps, therefore, has the potential to heal the rift between business and IT and help organizations get more value from their data.
This report is a sequel to our June 2019 report *Best Practices in DataOps: How to Create Robust, Automated Data Pipelines*, which profiled numerous DataOps pioneers and examined the keys to their success. Based on a survey of data and analytics professionals, this report provides an overview of trends in DataOps, including adoption rates, benefits, challenges, use cases, and data processing environments. The two reports are best read together.
Key Takeaways
- A majority of companies have yet to fully implement DataOps: 27% have a DataOps initiative, 43% do not, and 30% are experimenting with one.
- Organizations with large data environments (thousands or tens of thousands of sources and targets) are more likely to have a performance monitoring tool (65%) and orchestration software (53%) than organizations with less complex data environments.
- The biggest benefit of DataOps is “faster cycle times,” selected by 60% of respondents.
- The biggest challenge of DataOps is “establishing formal processes,” selected by 55% of respondents.
- The DataOps mindset centers on continuous improvement.
- DataOps is also associated with a set of technologies borrowed from DevOps and adapted for data processing.
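One DevOps practice commonly adapted for data processing is automated testing run inside the pipeline itself. As a minimal sketch (all function and field names here are hypothetical, not from the report), a data-quality check that a DataOps team might run before promoting a dataset could look like:

```python
# Hypothetical data-quality check of the kind a DataOps pipeline might run
# automatically, mirroring DevOps-style unit tests. Names are illustrative.

def validate_records(records, required_fields):
    """Return (row_index, error_message) tuples for rows failing basic checks."""
    errors = []
    for i, row in enumerate(records):
        for field in required_fields:
            # Treat a missing key, None, or empty string as a failed check.
            if field not in row or row[field] in (None, ""):
                errors.append((i, f"missing value for '{field}'"))
    return errors

if __name__ == "__main__":
    sample = [
        {"customer_id": 1, "region": "EMEA"},
        {"customer_id": 2, "region": ""},  # fails: empty region
    ]
    print(validate_records(sample, ["customer_id", "region"]))
    # prints [(1, "missing value for 'region'")]
```

In practice teams often use purpose-built tools for such checks, but the principle is the same: data changes, like code changes, pass automated tests before they reach production.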