Itamar Ben Hemo
FEB 29, 2024
6 min read

I’ve spent the past 23 years working in the data industry. If there’s one thing I’ve learned in that time, it’s that the only constant in the data industry is change. And I’m not only referring to changes in data tooling; I’m also referring to changes in how we look at the data industry as a whole.

After reflecting on Tristan’s obituary of the Modern Data Stack (MDS), I’ve concluded that he’s right: the MDS has reached its peak.

Looking back to 2021, when the “modern data stack” was the talk of the town, we embraced the buzzword, positioning ourselves not as a point solution but as a data management and integration platform for the MDS. It served its purpose well in those initial stages, giving both investors and potential customers a quick understanding of what Rivery offered.

However, as time has progressed, adhering strictly to the modern data stack concept has become impractical. Undoubtedly, we reaped substantial benefits and even secured fundraising based on attaching ourselves to the MDS. Nevertheless, a point has been reached where a shift is imperative, and the modern data stack needs to evolve.

The more things change, the more they stay the same

To better understand why the modern data stack has to evolve, let’s briefly touch on how we reached this point.

The term “modern data stack” was coined by the data community and picked up momentum in late 2016 and early 2017. It refers to a collection of mostly cloud-native tools centered around a data warehouse. Linked together, these tools form a data infrastructure architecture that lets organizations run in the cloud and interact with their data via SQL, helping them become data-driven companies.

I won’t bore you with the history of data tooling (you can read more about that here if you are interested). Essentially, data tooling evolved from legacy on-prem solutions to cloud-native technologies that were faster to adopt and solved major pain points data practitioners were facing at the time.

In retrospect, the modern data stack did its job: it lowered the barrier to entry for working with data (in both cost and required expertise) and made data easily accessible to internal and external stakeholders.

While the modern data stack was all the hype in 2021 and 2022, data teams are now seeing its shortcomings take center stage. The sheer number of specialized tools that make up the stack, catering to every use case from ingestion and transformation to data warehousing, visualization, observability, and DataOps, is overwhelming.

The original idea of the modern data stack was to shorten the time to value for data teams delivering insights. But now, the overwhelming number of tools (as shown below) is making it increasingly challenging for data teams to deliver value quickly and cost-effectively.

[Image: the sprawling landscape of modern data stack tooling]

It’s becoming clearer every day that the over-modularity of the MDS is a huge problem for data teams. By no means are we saying the modern data stack is a bad approach. It was simply another phase of technology that everyone quickly bought into during the hype. As with most things that are adopted very quickly, you soon start realizing there are shortcomings, and something better comes along.

Shifting from a modern data stack to a modern data platform mindset

When Rivery was in alpha mode during my time at the Keyrus Group in 2017, I wrote:

“As companies become increasingly more data-driven, there’s a growing amount of sources from which companies gather critical information about their customer’s behavior, their marketing performance, and the ongoing optimization and improvement of their many operations – all of which are measured, tracked, and processed on different platforms.”

Seven years later, this remains truer than ever before. A week ago, Benn Stancil pointed out that the biggest miss of the modern stack was the over-modularity of the data ecosystem. And, like Tristan, he’s right.

Modern data stack technologies are linked together in a linear chain, which naturally creates pressure points in terms of integration and manpower. A lot of resources are required to serve insights to the entire business. Tech-wise, upstream processes enable downstream processes, so if one link in the chain fails, nothing downstream can function or get updated properly. It demands a lot of workarounds.

This process does not scale up well.
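The failure mode described above can be sketched in a few lines of Python. The step names here are purely illustrative, not any particular vendor’s API:

```python
# A minimal sketch of a linear pipeline: each step depends on the one before it.
# Step names (ingest, transform, load) are illustrative, not a real API.

def ingest():
    # Simulate an upstream failure, e.g. a source API changing under us.
    raise RuntimeError("source API changed")

def transform(raw):
    return [r.upper() for r in raw]

def load(rows):
    print(f"loaded {len(rows)} rows")

def run_pipeline():
    try:
        raw = ingest()
    except RuntimeError as e:
        # One broken link halts the whole chain: nothing downstream
        # gets transformed or loaded until someone fixes ingestion.
        return f"pipeline halted at ingest: {e}"
    load(transform(raw))
    return "pipeline succeeded"
```

In a stack of separately operated tools, each arrow in that chain is also a tool boundary, so diagnosing which link broke means jumping between several dashboards and logs.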

It’s more evident than ever that a switch to a platform approach is necessary for data teams. In 2024, organizations are maintaining smaller data teams that are asked to deliver faster and to justify their investment in data, even as budgets come under tight review in the post-Zero Interest Rate Policy (ZIRP) era.

By taking a platform approach, you are consolidating the various functions that a small data team would need to build end-to-end data pipelines. It’s a philosophy we have been talking about with our customers, that a platform approach provides simplicity while being flexible enough to adapt to specific use cases.

With a platform approach, data teams don’t have to worry about setting up, maintaining, and linking together various data tools. It also provides centralized visibility over your data pipelines, simplifying root-cause analysis when something goes wrong.

Moving Forward

When I first began talking about Rivery publicly (back in 2017 when we were still an internal product), I positioned Rivery as a data management and integration platform designed for businesses to take control of their data. At the same time, I also believed that this process should be as simple as possible, as I described here:

“Simplicity will always be a driving force for everything we create at Rivery. We help companies build a serverless data pipeline to eliminate the need to spend time & resources orchestrating data processes. In a way, one could argue that we’re in the business of making life for BI & data teams simpler.”

Years later, our commitment to simplifying the creation and maintenance of data pipelines remains the same. To achieve this simplification, we built a platform that is both simple and flexible, empowering users to delve deeper and exert control over their pipelines when necessary. Central to this effort was the creation of a platform that is not closed but open, allowing seamless integration into existing architectures while preserving a unified experience, especially for DataOps, to facilitate easier data delivery.

Our ongoing investments are geared towards further streamlining this process, whether through simplified ingestion from various sources or the provision of enhanced templates for a quicker start. Simultaneously, we acknowledge that relying solely on tools is insufficient. 

This is where the significance of data modeling and the organizational structure of data teams comes into play. 

As organizations grow, our platform is designed to empower not just the centralized data platform team but also the decentralized data teams. It facilitates self-service and pipeline construction for the latter, while the former retains governance over data access, operation health, and consumption costs. This, we believe, represents the next significant evolution a Modern Data Platform (MDP) offers beyond the MDS, transcending tools consolidation to address broader organizational needs.
