Elesh Mistry
SEP 20, 2022
5 min read

Data integration tools are becoming increasingly accessible to less technical members of your data engineering team. The no-code revolution has simplified the way you move data from your database, application, file and event sources to your cloud data warehouse. On the downside, these no-code solutions lack the flexibility and capabilities of more developer-focused tools. So when it comes to dealing with more complex use cases you’re often left with two choices: shelve them or solve them using traditional solutions.

When prioritizing no-code in your new data stack, take these five use cases into account:

  1. Simple extraction and loading
  2. Push down data transformation
  3. Orchestrating workflows
  4. Complex extraction and transformation
  5. Reverse ETL – sending data back to sources for data refresh and data activation
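To make the first use case concrete, here is a minimal extract-and-load sketch. It is illustrative only: the "source" is an in-memory list of records standing in for an application API, and SQLite stands in for the cloud data warehouse; the table and field names are hypothetical.

```python
import sqlite3

def extract():
    # In a real pipeline this would page through an API, read files,
    # or pull from a database. Hardcoded here for illustration.
    return [
        {"id": 1, "email": "a@example.com", "plan": "free"},
        {"id": 2, "email": "b@example.com", "plan": "pro"},
    ]

def load(rows, conn):
    # Idempotent load: re-running the pipeline upserts rather than duplicates.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT, plan TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO users (id, email, plan) VALUES (:id, :email, :plan)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for the warehouse connection
load(extract(), conn)
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

No-code tools handle exactly this pattern well; the friction starts when the extract step needs custom pagination, authentication, or transformation logic.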

But does this actually simplify your data stack or create bigger problems in the long run? Going in headfirst with a no-code solution might seem like a great idea at first. But without a flexible approach, it leads to a more complex overall stack and you’re back to square one.

So let’s start again. The goal is to help you create a simple stack which satisfies all of the above use cases while also letting you:

  1. Connect to non-native sources (API, File Systems, Cloud DBs)
  2. Connect to sources with complex authentication mechanisms
  3. Transform any format of data using existing library functions
  4. Integrate ML libraries to future-proof the stack for data analytics use cases
  5. Programmatically control your development lifecycle by having native integration points with common source configuration tools
  6. Integrate with other tools in the data space with a comprehensive REST API

It’s definitely a big ask. Bridging the gap between traditional ETL and modern ELT requires a tool that has been built from the ground up to address all of these issues.

A modern data stack should never be a stack of five or more tools just to complete ETLT. It should be a true simplification. At the end of the day, you want to make data more accessible and have increased flexibility to address even the most complex use cases. Rivery’s Python Rivers and Custom Connector Rivers allow you to extend functionality in a simple, fully managed way. This flexibility not only caters to your most complex requirements today but also future-proofs your stack for the use cases of tomorrow.

Want to see how easy it is to simplify data integration from the ground up? Start your free trial today.
