Talks by Users for Users: A day to shorten the distance between data and decision making

Data Agility Day is a virtual day to examine ways to improve value extraction from data. Many companies still struggle with delivering data projects on time, at scale, and with useful results.

Our mission is the ongoing evolution of agile data methods, strategies, and team enablement. Sessions will cover how individuals and teams at data-savvy organizations are achieving the agility that enables faster decisions and creates competitive advantage.


What is Data Agility?

Data agility is the ability to shorten the distance between data and the decision-making that drives action and empowers businesses to be insight-driven.

As the need for insights grows, data teams are looking to improve their data management workflows and effectiveness – in essence, to run data-agile organizations.

On October 21st, we’ll bring together data experts (engineers, developers, scientists, architects, and analysts) from leading companies to share in-depth conversations, case studies, and strategies on exactly how to achieve true data agility.


Who is Coming to Data Agility Day?

Data engineers, data developers, data architects, and data scientists as well as BI teams, marketing analytics professionals, innovation-focused executives, and other data science practitioners and leadership.

Join us for sessions on: Data Transformation, Data Orchestration, Data Visualization, Data Observability, Data Governance, Data Management, and Data Ingestion.


Data Agility Day 2021 Virtual Agenda: October 21, 2021

9:00 AM PST

Welcome to Data Agility Day

Thibaut Ceyrolle, Board Member, Advisor, Investor, and Snowflake EMEA Founder


9:15 AM PST

Keynote Conversation: Companies should expect to work faster

Itamar Ben Hamo, Co-Founder & CEO @ Rivery

Adam Conway, SVP Products @ Databricks


How can companies deliver true data agility? What are key growth engines? If looking at dashboards and having static BI was the legacy way of operating, what is the future of analytics?


Join Itamar Ben Hamo of Rivery and Adam Conway of Databricks as they explore answers to these questions and their implications for the speed at which companies will be expected to operate.


9:45 AM PST

A custom stack that allows rapid access to consumer app insights

Tomer Coreanu, General Manager B2C @


Check out how Tomer built an agile data stack from scratch. Tomer will explain how his stack powered his company’s growth from startup to unicorn status and beyond.


Using Rivery, BigQuery, and Tableau, Tomer will show in-depth technical code examples and the dashboards used to drive growth in the subscription economy for a product that delivers real-time data insights.


10:20 AM PST

Why you don’t need to overengineer your data stack

Naomi Miller, Director of Data Engineering @ NBC Universal


10:55 AM PST

How StuffThatWorks turned 51.6 million data points into health care insights

Yossi Synett, Chief Data Scientist @ StuffThatWorks


Yossi Synett, Chief Data Scientist and Co-Founder of StuffThatWorks, will talk about building a business on crowdsourced data pulled from over 1 million contributors.


Leveraging machine learning and big data ingestion, Yossi will talk about being an agile company and responding to data-driven trends.


10:55 AM PST

How Freshly is scaling business metrics observability with AI

David Drai, Co-Founder & CEO @ Anodot

David Ashirov, VP of Data @ Freshly


We put so much effort into building the perfect data infrastructure, but few put the same thought into their analytics stack.


What good is a race car without a trained driver? Join VP of Data David Ashirov as he shares how Freshly is monitoring hundreds of thousands of business metrics in real time, and the impact that has had on operations, sales, and overall revenue.


10:55 AM PST

Reverse ETL – How to power multi-directional data flows

Taylor McGrath, Head of Customer Solutions @ Rivery


Taylor McGrath, Head of Customer Solutions at Rivery, will talk about why BI + dashboards are necessary for the analysis of trends.


But Reverse ETL takes insights one step further: once an insight is realized, the data point can be pushed back into another system, where action can be taken immediately. Attendees will walk away with an understanding of how to implement and deploy their own Reverse ETL pipelines.
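The push-back pattern described above can be sketched in a few lines. The example below is a hypothetical illustration, not Rivery’s implementation: an in-memory SQLite database stands in for the warehouse, and a plain list stands in for a CRM API endpoint.

```python
import sqlite3

def fetch_churn_risk_segment(conn):
    """Pull a computed insight (here: high churn-risk users) from the warehouse."""
    cur = conn.execute(
        "SELECT user_id, churn_score FROM user_scores WHERE churn_score > 0.8"
    )
    return cur.fetchall()

def push_to_crm(record, crm_client):
    """Reverse ETL step: write the insight back into an operational tool."""
    user_id, score = record
    crm_client.append({"user_id": user_id, "tag": "churn_risk", "score": score})

# --- Demo with an in-memory warehouse and a list standing in for the CRM ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_scores (user_id TEXT, churn_score REAL)")
conn.executemany(
    "INSERT INTO user_scores VALUES (?, ?)",
    [("u1", 0.91), ("u2", 0.42), ("u3", 0.87)],
)

crm_records = []  # stand-in for a CRM API endpoint
for row in fetch_churn_risk_segment(conn):
    push_to_crm(row, crm_records)

print(crm_records)  # u1 and u3 are pushed; u2 falls below the threshold
```

In a production pipeline the list append would be an authenticated API call to the downstream tool, but the shape is the same: select the computed segment, then write each record outward.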


10:55 AM PST

Unifying analytics – changing data architecture to unite BI and data science

Paige Roberts, Open Source Relations Manager @ Vertica


The data warehouse has been an analytics workhorse for decades for business intelligence teams. Unprecedented volumes of data, new types of data, and the need for advanced analyses like machine learning brought on the age of the data lake.


Now, many companies have a data lake for data science, a data warehouse for BI, or a mishmash of both, possibly combined with a mandate to go to the cloud. The end result can be a sprawling mess, a lot of duplicated effort, a lot of missed opportunities, a lot of projects that never made it into production, and a lot of financial investment without return.


Technical and spiritual unification of the two opposing camps can have a powerful impact on the effectiveness of analytics for the business overall.


  • Look at successful data architectures from companies like Philips, The TradeDesk, and Climate Corporation
  • Learn to eliminate duplication of effort between data science and BI data engineering teams
  • See a variety of ways companies are getting AI and ML projects into production where they have a real impact, without bogging down essential BI
  • Study analytics architectures that work, why and how they work, and where they’re going from here


11:30 AM PST

Balance agility and governance with the data cloud and dataops

Kent Graziano, Chief Technical Evangelist @ Snowflake


DataOps is the application of DevOps concepts to data. The DataOps Manifesto outlines what that means, similar to how the Agile Manifesto outlines the goals of the Agile Software movement.


But as the demand for data governance has increased, and the push to do “more with less” and be more agile has put more pressure on data teams, we all need more guidance on how to manage it all.


Seeing that need, a small group of industry thought leaders and practitioners got together and created a DataOps philosophy to describe the best way to deliver DataOps by defining the core pillars that must underpin a successful approach.


Combining this approach with an agile and governed platform like Snowflake’s Data Cloud allows organizations to indeed balance these seemingly competing goals while still delivering value at scale.


12:05 PM PST

Picking the right data tools: when to stop writing custom code pipelines

Ben Rogojan, Consultant @ Seattle Data Guy


Ben is a data engineering influencer, consultant, and data engineer at Facebook. He joins us to discuss his experience with data pipeline systems ranging from 100% custom code to drag-and-drop low-code tools.


He will discuss the modern challenges that data engineers face and how different tools can help address each of them. The focus will be finding the right balance between build and buy for the data stack, especially as data sources continue to multiply.


Attendees will learn how to prioritize their time to deliver maximum value to their companies, and when off-the-shelf solutions can make them better data engineers.


12:05 PM PST

How to build an agile data strategy: A conversation with Alex Tverdohleb, VP of Data Services at Fox

Molly Vorwerck, Head of Content & Communications @ Monte Carlo

Alex Tverdohleb, VP of Data Services @ FOX


In this fireside chat, Alex Tverdohleb, VP of Data Services at Fox, sits down with Molly Vorwerck, a founding team member at Monte Carlo, the data observability company, to discuss his experience leading and defining data strategy at hyper-growth companies.


Alex will discuss: how his team aligns their goals and objectives to Fox’s company-wide KPIs; how to structure your data organization; how to weigh building vs. buying your stack; and best practices for building a culture of data trust – at scale.


12:05 PM PST

How to layer your data the right way

Christian Heinzmann, Senior Director of Data @ Indigo


Building data marts is tricky enough by itself. When you throw a fast-growing industry on top, with constantly shifting data inputs and business requirements, it becomes nigh impossible. Learn how Indigo Ag layers its data to deal with these constant challenges.


12:05 PM PST

HealthEdge’s approach to multi-dimensional data observability

Rohit Choudhary, Founder & CEO @ Acceldata

Krishnan Bhagavath, VP of Engineering @ HealthEdge


Rohit and Krishnan will sit down to discuss how HealthEdge has used multi-dimensional data observability to reduce complexity, improve reliability, and leverage AI/ML to improve engineering productivity, all while reducing costs. This session will review the technical and business benefits of:


  • Successfully architecting, operating, and optimizing complex data systems at scale

  • Full visibility into data processing, data, and data pipelines

  • Using ML to automate data identification, quality, and management


12:40 PM PST

Empathy based data governance

Aisha Memon, Data Governance & Democratization Product Owner @ Cisco


1:15 PM PST

Agile can’t get analytics finished

Aash Viswanathan, Senior Data Scientist @ Atlassian


Data analytics is knowledge work, so as long as there are questions left to answer, there is no “done.” Analysis exists to help us understand things we don’t yet know – which makes access to insights very difficult to schedule.


This talk should help those who’ve been asked for analytics ETAs or those being asked to prioritize other projects for “after analytics is done.”


Aash will talk through how he has learned to handle strategy and processes for data analytics projects at Atlassian, LinkedIn, Lime, and other organizations, so that confident analyses can be drawn from the data and analytics can grow alongside the business.


1:50 PM PST

What we’re learning from capturing and tracing the lineage of key datasets at Northwestern Mutual

Kevin Mellott, Asst. Director Data Engineering @ Northwestern Mutual

Julien Le Dem, Co-Founder & CTO @ Datakin


Context is important, especially when it comes to data. Once we understand where a dataset comes from and how it’s consumed – in other words, its data lineage – we can make decisions about it more quickly and decisively.


Using OpenLineage, the team at Northwestern Mutual has begun to capture and trace the lineage of key datasets. This makes it easier to find the root cause of failures and predict the effect of changes before they’re made.


In this informal chat, Julien Le Dem from OpenLineage and Kevin Mellott from Northwestern Mutual will discuss how they approach data lineage and what they’ve learned along the way.
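OpenLineage models lineage as run events that tie jobs to their input and output datasets. The sketch below hand-builds a minimal event in that shape to illustrate the model; the field names follow the OpenLineage spec, but the job and dataset names are hypothetical, and a real integration would use the official client library rather than raw dicts.

```python
import json
import uuid
from datetime import datetime, timezone

def make_lineage_event(job_name, inputs, outputs):
    """Build a minimal OpenLineage-style COMPLETE event for one pipeline run."""
    return {
        "eventType": "COMPLETE",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": str(uuid.uuid4())},
        "job": {"namespace": "example_pipelines", "name": job_name},
        "inputs": [{"namespace": "warehouse", "name": n} for n in inputs],
        "outputs": [{"namespace": "warehouse", "name": n} for n in outputs],
        "producer": "https://example.com/lineage-sketch",
    }

# A daily aggregation job reads two raw tables and writes one summary table.
event = make_lineage_event(
    "daily_policy_rollup",
    inputs=["raw.claims", "raw.policies"],
    outputs=["marts.policy_summary"],
)
print(json.dumps(event, indent=2))
```

Collecting events like this from every job is what lets a lineage graph be assembled: walking the inputs of a failed output finds root causes, and walking the outputs of a changed input predicts downstream impact.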


1:50 PM PST

Understanding open source communities using the modern data stack

Srini Kadamati, Senior Data Scientist @ Preset


Open source communities have powered the last decade of modernization in the data ecosystem, but we are just now beginning to understand how these types of communities are started, nurtured, and grown.


Srini joined Preset and the Apache Superset community 18 months ago specifically to help out with developer advocacy and community. Since joining, his team has used open source data tools to catalog and visualize community data to find better ways to understand and support the Superset community.


In this talk, Srini will share the lessons he’s learned about open source and about growing communities over the last 18 months.


1:50 PM PST

How we lose trust in data and the struggle of regaining it

Chaim Mazal, VP Information Security, CISO @ Kandji

Ben Herzberg, Chief Data Scientist @ Satori


In an agile data environment, it’s easy to develop trust issues, unless you adapt.


In this expert discussion, we will dig into the changes, trust challenges, and ways to overcome them, based on the experience of leading complex data organizations.


1:50 PM PST

How to make cloud migration painless and achieve organizational data agility

Khalil Sheikh, EVP Solutions & Strategy @ Saxon Global


Khalil will teach attendees how to move their data to the cloud and build an agile data stack that can scale for any size organization.


Together, we’ll examine some pitfalls to avoid and Khalil’s best practices when performing a cloud migration.


2:25 PM PST

Data lakehouse, data mesh, and data fabric (the alphabet soup of data architectures)

James Serra, Data Platform Architecture Lead @ EY


So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric.  What do all these terms mean and how do they compare to a data warehouse?


In this session, James covers all of them in detail and compares the pros and cons of each from his perspective as a seasoned Data Platform Architecture Lead at EY.


Each may sound great in theory, but he’ll dig into the concerns you need to be aware of before taking the plunge. James will also include use cases so you can see what approach will work best for your big data needs.


3:00 PM PST

How Grubhub uses data to prioritize product roadmaps

Seth Rosenstein, Sr. Product Manager @ Grubhub


See firsthand how Grubhub uses customer & market data to prioritize their product roadmap. See how to filter out data noise to reach insights that drive value.


Backtracking from revenue numbers, attendees will learn how to identify the KPIs that should drive the roadmap.


This data-led product roadmap has helped Grubhub achieve 48% year-over-year revenue growth.


3:35 PM PST

5 considerations for building better lakehouses

Christian Romming, Founder & CEO @ Etleap


Building a data lakehouse can be a headache. More often than not, they end up becoming data swamps: bottomless pits that suck up dev resources and obscure data transparency in their murky waters.


But when done correctly, data lakehouses can be valuable wells of knowledge that transform your business.


We’ll deep-dive into concrete enterprise lakehouse stack examples, and give you 5 refreshing pointers for keeping the dev workload sustainable and ensuring data usability for the long term.


Secure your free pass today!