by Ananth Durai (@vananth22) on Wednesday, 9 May 2018

Status: Confirmed & Scheduled
Slack is a communication and collaboration platform for teams. Our millions of users spend 10+ hours connected to the service on a typical working day.

The Slack data engineering team's goal is simple: drive up the speed, efficiency, and reliability of making data-informed decisions. For engineers, for people managers, for salespeople, for every Slack customer.

Airflow is the core system in our data infrastructure for orchestrating our data pipelines. We use Airflow to schedule Hive/Tez, Spark, Flink, and TensorFlow applications, and it helps us manage our stream processing, statistical analytics, machine learning, and deep learning pipelines.
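For readers new to Airflow, a minimal sketch of what such an orchestration DAG might look like, using the classic Airflow 1.x API; the DAG name, schedule, and the Hive/Spark commands are hypothetical placeholders, not Slack's actual pipeline:

```python
# Hypothetical daily pipeline: a Hive/Tez aggregation followed by a Spark job.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 2,                          # retry transient failures
    "retry_delay": timedelta(minutes=5),
}

dag = DAG(
    dag_id="daily_metrics",                # hypothetical DAG name
    default_args=default_args,
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",            # one run per execution date
)

hive_agg = BashOperator(
    task_id="hive_aggregate",
    bash_command="hive -f aggregate_metrics.hql",  # hypothetical script
    dag=dag,
)

spark_features = BashOperator(
    task_id="spark_features",
    bash_command="spark-submit features.py",       # hypothetical job
    dag=dag,
)

# Declare the dependency: run the Spark job only after the Hive step succeeds.
hive_agg >> spark_features
```

The scheduler materializes one DAG run per day and dispatches each task to an executor once its upstream dependencies have succeeded.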

About six months back, we started an on-call rotation for our data pipeline, adopting what we had learned from the DevOps paradigm. We found several Airflow performance bottlenecks and operational inefficiencies that had been hidden behind siloed, ad-hoc pipeline management.

In this talk, I will describe how we identified and fixed Airflow performance issues. I will share our experience resolving our on-call nightmares, making our data pipeline simpler and more pleasant to operate, and the hacks we used to improve the alerting and visibility of our data pipeline.

Though the talk centers on Airflow, the principles we applied to data pipeline visibility engineering are generic and can be applied to any tool or data pipeline.


  • Intro to Slack and the data engineering team
  • Problem statement and customer complaints
  • Overview of Airflow infrastructure and deployment workflow
  • Scaling the Airflow LocalExecutor
  • Data pipeline operations
  • Alerting and monitoring the data pipeline
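As a taste of the alerting topic above, one common building block is Airflow's task-level failure callbacks and SLAs. A minimal sketch, assuming the classic 1.x `default_args` mechanism; the `notify_slack` function and the two-hour SLA are hypothetical illustrations, not our production setup:

```python
# Sketch: wire an alert into every task of a DAG via default_args.
from datetime import timedelta

def notify_slack(context):
    # Hypothetical notifier: in practice this would post to an alerts
    # channel. Airflow passes a context dict with the failing task instance.
    ti = context["task_instance"]
    print("ALERT: task %s in DAG %s failed" % (ti.task_id, ti.dag_id))

default_args = {
    "on_failure_callback": notify_slack,   # fire an alert on any task failure
    "sla": timedelta(hours=2),             # flag tasks that run past their SLA
    "retries": 1,
}
```

Any DAG constructed with these `default_args` inherits the callback and SLA on every task, which is a cheap way to get baseline visibility before building richer dashboards.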


The audience is expected to have a basic understanding of how Airflow works.
The official Airflow documentation is a good starting point.
Our friends at Robinhood wrote an excellent blog post describing why they use Airflow.

Speaker bio

I work as a Senior Data Engineer at Slack, managing core data infrastructure such as Airflow, Kafka, Flink, and Pinot. I love talking about all things ethical data management.