Qubole Sparklens: Understanding the scalability limits of spark applications
by Rohit Karlupia (@sixthelephant) on Monday, 26 March 2018
- Full talk
- Technical level
One of the common requests we receive from customers at Qubole is debugging a slow Spark application. Usually this is done by trial and error, which takes time. Moreover, it doesn’t tell us where to look for further improvements. We at Qubole are working to make this process more self-serve.
Towards this goal we have built Sparklens, an open-source tool (https://github.com/qubole/sparklens) based on the Spark event listener framework. From a single run of an application, Sparklens provides insights about its scalability limits. In this talk we will cover what Sparklens does and the theory behind it. We will talk about how the structure of a Spark application places important constraints on its scalability, how we can find these structural constraints, and how to use them as a guide when solving performance and scalability problems in Spark applications.
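Because Sparklens plugs into the event listener framework, it can be attached to an existing application at submit time without code changes. A sketch of the invocation, following the pattern in the Sparklens README (the package version shown is illustrative and may have changed):

```shell
# Attach the Sparklens listener to a job via spark-submit.
# --packages pulls the Sparklens jar; spark.extraListeners registers
# the listener class that records events for the report.
spark-submit \
  --packages qubole:sparklens:0.3.2-s_2.11 \
  --conf spark.extraListeners=com.qubole.sparklens.QuboleJobListener \
  my_app.py
```

The report is printed at the end of the run, so no separate analysis step is needed for the basic workflow.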
This talk will help the audience answer the following questions about their Spark applications:
1) Will their application run faster with more executors?
2) How will cluster utilization change as the number of executors changes?
3) What is the absolute minimum time this application will take even if we give it infinite executors?
4) What is the expected wall clock time for the application once we fix its most important structural limits?
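Answers to these questions fall out of a simple capacity model over quantities measurable in one run. The following is a minimal sketch of that kind of model, not the actual Sparklens implementation: it assumes only three measured inputs (serial driver time, total task time, and critical path time) and ignores per-stage skew and scheduling gaps, which a real report must account for. All numbers are illustrative.

```python
# Simplified scalability model from a single run of an application.
#   driver_time     -- seconds spent only on the driver (purely serial)
#   total_task_time -- sum of all task durations across all executors
#   critical_path   -- longest chain of dependent task time; no number of
#                      executors can push the job below driver_time + this
def estimated_wall_clock(executors, cores_per_executor,
                         driver_time, total_task_time, critical_path):
    # Parallel work finishes either when cores chew through the total
    # task time, or when the critical path ends -- whichever is later.
    parallel = max(total_task_time / (executors * cores_per_executor),
                   critical_path)
    return driver_time + parallel

def cluster_utilization(executors, cores_per_executor,
                        driver_time, total_task_time, critical_path):
    # Fraction of available core-seconds that actually ran tasks.
    wall = estimated_wall_clock(executors, cores_per_executor,
                                driver_time, total_task_time, critical_path)
    return total_task_time / (executors * cores_per_executor * wall)

driver, tasks, critical = 60.0, 3600.0, 120.0  # illustrative seconds
for n in (2, 8, 32):
    t = estimated_wall_clock(n, 4, driver, tasks, critical)
    u = cluster_utilization(n, 4, driver, tasks, critical)
    print(f"{n:>3} executors: ~{t:5.0f}s wall clock, {u:4.0%} utilization")
# With infinite executors the floor is driver time plus critical path:
print(f"minimum possible wall clock: {driver + critical:.0f}s")
```

Even this toy version reproduces the characteristic shape: past the point where the critical path dominates (8 executors here), adding executors leaves wall clock time flat while utilization collapses.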
Outline:
1) Single-threaded applications
2) Multi-threaded applications
3) Distributed applications using Spark
4) When the application “does nothing” and why
5) Driver, parallelism & skew
6) Critical path of a Spark application
7) Defining the ideal Spark application
8) Introduction to Sparklens
9) Understanding the Sparklens report
10) Where to fish for further improvements