Understanding Spark Tuning
(Magical spells to stop your pager going off at 2:00am)
Holden Karau, Rachel Warren
Rachel
- Rachel Warren → She/Her
- Data Scientist / Software engineer at Salesforce Einstein
- Formerly at Alpine Data (with Holden)
- Lots of experience scaling Spark in different production environments
- The other half of the High Performance Spark team :)
- @warre_n_peace
- LinkedIn: https://www.linkedin.com/in/rachelbwarren/
- Slideshare: https://www.slideshare.net/RachelWarren4/
- Github: https://github.com/rachelwarren
Holden:
● My name is Holden Karau
● Preferred pronouns are she/her
● Developer Advocate at Google
● Apache Spark PMC :)
● previously IBM, Alpine, Databricks, Google, Foursquare & Amazon
● co-author of Learning Spark & High Performance Spark
● @holdenkarau
● Slide share http://www.slideshare.net/hkarau
● Code review livestreams & live coding: https://www.twitch.tv/holdenkarau /
https://www.youtube.com/user/holdenkarau
● Github https://github.com/holdenk
● Spark Videos http://bit.ly/holdenSparkVideos
● Talk feedback: http://bit.ly/holdenTalkFeedback
Understanding Spark Tuning: Strata New York
Who we think you wonderful humans are?
● Nice enough people
● I’m sure you love pictures of cats
● Might know something about using Spark, or are using it in production
● Maybe sys-admin or developer
● Are tired of spending so much time fussing with Spark Settings to get jobs to
run
Lori Erickson
The goal of this talk is to give you the resources to programmatically tune
your Spark jobs so that they run consistently and efficiently
In terms of time and $$$$$
What will we cover?
- A run down of the most important settings
- Getting the most out of Spark’s built-in auto-tuner options (dynamic allocation)
- A few examples of errors and performance problems that can be addressed
by tuning
- A job can go out of tune over time as the world around it changes, much like Holden’s Vespa.
- How to tune jobs “statically”, i.e. without historical data
- How to collect historical data
- An example
- The latest and greatest auto tuner tools
I can haz application :p
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("my_awesome_app")
val sc = SparkContext.getOrCreate(conf)
val rdd = sc.textFile(inputFile)
val words: RDD[String] = rdd.flatMap(_.split(" ")
  .map(_.trim.toLowerCase))
val wordPairs = words.map((_, 1))
val wordCounts = wordPairs.reduceByKey(_ + _)
wordCounts.saveAsTextFile(outputFile)
Trish Hamme
Settings go here
This is a shuffle
I can haz application :p
val conf = new SparkConf()
  .setMaster("local")
  .setAppName("my_awesome_app")
val sc = SparkContext.getOrCreate(conf)
val rdd = sc.textFile(inputFile)
val words: RDD[String] = rdd.flatMap(_.split(" ")
  .map(_.trim.toLowerCase))
val wordPairs = words.map((_, 1))
val wordCounts = wordPairs.reduceByKey(_ + _)
wordCounts.saveAsTextFile(outputFile)
Trish Hamme
Start of application
End Stage 1
Stage 2
Action, Launches Job
Spark Execution Environment
Other App
My Spark App
- A node can have several executors
- But an executor can only be on one node
- Executors have the same amount of memory and cores
- One task per core
- A task is the compute for one partition
- An RDD is a distributed set of partitions
How many resources to give my application?
● spark.executor.memory
● spark.driver.memory
● spark.executor.cores
● Enable dynamic allocation
○ (or set the number of executors)
val conf = new SparkConf()
.setMaster("local")
.setAppName("my_awesome_app")
.set("spark.executor.memory", ???)
.set("spark.driver.memory", ???)
.set("spark.executor.vcores", ???)
Executor and Driver Memory
- Driver Memory
- As small as it can be without failing (but that can be pretty big)
- Executor memory + overhead should be less than the size of the container
- Think about binning
- if you have 12 gig nodes, making an 8 gig executor is maybe silly
- Pros of Fewer Larger Executors Per Node
- Maybe less likely to OOM
- Pros of More Small Executors
- Some people report a slowdown with more than 5ish cores … (more on that later)
- If using dynamic allocation, easier to “scale up” on a busy cluster
Vcores
- Remember 1 core = 1 task*. So the number of concurrent tasks is limited by the total cores
- In HDFS, too many cores per executor may cause issues with concurrent HDFS threads, maybe?
- 1 core per executor takes away some benefit of things like broadcast variables
- Think about “burning” CPU and memory equally
- If you have 60 GB RAM & 10 core nodes, making the default executor size 30 GB with ten cores is maybe not so great (rough sizing sketch below)
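A back-of-the-envelope sketch of that sizing exercise for the 60 GB / 10 core example; the 5-cores-per-executor choice and the ~10% overhead reservation here are assumptions, not rules.

import org.apache.spark.SparkConf

// Rough executor sizing for a 60 GB / 10 core node (all numbers illustrative).
val nodeMemoryGb = 60
val nodeCores = 10
val coresPerExecutor = 5                                   // stay at ~5 cores or fewer
val executorsPerNode = nodeCores / coresPerExecutor        // 2 executors per node
val memoryPerExecutorGb = nodeMemoryGb / executorsPerNode  // 30 GB to divide per executor
// Leave ~10% headroom for memory overhead rather than claiming the whole 30 GB as heap.
val executorHeapGb = (memoryPerExecutorGb * 0.9).toInt     // ~27 GB

val sizedConf = new SparkConf()
  .set("spark.executor.cores", coresPerExecutor.toString)
  .set("spark.executor.memory", s"${executorHeapGb}g")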
How To Enable Dynamic Allocation
Dynamic Allocation allows Spark to add and remove executors between jobs over
the course of an application (a minimal conf sketch follows the list below)
- To configure
- spark.dynamicAllocation.enabled=true
- spark.shuffle.service.enabled=true (you have to configure external shuffle service on each worker)
- spark.dynamicAllocation.minExecutors
- spark.dynamicAllocation.maxExecutors
- spark.dynamicAllocation.initialExecutors
- To Adjust
- Spark will add executors when there are pending tasks (spark.dynamicAllocation.schedulerBacklogTimeout)
- and exponentially increase them as long as tasks in the backlog persist
(spark...sustainedSchedulerBacklogTimeout)
- Executors are decommissioned when they have been idle for
spark.dynamicAllocation.executorIdleTimeout
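A minimal conf sketch for the settings above; the min/max/initial counts are placeholders to adapt to your cluster.

import org.apache.spark.SparkConf

// Dynamic allocation also needs the external shuffle service running on each worker.
val dynamicConf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")       // placeholder
  .set("spark.dynamicAllocation.maxExecutors", "50")      // placeholder; cap this on a shared cluster
  .set("spark.dynamicAllocation.initialExecutors", "5")   // placeholder; raise for jobs you know are large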
Why To Enable Dynamic Allocation
When
- Most important for shared or cost sensitive environments
- Good when an application contains several jobs of differing sizes
- The only real way to adjust resources throughout an application
Improvements
- If jobs are very short, adjust the timeouts to be shorter
- For jobs that you know are large, start with a higher number of initial executors to avoid slow spin-up
- If you are sharing a cluster, setting max executors can prevent you from hogging it
Run it!
Matthew Hoelscher
Oh no! It failed :( How could we adjust it?
Suppose that in the driver log we see a “container lost exception” and in the
executor logs we see:
java.lang.OutOfMemoryError: Java heap space
Or:
Application Memory Overhead Exceeded
This points to an out of memory error on the executors
hkase
Addressing Executor OOM
- If we have more executor memory to give it, try that!
- Let's try increasing the number of partitions so that each executor will process smaller pieces of the data at once
- spark.default.parallelism = 10
- Or by adding the number of partitions to the code, e.g. reduceByKey(numPartitions = 10) (sketch below)
- There are many more things you can do to improve the code
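As a sketch against the word count from earlier (the value 10 is just an example, and the conf and wordPairs RDD come from that earlier code), either knob looks like this:

// Raise the default parallelism for shuffles that don't set partitions explicitly …
val oomConf = new SparkConf().set("spark.default.parallelism", "10")
// … or set the number of partitions directly on the shuffle:
val wordCounts = wordPairs.reduceByKey(_ + _, 10)  // 10 partitions for this shuffle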
Low Cluster Utilization: Idle Executors Susanne Nilsson
What to do about it?
- If we see idle executors but the total size of our job is small, we may just be
requesting too many executors
- If all executors are idle it may be because we are doing a large computation in the driver
- If the computation is very large and we see idle executors, this may be because the executors are waiting for a “large” task → so we can increase partitions
- At some point adding partitions will slow the job down
- But only if not too much skew
Toshiyuki IMAI
Shuffle Spill to Disk/ Slow Shuffle Fung0131
Preventing Shuffle Spill to Disk
- Larger executors
- Configure off heap storage
- More partitions can help (ideally all the partitions being processed on one executor can “fit” in that executor’s memory)
- We can adjust shuffle settings
- Increase the shuffle memory fraction (spark.shuffle.memoryFraction; spark.memory.fraction on 1.6+)
- Try increasing:
- spark.shuffle.file.buffer
- Configure an external shuffle service, so that the shuffle files will not need to be stored on the Spark executors
- spark.shuffle.io.serverThreads
- spark.shuffle.io.backLog
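One possible combination of those knobs as a conf sketch; the values are illustrative, spark.memory.offHeap.* is one way to get off-heap storage, and 32k is the usual spark.shuffle.file.buffer default.

import org.apache.spark.SparkConf

val shuffleConf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")   // one way to configure off-heap storage
  .set("spark.memory.offHeap.size", "2g")        // illustrative size
  .set("spark.shuffle.file.buffer", "64k")       // example bump from the 32k default
  .set("spark.shuffle.service.enabled", "true")  // external shuffle service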
jaci XIII
Signs of Too Many Partitions
The number of partitions determines the size of the data each core is computing … smaller
pieces are easier to process, but only up to a point
- Spark needs to keep metadata about each partition on the driver
- Driver memory errors & driver overhead errors
- Very long task “spin up” time
- Too many partitions at read usually caused by small part files
- Lots of pending tasks & Low memory utilization
- Long file write time for relatively small I/O “size” (especially with blockstores)
Dorian Wallender
PYTHON SETTINGS
● Application memory overhead
○ We can tune this based on whether an app is PySpark or not
○ In the proposed PySpark on K8s PR this is done for us
○ More tuning may still be required
● Buffers & batch sizes oh my
○ spark.sql.execution.arrow.maxRecordsPerBatch
○ spark.python.worker.memory - defaults to 512m, but the default memory for Python can be lower :(
■ Set based on the amount of memory assigned to Python to reduce OOMs
○ Normal: automatic, sometimes set wrong - code change required :(
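A sketch of what the memory side can look like on YARN; the values are assumptions, and spark.yarn.executor.memoryOverhead is the pre-2.3 name (spark.executor.memoryOverhead in newer releases).

import org.apache.spark.SparkConf

val pysparkConf = new SparkConf()
  .set("spark.yarn.executor.memoryOverhead", "2048")            // example: MB of off-heap room for Python workers
  .set("spark.python.worker.memory", "1g")                      // example: spill threshold for Python-side aggregation
  .set("spark.sql.execution.arrow.maxRecordsPerBatch", "5000")  // example Arrow batch size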
Nessima E.
Tuning Can Help With
- Low cluster utilization ($$$$)
- Out of memory errors
- Spill to Disk / Slow shuffles
- GC errors / High GC overhead
- Driver / Executor overhead exceptions
- Reliable deployment
Melinda Seckington
Things you can’t “tune away”
- Silly Shuffles
- You can make each shuffle faster through tuning but you cannot change the fundamental size
or number of shuffles without adjusting code
- Think about how much data you are moving, and how to do it more efficiently
- Be very careful with column-based operations in Spark SQL
- Unbalanced shuffles (caused by unbalanced keys)
- if one or two tasks are much larger than the others
- #Partitions is bounded by #distinct keys
- Bad object management / data structure choices
- Can lead to memory exceptions / memory overhead exceptions / gc errors
- Serialization exceptions*
- *Exceptions due to cost of serialization can be tuned
- Python
- This makes Holden sad, buuuut. Arrow?
Neil Piddock
But enough doom, let's watch (our new) software fail.
Christian Walther
But what if we run a billion Spark jobs per day?
Tuning Could Depend on 4 Factors:
1. The execution environment
2. The size of the input data
3. The kind of computation (ML, python, streaming)
_______________________________
4. The historical runs of that job
We can get all this information programmatically!
Execution Environment
What we need to know about where the job will run
- How much memory do I have available to me: on a single node/ on the cluster
- In my queue
- How much CPU: on a single node, on the cluster → corresponds to total
number of concurrent tasks
- We can get all this information from the YARN API (https://dzone.com/articles/how-to-use-the-yarn-api-to-determine-resources-ava-1)
- Can I configure dynamic allocation?
- Cluster Health
Size of the input data
- How big is the input data on Disk
- How big will it be in memory?
- “Coefficient of in-memory expansion” = shuffle spill memory / shuffle spill disk
- Can get historically or guess (rough sketch below)
- How many part files? (the default number of partitions at read)
- Cardinality and type?
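A rough sketch of turning that ratio into an estimate; the variable names and sample numbers are made up for illustration.

// Estimate the in-memory footprint of the input from a previous run's spill metrics.
val shuffleSpillMemoryMb = 4096.0        // from historical task metrics (example)
val shuffleSpillDiskMb = 1024.0          // from historical task metrics (example)
val expansionCoefficient = shuffleSpillMemoryMb / shuffleSpillDiskMb   // 4x in this example
val inputSizeOnDiskMb = 10000.0          // size of the input files on disk
val estimatedInMemorySizeMb = inputSizeOnDiskMb * expansionCoefficient // ~40,000 MB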
Setting # partitions on the first try
Assuming you have many distinct keys, you want to try to make
partitions small enough that each partition fits in the memory “available
to each task” to avoid spilling to disk or failing
See Sandy Ryza’s very much cited post
https://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/
(Slide illustration: a child taking too big a bite of melon; the child is an executor with one core, the slice of melon is a partition, and the bite compares the size of the input data in memory with the amount of memory available per task on the executors.)
How much memory should each task use?
def availableTaskMemoryMB(executorMemory: Long): Double = {
  // Assumes a sparkConf: SparkConf is in scope (e.g. from sc.getConf)
  val memFraction = sparkConf.getDouble("spark.memory.fraction", 0.6)            // heap fraction Spark manages
  val storageFraction = sparkConf.getDouble("spark.memory.storageFraction", 0.5) // share reserved for storage
  val nonStorage = 1 - storageFraction                                           // share left for execution
  val cores = sparkConf.getInt("spark.executor.cores", 1)                        // concurrent tasks per executor
  Math.ceil((executorMemory * memFraction * nonStorage) / cores)
}
Partitions could be inputDataSize/ memory per task
def determinePartitionsFromInputDataSize(inputDataSize: Double): Int = {
  // Executor memory in MB, read from the same conf (assumes spark.executor.memory is set)
  val executorMemoryMb = sparkConf.getSizeAsMb("spark.executor.memory", "1g")
  Math.round(inputDataSize / availableTaskMemoryMB(executorMemoryMb)).toInt
}
Historical Data
What kinds of historical data could we use?
- Historical cluster info
- (was one node sad? How many executors were used?)
- Stage Metrics
- duration (only on an isolated cluster)
- executor CPU
- vcores/Second
- Task Metrics
- Shuffle read & shuffle write → how much data is being moved & how big is input data
- “Total task time” = time for each task * number of tasks
- Shuffle read spill memory / shuffle spill disk = input data expansion in memory
- What kind of computation was run (python/streaming)
- Who was running the computation
How do we save the metrics?
- For a human: Spark history server can be persisted even if the cluster is
terminated
- For a program: Can use listeners to save everything from web UI
- (See Spark Measure and read these at the start of job)
- Can also get significantly more by capturing the full event stream
- We can save information at task/ stage/ job level.
Meet Robin Sparkles!!!!!
- Saves historical data using listeners from Spark Measure
- ( https://github.com/LucaCanali/sparkMeasure)
- Creates a directory scheme to save task and stage data for
successive runs
- Reads in n previous runs before creating the conf
- Uses the results of the previous runs in tandem with the current Spark
Conf to optimize some Spark settings (WIP)
- Note: not to be used in production, simply the start of a spell
book.
Let's go to the Mall!
Start a Spark listener and save metrics
Extend Spark Measure flight recorder which automatically saves metrics:
class RobinStageListener(sc: SparkContext, override val metricsFileName: String)
extends ch.cern.sparkmeasure.FlightRecorderStageMetrics(sc.getConf) {
}
Start Spark Listener
val myStageListener = new RobinStageListener(sc,
stageMetricsPath(runNumber))
sc.addSparkListener(myStageListener)
Add listener to program code
def run(sc: SparkContext, id: Int, metricsDir: String,
inputFile: String, outputFile: String): Unit = {
val metricsCollector = new MetricsCollector(sc, metricsDir)
metricsCollector.startSparkJobWithRecording(id)
//some app code
}
Read in the metrics
val STAGE_METRICS_SUBDIR = "stage_metrics"
val metricsDir = s"$metricsRootDir/${appName}"
val stageMetricsDir = s"$metricsDir/$STAGE_METRICS_SUBDIR"
def stageMetricsPath(n: Int): String = s"$metricsDir/run=$n"
def readStageInfo(n : Int) =
ch.cern.sparkmeasure.Utils.readSerializedStageMetrics(stageMetricsPath(n))
To use: read in metrics, then create optimized conf
val conf = new SparkConf() ….
val (newConf: SparkConf, id: Int) = Runner.getOptimizedConf(metricsDir, conf)
val sc = SparkContext.getOrCreate(newConf)
Runner.run(sc, id, metricsDir, inputFile, outputFile)
Put it all together!
A programmatic way of defining the number of
partitions …
(Chart: performance as a function of the number of partitions.)
Parallelism / number of partitions
Keep increasing the number of partitions until the metric we care about stops
improving
- spark.default.parallelism is only the default if the number of partitions is not set in code
- Different stages could / should have different partitions
- can also compute for each stage and then set in shuffle phase:
- rdd.reduceByKey(numPartitions = x)
- You can design your job to get the number of partitions from a variable in the conf (see the sketch after this list)
- Which stage to tune if we can only do one:
- We can search for the longest stage
- If we are tuning for parallelism, we might want to capture the stage “with the biggest shuffle”
- We use “total shuffle write” which is the number of bytes written in shuffle files during the stage
(this should be setting agnostic)
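One way to wire up reading the partition count from the conf, so it can be re-tuned without a code change; the key name spark.myapp.wordcount.partitions is made up, and wordPairs is the RDD from the earlier example.

// Fall back to the default parallelism if the key isn't set.
val wordCountPartitions = sc.getConf.getInt("spark.myapp.wordcount.partitions", sc.defaultParallelism)
val counts = wordPairs.reduceByKey(_ + _, wordCountPartitions)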
What metric defines better?
- Depends on your business … often a trade off between speed and efficiency
- If you just want speed Holden has several cloud solutions to sell you (or would if she knew
anyone in the sales department).
- Cluster is “fully utilized”
- Stage duration * executors ≈ sum(task time) + fudge factor (i.e. stage duration ≈ sum(task time) / executors)
- WARNING: stage duration depends on instance & executor spin-up, network delays, etcetera
- Could use anything that Spark measures:
- Executor Serialization time
- Deserialization time
- Executor Idle time
- I use “fastest stage” → executor CPU time (the total CPU time for all the
tasks)
Compute partitions given a list of metrics for the “biggest stage” across several runs
def fromStageMetricSharedCluster(previousRuns: List[StageInfo]): Int = {
  previousRuns match {
    case Nil =>
      // If this is the first run and parallelism is not provided, use the number of concurrent tasks
      // We could also look at the file on disk
      possibleConcurrentTasks()
    case first :: Nil =>
      val fromInputSize = determinePartitionsFromInputDataSize(first.totalInputSize)
      Math.max(first.numPartitionsUsed + math.max(first.numExecutors, 1), fromInputSize)
    // continued on the next slide for the case of several previous runs …
If we have several runs of the same computation
    case _ =>
      val first = previousRuns(previousRuns.length - 2)
      val second = previousRuns(previousRuns.length - 1)
      if (morePartitionsIsBetter(first, second)) {
        // Increase the number of partitions beyond everything that we have tried
        Seq(first.numPartitionsUsed, second.numPartitionsUsed).max + second.numExecutors
      } else {
        // If we overshot the number of partitions, use whichever run had the best executor CPU time
        previousRuns.sortBy(_.executorCPUTime).head.numPartitionsUsed
      }
  }
}
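The code above relies on a morePartitionsIsBetter helper that isn't shown on these slides; one plausible sketch, judging “better” by executor CPU time like the rest of the example, might be:

// True if the run with more partitions also had the lower total executor CPU time.
def morePartitionsIsBetter(first: StageInfo, second: StageInfo): Boolean = {
  val (fewer, more) =
    if (first.numPartitionsUsed <= second.numPartitionsUsed) (first, second)
    else (second, first)
  more.executorCPUTime < fewer.executorCPUTime
}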
How to use Robin-Sparkles
- Assume user constructs the Spark Context
- Configure listener to write to a persistent location
- It may need to be in an external location if you are using a web-based system
- Modify the application code that creates the Spark context to use Robin
Sparkles to set the settings
Note: not applicable where the Spark context is created for you, except for
tuning the number of partitions or other variables that can change at the job (rather than
application) level
She could do so much more!
- Additional settings to tune
- Set executor memory and driver memory based on system
- Executor health and system monitoring
- Could surface warnings about data skew
- Shuffle settings could be adjusted
- Currently we don’t get info if the stage fails
- Our reader would actually fail if stage data is malformed; we should read partial stage data
- We could use the firehose listener that records every event from the event stream, to monitor
failures
- These other listeners would contain additional information that is not in the web UI
But first a nap
Donald Lee Pardue
Other Tools
Sometimes the right answer isn’t tuning, it’s telling the user to change the code
(see Sparklens) or telling the administrator to look at their cluster
Melinda Seckington
Sparklens - Tells us what to tune or refactor
● AppTimelineAnalyzer
● EfficiencyStatisticsAnalyzer
● ExecutorTimelineAnalyzer
● ExecutorWallclockAnalyzer
● HostTimelineAnalyzer
● JobOverlapAnalyzer
● SimpleAppAnalyzer
● StageOverlapAnalyzer
● StageSkewAnalyzer
Kitty Terwolbeck
Dr Elephant
● Spark ConfigurationHeuristic
○ val SPARK_DRIVER_MEMORY_KEY = "spark.driver.memory"
○ val SPARK_EXECUTOR_MEMORY_KEY = "spark.executor.memory"
○ val SPARK_EXECUTOR_INSTANCES_KEY = "spark.executor.instances"
○ val SPARK_EXECUTOR_CORES_KEY = "spark.executor.cores"
○ val SPARK_SERIALIZER_KEY = "spark.serializer"
○ val SPARK_APPLICATION_DURATION = "spark.application.duration"
○ val SPARK_SHUFFLE_SERVICE_ENABLED = "spark.shuffle.service.enabled"
○ val SPARK_DYNAMIC_ALLOCATION_ENABLED = "spark.dynamicAllocation.enabled"
○ val SPARK_DRIVER_CORES_KEY = "spark.driver.cores"
○ val SPARK_DYNAMIC_ALLOCATION_MIN_EXECUTORS =
"spark.dynamicAllocation.minExecutors"
○ etc.
● Among other things
Other monitoring/linting/tuning tools
● Kamon
● Sparklint
● Sparkoscope
● Graphite + Spark
Jakub Szestowicki
Upcoming settings in 2.4 (probably)
spark.executor.instances used with dynamic allocation for Spark Streaming (zomg!)
spark.executor.pyspark.memory - hey what if we told Python how much memory
to use?
Some upcoming talks
● Lambda World Seattle - Bringing the Jewels of Python to Scala
● Spark Summit London
● Reversim Tel Aviv (keynote)
● PyCon Canada Toronto (keynote)
● ScalaX London
● Big Data Spain
High Performance Spark!
You can buy it today! And come to our book signing at 2:30.
The settings didn’t get their own chapter; they’re in the appendix (doing things on time is hard)
Cats love it*
*Or at least the box it comes in. If buying for a cat, get print rather than e-book.
Thanks and keep in touch!
Discussion and slides - http://bit.ly/2CNcpUQ :D

More Related Content

What's hot

Emr spark tuning demystified
Emr spark tuning demystifiedEmr spark tuning demystified
Emr spark tuning demystifiedOmid Vahdaty
 
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...Richard Seymour
 
Spark introduction and architecture
Spark introduction and architectureSpark introduction and architecture
Spark introduction and architectureSohil Jain
 
Scaling Apache Spark at Facebook
Scaling Apache Spark at FacebookScaling Apache Spark at Facebook
Scaling Apache Spark at FacebookDatabricks
 
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuAn Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuDatabricks
 
Apache Spark Performance: Past, Future and Present
Apache Spark Performance: Past, Future and PresentApache Spark Performance: Past, Future and Present
Apache Spark Performance: Past, Future and PresentDatabricks
 
Enterprise Scale Topological Data Analysis Using Spark
Enterprise Scale Topological Data Analysis Using SparkEnterprise Scale Topological Data Analysis Using Spark
Enterprise Scale Topological Data Analysis Using SparkAlpine Data
 
Apache Spark Core – Practical Optimization
Apache Spark Core – Practical OptimizationApache Spark Core – Practical Optimization
Apache Spark Core – Practical OptimizationDatabricks
 
Improving Apache Spark by Taking Advantage of Disaggregated Architecture
Improving Apache Spark by Taking Advantage of Disaggregated ArchitectureImproving Apache Spark by Taking Advantage of Disaggregated Architecture
Improving Apache Spark by Taking Advantage of Disaggregated ArchitectureDatabricks
 
Jags Ramnarayan's presentation
Jags Ramnarayan's presentationJags Ramnarayan's presentation
Jags Ramnarayan's presentationpunesparkmeetup
 
Stories About Spark, HPC and Barcelona by Jordi Torres
Stories About Spark, HPC and Barcelona by Jordi TorresStories About Spark, HPC and Barcelona by Jordi Torres
Stories About Spark, HPC and Barcelona by Jordi TorresSpark Summit
 
Solving low latency query over big data with Spark SQL
Solving low latency query over big data with Spark SQLSolving low latency query over big data with Spark SQL
Solving low latency query over big data with Spark SQLJulien Pierre
 
Deep Learning on Apache® Spark™ : Workflows and Best Practices
Deep Learning on Apache® Spark™ : Workflows and Best PracticesDeep Learning on Apache® Spark™ : Workflows and Best Practices
Deep Learning on Apache® Spark™ : Workflows and Best PracticesJen Aman
 
Fast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonFast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonBenjamin Bengfort
 
Spark-on-YARN: Empower Spark Applications on Hadoop Cluster
Spark-on-YARN: Empower Spark Applications on Hadoop ClusterSpark-on-YARN: Empower Spark Applications on Hadoop Cluster
Spark-on-YARN: Empower Spark Applications on Hadoop ClusterDataWorks Summit
 
Have your Cake and Eat it Too - Architecture for Batch and Real-time processing
Have your Cake and Eat it Too - Architecture for Batch and Real-time processingHave your Cake and Eat it Too - Architecture for Batch and Real-time processing
Have your Cake and Eat it Too - Architecture for Batch and Real-time processingDataWorks Summit
 
Accelerating Data Processing in Spark SQL with Pandas UDFs
Accelerating Data Processing in Spark SQL with Pandas UDFsAccelerating Data Processing in Spark SQL with Pandas UDFs
Accelerating Data Processing in Spark SQL with Pandas UDFsDatabricks
 
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...Spark Summit
 
Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Scott Leberknight
 

What's hot (20)

Emr spark tuning demystified
Emr spark tuning demystifiedEmr spark tuning demystified
Emr spark tuning demystified
 
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...
Rapid Prototyping in PySpark Streaming: The Thermodynamics of Docker Containe...
 
Spark introduction and architecture
Spark introduction and architectureSpark introduction and architecture
Spark introduction and architecture
 
Scaling Apache Spark at Facebook
Scaling Apache Spark at FacebookScaling Apache Spark at Facebook
Scaling Apache Spark at Facebook
 
Incredible Impala
Incredible Impala Incredible Impala
Incredible Impala
 
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai YuAn Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
An Adaptive Execution Engine for Apache Spark with Carson Wang and Yucai Yu
 
Apache Spark Performance: Past, Future and Present
Apache Spark Performance: Past, Future and PresentApache Spark Performance: Past, Future and Present
Apache Spark Performance: Past, Future and Present
 
Enterprise Scale Topological Data Analysis Using Spark
Enterprise Scale Topological Data Analysis Using SparkEnterprise Scale Topological Data Analysis Using Spark
Enterprise Scale Topological Data Analysis Using Spark
 
Apache Spark Core – Practical Optimization
Apache Spark Core – Practical OptimizationApache Spark Core – Practical Optimization
Apache Spark Core – Practical Optimization
 
Improving Apache Spark by Taking Advantage of Disaggregated Architecture
Improving Apache Spark by Taking Advantage of Disaggregated ArchitectureImproving Apache Spark by Taking Advantage of Disaggregated Architecture
Improving Apache Spark by Taking Advantage of Disaggregated Architecture
 
Jags Ramnarayan's presentation
Jags Ramnarayan's presentationJags Ramnarayan's presentation
Jags Ramnarayan's presentation
 
Stories About Spark, HPC and Barcelona by Jordi Torres
Stories About Spark, HPC and Barcelona by Jordi TorresStories About Spark, HPC and Barcelona by Jordi Torres
Stories About Spark, HPC and Barcelona by Jordi Torres
 
Solving low latency query over big data with Spark SQL
Solving low latency query over big data with Spark SQLSolving low latency query over big data with Spark SQL
Solving low latency query over big data with Spark SQL
 
Deep Learning on Apache® Spark™ : Workflows and Best Practices
Deep Learning on Apache® Spark™ : Workflows and Best PracticesDeep Learning on Apache® Spark™ : Workflows and Best Practices
Deep Learning on Apache® Spark™ : Workflows and Best Practices
 
Fast Data Analytics with Spark and Python
Fast Data Analytics with Spark and PythonFast Data Analytics with Spark and Python
Fast Data Analytics with Spark and Python
 
Spark-on-YARN: Empower Spark Applications on Hadoop Cluster
Spark-on-YARN: Empower Spark Applications on Hadoop ClusterSpark-on-YARN: Empower Spark Applications on Hadoop Cluster
Spark-on-YARN: Empower Spark Applications on Hadoop Cluster
 
Have your Cake and Eat it Too - Architecture for Batch and Real-time processing
Have your Cake and Eat it Too - Architecture for Batch and Real-time processingHave your Cake and Eat it Too - Architecture for Batch and Real-time processing
Have your Cake and Eat it Too - Architecture for Batch and Real-time processing
 
Accelerating Data Processing in Spark SQL with Pandas UDFs
Accelerating Data Processing in Spark SQL with Pandas UDFsAccelerating Data Processing in Spark SQL with Pandas UDFs
Accelerating Data Processing in Spark SQL with Pandas UDFs
 
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...
Indicium: Interactive Querying at Scale Using Apache Spark, Zeppelin, and Spa...
 
Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0
 

Similar to Understanding Spark Tuning: Strata New York

Spark autotuning talk final
Spark autotuning talk finalSpark autotuning talk final
Spark autotuning talk finalRachel Warren
 
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...Spark Summit
 
Tuning tips for Apache Spark Jobs
Tuning tips for Apache Spark JobsTuning tips for Apache Spark Jobs
Tuning tips for Apache Spark JobsSamir Bessalah
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Olalekan Fuad Elesin
 
Big Data and Hadoop in Cloud - Leveraging Amazon EMR
Big Data and Hadoop in Cloud - Leveraging Amazon EMRBig Data and Hadoop in Cloud - Leveraging Amazon EMR
Big Data and Hadoop in Cloud - Leveraging Amazon EMRVijay Rayapati
 
Spark Performance Tuning .pdf
Spark Performance Tuning .pdfSpark Performance Tuning .pdf
Spark Performance Tuning .pdfAmit Raj
 
Apache Spark Performance tuning and Best Practise
Apache Spark Performance tuning and Best PractiseApache Spark Performance tuning and Best Practise
Apache Spark Performance tuning and Best PractiseKnoldus Inc.
 
Go performance tooling
Go performance toolingGo performance tooling
Go performance toolingAdil Hafeez
 
Architecting and productionising data science applications at scale
Architecting and productionising data science applications at scaleArchitecting and productionising data science applications at scale
Architecting and productionising data science applications at scalesamthemonad
 
Caching and tuning fun for high scalability @ FrOSCon 2011
Caching and tuning fun for high scalability @ FrOSCon 2011Caching and tuning fun for high scalability @ FrOSCon 2011
Caching and tuning fun for high scalability @ FrOSCon 2011Wim Godden
 
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018Holden Karau
 
Building Apache Cassandra clusters for massive scale
Building Apache Cassandra clusters for massive scaleBuilding Apache Cassandra clusters for massive scale
Building Apache Cassandra clusters for massive scaleAlex Thompson
 
Explore big data at speed of thought with Spark 2.0 and Snappydata
Explore big data at speed of thought with Spark 2.0 and SnappydataExplore big data at speed of thought with Spark 2.0 and Snappydata
Explore big data at speed of thought with Spark 2.0 and SnappydataData Con LA
 
How Many Slaves (Ukoug)
How Many Slaves (Ukoug)How Many Slaves (Ukoug)
How Many Slaves (Ukoug)Doug Burns
 
Caching and tuning fun for high scalability @ phpBenelux 2011
Caching and tuning fun for high scalability @ phpBenelux 2011Caching and tuning fun for high scalability @ phpBenelux 2011
Caching and tuning fun for high scalability @ phpBenelux 2011Wim Godden
 
Migrating and living on rds aurora
Migrating and living on rds auroraMigrating and living on rds aurora
Migrating and living on rds auroraBalazs Pocze
 
Fundamentals of performance tuning PHP on IBM i
Fundamentals of performance tuning PHP on IBM i  Fundamentals of performance tuning PHP on IBM i
Fundamentals of performance tuning PHP on IBM i Zend by Rogue Wave Software
 
How Adobe Does 2 Million Records Per Second Using Apache Spark!
How Adobe Does 2 Million Records Per Second Using Apache Spark!How Adobe Does 2 Million Records Per Second Using Apache Spark!
How Adobe Does 2 Million Records Per Second Using Apache Spark!Databricks
 

Similar to Understanding Spark Tuning: Strata New York (20)

Spark autotuning talk final
Spark autotuning talk finalSpark autotuning talk final
Spark autotuning talk final
 
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...
Fault Tolerance in Spark: Lessons Learned from Production: Spark Summit East ...
 
Tuning tips for Apache Spark Jobs
Tuning tips for Apache Spark JobsTuning tips for Apache Spark Jobs
Tuning tips for Apache Spark Jobs
 
Spark 101
Spark 101Spark 101
Spark 101
 
Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2 Introduction to Apache Spark :: Lagos Scala Meetup session 2
Introduction to Apache Spark :: Lagos Scala Meetup session 2
 
Big Data and Hadoop in Cloud - Leveraging Amazon EMR
Big Data and Hadoop in Cloud - Leveraging Amazon EMRBig Data and Hadoop in Cloud - Leveraging Amazon EMR
Big Data and Hadoop in Cloud - Leveraging Amazon EMR
 
Spark Performance Tuning .pdf
Spark Performance Tuning .pdfSpark Performance Tuning .pdf
Spark Performance Tuning .pdf
 
Apache Spark Performance tuning and Best Practise
Apache Spark Performance tuning and Best PractiseApache Spark Performance tuning and Best Practise
Apache Spark Performance tuning and Best Practise
 
Go performance tooling
Go performance toolingGo performance tooling
Go performance tooling
 
Architecting and productionising data science applications at scale
Architecting and productionising data science applications at scaleArchitecting and productionising data science applications at scale
Architecting and productionising data science applications at scale
 
Caching and tuning fun for high scalability @ FrOSCon 2011
Caching and tuning fun for high scalability @ FrOSCon 2011Caching and tuning fun for high scalability @ FrOSCon 2011
Caching and tuning fun for high scalability @ FrOSCon 2011
 
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018
Apache spark as a gateway drug to FP concepts taught and broken - Curry On 2018
 
Building Apache Cassandra clusters for massive scale
Building Apache Cassandra clusters for massive scaleBuilding Apache Cassandra clusters for massive scale
Building Apache Cassandra clusters for massive scale
 
Explore big data at speed of thought with Spark 2.0 and Snappydata
Explore big data at speed of thought with Spark 2.0 and SnappydataExplore big data at speed of thought with Spark 2.0 and Snappydata
Explore big data at speed of thought with Spark 2.0 and Snappydata
 
How Many Slaves (Ukoug)
How Many Slaves (Ukoug)How Many Slaves (Ukoug)
How Many Slaves (Ukoug)
 
Vmfs
VmfsVmfs
Vmfs
 
Caching and tuning fun for high scalability @ phpBenelux 2011
Caching and tuning fun for high scalability @ phpBenelux 2011Caching and tuning fun for high scalability @ phpBenelux 2011
Caching and tuning fun for high scalability @ phpBenelux 2011
 
Migrating and living on rds aurora
Migrating and living on rds auroraMigrating and living on rds aurora
Migrating and living on rds aurora
 
Fundamentals of performance tuning PHP on IBM i
Fundamentals of performance tuning PHP on IBM i  Fundamentals of performance tuning PHP on IBM i
Fundamentals of performance tuning PHP on IBM i
 
How Adobe Does 2 Million Records Per Second Using Apache Spark!
How Adobe Does 2 Million Records Per Second Using Apache Spark!How Adobe Does 2 Million Records Per Second Using Apache Spark!
How Adobe Does 2 Million Records Per Second Using Apache Spark!
 

Recently uploaded

UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"DianaGray10
 
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesAI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesMd Hossain Ali
 
Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsSeth Reyes
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UbiTrack UK
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?IES VE
 
UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6DianaGray10
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioChristian Posta
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
99.99% of Your Traces Are (Probably) Trash (SRECon NA 2024).pdf
99.99% of Your Traces  Are (Probably) Trash (SRECon NA 2024).pdf99.99% of Your Traces  Are (Probably) Trash (SRECon NA 2024).pdf
99.99% of Your Traces Are (Probably) Trash (SRECon NA 2024).pdfPaige Cruz
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding TeamAdam Moalla
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Websitedgelyza
 
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IES VE
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXTarek Kalaji
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxGDSC PJATK
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintMahmoud Rabie
 
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDELiveplex
 
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...Aggregage
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1DianaGray10
 

Recently uploaded (20)

UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
UiPath Clipboard AI: "A TIME Magazine Best Invention of 2023 Unveiled"
 
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just MinutesAI Fame Rush Review – Virtual Influencer Creation In Just Minutes
AI Fame Rush Review – Virtual Influencer Creation In Just Minutes
 
Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and Hazards
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
 
How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?How Accurate are Carbon Emissions Projections?
How Accurate are Carbon Emissions Projections?
 
UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6UiPath Studio Web workshop series - Day 6
UiPath Studio Web workshop series - Day 6
 
20230104 - machine vision
20230104 - machine vision20230104 - machine vision
20230104 - machine vision
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and Istio
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
99.99% of Your Traces Are (Probably) Trash (SRECon NA 2024).pdf
99.99% of Your Traces  Are (Probably) Trash (SRECon NA 2024).pdf99.99% of Your Traces  Are (Probably) Trash (SRECon NA 2024).pdf
99.99% of Your Traces Are (Probably) Trash (SRECon NA 2024).pdf
 
9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team9 Steps For Building Winning Founding Team
9 Steps For Building Winning Founding Team
 
COMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a WebsiteCOMPUTER 10 Lesson 8 - Building a Website
COMPUTER 10 Lesson 8 - Building a Website
 
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
IESVE Software for Florida Code Compliance Using ASHRAE 90.1-2019
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBX
 
Cybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptxCybersecurity Workshop #1.pptx
Cybersecurity Workshop #1.pptx
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership Blueprint
 
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDEADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
ADOPTING WEB 3 FOR YOUR BUSINESS: A STEP-BY-STEP GUIDE
 
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
The Data Metaverse: Unpacking the Roles, Use Cases, and Tech Trends in Data a...
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
 

Understanding Spark Tuning: Strata New York

  • 1. Understanding Spark Tuning (Magical spells to stop your pager going off at 2:00am) Holden Karau, Rachel Warren
  • 2. Rachel - Rachel Warren → She/ Her - Data Scientist / Software engineer at Salesforce Einstein - Formerly at Alpine Data (with Holden) - Lots of experience scaling Spark in different production environments - The other half of the High Performance Spark team :) - @warre_n_peace - Linked in: https://www.linkedin.com/in/rachelbwarren/ - Slideshare: https://www.slideshare.net/RachelWarren4/ - Github: https://github.com/rachelwarren
  • 3. Holden: ● My name is Holden Karau ● Prefered pronouns are she/her ● Developer Advocate at Google ● Apache Spark PMC :) ● previously IBM, Alpine, Databricks, Google, Foursquare & Amazon ● co-author of Learning Spark & High Performance Spark ● @holdenkarau ● Slide share http://www.slideshare.net/hkarau ● Code review livestreams & live coding: https://www.twitch.tv/holdenkarau / https://www.youtube.com/user/holdenkarau ● Github https://github.com/holdenk ● Spark Videos http://bit.ly/holdenSparkVideos ● Talk feedback: http://bit.ly/holdenTalkFeedback http://bit.ly/holdenTalkFeedback
  • 5. Who we think you wonderful humans are? ● Nice enough people ● I’m sure you love pictures of cats ● Might know something about using Spark, or are using it in production ● Maybe sys-admin or developer ● Are tired of spending so much time fussing with Spark Settings to get jobs to run Lori Erickson
  • 6. The goal of this talk is to give you the resources to programatically tune your Spark jobs so that they run consistently and efficiently In terms of and $$$$$
  • 7. What we will cover? - A run down of the most important settings - Getting the most out of Spark’s built-in Auto Tuner Options dynamic allocation - A few examples of errors and performance problems that can be addressed by tuning - A job can go out of tune over time as the world and it changes, much like Holden’s Vespa. - How to tune jobs “statically” e.g. without historical data - How to collect historical data - An example - The latest and greatest auto tuner tools
  • 8. I can haz application :p val conf = new SparkConf() .setMaster("local") .setAppName("my_awesome_app") val sc = SparkContext.getOrCreate(newConf) val rdd = sc.textFile(inputFile) val words: RDD[String] = rdd.flatMap(_.split(“ “). map(_.trim.toLowerCase)) val wordPairs = words.map((_, 1)) val wordCounts = wordPairs.reduceByKey(_ + _) wordCount.saveAsTextFile(outputFile) Trish Hamme Settings go here This is a shuffle
  • 9. I can haz application :p val conf = new SparkConf() .setMaster("local") .setAppName("my_awesome_app") val sc = SparkContext.getOrCreate(newConf) val rdd = sc.textFile(inputFile) val words: RDD[String] = rdd.flatMap(_.split(“ “). map(_.trim.toLowerCase)) val wordPairs = words.map((_, 1)) val wordCounts = wordPairs.reduceByKey(_ + _) wordCount.saveAsTextFile(outputFile) Trish Hamme Start of application End Stage 1 Stage 2 Action, Launches Job
  • 10. Spark Execution Environment Other App My Spark App - Node can have several executors - But on executor can only be on one node - Executors have same amount of memory and cores - One task per core - Task is the compute for one partition - Rdd is distributed set of partitions
  • 11. How many resources to give my application? ● Spark.executor.memory ● Spark.driver.memory ● spark.executor.vcores ● Enable dynamic allocation ○ (or set # number of executors) val conf = new SparkConf() .setMaster("local") .setAppName("my_awesome_app") .set("spark.executor.memory", ???) .set("spark.driver.memory", ???) .set("spark.executor.vcores", ???)
  • 12. Executor and Driver Memory - Driver Memory - As small as it can be without failing (but that can be pretty big) - Executor memory + overhead < less than the size of the container - Think about binning - if you have 12 gig nodes making an 8 gig executor is maybe silly - Pros of Fewer Larger Executors Per Node - Maybe less likely to oom - Pros of More Small Executors - Some people report slow down with more than 5ish cores … (more on that later) - If using dynamic allocation easier to “scale up” on a busy cluster
  • 13. Vcores - Remember 1 core = 1 task*. So number of concurrent tasks is limited by total cores - In HDFS too many cores per executor may cause issue with concurrent hdfs threads maybe? - 1 core per executor takes away some benefit of things like broadcast variables - Think about “burning” cpu and memory equally - If you have 60Gb ram & 10 core nodes, making default executor size 30 gb but with ten cores maybe not so great
  • 14. How To Enable Dynamic Allocation Dynamic Allocation allows Spark to add and subtract executors between Jobs over the course of an application - To configure - spark.dynamicAllocation.enabled=true - spark.shuffle.service.enabled=true (you have to configure external shuffle service on each worker) - spark.dynamicAllocation.minExecutors - spark.dynamicAllocation.maxExecutors - spark.dynamicAllocation.initialExecutors - To Adjust - Spark will add executors when there are pending tasks (spark.dynamicAllocation.schedulerBacklogTimeout) - and exponentially increase them as long as tasks in the backlog persist (spark...sustainedSchedulerBacklogTimeout) - Executors are decommissioned when they have been idle for spark.dynamicAllocation.executorIdleTimeout
  • 15. Why To Enable Dynamic Allocation When - Most important for shared or cost sensitive environments - Good when an application contains several jobs of differing sizes - The only real way to adjust resources throughout an application Improvements - If jobs are very short adjust the timeouts to be shorter - For jobs that you know are large start with a higher number of initial executors to avoid slow spin up - If you are sharing a cluster, setting max executors can prevent you from hogging it
  • 17. Oh no! It failed :( How could we adjust it? Suppose that in the driver log, we see a “container lost exception” and on the executor logs we see: java.lang.OutOfMemoryError: Java heap space Or: Application Memory Overhead Exceeded This points to an out of memory error on the executors hkase
  • 18. Addressing Executor OOM - If we have more executor memory to give it, try that! - Lets try increasing the number of partitions so that each executor will process smaller pieces of the data at once - Spark.default.parallelism = 10 - Or by adding the number of partitions to the code e.g. reduceByKey(numPartitions = 10) - Many more things you can do to improve the code
  • 19. Low Cluster Utilization: Idle Executors Susanne Nilsson
  • 20. What to do about it? - If we see idle executors but the total size of our job is small, we may just be requesting too many executors - If all executors are idle it maybe because we are doing a large computation in the driver - If the computation is very large, and we see idle executors, this maybe because the executors are waiting for a “large” task → so we can increase partitions - At some point adding partitions will slow the job down - But only if not too much skew Toshiyuki IMAI
  • 21. Shuffle Spill to Disk/ Slow Shuffle Fung0131
  • 22. Preventing Shuffle Spill to Disk - Larger executors - Configure off heap storage - More partitions can help (ideally the labor of all the partitions on one executor can “fit” in that executor’s memory) - We can adjust shuffle settings - Increase shuffle memory fraction (spark.shuffle.memory.fraction) - Try increasing: - spark.shuffle.file.buffer - Configure an external shuffle service, so that the shuffle files will not need to be stored in the spark executors - spark.shuffle.io.serverThreads - spark.shuffle.io.backLog jaci XIII
  • 23. Signs of Too Many Partitions Number of partitions is the size of the data each core is computing … smaller pieces are easier to process only up to a point - Spark needs to keep metadata about each partition on the driver - Driver memory errors & driver overhead errors - Very long task “spin up” time - Too many partitions at read usually caused by small part files - Lots of pending tasks & Low memory utilization - Long file write time for relatively small I/O “size” (especially with blockstores) Dorian Wallender
  • 24. PYTHON SETTINGS ● Application memory overhead ○ We can tune this based on if an app is PySpark or not ○ In the proposed PySpark on K8s PR this is done for us ○ More tuning may still be required ● Buffers & batch sizes oh my ○ spark.sql.execution.arrow.maxRecordsPerBatch ○ spark.python.worker.memory - default to 512 but default mem for Python can be lower :( ■ Set based on amount memory assigned to Python to reduce OOMs ○ Normal: automatic, sometimes set wrong - code change required :( Nessima E.
  • 25. Tuning Can Help With - Low cluster utilization ($$$$) - Out of memory errors - Spill to Disk / Slow shuffles - GC errors / High GC overhead - Driver / Executor overhead exceptions - Reliable deployment Melinda Seckington
  • 26. Things you can’t “tune away” - Silly Shuffles - You can make each shuffle faster through tuning but you cannot change the fundamental size or number of shuffles without adjusting code - Think about how much you are moving data, and how to do it more efficiently - Be very careful with column based operations on Spark SQL - Unbalanced shuffles (caused by unbalanced keys) - if one or two tasks are much larger than the others - #Partitions is bounded by #distinct keys - Bad Object management/ Data structures choices - Can lead to memory exceptions / memory overhead exceptions / gc errors - Serialization exceptions* - *Exceptions due to cost of serialization can be tuned - Python - This makes Holden sad, buuuut. Arrow? Neil Piddock
  • 27. But enough doom, lets watch (our new) software fail. Christian Walther
  • 28. But what if we run a billion Spark jobs per day?
  • 29. Tuning Could Depend on 4 Factors: 1. The execution environment 2. The size of the input data 3. The kind of computation (ML, python, streaming) _______________________________ 1. The historical runs of that job We can get all this information programmatically!
  • 30. Execution Environment What we need to know about where the job will run - How much memory do I have available to me: on a single node/ on the cluster - In my queue - How much CPU: on a single node, on the cluster → corresponds to total number of concurrent tasks - We can get all this information from the yarn api (https://dzone.com/articles/how-to-use-the-yarn- api-to-determine-resources-ava-1) - Can I configure dynamic allocation? - Cluster Health
  • 31. Size of the input data - How big is the input data on Disk - How big will it be in memory? - “Coefficient of In Memory Expansion: shuffle Spill Memory / shuffle Spill disk - Can get historically or guess - How many part files? (the default number of partitions at read) - Cardinality and type?
  • 32. Setting # partitions on the first try Assuming you have many distinct keys, you want to try to make partitions small enough that each partition fits in the memory “available to each task” to avoid spilling to disk or failing See Sandy Ryza’s very much sighted post https://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark- jobs-part-2/ Size of input data in memory Amount of memory available per task on the executors This bite is too big Child = executor with one core Partition of melon
  • 33. How much memory should each task use?
def availableTaskMemoryMB(executorMemory: Long): Double = {
  // Fraction of the heap Spark manages for execution + storage (default 0.6)
  val memFraction = sparkConf.getDouble("spark.memory.fraction", 0.6)
  // Share of that fraction reserved for storage/caching (default 0.5)
  val storageFraction = sparkConf.getDouble("spark.memory.storageFraction", 0.5)
  val nonStorage = 1 - storageFraction
  val cores = sparkConf.getInt("spark.executor.cores", 1)
  // Execution memory available to each concurrently running task
  Math.ceil((executorMemory * memFraction * nonStorage) / cores)
}
  • 34. Partitions could be inputDataSize / memory per task
def determinePartitionsFromInputDataSize(inputDataSize: Double): Int = {
  // Executor heap size (in MB) from the conf, 1g if unset
  val executorMemoryMB = sparkConf.getSizeAsMb("spark.executor.memory", "1g")
  Math.round(inputDataSize / availableTaskMemoryMB(executorMemoryMB)).toInt
}
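A short usage sketch tying the two helpers together; the 10,000 MB in-memory estimate and keyedRdd are made-up stand-ins, not values from the talk.

// Sketch: feed an estimated in-memory input size into the partition helper
// and use the result for a shuffle. inMemoryInputMB is a hypothetical value.
val inMemoryInputMB = 10000.0
val partitions = determinePartitionsFromInputDataSize(inMemoryInputMB)
val counts = keyedRdd.reduceByKey(_ + _, partitions)  // keyedRdd: RDD[(String, Int)], assumed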
  • 36. What kinds of historical data could we use? - Historical cluster info - (was one node sad? How many executors were used?) - Stage Metrics - duration (only on an isolated cluster) - executor CPU - vcores/Second - Task Metrics - Shuffle read & shuffle write → how much data is being moved & how big is input data - “Total task time” = time for each task * number of tasks - Shuffle read spill memory / shuffle spill disk = input data expansion in memory - What kind of computation was run (python/streaming) - Who was running the computation
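Two of the derived numbers above, sketched against a hypothetical run summary; the inputs are assumed to come from the saved listener data, not from Spark's own TaskMetrics API.

// "Total task time" and the in-memory expansion coefficient from saved metrics.
def totalTaskTimeMs(taskDurationsMs: Seq[Long]): Long = taskDurationsMs.sum

def inMemoryExpansion(shuffleSpillMemoryBytes: Long, shuffleSpillDiskBytes: Long): Double =
  if (shuffleSpillDiskBytes == 0L) 1.0  // nothing spilled, so no measurement
  else shuffleSpillMemoryBytes.toDouble / shuffleSpillDiskBytes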
  • 37. How do we save the metrics? - For a human: the Spark history server can be persisted even if the cluster is terminated - For a program: we can use listeners to save everything from the web UI - (See Spark Measure, and read these at the start of the job) - Can also get significantly more by capturing the full event stream - We can save information at the task / stage / job level.
  • 38. Meet Robin Sparkles!!!!! - Saves historical data using listeners from Spark Measure - ( https://github.com/LucaCanali/sparkMeasure) - Create directory scheme to save task and stage data for successive runs - Read in n previous runs before creating conf - Use the result of the previous runs in tandem with current Spark Conf to Optimize some Spark Settings. (WIP) - Note: not to be used in production, simply the start of a spell book. Lets go to the Mall!
  • 39. Start a Spark listener and save metrics Extend the Spark Measure flight recorder, which automatically saves metrics:
class RobinStageListener(sc: SparkContext, override val metricsFileName: String)
  extends ch.cern.sparkmeasure.FlightRecorderStageMetrics(sc.getConf) {
}
Start the Spark listener:
val myStageListener = new RobinStageListener(sc, stageMetricsPath(runNumber))
sc.addSparkListener(myStageListener)
  • 40. Add listener to program code
def run(sc: SparkContext, id: Int, metricsDir: String, inputFile: String,
        outputFile: String): Unit = {
  val metricsCollector = new MetricsCollector(sc, metricsDir)
  metricsCollector.startSparkJobWithRecording(id)
  // some app code
}
  • 41. Read in the metrics
val STAGE_METRICS_SUBDIR = "stage_metrics"
val metricsDir = s"$metricsRootDir/${appName}"
val stageMetricsDir = s"$metricsDir/$STAGE_METRICS_SUBDIR"
def stageMetricsPath(n: Int): String = s"$stageMetricsDir/run=$n"
def readStageInfo(n: Int) =
  ch.cern.sparkmeasure.Utils.readSerializedStageMetrics(stageMetricsPath(n))
  • 42. To use: read in metrics, then create optimized conf
val conf = new SparkConf() ….
val (newConf: SparkConf, id: Int) = Runner.getOptimizedConf(metricsDir, conf)
val sc = SparkContext.getOrCreate(newConf)
Runner.run(sc, id, metricsDir, inputFile, outputFile)
  • 43. Put it all together! A programmatic way of defining the number of partitions … [Chart: performance vs. number of partitions]
  • 44. Parallelism / number of partitions Keep increasing the number of partitions until the metric we care about stops improving - spark.default.parallelism is only the default if the number of partitions isn’t set in code - Different stages could / should have different partition counts - Can also compute for each stage and then set it in the shuffle phase: - rdd.reduceByKey(_ + _, numPartitions = x) - You can design your job to get the number of partitions from a variable in the conf (see the sketch below) - Which stage to tune if we can only do one: - We can search for the longest stage - If we are tuning for parallelism, we might want to capture the stage “with the biggest shuffle” - We use “total shuffle write”, the number of bytes written to shuffle files during the stage (this should be setting agnostic)
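One way to make the partition count tunable without a code change, as suggested above: read it from the conf with a sensible fallback. The key name below is a made-up application setting, not a Spark one.

// Sketch: let a tuner (or a human) override partition counts per stage via the conf.
// "myapp.wordcount.partitions" is a hypothetical application key.
val shufflePartitions =
  sc.getConf.getInt("myapp.wordcount.partitions", sc.defaultParallelism)
// ...then pass shufflePartitions to the wide transformation for that stage,
// e.g. reduceByKey(_ + _, shufflePartitions)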
  • 45. What metric defines better? - Depends on your business … often a trade-off between speed and efficiency - If you just want speed, Holden has several cloud solutions to sell you (or would, if she knew anyone in the sales department). - Cluster is “fully utilized”: stage duration * executors ≈ sum(task time) + fudge factor - WARNING: stage duration depends on instance & executor spin up, network delays, etcetera - Could use anything that Spark measures: - Executor serialization time - Deserialization time - Executor idle time - I use “stage is fastest” → executor CPU time (total CPU time for all the tasks)
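The comparison the next slides rely on could look roughly like this; StageInfo here is Robin Sparkles' own run summary (fields taken from the slides), not org.apache.spark.scheduler.StageInfo, and this is a sketch rather than the project's actual implementation.

// Sketch: the run "with more partitions is better" if its biggest stage burned
// less total executor CPU time than the run with fewer partitions.
def morePartitionsIsBetter(first: StageInfo, second: StageInfo): Boolean = {
  val (fewer, more) =
    if (first.numPartitionsUsed <= second.numPartitionsUsed) (first, second)
    else (second, first)
  more.executorCPUTime < fewer.executorCPUTime
}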
  • 46. Compute partitions given a list of metrics for the “biggest stage” across several runs
def fromStageMetricSharedCluster(previousRuns: List[StageInfo]): Int = {
  previousRuns match {
    case Nil =>
      // If this is the first run and parallelism is not provided, use the number
      // of concurrent tasks. We could also look at the file on disk.
      possibleConcurrentTasks()
    case first :: Nil =>
      val fromInputSize = determinePartitionsFromInputDataSize(first.totalInputSize)
      Math.max(first.numPartitionsUsed + math.max(first.numExecutors, 1), fromInputSize)
    // match continues on the next slide
  • 47. If we have several runs of the same computation
    case _ =>
      val first = previousRuns(previousRuns.length - 2)
      val second = previousRuns(previousRuns.length - 1)
      if (morePartitionsIsBetter(first, second)) {
        // Increase the number of partitions from everything that we have tried
        Seq(first.numPartitionsUsed, second.numPartitionsUsed).max + second.numExecutors
      } else {
        // If we overshot the number of partitions, use whichever run had the best executor CPU time
        previousRuns.sortBy(_.executorCPUTime).head.numPartitionsUsed
      }
  }
}
  • 48. How to use Robin-Sparkles - Assume the user constructs the Spark context - Configure the listener to write to a persistent location - This may need to be an external location if you are using a web-based system - Modify the application code that creates the Spark context to use Robin Sparkles to set the settings Note: where the Spark context is created for you, this only applies to tuning the number of partitions or other variables that can change at the job (rather than application) level
  • 49. She could do so much more! - Additional settings to tune - Set executor memory and driver memory based on the system - Executor health and system monitoring - Could surface warnings about data skew - Shuffle settings could be adjusted - Currently we don’t get info if the stage fails - Our reader would actually fail if the stage data is malformed; we should read partial stage data - We could use the firehose listener, which records every event from the event stream, to monitor failures - These other listeners would contain additional information that is not in the web UI
  • 50. But first a nap Donald Lee Pardue
  • 51. Other Tools Sometimes the right answer isn’t tuning, it’s telling the user to change the code (see Sparklens) or telling the administrator to look at their cluster Melinda Seckington
  • 52. Sparklens - Tells us what to tune or refactor ● AppTimelineAnalyzer ● EfficiencyStatisticsAnalyzer ● ExecutorTimelineAnalyzer ● ExecutorWallclockAnalyzer ● HostTimelineAnalyzer ● JobOverlapAnalyzer ● SimpleAppAnalyzer ● StageOverlapAnalyzer ● StageSkewAnalyzer Kitty Terwolbeck
  • 53. Dr Elephant ● Spark ConfigurationHeuristic ○ val SPARK_DRIVER_MEMORY_KEY = "spark.driver.memory" ○ val SPARK_EXECUTOR_MEMORY_KEY = "spark.executor.memory" ○ val SPARK_EXECUTOR_INSTANCES_KEY = "spark.executor.instances" ○ val SPARK_EXECUTOR_CORES_KEY = "spark.executor.cores" ○ val SPARK_SERIALIZER_KEY = "spark.serializer" ○ val SPARK_APPLICATION_DURATION = "spark.application.duration" ○ val SPARK_SHUFFLE_SERVICE_ENABLED = "spark.shuffle.service.enabled" ○ val SPARK_DYNAMIC_ALLOCATION_ENABLED = "spark.dynamicAllocation.enabled" ○ val SPARK_DRIVER_CORES_KEY = "spark.driver.cores" ○ val SPARK_DYNAMIC_ALLOCATION_MIN_EXECUTORS = "spark.dynamicAllocation.minExecutors" ○ etc. ● Among other things
  • 54. Other monitoring/linting/tuning tools ● Kamon ● Sparklint ● Sparkoscope ● Graphite + Spark Jakub Szestowicki
  • 55. Upcoming settings in 2.4 (probably) spark.executor.instances used with dynamic allocation for Spark Streaming (zomg!) spark.executor.pyspark.memory - hey, what if we told Python how much memory to use?
  • 56. Some upcoming talks ● Lambda World Seattle - Bringing the Jewels of Python to Scala ● Spark Summit London ● Reversim Tel Aviv (keynote) ● PyCon Canada Toronto (keynote) ● ScalaX London ● Big Data Spain
  • 57. High Performance Spark! You can buy it today! And come to our book signing at 2:30. The settings didn’t get their own chapter; they’re in the appendix (doing things on time is hard) Cats love it* *Or at least the box it comes in. If buying for a cat, get print rather than e-book.
  • 58. Thanks and keep in touch! Discussion and slides - http://bit.ly/2CNcpUQ :D

Editor's Notes

  1. Photo from https://www.flickr.com/photos/lorika/4148361363/in/photolist-7jzriM-9h3my2-9Qn7iD-bp55TS-7YCJ4G-4pVTXa-7AFKbm-bkBfKJ-9Qn6FH-aniTRF-9LmYvZ-HD6w6-4mBo3t-8sekvz-mgpFzD-5z6BRK-de513-8dVhBu-bBZ22n-4Vi2vS-3g13dh-e7aPKj-b6iHHi-4ThGzv-7NcFNK-aniTU6-Kzqxd-7LPmYs-4ok2qy-dLY9La-Nvhey-Kte6U-74B7Ma-6VfnBK-6VjrY7-58kAY9-7qUeDK-4eoSxM-6Vjs5A-9v5Pvb-26mja-4scwq3-GHzAL-672eVr-nFUomD-4s8u8F-5eiQmQ-bxXXCc-5P9cCT-5GX8no
  2. What we will cover: The most important Spark settings in the context of the Spark execution environment Some examples of errors and performance problems that we see in Spark jobs Tuning jobs statically, and then with historical data Open source tooling to collect historical data and do stuff with it Other auto-tuning Spark tools Holden Karau, Rachel Warren, and Anya Bida explore auto-tuning jobs using both historical and live job information, using systems like Apache BEAM, Mahout, and internal Spark ML jobs as workloads. Much of the data required to effectively tune jobs is already collected inside of Spark. You just need to understand it. Holden, Rachel, and Anya outline sample auto-tuners and discuss the options for improving them and applying similar techniques in your own work. They also discuss what kind of tuning can be done statically (e.g., without depending on historic information) and look at Spark’s own built-in components for auto-tuning (currently dynamically scaling cluster size) and how you can improve them. Even if the idea of building an auto-tuner sounds as appealing as using a rusty spoon to debug the JVM on a haunted supercomputer, this talk will give you a better understanding of the knobs available to you to tune your Apache Spark jobs. Also, to be clear, Holden, Rachel, and Anya don’t promise to stop your pager going off at 2:00am, but hopefully this helps.
  3. Big data talk joke So this is the Spark conf This is where we put the Spark settings if we need them Starting the Spark context corresponds to starting the Spark application (kicks off the machine) Action → two stages: everything that happens before the data movement goes in the first stage
  4. FIX THIS …. Spark Mesos / YARN / S3 / Standalone cluster
  5. Mention that it doesn’t always look like this and that there are some other ways. Think about how to present this because the executors are different sizes;...
  6. A few large executors can be bad because of GC. Look up which HDFS daemon this is
  7. Sort of, unless you change it. Terms and conditions apply to Python users.
  8. Spark will add executors when there are pending tasks (spark.dynamicAllocation.schedulerBacklogTimeout) and exponentially increase them as long as tasks in the backlog persist (spark...sustainedSchedulerBacklogTimeout) Executors are decommissioned when they have been idle for
  9. Spark will add executors when there are pending tasks (spark.dynamicAllocation.schedulerBacklogTimeout) and exponentially increase them as long as tasks in the backlog persist (spark...sustainedSchedulerBacklogTimeout) Executors are decommissioned when they have been idle for
  10. Now we are going to look at some ways that we can fall over
  11. HOLDEN Speakzzzz
  12. RESEARCH OTHER METRICS OF CLUSTER UTILIZATION
  13. If partitions = total cores, then the next stage will wait on the slowest partition. If we have more partitions than the total number of cores, then several tasks with fewer records can be executed while one task with many records is executed. At some point there are too many partitions, overhead gets high, and the job will slow down
  14. Shuffle spill (memory) is the size of the deserialized form of the data in memory at the time when we spill it, whereas shuffle spill (disk) is the size of the serialized form of the data on disk after we spill it. This is why the latter tends to be much smaller than the former. Note that both metrics are aggregated over the entire duration of the task (i.e. within each task you can spill multiple times).
  15. External shuffle service is storing shuffle files somewhere else
  16. Relative CPU and GC time maybe define them ...
  17. Requesting too many resources should say that earlier
  18. Rachel Starts to Speak again :)))))
  19. Give that configuration
  20. Mem is the Spark fraction that is reserved for execution and storage. Storage is the amount that is reserved for caching (but can be used for execution) .. e.g. could have cached data in it wooo. Storage is for caching
  21. https://upload.wikimedia.org/wikipedia/commons/a/a3/Cat_with_book_2320356661.jpg
  22. I should talk about executor CPU time, vcores per second…. Look into Sparklint and fill this out more smartly
  23. Link to spark measure !!! Add picture of robin sparkles
  24. This is where to put the graph
  25. The metrics that we most care about
  26. Up
  27. NB from Anya: