Data Processing and Data Analysis — Improved workflow using AWS EMR


In this document, I will demonstrate how to do data analysis and plotting using a simple workflow with EMR notebooks. This workflow eliminates the complexity of a disjointed workflow with separate data processing (in EMR) and exploratory data analysis (in a local Python notebook). It is based on my experience using the methods from an AWS blog entry.


Data scientists dive deep into data by aggregating it and plotting the results. On a small dataset this can be done on a laptop using many tools (R, Python, etc.). But as the data grows, this becomes impractical: the data needs large amounts of disk space, and processing it needs large amounts of RAM and CPU.

Big data tools like Apache Spark can process large amounts of data, but the aggregated results then have to be transported to and plotted in another environment. I followed this workflow for a while, but it required several steps and keeping the data in sync between the processing step (in AWS EMR) and the analysis step (a Jupyter notebook on my laptop).

Old way — notebooks in Amazon EMR produce CSV files

Amazon EMR is a big data platform for processing and analyzing large amounts of data using Apache Spark, Apache Hadoop, and other open-source projects (Hive, Presto, etc.). Amazon EMR offers notebooks with both Spark and PySpark kernels to interact with Spark sessions and process the data.

For many weeks, I processed my data in PySpark notebooks and wrote my Spark data frames with aggregated results to CSV files on S3. I still use this method when I want a file containing the results, but for quick analysis and charts it is inefficient and forces the user into a disjointed workflow.


New way — set up EMR and use sparkmagic

I started doing data analysis inside the PySpark notebook using the methods from an AWS blog entry. In the next two sections I will describe my experience with the two methods proposed in the blog.

I was able to set up an EMR cluster using the defaults. The only decision I made was the choice of applications — I chose EMR release 6.4.0 with JupyterEnterpriseGateway 2.1.0, Spark 3.1.2, Presto 0.254.1, Livy 0.7.1, and JupyterHub 1.4.1. I used the web interface and it took only a few clicks.
I recommend using a minimum of m5.4xlarge.
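For reference, a similar cluster can also be created from the AWS CLI. This is a minimal sketch, not the exact setup from my project: the cluster name, instance count, and idle timeout below are assumptions.

```shell
# Hypothetical example: create an EMR 6.4.0 cluster with the applications
# mentioned above. Name, instance count, and IdleTimeout are illustrative.
aws emr create-cluster \
  --name "notebook-analysis" \
  --release-label emr-6.4.0 \
  --applications Name=Spark Name=Livy Name=JupyterEnterpriseGateway Name=JupyterHub Name=Presto \
  --instance-type m5.4xlarge \
  --instance-count 3 \
  --use-default-roles \
  --auto-termination-policy IdleTimeout=3600
```

The auto-termination policy shuts the cluster down after an hour of idleness, which pairs well with the pro tip about auto-terminating clusters later in this article.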

Best Option — Use libraries provided by EMR

EMR comes with pandas and plotly installed in the local Jupyter environment (EMR 6.4.0), but I needed to stream the data from the Spark kernel to the local kernel. For that I needed sparkmagic, and I was pleasantly surprised to find that EMR already has it enabled.
The AWS blog entry documents the steps, but I would also recommend reading the sparkmagic documentation (and using %%help to get quick help in the notebook).
Note: the blog said matplotlib was available among the local libraries, but I only found plotly in EMR 6.3 and EMR 6.4. You can check the package list for your EMR release using conda list (it may depend on your EMR version).

conda list
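Alternatively, you can check availability from Python itself in a %%local cell. This is a small sketch of my own; the package names are just the ones this workflow cares about.

```python
import importlib.util

def installed(pkg):
    # True if the package can be imported by the current (local) kernel
    return importlib.util.find_spec(pkg) is not None

# Packages this workflow relies on in the local kernel
for pkg in ("pandas", "plotly"):
    print(pkg, "->", "installed" if installed(pkg) else "missing")
```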

Transform data to a local pandas dataframe.

The only limitation is to make sure the data frame you bring into pandas is not huge (keep it under roughly 100 MB). There are two options — a direct transfer and SQL magic — and I prefer the second.
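Once a frame lands locally, you can sanity-check its in-memory size with pandas. This is a small helper of my own, not from the blog; the example frame is made up.

```python
import pandas as pd

def pandas_size_mb(df):
    # Deep memory footprint of a pandas DataFrame, in megabytes
    return df.memory_usage(deep=True).sum() / 1e6

# Example: a tiny frame, well under the ~100 MB comfort zone
df = pd.DataFrame({"school": ["North", "South"], "num_students": [120, 95]})
print(f"{pandas_size_mb(df):.4f} MB")
```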

If you already have a Spark data frame ready, you can transport it directly to a local data frame. The same name will exist in both the PySpark and local kernels.

%%spark -o all_students -n -1

I actually prefer using SQL magic — I can quickly write a query and even choose a new name for the resulting pandas dataframe.

%%sql -o school_num_students -n -1 -q
SELECT s_name AS school, COUNT(st_name) AS num_students
FROM students -- table name is illustrative; the original snippet omitted the FROM clause
GROUP BY s_name
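After the SQL magic runs, the named pandas dataframe is available in %%local cells like any other pandas object. A minimal sketch with made-up data standing in for the query result:

```python
import pandas as pd

# Hypothetical contents of school_num_students after the %%sql magic;
# the school names and counts are invented for illustration.
school_num_students = pd.DataFrame(
    {"school": ["North", "South", "East"], "num_students": [120, 95, 140]}
)

# Sort schools by enrollment, largest first
top = school_num_students.sort_values("num_students", ascending=False)
print(top.head())
```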

Write Python Code and Functions

Treat %%local cells as your local Python kernel and write code there (think functions) that can be reused later in the notebook.

%%local
def update_fig(fig, title_text):
    fig.update_layout(title_text=title_text)  # minimal example body: set the chart title
    fig.show()

Use Pandas and Plotly to do more analysis and plot

%%local
import pandas as pd
import plotly.express as px
fig = px.bar(school_num_students, x="school", y="num_students")
update_fig(fig, 'Students by School')

Pro Tips:

  • If you get errors during Spark startup, restart your notebook kernel
  • Choose a bigger instance type if you get memory-related errors
  • Choose an auto-terminating EMR cluster if you would like it to shut down at the end of the day


This article covered an easier way of doing data analysis in an EMR notebook right after data processing. It eliminates the need for multiple steps with separate data processing and data analysis.

Another option — installing Python packages onto the cluster:

This may be a better option if you would like to work with libraries not available locally (see the previous section). For me, pandas and plotly are all I need, so I do not use this option, but my experience is captured below and I can confirm it works.

I tried to install pandas and other libraries onto the cluster as described in the AWS blog entry:

sc.install_pypi_package("matplotlib")  # install matplotlib from the default PyPI repository

But I ran into an issue with package versions; there may also be other issues with finding repositories and getting past the security setup applied to your EMR cluster by your organization.

Failed building wheel for pillow
Command "/tmp/1636747058971-0/bin/python -u -c "import setuptools, tokenize;__file__='/mnt/tmp/pip-build-d664m4y8/pillow/';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-3vwbtvlr-record/install-record.txt --single-version-externally-managed --compile --install-headers /tmp/1636747058971-0/include/site/python3.7/pillow" failed with error code 1 in /mnt/tmp/pip-build-d664m4y8/pillow/
You are using pip version 9.0.1, however version 21.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

I found a workaround — pinning the package version — and got everything working:

sc.install_pypi_package("matplotlib==3.1.1")  # pinning the version avoids the failed wheel build

I got everything to work. Best practice is to uninstall the packages from the cluster at the end of the notebook, but if you own the cluster you may keep them installed for other notebooks.
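The cleanup recommended above can be sketched with EMR's notebook-scoped library API. This runs in a regular PySpark cell on the cluster (not in %%local) and assumes the EMR-provided SparkContext sc, so it is not runnable outside an EMR notebook.

```python
# Notebook-scoped package management on the Spark side of an EMR notebook.
# Assumes `sc` is the SparkContext provided by the EMR PySpark kernel.
sc.list_packages()                            # show packages visible to this Spark session
sc.install_pypi_package("matplotlib==3.1.1")  # pin the version to avoid wheel-build failures
# ... use the library ...
sc.uninstall_package("matplotlib")            # clean up before others use the cluster
```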





Ram Thiruveedhi | Data Science | Machine Learning | Operations Research