From 0 to 1: Spark for Data Science with Python

Get your data to fly using Spark for analytics, machine learning and data science

Course Description
If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java and so on. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.
Using Spark and Python, you can analyze and explore your data in an interactive environment with fast feedback. The course shows how to leverage the power of RDDs and DataFrames to manipulate data with ease.
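To give you a flavour, here is a minimal sketch of that kind of interactive session (the file airline_delays.csv and the Carrier/ArrDelay columns are placeholders for illustration, not the course's actual dataset):

    from pyspark.sql import SparkSession

    # Start (or reuse) a local Spark session
    spark = SparkSession.builder.appName("AirlineDelays").getOrCreate()

    # Load a CSV of flight records (placeholder file and column names)
    delays = spark.read.csv("airline_delays.csv", header=True, inferSchema=True)

    # Explore the data interactively, with immediate feedback
    delays.printSchema()
    delays.groupBy("Carrier").avg("ArrDelay") \
          .orderBy("avg(ArrDelay)", ascending=False) \
          .show(10)

    # The same data is also available as an RDD of Row objects
    delays.rdd.take(5)
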
Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of algorithms and datasets, including PageRank, MapReduce and graph data.
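For example, here is roughly what a collaborative-filtering recommender looks like with Spark's built-in MLlib ALS (a sketch only: ratings.csv, its "user,item,rating" layout, the user id and the parameter values are illustrative assumptions):

    from pyspark import SparkContext
    from pyspark.mllib.recommendation import ALS, Rating

    sc = SparkContext(appName="MusicRecommendations")

    # Parse "user,item,rating" lines into Rating objects (placeholder file)
    ratings = (sc.textFile("ratings.csv")
                 .map(lambda line: line.split(","))
                 .map(lambda p: Rating(int(p[0]), int(p[1]), float(p[2]))))

    # Train a matrix-factorization model; rank and iterations are illustrative
    model = ALS.train(ratings, rank=10, iterations=10)

    # Top 5 recommendations for a (hypothetical) user id 42
    for rec in model.recommendProducts(42, 5):
        print(rec)
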

Learning Outcomes

Use Spark for a variety of analytics and machine learning tasks
Implement complex algorithms like PageRank or music recommendations
Work with a variety of datasets, from airline delays to Twitter, web graphs, social networks and product ratings
Use all the different features and libraries of Spark: RDDs, DataFrames, Spark SQL, MLlib, Spark Streaming and GraphX

Pre-requisites:

The course assumes knowledge of Python. You can write Python code directly in the PySpark shell. If you already have IPython Notebook installed, we'll show you how to configure it for Spark (see the sketch after this list).
For the Java section, we assume basic knowledge of Java. An IDE that supports Maven, like IntelliJ IDEA or Eclipse, would be helpful.
All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
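For reference, one common way to hook PySpark into IPython/Jupyter Notebook looks like this (an illustrative setup, not necessarily the exact steps shown in the course):

    # Point the PySpark driver at Jupyter before launching the shell:
    #   export PYSPARK_DRIVER_PYTHON=jupyter
    #   export PYSPARK_DRIVER_PYTHON_OPTS=notebook
    #   pyspark
    #
    # Inside the notebook (or the plain PySpark shell) a SparkContext is
    # already available as `sc`, so you can start computing right away:
    sc.parallelize(range(1000)).map(lambda x: x * x).sum()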

Who is this course intended for?
Analysts who want to leverage Spark for analyzing interesting datasets.
Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
Engineers who want to use a distributed computing engine for batch or stream processing, or both.


Your Instructor


Loonycorn
Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad and have spent years working in tech, in the Bay Area, New York, Singapore and Bangalore.

Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Learnsector!

We hope you will try our offerings, and think you'll like them :-)

Class Curriculum


  Introduction
  Graph Libraries

Frequently Asked Questions


When does the course start and finish?
The course starts now and never ends! It is a completely self-paced online course - you decide when you start and when you finish.
How long do I have access to the course?
How does lifetime access sound? After enrolling, you have unlimited access to this course for as long as you like - across any and all devices you own.
What if I am unhappy with the course?
We would never want you to be unhappy! If you are unsatisfied with your purchase, contact us in the first 30 days and we will give you a full refund.

Get started now!