Spark and Redshift are very different technologies. As a Big Data Architect, you must know when to use which.
At the time of this answer, if you look under the hood of the most advanced tech start-ups in Silicon Valley, you will likely find both Spark and Redshift. Spark is getting a little bit more attention these days because it’s a new shiny toy. But they cover different use cases.
Let me give you a decision-making framework that can guide your thinking.
- What it is
- Data architecture
- Redshift and Spark: Data engineering
WHAT IT IS
Apache Spark is a data processing engine. With Spark you can:
- process data in real-time
- write applications in Java, Scala, Python and R
- use pre-built libraries for building those apps
There is a general execution engine (Spark Core), and all other functionality is built on top of it.
People are excited about Spark for three reasons:
Spark is fast because it distributes data across a cluster and processes that data in parallel. It tries to process data in memory rather than shuffling it in and out of disk (as, e.g., MapReduce does).
Spark is easy because it has a high level of abstraction, allowing you to write applications with fewer lines of code. Plus, Scala and R are attractive for data manipulation.
Spark is extensible via its pre-built libraries, e.g. for machine learning, streaming apps or data ingestion. These libraries are either part of Spark or available from third parties.
In short, the promise of Spark is to speed up development, make applications more portable and extensible, and make the actual application run faster.
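To make the "distribute and process in parallel" idea concrete without assuming pyspark is installed, here is a toy sketch using Python's standard-library `multiprocessing` in place of a real Spark cluster; the function names are made up for illustration:

```python
from multiprocessing import Pool

def square(x):
    # The "map" step: applied to each element independently,
    # so the work can be spread across workers (or cluster nodes).
    return x * x

def parallel_sum_of_squares(data, workers=4):
    # Partition the data and map over it in parallel, mimicking
    # how Spark distributes work across a cluster.
    with Pool(workers) as pool:
        mapped = pool.map(square, data)
    # The "reduce" step: combine partial results into one answer.
    return sum(mapped)

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(10)))  # 285
```

A real Spark job follows the same split-apply-combine shape, just across machines instead of local processes, and with the data kept in memory between steps.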
A few more noteworthy points on Spark:
- Spark is open source, so you can download it and start running it yourself, e.g. on Amazon’s EMR service. Companies like Databricks (founded by the people who created Spark) offer support to make your life easier.
- A key distinction that isn’t always clear: Spark is not a data store. You will need some sort of persistent data storage that Spark can pull data from (i.e. a data source, such as Amazon S3 or – hint, hint – Redshift).
- Spark then reads data into memory to process it. Once that’s done, Spark will require a place to store or pass on the results (because Spark is not a database). That could be back in e.g. Amazon S3 or Redshift (see where this answer is going?).
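The source → in-memory processing → sink shape described above can be sketched in a few lines of plain Python (the CSV data and field names here are invented for the example; a real Spark job would read from S3 or a database instead of a string):

```python
import csv
import io

# Hypothetical raw input, standing in for a data source like S3.
SOURCE = "id,amount\n1,10\n2,25\n3,7\n"

def run_pipeline(source_text):
    # "Read data into memory" -- parse the source into records.
    rows = list(csv.DictReader(io.StringIO(source_text)))
    # Process in memory.
    total = sum(int(r["amount"]) for r in rows)
    # Hand the results to a sink (here, just a dict; in practice
    # back to S3, Redshift, a dashboard, etc.).
    return {"row_count": len(rows), "total_amount": total}

print(run_pipeline(SOURCE))  # {'row_count': 3, 'total_amount': 42}
```

The engine itself holds nothing permanently; both ends of the pipeline must live somewhere else.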
You need to know how to write code to use Spark (the “write applications” part). So the people who use Spark are typically developers.
Amazon Redshift is an analytical database. With Redshift you can:
- build a central data warehouse unifying data from many sources
- run big, complex analytic queries against that data with SQL
- report and pass on the results to dashboards or other apps
Redshift is a managed service provided by Amazon. Raw data flows into Redshift (a process called “ETL”), where it’s processed and transformed at a regular cadence (“transformations” or “aggregations”), or on an ad-hoc basis (“ad-hoc queries”). Loading and transforming data is also referred to as building “data pipelines”.
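A minimal sketch of the kind of scheduled transformation such a pipeline runs: roll raw events up into a summary table with SQL. Since a live Redshift cluster can't be assumed here, the query runs against an in-memory SQLite stand-in, and the table and column names are made up (amounts are in integer cents to keep the arithmetic exact):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount_cents INTEGER)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, 999), (1, 450), (2, 2000)],
)

# The "transformation" step: aggregate raw events into a
# per-user summary table, the way a cadenced Redshift job would.
conn.execute("""
    CREATE TABLE user_revenue AS
    SELECT user_id, SUM(amount_cents) AS total_cents
    FROM raw_events
    GROUP BY user_id
""")

for row in conn.execute("SELECT * FROM user_revenue ORDER BY user_id"):
    print(row)  # (1, 1449) then (2, 2000)
```

On Redshift the SQL would look essentially the same; what changes is the scale and the MPP execution underneath it.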
The image below gives you a high level overview of the Amazon Redshift Architecture and how queries flow.
People are excited about Redshift for three reasons:
Redshift is fast because its massively parallel processing (MPP) architecture distributes and parallelizes queries. Redshift allows a high query concurrency and processes queries in memory.
Redshift is easy because it can ingest structured, semi-structured and unstructured datasets (via S3 or DynamoDB) up to a petabyte or more, to then slice ‘n dice that data any way you can imagine with SQL.
Redshift is cheap because you can store data for a $935/TB annual fee (if you use the pricing for a 3-year reserved instance). That price-point is unheard of in the world of data warehousing.
In short, the promise of Redshift is to make data warehousing cheaper, faster and easier. You can analyze much bigger and more complex datasets than ever before, and there’s a growing ecosystem of tools that work with Redshift.
A few more noteworthy points about Redshift:
- Redshift is a “fully managed service”. I’d say the “managed service” part of that is true. You swipe a credit card and you’re off to the races. The “fully” is misleading – there are lots of knobs to turn to get good performance.
- Your cluster comes empty. But there are plenty of tools that allow you to quickly populate your cluster, so you can start analyzing data for business intelligence (“BI analytics”) purposes.
- Redshift is a database, so you can store a history of your raw data AND the results of your transformations. In April 2017, Amazon also launched Redshift Spectrum, which enables you to run queries against data in Amazon S3 (which is a much cheaper way of storing your data).
You need to know how to write SQL queries to use Redshift (the “run big, complex queries” part). So the people who use Redshift are typically analysts or data scientists.
In summary, one way to think about Spark and Redshift is to distinguish them by what they are, what you do with them, how you interact with them, and who the typical user is.
Source: created for this Quora answer.
I’ve hinted at how you see both Spark and Redshift deployed. That gets us to data architecture.
In very simple terms, you can build an application with Spark, and then use Redshift both as a source and a destination for data.
Why would you do that? A key reason is the difference between Spark and Redshift in the way they process data, and how much time it takes to produce a result.
- With Spark, you can do real-time stream processing, i.e. you get a real-time response to events in your data streams.
- With Redshift, you can do near-real time batch operations, i.e. you ingest small batches of events from data streams, to then run your analysis to get a response to events.
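The contrast in those two bullets can be sketched with a toy example: per-event stream processing reacts to each event immediately, while micro-batching accumulates a few events and then runs one aggregation over the batch. The event values and batch size below are made up:

```python
events = [5, 12, 7, 30, 2, 9]

# Stream style (Spark-like): decide on every single event
# the moment it arrives.
stream_alerts = [e for e in events if e > 10]

# Micro-batch style (Redshift-like ingestion): collect a small
# batch, then run one aggregation over the whole batch.
def micro_batch_totals(seq, size=3):
    return [sum(seq[i:i + size]) for i in range(0, len(seq), size)]

print(stream_alerts)               # [12, 30] -- one answer per event
print(micro_batch_totals(events))  # [24, 41] -- one answer per batch
```

The stream path gives you the lowest latency per event; the batch path trades latency for the ability to analyze events together.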
A highly simplified example: fraud detection. You could build an app with Spark that detects fraud in real-time from e.g. a stream of bitcoin transactions. Given its near-real-time character, Redshift would not be a great fit in this case.
But let’s say you wanted more signals for your fraud detection, for better predictability. You could load data from Spark into Redshift, where you join it with historic data on fraud patterns. But you can’t do that in real-time – the result would come too late for you to block the transaction. So you use Spark to e.g. block a transaction in real-time, and then wait for the result from Redshift to decide whether to keep blocking it, send it to a human for verification, or approve it.
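That two-tier decision flow can be sketched as two functions: a fast Spark-like rule that blocks instantly, and a slower Redshift-like verdict that joins in historic fraud patterns (reduced here to a single made-up rate). All thresholds and names are hypothetical:

```python
def realtime_decision(amount, threshold=1000):
    # Fast path (Spark-like): block suspicious transactions
    # the instant they appear in the stream.
    return "block" if amount > threshold else "approve"

def warehouse_decision(amount, historic_fraud_rate):
    # Slow path (Redshift-like): refine the verdict using
    # historic fraud patterns, available minutes later.
    if historic_fraud_rate > 0.5:
        return "keep_blocking"
    return "send_to_human" if amount > 1000 else "approve"

tx = 1500
first = realtime_decision(tx)         # "block" -- immediate
final = warehouse_decision(tx, 0.2)   # "send_to_human" -- later, with context
print(first, final)
```

The point is the division of labor: the stream engine buys you time, and the warehouse supplies the context-rich final answer.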
In December 2017, the Amazon Big Data Blog had another example of using both Redshift and Spark: a post on building a predictive app that tells you how likely a flight is to be delayed. The prediction happens based on the time of day or the airline carrier, using multiple data sources and processing them across Redshift and Spark.
You can see how the separation of “apps” and “data warehousing” we created at the start of this post is in reality an area that’s shifting or even merging. That takes us to the final part of this already way too long answer: Data engineering.
REDSHIFT & SPARK: DATA ENGINEERING
The border between developers and business intelligence analysts / data scientists is fading. That has given rise to a new occupation: data engineering. I’ll use this definition:
“In relation to previously existing roles, the data engineering field [is] a superset of business intelligence and data warehousing that brings more elements from software engineering, [and it] integrates the operation of ‘big data’ distributed systems”.
Spark is such a “big data” distributed system. Redshift is the data warehousing part. Data engineering is the discipline that brings both together. That’s because you see “code” making its way into data warehousing. Code allows you to author, schedule and monitor data pipelines that feed into Redshift, including the transformations on the data once it sits inside your cluster. And you’ll very likely have to ingest data from Spark. The trend toward “code” in warehousing implies that knowing SQL alone is not sufficient any more. You’ll likely end up using both Spark and Redshift, each one fulfilling the specific use case it’s best suited for.