PySpark GroupBy Guide: Super Simple Way to Group Data

Master efficient data grouping techniques with PySpark GroupBy for optimized data analysis.

Do you struggle to manage big datasets effectively? Are you tired of rigid, slow approaches to organizing data?

This post will discuss PySpark's GroupBy capabilities and how they can transform your data processing tasks.

What is Data Grouping?

Data grouping is a core step in data analytics: we sort records into categories or organize them according to specific rules.

Grouping lets you aggregate related data, which makes it easier to find trends, patterns, and outliers in a dataset.

For instance:

  • Grouping sales statistics by month (to understand seasonal tendencies; see the sketch below).
  • Grouping data by region (to reveal performance differences across areas).
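
As a quick illustration, here is a minimal, hypothetical sketch of month-based grouping in PySpark. The sales_df DataFrame and its month and revenue columns are made up for this example and are not part of the exercises below.

import pyspark.sql.functions as F

# Hypothetical DataFrame 'sales_df' with 'month' and 'revenue' columns (assumed for illustration)
monthly_sales = sales_df.groupBy('month').agg(F.sum('revenue').alias('total_revenue'))
monthly_sales.orderBy('month').show()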

Data grouping is used across many disciplines, including:

  • Finance
  • Marketing
  • Sociology and related fields

What is PySpark GroupBy functionality?

PySpark GroupBy is a tool used to split data into groups and apply operations to each group as needed.

Data practitioners can use it to group a DataFrame by one or more columns and apply one or more aggregation functions to each group, such as sum, average, count, min, and max, as in the sketch below.
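
Here is a rough sketch of what that looks like. The orders_df DataFrame and its region and amount columns are assumptions made for illustration only, not part of the exercises that follow.

import pyspark.sql.functions as F

# Hypothetical DataFrame 'orders_df' with 'region' and 'amount' columns (assumed for illustration)
summary = orders_df.groupBy('region').agg(
    F.sum('amount').alias('total_amount'),
    F.avg('amount').alias('avg_amount'),
    F.count('amount').alias('order_count'),
    F.min('amount').alias('min_amount'),
    F.max('amount').alias('max_amount')
)
summary.show()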


DataFrames: fb_eu_energy, fb_na_energy, fb_asia_energy
Expected Output Type: pandas.DataFrame

In this question, we are asked to calculate the total (i.e., cumulative sum) energy consumption of the Meta/Facebook data centers across all three continents by date.

Link to this question: https://platform.stratascratch.com/coding/10084-cum-sum-energy-consumption/official-solution

Steps to Solve:

  1. Union the DataFrames.
  2. Group by date.
  3. Calculate the cumulative sum.
  4. Calculate the cumulative consumption as a percentage of the overall total.
  5. Clean the output (drop the helper column and format the date).

Here is the code.

import pandas as pd
import numpy as np
from pyspark.sql import functions as F
from pyspark.sql import Window

# Combine the three regional DataFrames into one
merged_df = fb_eu_energy.union(fb_na_energy).union(fb_asia_energy)

# Total consumption per date
df = merged_df.groupBy('date').agg(F.sum('consumption').alias('consumption')).orderBy('date')

# Running (cumulative) total, ordered by date
df = df.withColumn('cumulative_total_consumption', F.sum('consumption').over(Window.orderBy('date')))

# Cumulative total as a rounded percentage of the overall total
df = df.withColumn(
    'percentage_of_total_consumption',
    F.round((F.col('cumulative_total_consumption') / F.sum('consumption').over(Window.partitionBy())).cast('double') * 100)
)

# Drop the helper column and format the date
df = df.drop('consumption')
df = df.withColumn('date', F.date_format(F.col('date'), 'yyyy-MM-dd'))

result = df.toPandas()
result

Here is the output.

All required columns and the first 5 rows of the solution are shown

date       | cumulative_total_consumption | percentage_of_total_consumption
2020-01-01 | 1050                         | 13
2020-01-02 | 2175                         | 27
2020-01-03 | 3275                         | 40
2020-01-04 | 4450                         | 55
2020-01-05 | 5650                         | 69

To learn more about PySpark, check out this: What is PySpark?

How does PySpark GroupBy work, and what are its advantages over traditional grouping methods?

Traditional, single-machine methods can take a long time on large amounts of data and may run out of memory. PySpark instead uses distributed computing, which is a much faster and more scalable way to handle huge datasets.
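
To make the contrast concrete, here is a minimal sketch with made-up sample data that runs the same grouping in pandas and in PySpark. On a toy DataFrame both finish instantly, but the PySpark version distributes the work across partitions, so it keeps scaling when the data no longer fits on a single machine.

import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Made-up sample data for illustration
pdf = pd.DataFrame({'region': ['EU', 'EU', 'NA', 'ASIA'], 'consumption': [100, 150, 200, 120]})

# Single-machine grouping: everything happens in local memory
pandas_totals = pdf.groupby('region')['consumption'].sum()

# Distributed grouping: Spark splits the work across partitions and executors
sdf = spark.createDataFrame(pdf)
spark_totals = sdf.groupBy('region').agg(F.sum('consumption').alias('total_consumption'))
spark_totals.show()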


DataFrames: winemag_p1, winemag_p2
Expected Output Type: pandas.DataFrame

In this question, we are asked to find all wines from the winemag_p2 dataset produced in the country with the highest sum of points in the winemag_p1 dataset.

Link to this question: https://platform.stratascratch.com/coding/10040-find-all-wines-from-the-winemag_p2-dataset-which-are-produced-in-countries-that-have-the-highest-sum-of-points-in-the-winemag_p1-dataset

Steps to Solve:

1. Filter out rows with null country values
2. Group and aggregate
3. Find the country with the highest total points
4. Join the datasets

Here is the code.

import pyspark.sql.functions as F

# Keep only rows with a non-null country
country = winemag_p1.filter(F.col('country').isNotNull())

# Country with the highest total points in winemag_p1
high_point = country.groupBy('country').agg(F.sum('points').alias('sum')).orderBy(F.desc('sum')).limit(1).select('country')

# Wines from winemag_p2 produced in that country
result = winemag_p2.join(high_point, 'country', 'inner')

result.toPandas()

Here are the first rows of the output.

All required columns and the first 5 rows of the solution are shown

id | country | description | designation | points | price | province | region_1 | region_2 | taster_name | taster_twitter_handle | title | variety | winery
118040 | US | A bit too soft and thus lacks structure. Shows a good array of wild berry and briary, brambly flavors. Dry and spicy, and ready now. | The Convict Rocky Ridge Vineyard | 86 | 38 | California | Rockpile | Sonoma | | | Paradise Ridge 2006 The Convict Rocky Ridge Vineyard Zinfandel (Rockpile) | Zinfandel | Paradise Ridge
117951 | US | Heavy in alcohol and overripe, but entirely dry, and the black currant, chocolate and pepper flavors will play well against richly sauced barbecue. | The Caboose | 84 | 30 | California | Alexander Valley | Sonoma | | | Starry Night 2007 The Caboose Zinfandel (Alexander Valley) | Zinfandel | Starry Night
8052 | US | A boisterous, easy-drinking red wine that gushes ripe fruit and sweet oak aromas. It tastes almost sweet, like blackberry jam, and seems soft-textured in spite of its full body. | Dante's Inferno | 86 | 31 | California | California | California Other | Jim Gordon | @gordone_cellars | Dante Robere 2012 Dante's Inferno Red (California) | Rhône-style Red Blend | Dante Robere
18672 | US | Superripe and oaky in pineapple, white peach purée, pear tart, buttered toast and vanilla-infused crème brûlée, this Chardonnay is saved from mere richness by crisp coastal acidity and a fine spine of minerals. Exotic and delicious, it's at its best now. | Split Rock Vineyard | 92 | 30 | California | Sonoma Coast | Sonoma | | | Bartz-Allen 2008 Split Rock Vineyard Chardonnay (Sonoma Coast) | Chardonnay | Bartz-Allen
75703 | US | This white is a study in unexpected complexity and pleasure. Quince, sulfur and grapefruit notes define it best, as supple minerality and gravelly texture provide dimensionality. | Olema | 90 | 15 | California | Sonoma County | Sonoma | Virginie Boone | @vboone | Amici 2014 Olema Chardonnay (Sonoma County) | Chardonnay | Amici

The final DataFrame shows the wines from the country with the highest total points, highlighting how PySpark's GroupBy handles demanding aggregation tasks.

With PySpark's GroupBy tools, we can work with far larger datasets than conventional approaches allow and complete difficult data aggregations more efficiently.

How do you perform data grouping using PySpark GroupBy?

Simply put, PySpark GroupBy lets you summarize and reshape data according to specific criteria, even in a massive dataset environment.


DataFrames: fb_eu_energy, fb_asia_energy, fb_na_energy
Expected Output Type: pandas.DataFrame

This question asks us to identify the date with the greatest total energy consumption across the Meta/Facebook data centers and to output that date along with its total energy consumption.

Link to this question: https://platform.stratascratch.com/coding/10064-highest-energy-consumption

Steps to Solve:

1. Union DataFrames
2. Group by Date
3. Find Maximum Consumption Date
4. Filter and Select Results

Here is the code.

import pyspark.sql.functions as F

# Combine the three regional DataFrames
df = fb_eu_energy.union(fb_asia_energy).union(fb_na_energy)
# Total consumption per date
consumption = df.groupBy('date').agg(F.sum('consumption').alias('total_consumption'))
# Keep the date(s) whose total equals the maximum
max_total = consumption.select(F.max('total_consumption')).collect()[0][0]
result = consumption.filter(consumption['total_consumption'] == max_total).select('date', 'total_consumption')
result.toPandas()

Here is the output.

All required columns and the first 5 rows of the solution are shown

date                | consumption
2020-01-06 00:00:00 | 1250
2020-01-07 00:00:00 | 1250

The resulting DataFrame will display the date with the highest total energy consumption and the corresponding total consumption value. This example demonstrates how to use PySpark's GroupBy functionality to efficiently perform data grouping and aggregation.

Real-World Examples

Data grouping and aggregation are common tasks for data analysis in real-world scenarios. PySpark's GroupBy functionality provides an efficient way to handle these operations on large datasets. Here, we will look at a specific example from the Meta/Facebook data centers.


DataFrames: fb_eu_energy, fb_na_energy, fb_asia_energy
Expected Output Type: pandas.DataFrame

In this question, we are asked to calculate the total energy consumption of the Meta data centers across all three continents by date.

Link to this question: https://platform.stratascratch.com/coding/10084-cum-sum-energy-consumption/

Steps to Solve:

1. Union DataFrames
2. Group by Date
3. Calculate Cumulative Sum
4. Calculate Percentage
5. Clean Data

Here is the code.

import pandas as pd
import numpy as np
from pyspark.sql import functions as F
from pyspark.sql import Window

# Combine the three regional DataFrames into one
merged_df = fb_eu_energy.union(fb_na_energy).union(fb_asia_energy)

# Total consumption per date
df = merged_df.groupBy('date').agg(F.sum('consumption').alias('consumption')).orderBy('date')

# Running (cumulative) total, ordered by date
df = df.withColumn('cumulative_total_consumption', F.sum('consumption').over(Window.orderBy('date')))

# Cumulative total as a rounded percentage of the overall total
df = df.withColumn('percentage_of_total_consumption', F.round((F.col('cumulative_total_consumption') / F.sum('consumption').over(Window.partitionBy())).cast('double') * 100))

# Drop the helper column and format the date
df = df.drop('consumption')
df = df.withColumn('date', F.date_format(F.col('date'), 'yyyy-MM-dd'))

result = df.toPandas()
result

Here is the output.

All required columns and the first 5 rows of the solution are shown

date       | cumulative_total_consumption | percentage_of_total_consumption
2020-01-01 | 1050                         | 13
2020-01-02 | 2175                         | 27
2020-01-03 | 3275                         | 40
2020-01-04 | 4450                         | 55
2020-01-05 | 5650                         | 69

The final DataFrame contains the cumulative total and the percentage of total consumption for each date, showing how easily PySpark's GroupBy combines and aggregates large amounts of data.

Common grouping operations such as aggregation, filtering, and sorting

Aggregation, filtering, and sorting are common grouping operations used in data analysis to summarize big datasets and extract insights from them. PySpark's GroupBy feature streamlines these tasks, making big data easier to work with; the sketch below chains all three.
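
Before walking through each operation with a real exercise, here is a minimal, hypothetical sketch that combines all three. The orders_df DataFrame and its columns are assumptions used only for illustration.

import pyspark.sql.functions as F

# Hypothetical DataFrame 'orders_df' with 'region' and 'amount' columns (assumed for illustration)
region_totals = (
    orders_df
    .groupBy('region')
    .agg(F.sum('amount').alias('total_amount'))   # aggregation: total amount per region
    .filter(F.col('total_amount') > 1000)         # filtering: keep only the larger totals
    .orderBy(F.desc('total_amount'))              # sorting: largest totals first
)
region_totals.show()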

Aggregation

This question asks us to count how many inspections happen in the 94102 postal code area in January, May, and November of each year.


DataFrame: sf_restaurant_health_violations
Expected Output Type: pandas.DataFrame

Link to this question: https://platform.stratascratch.com/coding/9734-number-of-inspections-by-zip

Steps to Solve:

1. Filter by month and postal code
2. Sort by month and year
3. Count the inspections
4. Pivot and Aggregate

Here is the code.

import pyspark.sql.functions as F

result = (
    sf_restaurant_health_violations
    # Keep inspections in postal code 94102 that happened in January, May, or November
    .where((F.col('business_postal_code') == 94102) & (F.month('inspection_date').isin(1, 5, 11)))
    # Count inspections per year and month
    .groupBy(F.year('inspection_date').alias('year'), F.month('inspection_date').alias('mm'))
    .agg(F.count('*').alias('cnt'))
    # Pivot the months into columns, one row per year
    .groupBy('year')
    .pivot('mm')
    .agg(F.sum('cnt'))
    .fillna(0)
    .toPandas()
)

result

Here is the output.

All required columns and the first 5 rows of the solution are shown

inspection_date | 1 | 5 | 11
2016            | 1 | 1 | 1
2017            | 0 | 2 | 0
2018            | 0 | 1 | 0

With a separate column for each month, the resulting DataFrame shows how many inspections were done in January, May, and November of each year. This demonstrates how PySpark's GroupBy can be combined with pivot to perform more complex aggregations quickly.

Filtering

In this question, we need to find out the user ID, language, and location of all Nexus 5 control group users in Italy who don't speak Italian.


DataFrames: playbook_experiments, playbook_users
Expected Output Type: pandas.DataFrame

Link to this question: https://platform.stratascratch.com/coding/9609-find-nexus5-control-group-users-in-italy-who-dont-speak-italian

Steps to Solve:

1. Filter by Group, Location, and Language
2. Join DataFrames
3. Sort Results

Let’s see the code.

# Nexus 5 control-group users located in Italy
playbook_experiments = playbook_experiments.filter(
    (playbook_experiments.device == 'nexus 5')
    & (playbook_experiments.experiment_group == 'control_group')
    & (playbook_experiments.location == 'Italy'))
# Users who do not speak Italian
playbook_users = playbook_users.filter((playbook_users.language != 'Italian') & (playbook_users.language != 'italian'))
merged_df = playbook_experiments.join(playbook_users, on='user_id', how='inner')
result = merged_df.sort('occurred_at').select('user_id', 'language', 'location').toPandas()
result

Here is the output.

All required columns and the first 5 rows of the solution are shown

user_id | language | location
52      | spanish  | Italy
52      | german   | Italy
2581    | english  | Italy
3931    | english  | Italy
5805    | spanish  | Italy

Sorting

In this question, we are asked to arrange a column of random IDs based on their second character in ascending alphabetical order.


DataFrame: random_id
Expected Output Type: pandas.DataFrame

Link to this question: https://platform.stratascratch.com/coding/2166-sorting-by-second-character

Steps to Solve:

1. Extract Second Character
2. Sort by Second Character

Here is the code.

import pyspark.sql.functions as F

# Extract the second character of each ID into a helper column
random_id = random_id.withColumn("second", F.substring(random_id["id"], 2, 1))
# Sort by that character, then drop the helper column
result = random_id.orderBy("second").drop("second")
result.toPandas()

Here is the output.

All required columns and the first 5 rows of the solution are shown

id
3ASD1
2ABS5
4AOS9
2CBS6
3CUY1

The resulting DataFrame will display the IDs sorted by the second character of each ID, demonstrating PySpark's ability to handle sorting operations efficiently.

Best Practices for Efficient Data Grouping

Grouping data efficiently is important if you want to make the most of your compute resources and keep analysis jobs running smoothly, especially when working with big datasets. Here are some of the best ways to group data efficiently with PySpark; a short sketch illustrating several of these tips follows the list.

1. Use the Right Data Types:

  • Make sure that the columns used for grouping and aggregation have appropriate data types. For example, group on numeric or categorical columns rather than free text whenever you can.

2. Filter Data Early:

  • Apply filters early in the pipeline to drop data you don't need to process. This keeps the computational load as low as possible.

3. Improve Memory Usage:

  • If intermediate results are reused later in the pipeline, store them with PySpark's cache() or persist() methods.

4. Leverage Built-in Functions:

  • Use PySpark's built-in functions for aggregating, filtering, and sorting; they are optimized for performance.

5. Avoid Unnecessary Shuffles:

  • Reduce the number of shuffles your PySpark jobs perform. Moving data across the network is expensive, so avoid it where you can.
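
Here is a short, hypothetical sketch that combines several of these tips. The events_df DataFrame and its columns are assumptions made up for illustration.

import pyspark.sql.functions as F

# Hypothetical DataFrame 'events_df' with 'event_date', 'country', and 'amount' columns
# (all names are assumptions for illustration only)

# Filter early so downstream steps process less data
recent = events_df.filter(F.col('event_date') >= '2020-01-01')

# Cache an intermediate result that is reused more than once
recent.cache()

# Rely on built-in aggregation functions rather than custom Python code
by_country = recent.groupBy('country').agg(
    F.sum('amount').alias('total_amount'),
    F.count('*').alias('event_count')
)

by_country.show()
recent.unpersist()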

Of course, practice is the best way to learn. Check out these PySpark Interview Questions to get the practice you need.

Conclusion

In this article, we explored the efficient data grouping capabilities of PySpark's GroupBy functionality. We covered data grouping basics, demonstrated how to use PySpark for grouping and aggregating data, and highlighted the advantages of PySpark over traditional methods.

To master these techniques, practice and apply them using our platform. The platform offers a vast collection of coding questions and datasets, providing hands-on experience to tackle complex data problems confidently and efficiently.

Start practicing today to refine your data analysis skills!
