What Are the Most Common Python Basic Interview Questions?

Most common python basic interview questions


This article covers key Python interview questions for beginners, focusing on basics and data handling in Python. Let's dive in!

Did you know that Python is now the most used programming language? As of October 2022, more people use Python than C or Java, according to the TIOBE Index, a well-known ranking of programming languages.

Python's popularity also keeps growing fast: it gains around 22% more users every year, and by 2022 over four million developers on GitHub were using Python.

In this article, we will go through the most common Python questions in job interviews, especially for beginners. We will cover the basics and also how to work with data in Python. Buckle up and let's get started!

Basic Python Interview Question #1: Find out search details for apartments designed for a sole-person stay

This question, asked by Airbnb, requires us to identify the search details for apartments designed for just one person to stay in.


DataFrame: airbnb_search_details
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/9615-find-out-search-details-for-apartments-designed-for-a-sole-person-stay

Let’s see our data.

Table: airbnb_search_details
id | price | property_type | room_type | amenities | accommodates | bathrooms | bed_type | cancellation_policy | cleaning_fee | city | host_identity_verified | host_response_rate | host_since | neighbourhood | number_of_reviews | review_scores_rating | zipcode | bedrooms | beds
12513361 | 555.68 | Apartment | Entire home/apt | {TV,"Wireless Internet","Air conditioning","Smoke detector","Carbon monoxide detector",Essentials,"Lock on bedroom door",Hangers,Iron} | 2 | 1 | Real Bed | flexible | FALSE | NYC | t | 89% | 2015-11-18 | East Harlem | 3 | 87 | 10029 | 0 | 1
7196412 | 366.36 | Cabin | Private room | {"Wireless Internet",Kitchen,Washer,Dryer,"Smoke detector","First aid kit","Fire extinguisher",Essentials,"Hair dryer","translation missing: en.hosting_amenity_49","translation missing: en.hosting_amenity_50"} | 2 | 3 | Real Bed | moderate | FALSE | LA | f | 100% | 2016-09-10 | Valley Glen | 14 | 91 | 91606 | 1 | 1
16333776 | 482.83 | House | Private room | {TV,"Cable TV",Internet,"Wireless Internet",Kitchen,"Free parking on premises","Pets live on this property",Dog(s),"Indoor fireplace","Buzzer/wireless intercom",Heating,Washer,Dryer,"Smoke detector","Carbon monoxide detector","First aid kit","Safety card","Fire extinguisher",Essentials,Shampoo,"24-hour check-in",Hangers,"Hair dryer",Iron,"Laptop friendly workspace","translation missing: en.hosting_amenity_49","translation missing: en.hosting_amenity_50","Self Check-In",Lockbox} | 2 | 1 | Real Bed | strict | TRUE | SF | t | 100% | 2013-12-26 | Richmond District | 117 | 96 | 94118 | 1 | 1
1786412 | 448.86 | Apartment | Private room | {"Wireless Internet","Air conditioning",Kitchen,Heating,"Suitable for events","Smoke detector","Carbon monoxide detector","First aid kit","Fire extinguisher",Essentials,Shampoo,"Lock on bedroom door",Hangers,"translation missing: en.hosting_amenity_49","translation missing: en.hosting_amenity_50"} | 2 | 1 | Real Bed | strict | TRUE | NYC | t | 93% | 2010-05-11 | Williamsburg | 8 | 86 | 11211 | 1 | 1
14575777 | 506.89 | Villa | Private room | {TV,Internet,"Wireless Internet","Air conditioning",Kitchen,"Free parking on premises",Essentials,Shampoo,"translation missing: en.hosting_amenity_49","translation missing: en.hosting_amenity_50"} | 6 | 2 | Real Bed | strict | TRUE | LA | t | 70% | 2015-10-22 |  | 2 | 100 | 90703 | 3 | 3

We are looking at information about apartments made for one person. We use two tools, pandas and numpy, which are like helpers for managing and understanding data.

  • First, we focus on the data that shows apartments for one person. We check where 'accommodates' is equal to 1.
  • Then, we also want these apartments to be of a specific type - 'Apartment'. So, we look for where 'property_type' says 'Apartment'.
  • By combining these two conditions, we get details only for apartments perfect for one person.
  • We store this specific information in a new place called 'result'.

In simple words, we are just picking out the apartment searches that match two things: meant for one person and are apartments. Let’s see the code.

import pandas as pd
import numpy as np

result = airbnb_search_details[(airbnb_search_details['accommodates'] == 1) & (airbnb_search_details['property_type'] == 'Apartment')]
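The same two-condition filter can also be written with pandas' `query` method, which some interviewers like because it reads almost like plain English. The tiny DataFrame below is a made-up sample just to show the idea, not the real Airbnb data:

```python
import pandas as pd

# Made-up sample with only the columns the filter needs
airbnb_search_details = pd.DataFrame({
    "id": [1, 2, 3],
    "accommodates": [1, 2, 1],
    "property_type": ["Apartment", "Apartment", "House"],
})

# query() expresses both conditions in one readable string
result = airbnb_search_details.query(
    "accommodates == 1 and property_type == 'Apartment'"
)
```

Both versions return the same rows; `query` just trades the `&` mask syntax for a string expression.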

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

id | price | property_type | room_type | amenities | accommodates | bathrooms | bed_type | cancellation_policy | cleaning_fee | city | host_identity_verified | host_response_rate | host_since | neighbourhood | number_of_reviews | review_scores_rating | zipcode | bedrooms | beds
5059214 | 431.75 | Apartment | Private room | {TV,"Wireless Internet","Air conditioning",Kitchen,"Free parking on premises",Breakfast,Heating,"Smoke detector","Carbon monoxide detector","First aid kit","Fire extinguisher",Essentials,Shampoo,"Lock on bedroom door",Hangers,"Laptop friendly workspace","Private living room"} | 1 | 3 | Real Bed | strict | FALSE | NYC | f |  | 2014-03-14 00:00:00 | Harlem | 0 |  | 10030 | 2 | 1
10923708 | 340.12 | Apartment | Private room | {TV,Internet,"Wireless Internet","Air conditioning",Kitchen,"Pets live on this property",Cat(s),"Buzzer/wireless intercom",Heating,"Family/kid friendly",Washer,"Smoke detector","Carbon monoxide detector","First aid kit","Fire extinguisher",Essentials} | 1 | 1 | Real Bed | strict | FALSE | NYC | t | 100% | 2014-06-30 00:00:00 | Harlem | 166 | 91 | 10031 | 1 | 1
1077375 | 400.73 | Apartment | Private room | {"Wireless Internet",Heating,"Family/kid friendly","Smoke detector","Carbon monoxide detector","Fire extinguisher",Essentials,Shampoo,Hangers,Iron,"Laptop friendly workspace","translation missing: en.hosting_amenity_50"} | 1 | 1 | Real Bed | moderate | TRUE | NYC | t |  | 2015-04-04 00:00:00 | Sunset Park | 1 | 100 | 11220 | 1 | 1
13121821 | 501.06 | Apartment | Private room | {TV,"Cable TV",Internet,"Wireless Internet","Air conditioning",Kitchen,Heating,"Smoke detector","First aid kit",Essentials,Hangers,"Hair dryer",Iron,"Laptop friendly workspace"} | 1 | 1 | Real Bed | flexible | FALSE | NYC | f |  | 2014-09-20 00:00:00 | Park Slope | 0 |  | 11215 | 1 | 1
19245819 | 424.85 | Apartment | Private room | {Internet,"Wireless Internet",Kitchen,"Pets live on this property",Dog(s),Washer,Dryer,"Smoke detector","Fire extinguisher"} | 1 | 1 | Real Bed | moderate | FALSE | SF | t |  | 2010-03-16 00:00:00 | Mission District | 12 | 90 | 94110 | 1 | 1

Basic Python Interview Question #2: Users Activity Per Month Day

Basic Python Interview Question from Facebook

This question, asked by Meta/Facebook, is about figuring out how active users are on different days of the month. Specifically, it asks for a count of how many posts are made on each day.


DataFrame: facebook_posts
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2006-users-activity-per-month-day

Let’s see our data.

Table: facebook_posts
post_id | poster | post_text | post_keywords | post_date
0 | 2 | The Lakers game from last night was great. | [basketball,lakers,nba] | 2019-01-01
1 | 1 | Lebron James is top class. | [basketball,lebron_james,nba] | 2019-01-02
2 | 2 | Asparagus tastes OK. | [asparagus,food] | 2019-01-01
3 | 1 | Spaghetti is an Italian food. | [spaghetti,food] | 2019-01-02
4 | 3 | User 3 is not sharing interests | [#spam#] | 2019-01-01

We are analyzing how often users post on Facebook during different days of the month. We use pandas, a tool for data handling, to do this.

  • First, we change the post dates into a format that's easy to work with.
  • Then, we look at these dates and focus on the day part of each date.
  • For each day, we count how many posts were made.
  • We then make a new table called 'user_activity' to show these counts.
  • Finally, we make sure this table is easy to read by resetting its layout.

Simply, we are counting Facebook posts for each day of the month and presenting it in a clear table. Let’s see the code.

import pandas as pd

result = facebook_posts.groupby(pd.to_datetime(facebook_posts['post_date']).dt.day)['post_id'].count().to_frame('user_activity').reset_index()
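An equivalent approach counts the days directly with `value_counts` instead of a groupby. The DataFrame below is a made-up sample shaped like facebook_posts, not the real data:

```python
import pandas as pd

# Made-up sample shaped like facebook_posts
facebook_posts = pd.DataFrame({
    "post_id": [0, 1, 2, 3, 4],
    "post_date": ["2019-01-01", "2019-01-02", "2019-01-01",
                  "2019-01-02", "2019-01-01"],
})

# Count how often each day-of-month appears among the post dates
days = pd.to_datetime(facebook_posts["post_date"]).dt.day
result = (
    days.value_counts()          # day -> number of posts
    .sort_index()                # order by day of month
    .rename_axis("post_date")    # name the index explicitly
    .to_frame("user_activity")
    .reset_index()
)
```

`value_counts` and `groupby(...).count()` give the same numbers here; `value_counts` is just a touch shorter.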

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

post_date | user_activity
1 | 3
2 | 3

Basic Python Interview Question #3: Customers Who Purchased the Same Product

This question, asked by Meta, involves finding customers who bought the same furniture items. It asks for details like the furniture's product ID, brand name, the unique customer IDs who bought each item, and how many different customers bought each item.

The final list should start with the furniture items bought by the most customers.


DataFrames: online_orders, online_products
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2150-customers-who-purchased-the-same-product

Let’s see our data.

Table: online_orders
product_id | promotion_id | cost_in_dollars | customer_id | date | units_sold
1 | 1 | 2 | 1 | 2022-04-01 | 4
3 | 3 | 6 | 3 | 2022-05-24 | 6
1 | 2 | 2 | 10 | 2022-05-01 | 3
1 | 2 | 3 | 2 | 2022-05-01 | 9
2 | 2 | 10 | 2 | 2022-05-01 | 1
Table: online_products
product_id | product_class | brand_name | is_low_fat | is_recyclable | product_category | product_family
1 | ACCESSORIES | Fort West | N | N | 3 | GADGET
2 | DRINK | Fort West | N | Y | 2 | CONSUMABLE
3 | FOOD | Fort West | Y | N | 1 | CONSUMABLE
4 | DRINK | Golden | Y | Y | 3 | CONSUMABLE
5 | FOOD | Golden | Y | N | 2 | CONSUMABLE

We are focusing on customers who are interested in buying furniture. We use pandas and numpy, which help us organize and analyze data.

  • We start by combining two sets of data: one with order details (online_orders) and the other with product details (online_products). We match them using 'product_id'.
  • Then, we only keep the data that is about furniture.
  • We simplify this data to show only product ID, brand name, and customer ID, removing any duplicates.
  • Next, we count how many different customers bought each product.
  • We create a new table showing these counts along with product ID, brand name, and customer ID.
  • Lastly, we arrange this table so the products with the most unique buyers are at the top.

In short, we are finding and listing furniture items based on how popular they are with different customers, showing the most popular first. Let’s see the code.

import pandas as pd
import numpy as np

merged = pd.merge(online_orders, online_products, on="product_id", how="inner")
merged = merged.loc[merged["product_class"] == "FURNITURE", :]
merged = merged[["product_id", "brand_name", "customer_id"]].drop_duplicates()
unique_cust = (
    merged.groupby(["product_id"])["customer_id"]
    .nunique()
    .to_frame("unique_cust_no")
    .reset_index()
)
result = pd.merge(merged, unique_cust, on="product_id", how="inner").sort_values(
    by="unique_cust_no", ascending=False
)
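The counting step can also be done with `groupby().transform("nunique")`, which attaches the per-product count directly and avoids the second merge. The DataFrame below is a made-up sample of already-merged, furniture-only rows:

```python
import pandas as pd

# Made-up sample of merged FURNITURE rows (product, brand, customer)
furniture = pd.DataFrame({
    "product_id": [10, 10, 10, 8],
    "brand_name": ["American Home"] * 3 + ["Lucky Joe"],
    "customer_id": [2, 1, 3, 3],
})

# transform('nunique') broadcasts the unique-customer count back
# onto every row of its group, so no merge is needed
result = furniture.drop_duplicates()
result["unique_cust_no"] = (
    result.groupby("product_id")["customer_id"].transform("nunique")
)
result = result.sort_values(by="unique_cust_no", ascending=False)
```

`transform` keeps the row-level detail while adding the aggregate, which is exactly what the merge in the main solution achieves.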

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

product_id | brand_name | customer_id | unique_cust_no
10 | American Home | 2 | 3
10 | American Home | 1 | 3
10 | American Home | 3 | 3
8 | Lucky Joe | 3 | 1
11 | American Home | 1 | 1

Basic Python Interview Question #4: Sorting Movies By Duration Time

This basic Python interview question, asked by Google, requires sorting a list of movies by duration, with the longest movies shown first.


DataFrame: movie_catalogue
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2163-sorting-movies-by-duration-time

Let’s see our data.

Table: movie_catalogue
show_id | title | release_year | rating | duration
s1 | Dick Johnson Is Dead | 2020 | PG-13 | 90 min
s95 | Show Dogs | 2018 | PG | 90 min
s108 | A Champion Heart | 2018 | G | 90 min
s163 | Marshall | 2017 | PG-13 | 118 min
s174 | Snervous Tyler Oakley | 2015 | PG-13 | 83 min

We need to organize movies based on their duration, from longest to shortest. We use pandas, a tool for handling data, to do this.

  • We start by focusing on the movie duration. We extract the duration in minutes from the 'duration' column.
  • We change these duration values into numbers so that we can sort them.
  • Next, we sort the whole movie catalogue based on these duration numbers, putting the longest movies at the top.
  • After sorting, we remove the column with the duration in minutes since we don't need it anymore.

In simple terms, we are putting the movies in order from the longest to the shortest based on their duration. Let’s see the code.

import pandas as pd

movie_catalogue["movie_minutes"] = (
    movie_catalogue["duration"].str.extract(r"(\d+)").astype(float)
)

result = movie_catalogue.sort_values(by="movie_minutes", ascending=False).drop(
    "movie_minutes", axis=1
)
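Since pandas 1.1 you can also pass a `key` function to `sort_values`, so the helper column never has to exist at all. The titles and durations below are made up for illustration:

```python
import pandas as pd

# Made-up sample shaped like movie_catalogue
movie_catalogue = pd.DataFrame({
    "title": ["Short Film", "Long Epic", "Quick Doc"],
    "duration": ["90 min", "152 min", "83 min"],
})

# key= receives the 'duration' Series and returns the parsed minutes,
# which pandas then sorts on without materializing a new column
result = movie_catalogue.sort_values(
    by="duration",
    key=lambda s: s.str.extract(r"(\d+)")[0].astype(float),
    ascending=False,
)
```

This keeps the DataFrame untouched, so there is nothing to drop afterwards.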

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

show_id | title | release_year | rating | duration
s8083 | Star Wars: Episode VIII: The Last Jedi | 2017 | PG-13 | 152 min
s6201 | Avengers: Infinity War | 2018 | PG-13 | 150 min
s6326 | Black Panther | 2018 | PG-13 | 135 min
s8052 | Solo: A Star Wars Story | 2018 | PG-13 | 135 min
s8053 | Solo: A Star Wars Story (Spanish Version) | 2018 | PG-13 | 135 min

Basic Python Interview Question #5: Find the date with the highest opening stock price

Basic Python Interview Question from Apple

This question, asked by Apple, requires us to identify the date when a stock (presumably Apple's, given the dataframe name) had its highest opening price.


DataFrame: aapl_historical_stock_price
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/9613-find-the-date-with-the-highest-opening-stock-price

Let’s see our data.

Table: aapl_historical_stock_price
date | year | month | open | high | low | close | volume | id
2012-12-31 | 2012 | 12 | 510.53 | 506.5 | 509 | 532.17 | 23553255 | 273
2012-12-28 | 2012 | 12 | 510.29 | 506.5 | 508.12 | 509.59 | 12652749 | 274
2012-12-27 | 2012 | 12 | 513.54 | 506.5 | 504.66 | 515.06 | 16254240 | 275
2012-12-26 | 2012 | 12 | 519 | 506.5 | 511.12 | 513 | 10801290 | 276
2012-12-24 | 2012 | 12 | 520.35 | 506.5 | 518.71 | 520.17 | 6276711 | 277

We are looking to find the day when a specific stock had its highest starting price. We use pandas, a tool for data analysis, to do this.

  • We start with the stock price data, named 'aapl_historical_stock_price'.
  • Then, we adjust the dates to a standard format ('YYYY-MM-DD').
  • Next, we search for the highest opening price in the data. The 'open' column shows us the starting price of the stock on each day.
  • Once we find the highest opening price, we look for the date(s) when this price occurred.
  • The result shows us the date or dates with this highest opening stock price.

In summary, we are identifying the date when the stock started trading at its highest price. Let’s see the code.

import pandas as pd

df = aapl_historical_stock_price
df['date'] = df['date'].apply(lambda x: x.strftime('%Y-%m-%d'))

result = df[df['open'] == df['open'].max()][['date']]
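A shorter variant uses `idxmax`, though note that it returns only the first date if several days tie for the maximum, whereas the boolean-mask solution returns all of them. The sample data here is made up:

```python
import pandas as pd

# Made-up sample shaped like aapl_historical_stock_price
df = pd.DataFrame({
    "date": ["2012-12-31", "2012-09-21", "2012-12-27"],
    "open": [510.53, 700.00, 513.54],
})

# idxmax gives the index label of the (first) maximum opening price
result = df.loc[[df["open"].idxmax()], ["date"]]
```

Wrapping the label in a list (`[[...]]`) keeps the result a DataFrame rather than a Series.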

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

date
2012-09-21

Basic Python Interview Question #6: Low Fat and Recyclable

This question, asked by Meta/Facebook, wants us to calculate what proportion of all products are both low fat and recyclable.


DataFrame: facebook_products
Expected Output Type: pandas.Series

Link to the question: https://platform.stratascratch.com/coding/2067-low-fat-and-recyclable

Let’s see our data.

Table: facebook_products
product_id | product_class | brand_name | is_low_fat | is_recyclable | product_category | product_family
1 | ACCESSORIES | Fort West | N | N | 3 | GADGET
2 | DRINK | Fort West | N | Y | 2 | CONSUMABLE
3 | FOOD | Fort West | Y | N | 1 | CONSUMABLE
4 | DRINK | Golden | Y | Y | 3 | CONSUMABLE
5 | FOOD | Golden | Y | N | 2 | CONSUMABLE

We need to find out how many products are both low in fat and can be recycled. We use pandas for data analysis.

  • First, we look at the products data and pick out only those that are marked as low fat ('Y' in 'is_low_fat') and recyclable ('Y' in 'is_recyclable').
  • We then count how many products meet both these conditions.
  • Next, we compare this number to the total number of products in the dataset.
  • We calculate the percentage by dividing the number of low fat, recyclable products by the total number of products and multiplying by 100.

Simply put, we are figuring out the fraction of products that are both healthy (low fat) and environmentally friendly (recyclable) and expressing it as a percentage. Let's see the code.

df = facebook_products[(facebook_products.is_low_fat == 'Y') & (facebook_products.is_recyclable == 'Y')]
result = len(df) / len(facebook_products) * 100.0
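A neat trick worth mentioning in an interview: the mean of a boolean mask is the fraction of True values, so the same percentage comes out in one line. The DataFrame below is a made-up sample:

```python
import pandas as pd

# Made-up sample shaped like facebook_products
facebook_products = pd.DataFrame({
    "is_low_fat":    ["N", "N", "Y", "Y", "Y"],
    "is_recyclable": ["N", "Y", "N", "Y", "N"],
})

# True counts as 1 and False as 0, so mean() is the fraction of rows
# matching both conditions; multiply by 100 for a percentage
mask = (facebook_products["is_low_fat"] == "Y") & \
       (facebook_products["is_recyclable"] == "Y")
result = mask.mean() * 100.0
```

Here one of five rows matches, so the result is 20%.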

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

8.333

Basic Python Interview Question #7: Products with No Sales

This question, asked by Amazon, requires finding products that have not been sold at all. We need to list the ID and market name of these unsold products.


DataFrames: fct_customer_sales, dim_product
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2109-products-with-no-sales

Let’s see our data.

Table: fct_customer_sales
cust_id | prod_sku_id | order_date | order_value | order_id
C274 | P474 | 2021-06-28 | 1500 | O110
C285 | P472 | 2021-06-28 | 899 | O118
C282 | P487 | 2021-06-30 | 500 | O125
C282 | P476 | 2021-07-02 | 999 | O146
C284 | P487 | 2021-07-07 | 500 | O149
Table: dim_product
prod_sku_id | prod_sku_name | prod_brand | market_name
P472 | iphone-13 | Apple | Apple IPhone 13
P473 | iphone-13-promax | Apple | Apply IPhone 13 Pro Max
P474 | macbook-pro-13 | Apple | Apple Macbook Pro 13''
P475 | macbook-air-13 | Apple | Apple Makbook Air 13''
P476 | ipad | Apple | Apple IPad

We are looking for products that haven't been sold yet. We use a merge function, a way of combining two sets of data, for this task.

  • We start by joining two data sets: 'fct_customer_sales' (which has sales details) and 'dim_product' (which has product details). We link them using 'prod_sku_id', which is like a unique code for each product.
  • We then look for products that do not have any sales. We do this by checking for missing values in the 'order_id' column. If 'order_id' is missing, it means the product wasn't sold.
  • After finding these products, we create a list showing their ID ('prod_sku_id') and market name ('market_name').

In simple words, we are identifying products that have never been sold and listing their ID and the market name they are associated with. Let's see the code.

sales_and_products = fct_customer_sales.merge(dim_product, on='prod_sku_id', how='right')
result = sales_and_products[sales_and_products['order_id'].isna()][['prod_sku_id', 'market_name']]
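The same "never sold" rows can also be found with `merge(indicator=True)`, which adds a `_merge` column recording where each row came from. The two small DataFrames below are made-up samples:

```python
import pandas as pd

# Made-up samples shaped like fct_customer_sales and dim_product
fct_customer_sales = pd.DataFrame({
    "prod_sku_id": ["P472", "P474"],
    "order_id": ["O118", "O110"],
})
dim_product = pd.DataFrame({
    "prod_sku_id": ["P472", "P473", "P474"],
    "market_name": ["Apple IPhone 13", "Apply IPhone 13 Pro Max",
                    "Apple Macbook Pro 13''"],
})

# indicator=True adds '_merge'; 'right_only' marks products that
# appear in dim_product but never in the sales table
merged = fct_customer_sales.merge(
    dim_product, on="prod_sku_id", how="right", indicator=True
)
result = merged.loc[
    merged["_merge"] == "right_only", ["prod_sku_id", "market_name"]
]
```

Checking `_merge` instead of a NaN `order_id` makes the intent explicit and works even if the sales table could legitimately contain missing order IDs.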

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

prod_sku_id | market_name
P473 | Apply IPhone 13 Pro Max
P481 | Samsung Galaxy Tab A
P483 | Dell XPS13
P488 | JBL Charge 5

Basic Python Interview Question #8: Most Recent Employee Login Details

Basic Python Interview Question from Amazon

This question is about finding the latest login information for each employee in Amazon's IT department.


DataFrame: worker_logins
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2141-most-recent-employee-login-details

Let’s see our data.

Table: worker_logins
id | worker_id | login_timestamp | ip_address | country | region | city | device_type
0 | 1 | 2021-12-14 09:01:00 | 65.111.191.14 | USA | Florida | Miami | desktop
1 | 4 | 2021-12-18 10:05:00 | 46.212.154.172 | Norway | Viken | Skjetten | desktop
2 | 3 | 2021-12-15 08:55:00 | 80.211.248.182 | Poland | Mazovia | Warsaw | desktop
3 | 5 | 2021-12-19 09:55:00 | 10.2.135.23 | France | North | Roubaix | desktop
4 | 6 | 2022-01-03 11:55:00 | 185.103.180.49 | Spain | Catalonia | Alcarras | desktop

We need to identify when each employee last logged in and gather all the details about these logins. We use pandas and numpy for data management and analysis.

  • We start with the 'worker_logins' data, which records employees' login times.
  • For each employee ('worker_id'), we find the most recent ('max') login time.
  • We then create a new table ('most_recent') that shows the latest login time for each employee.
  • Next, we merge this table with the original login data. This helps us match each employee's most recent login time with their other login details.
  • We ensure that we're combining the data based on both employee ID and their last login time.
  • Finally, we remove the 'last_login' column from the result as it's no longer needed.

In short, we are picking out the most recent login for each employee and displaying all related information about that login. Let's see the code.

import pandas as pd
import numpy as np

most_recent = (
    worker_logins.groupby(["worker_id"])["login_timestamp"]
    .max()
    .to_frame("last_login")
)
result = pd.merge(
    most_recent,
    worker_logins,
    how="inner",
    left_on=["worker_id", "last_login"],
    right_on=["worker_id", "login_timestamp"],
).drop(columns=['last_login'])
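A merge-free alternative is to sort by timestamp and keep the last row per worker. One caveat: if a worker somehow had two logins at exactly the same maximum timestamp, this keeps only one of them, while the merge version keeps both. The DataFrame below is a made-up sample:

```python
import pandas as pd

# Made-up sample shaped like worker_logins
worker_logins = pd.DataFrame({
    "worker_id": [1, 1, 2],
    "login_timestamp": ["2021-12-14 09:01:00", "2022-01-26 08:58:00",
                        "2022-01-10 09:52:00"],
    "city": ["Miami", "Miami", "Austin"],
})

# After sorting chronologically, the last row within each worker_id
# group is that worker's most recent login
result = (
    worker_logins.sort_values("login_timestamp")
    .drop_duplicates(subset="worker_id", keep="last")
)
```

This pattern (sort, then `drop_duplicates(keep="last")`) is a common interview shortcut for "latest record per group".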

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

worker_id | id | login_timestamp | ip_address | country | region | city | device_type
1 | 20 | 2022-01-26 08:58:00 | 65.111.191.14 | USA | Florida | Miami | desktop
2 | 14 | 2022-01-10 09:52:00 | 66.68.93.191 | USA | Texas | Austin | desktop
3 | 16 | 2022-01-25 08:58:00 | 80.211.248.182 | Poland | Mazovia | Warsaw | desktop
4 | 15 | 2022-01-24 08:48:00 | 46.212.154.172 | Norway | Viken | Skjetten | desktop
5 | 3 | 2021-12-19 09:55:00 | 10.2.135.23 | France | North | Roubaix | desktop

Basic Python Interview Question #9: Customer Consumable Sales Percentages

This Python question, asked by Meta/Facebook, requires us to compare different brands based on the percentage of unique customers who bought their consumable products following a recent advertising campaign.


DataFrames: online_orders, online_products
Expected Output Type: pandas.DataFrame

Link to the question: https://platform.stratascratch.com/coding/2149-customer-consumable-sales-percentages

Let’s see our data.

Table: online_orders
product_id | promotion_id | cost_in_dollars | customer_id | date | units_sold
1 | 1 | 2 | 1 | 2022-04-01 | 4
3 | 3 | 6 | 3 | 2022-05-24 | 6
1 | 2 | 2 | 10 | 2022-05-01 | 3
1 | 2 | 3 | 2 | 2022-05-01 | 9
2 | 2 | 10 | 2 | 2022-05-01 | 1
Table: online_products
product_id | product_class | brand_name | is_low_fat | is_recyclable | product_category | product_family
1 | ACCESSORIES | Fort West | N | N | 3 | GADGET
2 | DRINK | Fort West | N | Y | 2 | CONSUMABLE
3 | FOOD | Fort West | Y | N | 1 | CONSUMABLE
4 | DRINK | Golden | Y | Y | 3 | CONSUMABLE
5 | FOOD | Golden | Y | N | 2 | CONSUMABLE

We are comparing brands to see how popular their consumable products are with customers. We use pandas for data handling.

  • We begin by combining two data sets: one with customer orders (online_orders) and another with product details (online_products). We link them using 'product_id'.
  • Then, we focus on consumable products by filtering the data to include only items in the 'CONSUMABLE' product family.
  • For each brand, we count how many different customers bought their consumable products.
  • We then calculate the percentage of these unique customers out of all customers in the dataset.
  • We round these percentages to the nearest whole number for simplicity.
  • Finally, we arrange the brands so that those with the highest percentage of unique customers are listed first.

In short, we are finding out which brands had the most unique customers for their consumable products, presenting this as a rounded percentage and ordering the brands from most to least popular. Let's see the code.

import pandas as pd

merged = pd.merge(online_orders, online_products, on="product_id", how="inner")
consumable_df = merged.loc[merged["product_family"] == "CONSUMABLE", :]
result = (
    consumable_df.groupby("brand_name")["customer_id"]
    .nunique()
    .to_frame("pc_cust")
    .reset_index())

unique_customers = merged.customer_id.nunique()
result["pc_cust"] = (100.0 * result["pc_cust"] / unique_customers).round()
result
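The nunique-per-group step can also be done by de-duplicating brand/customer pairs first and then using a plain `value_counts`. The DataFrame below is a made-up, already-merged sample:

```python
import pandas as pd

# Made-up sample of orders already merged with product details
merged = pd.DataFrame({
    "brand_name": ["Fort West", "Fort West", "Golden", "Lucky Joe",
                   "Fort West"],
    "product_family": ["CONSUMABLE", "CONSUMABLE", "CONSUMABLE",
                       "CONSUMABLE", "GADGET"],
    "customer_id": [1, 2, 1, 3, 4],
})

# One row per (brand, customer) pair, so value_counts per brand
# equals the number of unique customers per brand
consumable = merged[merged["product_family"] == "CONSUMABLE"]
pairs = consumable[["brand_name", "customer_id"]].drop_duplicates()
pc_cust = (
    100.0 * pairs["brand_name"].value_counts()
    / merged["customer_id"].nunique()
).round()
```

Note the denominator is the unique customers across the whole merged dataset, matching the main solution's baseline.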

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

brand_name | pc_cust
Fort West | 80
Golden | 80
Lucky Joe | 20

Basic Python Interview Question #10: Unique Employee Logins

This question, asked by Meta/Facebook, requires us to identify the worker IDs of individuals who logged in during a specific week in December 2021, from the 13th to the 19th inclusive.


DataFrame: worker_logins

Link to the question: https://platform.stratascratch.com/coding/2156-unique-employee-logins

Let’s see our data.

Table: worker_logins
id | worker_id | login_timestamp | ip_address | country | region | city | device_type
0 | 1 | 2021-12-14 09:01:00 | 65.111.191.14 | USA | Florida | Miami | desktop
1 | 4 | 2021-12-18 10:05:00 | 46.212.154.172 | Norway | Viken | Skjetten | desktop
2 | 3 | 2021-12-15 08:55:00 | 80.211.248.182 | Poland | Mazovia | Warsaw | desktop
3 | 5 | 2021-12-19 09:55:00 | 10.2.135.23 | France | North | Roubaix | desktop
4 | 6 | 2022-01-03 11:55:00 | 185.103.180.49 | Spain | Catalonia | Alcarras | desktop

We are searching for the IDs of workers who logged in between the 13th and 19th of December 2021. We use pandas, a tool for managing data, and datetime for handling dates.

  • We start with the 'worker_logins' data, which has records of when workers logged in.
  • First, we make sure the login timestamps are in a date format that's easy to use.
  • Then, we find the logins that happened between the 13th and 19th of December 2021. We use the 'between' function for this.
  • From these selected logins, we gather the unique worker IDs.
  • The result will be a list of worker IDs who logged in during this specific time period.

Simply put, we are pinpointing which workers logged in during a certain week in December 2021 and listing their IDs. Let's see the code.

import pandas as pd
import datetime as dt

worker_logins["login_timestamp"] = pd.to_datetime(worker_logins["login_timestamp"])
dates_df = worker_logins[
    worker_logins["login_timestamp"].between("2021-12-13", "2021-12-19")
]
result = dates_df["worker_id"].unique()
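One subtlety worth knowing: `between` with the string bound '2021-12-19' stops at midnight on the 19th, so a login later that day would be missed. Comparing calendar dates instead avoids that cutoff. The DataFrame below is a made-up sample:

```python
import pandas as pd
from datetime import date

# Made-up sample shaped like worker_logins
worker_logins = pd.DataFrame({
    "worker_id": [1, 5],
    "login_timestamp": pd.to_datetime(
        ["2021-12-14 09:01:00", "2021-12-19 09:55:00"]
    ),
})

# dt.date drops the time component, so the 19th is fully included
in_week = worker_logins["login_timestamp"].dt.date.between(
    date(2021, 12, 13), date(2021, 12, 19)
)
result = worker_logins.loc[in_week, "worker_id"].unique()
```

Here the 09:55 login on the 19th is kept, whereas a timestamp comparison against '2021-12-19' (midnight) would have dropped it.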

Here is the expected output.

All required columns and the first 5 rows of the solution are shown

0
1
4
3
5

Final Thoughts

So, we've explored some of the most common basic Python interview questions. From basic syntax to data manipulation, we've covered topics that mirror real-world scenarios and are asked by the big tech companies.

Practice is the key to becoming not just good, but great at data science. Theory is important, but the real learning happens when you apply what you've learned. If you want to see more, check out these Python interview questions.



Become a data expert. Subscribe to our newsletter.