Recap: Women Who Code CONNECT Empower

This month, Women Who Code (WWC) held its CONNECT Empower conference, a one-day virtual event with technical sessions and social talks. I had not been able to attend in previous years, and after this year I highly recommend it. Women Who Code is a global non-profit organization that promotes women in technology and offers resources for career-building, networking, and continuous learning. Its CONNECT Empower event incorporates sessions that reflect the group's mission.

The event is split into two parts to cater to attendees across different time zones. I opted for talks in the morning and afternoon EST, but for those who cannot attend live, sessions are recorded and can be found on WWC's YouTube channel. Certain parts of the event, like the virtual career fair booths and networking, are also draws for attending in real time. Compared to conferences I have attended previously, it impressed me that WWC chooses speakers from anonymous proposals to eliminate bias based on background, job title, or years of experience.

A few of my main takeaways from the conference were:

  • Work to celebrate and help fellow technologists. Having a sense of community in the tech field can help build confidence and dispel feelings of imposter syndrome.
  • Practicing consistency with learning goals, like the 100 Days of Code challenge, can be helpful for progressively building programming skills.
  • Inclusion and access were two of the major themes for the WWC keynote, and both are core values worth building upon across organizations. This can be in the form of open-for-all events or content.
  • Be open to asking for feedback from technical interviews.
  • Don't worry if you think a question you need to ask is stupid; it's better to own up to not knowing something and stay open to learning.
  • It can be helpful to expand your interests outside your main area of focus in order to increase adaptability.

Overall, the WWC CONNECT Empower event is a great opportunity to meet fellow tech-minded folks and attend talks that encourage further skill-building and a greater sense of community.

Georgia Tech’s OMSA: Halfway Point Reflections

In fall 2021, I started Georgia Tech's Online Master of Science in Analytics (OMSA). Here are some thoughts so far on the courses I've taken and the overall experience as I head into my sixth class of the program.

Lettie Pate Whitehead Evans Administration Building at Night
Lettie Pate Whitehead Evans Administration Building at Georgia Tech in Atlanta, December 2022.

OMSA Program Background

Georgia Tech's OMSA program is one of a few well-known online graduate programs in the data community. As data science and analytics become more mainstream and academia leans further into online curricula, I assume similar program offerings will continue to grow. I initially heard about Georgia Tech's Master's in Analytics through my brother, who knew about their online cybersecurity program. My main deciding factors were the online format and the price tag of about $10,000 USD. I completely support building data skills through free MOOCs (shoutout to Codecademy and O'Reilly, both of which I still find to be great resources). However, I figured the structure, schedule, and breadth of what Georgia Tech offered would keep me accountable in my studies. There are three different tracks for coursework: Analytical Tools, Business Analytics, and Computational Data Analytics. I am in the Analytical Tools track.

Class Reviews

CSE 6040: Computing for Data Analysis – Fall Semester 2021

This class works primarily in Python to illustrate computing concepts. The homework assignments and tests were auto-graded Jupyter notebooks and were open book and open internet. The instant feedback was helpful, but overall I found the time restrictions for the midterm and final to be challenging. Creating a comprehensive Python file with code snippets from the whole class was helpful for quick searches on the final. There were some optional course items, like a project that could be submitted for extra credit. Aside from some difficulties with overthinking, the class was fairly enjoyable for the range of information it covered.

ISYE 6501: Introduction to Analytics Modeling – Fall Semester 2021

ISYE 6501 is essentially a whirlwind tour of analytics modeling concepts through R. Homework assignments are peer-graded and quizzes allow for a restricted number of note sheets (1 or 2 pages). This was a solid refresher in R and I found the content helpful for other classes like Regression Analysis. The course provided a decent background to the theory and troubleshooting involved in real world analytics problems.

MGT 8803: Business Fundamentals for Analytics – Spring Semester 2022

The class covered as many business concepts as possible, including marketing, accounting, and supply chain optimization. Different professors guided the various modules, so it was interesting to have that mix for the course. The level of straight memorization required to succeed on graded assignments was a bit much for me. Unfortunately, this was a required course, so whether or not the evaluation style was in my lane, I had to deal with it.

ISYE 6414: Regression Analysis – Summer Semester 2022

Regression Analysis covered a breadth of model building and illustrated the underlying concepts behind each approach. The course uses R, and the material included cleaning and transforming data, variable selection, and linear and logistic regression. The homework assignments were fairly simple, and I found having code snippets prepared in one file for the open book portion of exams to be helpful.

CSE 6242: Data and Visual Analytics – Fall Semester 2022

As one of the advanced requirements for the OMSA program, this course was rumored to have a steep learning curve, which made me a little hesitant. The class was a grand tour of different languages and tools like Python, SQL, Spark, and D3. A course project is a huge component of the class, and it was nice to collaborate with classmates and translate what we learned into something tangible. The homework assignments were only a major bummer because you could have a solution that looked exactly like the expected answer but still get zero or minimal points from the auto-grader. There are no explicit homework solutions, and I struggled to understand what exactly needed to be corrected when I did not get full credit. Luckily, there were ample opportunities for extra credit to make up for any missed points from the homework.

ISYE 7406: Data Mining and Statistical Learning – Spring Semester 2023

I’m currently a couple weeks into the course for the Spring 2023 semester but so far the blend of theoretical background for statistics and practical R analysis has been manageable.

Balancing Professional, Academic, and Social Obligations

I attended a meetup in December 2022 at the main Atlanta campus for the Analytics and Cybersecurity programs and reflected on my personal experience after speaking with fellow students and alumni. There was a decent mix of professional backgrounds, and nearly everyone I spoke with also worked full time for the duration of the program.

Personally, I have found that one course a semester has worked best for me. The only semester I doubled up was my first semester for two of the required core classes: ISYE 6501 and CSE 6040. In hindsight I think I would have been better off just taking one course instead of constantly feeling like I was flipping back and forth between the two.

For time spent on homework and studying, I have found that chipping away a little bit every day has given me the best results thus far. It tends to keep concepts fresh, as opposed to taking multi-day breaks between learning material. There have still been times where I have taken breaks for travel or vacation, but a lot of the courses allow some leniency for frontloading assignments and learning, given the schedules for module releases.

Overall OMSA Experience

I’ve been pleasantly surprised in the program so far but still have plenty of days where I have the same frustrations that anyone else might with learning new material. The practical translation of concepts from the courses seems to be the main draw and highlight for folks in the OMSA program. I look forward to continuing to build skills in future classes. I am likely taking MGT 6203 (Data Analytics in Business) for summer 2023 and then ISYE 6669 (Deterministic Optimization) for fall 2023.

Simple Linear Regression for LTER Data

A sampler data package called lterdatasampler from the Long Term Ecological Research (LTER) program allows anyone to work with some neat environmental data. Data samples include weights for bison, fiddler crab body size, and meteorological data from a field station. The package homepage gives suggestions for modeling relationships, such as linear relationships and time series analysis. This post goes over simple linear regression using R and a sugar maple dataset from the lterdatasampler package.

Forest floor of a watershed at Hubbard Brook Experimental Forest showing trees, plants, and leaf litter.
Forest floor of watershed at Hubbard Brook Experimental Forest, August 2014.

The data was collected at Hubbard Brook Experimental Forest in New Hampshire, which I am partial to, having collected and analyzed data there during an undergraduate internship.


The sugar maple data comes from a study and paper by Stephanie Juice and Tim Fahey of Cornell University on the 'Health of Sugar Maple (Acer saccharum) Seedlings in Response to Calcium Addition (2003-2004), Hubbard Brook LTER'. The data summary page points out that the leaf samples were collected along transects from a watershed treated with calcium and from reference watershed sites. The data sample is 359 rows with the following 11 variables: year, watershed, elevation, transect, sample, stem_length, leaf1area, leaf2area, leaf_dry_mass, stem_dry_mass, and corrected_leaf_area.

R Code

The code that follows can be found in an associated GitHub repository here in an R-Markdown file. As mentioned above, the sugar maple data was chosen for simple linear regression due to the linear relationship noted from the data package site.

First, install the lterdatasampler package. If this is your first usage, use the following:
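install.packages("lterdatasampler")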


Once the package is installed, call it using the library function and load the car (Companion to Applied Regression) package and caTools (Tools: Moving Window Statistics, GIF, Base64, ROC AUC, etc) package for later use.
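library(lterdatasampler)
library(car)
library(caTools)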


Call hbr_maples to preview the data.

A table shows variables from the hbr_maples dataset organized in rows and columns.

For background to the study, I plotted the corrected leaf area (in centimeters squared) between the reference and calcium treated Watershed 1. This shows the overall differences in samples collected between each area. This simple linear regression analysis will look at the overall measurements without respect to watershed, but this boxplot can help give an idea of the spread of data.

plot(hbr_maples$watershed, hbr_maples$corrected_leaf_area,
     ylab='Corrected Leaf Area (cm^2)',
     main='Sugar Maple Leaf Area for Watershed Samples')
Boxplot shows leaf area for reference and calcium treated watersheds

Next, I created scatterplots between the corrected leaf area and stem length, stem dry mass, and leaf dry mass. These plots give a preliminary look at whether a linear relationship appears reasonable to explore further.

par(mfrow=c(1,3))
plot(hbr_maples$corrected_leaf_area, hbr_maples$stem_length,
     xlab='Leaf Area (cm^2)',
     ylab='Stem Length (mm)')
plot(hbr_maples$corrected_leaf_area, hbr_maples$stem_dry_mass,
     xlab='Leaf Area (cm^2)',
     ylab='Stem Dry Mass (g)')
plot(hbr_maples$corrected_leaf_area, hbr_maples$leaf_dry_mass,
     xlab='Leaf Area (cm^2)',
     ylab='Leaf Dry Mass (g)')
title("Scatterplots of Stem Length, Stem Dry Mass, and Leaf Dry Mass", line = -1, outer = TRUE)
Scatterplots show corrected leaf area plotted against variables for stem length, stem dry mass, and leaf dry mass.

I decided to work with the leaf dry mass as the predicting variable and corrected leaf area as the response for this regression. As a precaution, I check to see if there are null or NA values in the variable I plan on using for the response variable. Missing values here would not be helpful in training the model so those can be found and removed.

hbr_maples_cleaned <- hbr_maples[!$corrected_leaf_area),]

The preliminary graphs above seem to show at least some indication of an outlier. That can be checked with another quick plot and summary statistics for the response variable.

A scatterplot shows leaf dry mass and one outlier far outside the other points.
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.01170 0.03540 0.04745 0.05169 0.06105 0.38700

The scatterplot shows one value in particular that can be removed. Find the respective index and then drop it from a newly declared variable.

hbr_maples_cleaned2 <- hbr_maples_cleaned[-53, ]

Split the data into training and test sets for later evaluation using the split function from caTools. With such a small dataset, the accuracy we generate will not be especially reliable, but I would like to show these steps as good practice for larger datasets. I chose a 70% training and 30% test split, which gives 167 records for the training set and 72 records for the test set.

maple_split <- sample.split(hbr_maples_cleaned2$corrected_leaf_area, SplitRatio = 0.7)
train_data <- hbr_maples_cleaned2[maple_split==TRUE,]
test_data <- hbr_maples_cleaned2[maple_split==FALSE,]

Create a simple linear regression model to generate the corrected leaf area based on the leaf dry mass using the lm() function. Preview the model output using summary().

leafarea_model <- lm(corrected_leaf_area ~ leaf_dry_mass, data=train_data)
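summary(leafarea_model)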
## Call:
## lm(formula = corrected_leaf_area ~ leaf_dry_mass, data = train_data)
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.5812  -1.7899  -0.3127   1.5761   6.9762 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     9.3121     0.5963   15.62   <2e-16 ***
## leaf_dry_mass 350.8474    11.0620   31.72   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 3.061 on 165 degrees of freedom
## Multiple R-squared:  0.8591, Adjusted R-squared:  0.8582 
## F-statistic:  1006 on 1 and 165 DF,  p-value: < 2.2e-16

Check the confidence interval of the linear regression model using the confint() function. This shows at the default of 95% confidence, the coefficient for leaf dry mass is between about 329 and 373.
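confint(leafarea_model)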

##                    2.5 %    97.5 %
## (Intercept)     8.134648  10.48949
## leaf_dry_mass 329.006035 372.68880

Next, plot the residuals from the model against the fitted values. Declare variables for each and then plot in a scatterplot. The variance among the plotted values appears generally constant, but with some widely spread-out points as the fitted values increase.

leafarea_resids <- residuals(leafarea_model)
leafarea_fitted <- leafarea_model$fitted

plot(leafarea_fitted, leafarea_resids,
     main='Residuals vs Fitted Values of Leaf Area Model',
     xlab='Fitted Values',
     ylab='Residual Values')
lines(lowess(leafarea_fitted, leafarea_resids), col='red')
A scatterplot of residuals and fitted values from the leaf area model.

Plot a histogram of the residuals from the model to check whether they are normally distributed. The normality assumption for linear regression is better satisfied when the residuals show a roughly bell-shaped distribution.

hist(leafarea_resids,main="Histogram of Residuals",xlab="Residuals")
A histogram and QQ-plot show the residuals of the leaf area model.

We can see from the histogram of residuals that the data might benefit from a transformation to give the errors a more normal distribution.

To identify outliers that may influence the model, we can use Cook's distance. This flags data points that could be influencing the model and marks them for potential removal. Here, it provided the index of 175 as an outlier that may need to be removed.

cd_leafarea_model <- cooks.distance(leafarea_model)
leafarea_model_abovethreshold <- as.numeric(names(cd_leafarea_model)[(cd_leafarea_model > 1)])
## [1] 175

Then, we can locate the row that held the outlier and remove it. Overall, the handling of outliers depends on the goals for regression analysis and sometimes they are better to keep in.

train_data_cleaned <- train_data[-leafarea_model_abovethreshold, ]

Create a new linear regression model in the data where the outlier has been removed. We can see from the summary output that the model is the same as above, so removing the outlier did not have an effect.

leafarea_model2 <- lm(corrected_leaf_area ~ leaf_dry_mass, data=train_data_cleaned)
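summary(leafarea_model2)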
## Call:
## lm(formula = corrected_leaf_area ~ leaf_dry_mass, data = train_data_cleaned)
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.5812  -1.7899  -0.3127   1.5761   6.9762 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     9.3121     0.5963   15.62   <2e-16 ***
## leaf_dry_mass 350.8474    11.0620   31.72   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 3.061 on 165 degrees of freedom
## Multiple R-squared:  0.8591, Adjusted R-squared:  0.8582 
## F-statistic:  1006 on 1 and 165 DF,  p-value: < 2.2e-16

Check to see if the model would benefit from a Box-Cox transformation to improve the model’s fit. This can help better meet the assumptions of simple linear regression, such as residual distribution. The lambda value given will dictate the transformation action that is suggested. From the output below, the optimal lambda value rounded to the nearest 0.5 would be 1. This means no transformation is suggested.

bc_leafarea <- boxCox(leafarea_model2)
A Box-Cox plot shows the lambda value range of the leaf area model.
lambda_bc_model2 <- bc_leafarea$x[which(bc_leafarea$y==max(bc_leafarea$y))]
## [1] 1.070707

The linear regression equation we can construct from the model is:

Corrected Leaf Area (cm^2) = 9.3121 + 350.8474 * (Leaf Dry Mass (g))

The Multiple R-Squared value from the summary means about 86% of the variance found can be explained by the model.

Last, use the test data to check the model's performance with the predict() function and calculate the mean squared prediction error (MSPE). The MSPE is about 9.35, and that value can be used as a performance comparison if other models are created.

pred_test <- predict(leafarea_model, test_data)
mse.model <- mean((pred_test-test_data$corrected_leaf_area)^2)
cat("The mean squared prediction error is",mse.model,"\n")
## The mean squared prediction error is 9.347519


This is a simple example of linear regression, and more practice can be gained with lm() when looking at other variables in the data. Given the linear relationship, multiple linear regression models are good to explore as well after working through standard variable selection processes. There is plenty more to discover in the other lterdatasampler datasets.

A rock with a marker for the Hubbard Brook Experimental Forest in New Hampshire.
A rock with a marker outside a building at Hubbard Brook Experimental Forest, August 2014.


Juice, S. and T. Fahey. 2019. Health and mycorrhizal colonization response of sugar maple (Acer saccharum) seedlings to calcium addition in Watershed 1 at the Hubbard Brook Experimental Forest ver 3. Environmental Data Initiative.

Species Richness and Distribution in the National Parks with pandas

Here's how to use Python and pandas to explore species data for the United States National Parks to find the average species richness and the distribution of species categories. This post goes over some of the built-in functions in pandas and how to use them for exploratory data analysis. The source data is available via Kaggle or the National Parks Species website. The associated GitHub repository, with the two Jupyter Notebooks for a more in-depth look at the code below, is available here.

An elk sits off a trail at Yellowstone National Park as visitors walk by.
An elk sits near a trail in Yellowstone National Park, May 2017.


Import the necessary packages, including pandas, matplotlib, seaborn, and math.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import math

Load the species dataset and the parks dataset with pandas. Given the size of the species data and that I used a Jupyter Notebook for this, I give the low_memory argument a value of False so it loads without too much trouble. The dataframes can be given any name.

species_data = pd.read_csv('species.csv', low_memory=False)
parks_data = pd.read_csv('parks.csv')

Preview the datasets with the head() and info() pandas functions. The columns of focus will be park name and category for the species dataframe, and park name and acres for the parks dataframe.
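species_data.head()
parks_data.head()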

A pandas dataframe shows columns and rows for species data.
A preview of the species dataframe info shows all the columns, their counts, and data types.
A pandas dataframe shows columns and rows for parks data.

Species Richness

Species richness is the quantification of species in a given area and can be a helpful metric for biodiversity.

For this, we’ll want to merge the species data and the parks data. The species data will need to be re-formatted beforehand.

Out of the box, the species data has one record per type of species per row. This will need to be turned into individual counts per column to get the number of species in each park. Use the groupby() function in pandas specifying the park name column, and then use count().

all_species_data = species_data.groupby(['Park Name']).count()
A pandas dataframe shows columns per each national park and their respective counts.

Next, I narrow down to just the Species ID column and rename it to 'Species'.

species_counts = all_species_data[['Species ID']].copy()
species_counts = species_counts.rename(columns={'Species ID' : 'Species'})

The parks data will need the park names set as the index as well. Then, the two dataframes will be in good shape to merge. Specify the two dataframes, and give the left_index and right_index arguments values of True because the indexes (park names) are the same in each.

parks_data = parks_data.set_index('Park Name')
richness_data = species_counts.merge(parks_data, left_index=True, right_index=True)

Preview the newly merged dataframe to confirm it looks correct.

A pandas dataframe shows merged species and park data columns.

To take this one step further, estimate the acreage per species using the species counts. Create a function that takes the acres column and the species count column, divides acres by species, and normalizes the result to an integer with math's floor() function.

def species_abundance(df):
    return df.apply(
        lambda row:
            math.floor(row['Acres'] / row['Species']),
        axis=1)

Create a new column in the dataframe and apply the species abundance function.

richness_data['Species Abundance'] = species_abundance(richness_data)
A dataframe show the number of species and abundance of species per acre in individual national parks.

Calculate the mean across all the parks. There are a lot of variables to consider that might affect this number, but it serves its purpose as a quick summary statistic, giving us about 525 acres per species.

print(richness_data['Species Abundance'].mean())

Species Type Distribution

Species distribution in this setting will be the ratio of species types throughout each park. The different species categories in this data set are: ‘Mammal’, ‘Bird’, ‘Reptile’, ‘Amphibian’, ‘Fish’, ‘Vascular Plant’, ‘Spider/Scorpion’, ‘Insect’, ‘Invertebrate’, ‘Fungi’, ‘Nonvascular Plant’, ‘Crab/Lobster/Shrimp’, ‘Slug/Snail’, ‘Algae’.

To extract just the park names and categories, create a new dataframe with just these columns.

types = species_data[['Park Name', 'Category']].copy()
A pandas dataframe shows one row per species record of a given park and category.

Use pandas' groupby() function to group by park name and category. Then, use the size() function to count the rows in each group, and unstack() to pivot each unique category value into its own column. I give the fill_value argument of unstack a value of 0 to keep anything that would otherwise be NaN consistent for math operations.

df = types.groupby(['Park Name','Category']).size().unstack(fill_value=0)
A pandas dataframe shows the number of species in each category per park.

Next, take the counts and put them all on the same scale out of 100. Use pandas' div() function to divide each row by its sum. Then, multiply each value by 100 for better readability and translation for visuals.

ratios = df.div(df.sum(axis=1), axis=0).multiply(100)
A pandas dataframe shows raw values for each park's species category ratios.

To clean the dataframe values further, round each variable to 2 decimal places using Python’s round() function.

rounded = ratios.round(2)
A pandas dataframe shows rounded values for ratios in each park's species category.

Create a box plot using matplotlib and seaborn to show the breakdown of species categories across all parks.

f, ax = plt.subplots(figsize=(12, 8))
sb.boxplot(data=rounded, orient='h')
sb.despine(trim=True, left=True)
plt.title('Species Category Distribution in the National Parks', fontsize=16)
A boxplot shows the species category types and their distributions.

Optionally, export the species categories dataframe as a CSV for further use.
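A minimal sketch of that export call, with an assumed example file name:

rounded.to_csv('species_category_percentages.csv')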

A table lists the percentage breakdown of species categories (Algae, Amphibian, Bird, Crab/Lobster/Shrimp, Fish, Fungi, Insect, Invertebrate, Mammal, Nonvascular Plant, Reptile, Slug/Snail, Spider/Scorpion, Vascular Plant) for each national park, from Acadia National Park through Zion National Park.


Further exploration of the species category and count outputs might involve comparing a select number of parks against each other. Unique factors such as location, park size, and biomes provide opportunities for further analysis and insights.

Sentiment Analysis of Product Reviews with Python Using NLTK

Here is a brief overview of how to use the Python package Natural Language Toolkit (NLTK) for sentiment analysis with Amazon food product reviews. This is a basic way to use text classification on a dataset of words to help determine whether a review is positive or negative. The following is a snippet of a more comprehensive tutorial I put together for a workshop for the Syracuse Women in Machine Learning and Data Science group.


The data for this tutorial comes from the Grocery and Gourmet Food Amazon reviews set from Jianmo Ni found at Amazon Review Data (2018). Out of the review categories to choose from, this set seemed like it would have a diverse range of people’s sentiment about food products. The data set itself is fairly large, so I use a smaller subset of 20,000 reviews in the example below.

A data frame preview shows the categories available from the reviews data set.
A preview of the full Groceries and Gourmet Food reviews data set from Amazon shows the available data features.

Steps to clean the main data using pandas are detailed in the Jupyter Notebook. The reviews are categorized on an overall rating scale of 1 to 5, with 1 being the lowest approval and 5 being the highest. I split the data so that reviews rated 1 or 2 are labeled as negative and those rated 4 or 5 as positive. I omit ratings of 3 for this exercise because they could vary between negative and positive.

Prepare Data for Classification

Import the necessary packages. The steps below assume the data has already been cleaned using pandas.

import pandas as pd
import random
import string
import nltk
from nltk.tokenize import WhitespaceTokenizer
from nltk.corpus import stopwords
from nltk import classify
from nltk import NaiveBayesClassifier

Load in the cleaned data from a CSV from a data folder using pandas.

reviews = pd.read_csv('data/combined_reviews.csv')

The main cleaned dataframe has three columns: overview, reviewText, and reaction. The overview column has the numeric review rating, the reviewText column has the product reviews in strings, and the reaction column is marked with ‘positive’ or ‘negative’. Each row represents an individual review.

A condensed dataframe shows three columns: overall rating, review text, and reaction.
The cleaned pandas dataframe shows the three columns for overall rating, review text, and reaction type for the product reviews.

Reduce the main pandas dataframe to a smaller sample using pandas' sample function inside a lambda applied to each reaction group. I use an even split of 20,000 reviews (10,000 positive and 10,000 negative).

sample_df = reviews.groupby('reaction').apply(lambda x: x.sample(n=10000)).reset_index(drop = True)

Use this sample dataframe to create a list for each sentiment type. Use the loc function from pandas to specify each entry that has ‘positive’ or ‘negative’ in the reaction column, respectively. Then, use the pandas tolist() function to convert the dataframe to a list type.

pos_df = sample_df.loc[sample_df['reaction'] == 'positive']
pos_list = pos_df['reviewText'].tolist()

neg_df = sample_df.loc[sample_df['reaction'] == 'negative']
neg_list = neg_df['reviewText'].tolist()

With these lists, use the lower() function and list comprehension to make each review lowercase. This reduces variance from the different forms a word can take depending on capitalization.

pos_list_lowered = [word.lower() for word in pos_list] 
neg_list_lowered = [word.lower() for word in neg_list]

Turn the lists into string types to more easily separate words and prepare for more cleaning. For this text classification, we will consider the frequency of words in each type of review.

pos_list_to_string = ' '.join([str(elem) for elem in pos_list_lowered])  
neg_list_to_string = ' '.join([str(elem) for elem in neg_list_lowered])

To eliminate noise in the data, stop words (examples: ‘and’, ‘how’, ‘but’) should be removed, along with punctuation. Use NLTK’s built-in function for stop words to specify a variable for both stop words and punctuation.

stop = set(stopwords.words('english') + list(string.punctuation))

Create a variable for the tokenizer. Tokenizing will separate all the words in the text based on a chosen rule. In this example, I chose a whitespace tokenizer, which means words will be separated based on whitespace.

tokenizer = WhitespaceTokenizer()

Use list comprehension on the positive and negative word lists to tokenize any word that is not a stop word or a punctuation item.

filtered_pos_list = [w for w in tokenizer.tokenize(pos_list_to_string) if w not in stop] 

filtered_neg_list = [w for w in tokenizer.tokenize(neg_list_to_string) if w not in stop]

Remove any punctuation that may be leftover if it was attached to a word itself.

filtered_pos_list2 = [w.strip(string.punctuation) for w in filtered_pos_list]
filtered_neg_list2 = [w.strip(string.punctuation) for w in filtered_neg_list]

As an optional sidebar, use NLTK’s Frequency Distribution function to check some of the most common words and their number of appearances in the respective reviews.

fd_pos = nltk.FreqDist(filtered_pos_list2) 
fd_neg = nltk.FreqDist(filtered_neg_list2)
A frequency distribution for positive food product reviews shows common words and their counts.
A list shows individual words pulled from positive food product reviews and their relative frequency in the sample set.

Create a function to make the feature sets for text classification. This will take the lists and create dictionaries with the proper labels.

def word_features(words):
     return dict([(word, True) for word in words.split()])

Label the sets of word features and combine into one set to be split for training and testing for sentiment analysis.

positive_features = [(word_features(f), 'pos') for f in filtered_pos_list2]
negative_features = [(word_features(f), 'neg') for f in filtered_neg_list2]

labeledwords = positive_features + negative_features

Randomly shuffle the list of words before use in the classifier to reduce the likelihood of bias toward a given feature label.
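random.shuffle(labeledwords)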


Training and Testing the Text Classifier for Sentiment

Create a training set and a test set from the list. From NLTK, call on the Naïve Bayes Classifier and specify that the training set will train the model for sentiment analysis.

train_set, test_set = labeledwords[2000:], labeledwords[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)

Calculate the accuracy of the model.

print(nltk.classify.accuracy(classifier, test_set))

Provide some test example reviews for proof of concept and print the results.

print(classifier.classify(word_features('I hate this product, it tasted weird')))

Use NLTK to show the most informative features of the text classifier. This generates a list based on certain features and shows the likelihood that they point to a specific classification of positive or negative review.
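classifier.show_most_informative_features()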

NLTK's output for most informative features shows a list of words, their feature labels, and the likelihood of their occurrence in each review classification.
Output from NLTK’s most informative features for the Naïve Bayes Classifier.

Further Steps

This was an overview of sentiment analysis with NLTK. There are opportunities to increase the accuracy of the classification model. One example would be to use part-of-speech tagging to train the model using descriptive adjectives or nouns. Another idea to pursue would be to use the results of the frequency distribution and select the most common positive and negative words to train the model.

The full GitHub repository tutorial for this can be found here.

How to Build an Inventory App with Tkinter

An app shows features to edit and show an inventory database.

Here's how to build an inventory app connected to a SQLite database using Python and tkinter. This is a basic GUI (graphical user interface) to view, edit, and calculate specific inventory sums. The example below is for an inventory of supplies to complement small-scale shopkeeping tasks. View the GitHub repository here.

1. Set up the initial SQLite database with desired column names.

First, we'll have to create a SQLite database to connect to if one does not already exist. Import sqlite3 to start.

import sqlite3

Create a connection to a database. In this instance, a new database will be created if one does not already exist with this name.

connection = sqlite3.connect("inventory.db")
cursor = connection.cursor()

Establish a cursor with the connection, which we use to execute the creation of the desired table and columns. After 'CREATE TABLE', provide a name for the table, in this case 'items'. In parentheses, list the desired column names, each followed by the data type to store it in. The full list of data type options for SQLite can be found here.

cursor.execute("CREATE TABLE items (name TEXT, quantity INTEGER, price INTEGER)")

Commit the changes to the database, and close the connection.
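connection.commit()
connection.close()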


2. Create the main window.

Import the necessary packages (tkinter and sqlite3).

from tkinter import *
import sqlite3

Form the initial window for the application. Specify the dimensions using geometry and the title, which will appear in the window's header.

window = Tk()
window.title("Inventory Summary")
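# set the window dimensions; the "400x550" size here is an assumed example value
window.geometry("400x550")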

3. Create the entry fields, labels, functions, and buttons to access the database.

Add a Record to the Database

Create entry boxes and associated labels for the database columns (name, quantity, and price). The Entry function creates an entry field within the specified window. The Label function gives a label to the feature.

item_name = Entry(window, width=20)
item_name.grid(row=0, column=1, pady=2, sticky=W)
item_quantity = Entry(window, width=20)
item_quantity.grid(row=1, column=1, pady=2, sticky=W)
item_price = Entry(window, width=20)
item_price.grid(row=2, column=1, pady=2, sticky=W)

item_name_label = Label(window, text='Name ')
item_name_label.grid(row=0, column=0, pady=2, sticky=E)
item_quantity_label = Label(window,  text='Quantity ')
item_quantity_label.grid(row=1, column=0, pady=2, sticky=E)
item_price_label = Label(window, text ='Price ($) ')
item_price_label.grid(row=2,column=0, pady=2, sticky=E)

Write a function to carry out adding the new record to the database. Each function follows the same basic format where we create a connection to the database and set up the cursor. We insert the values entered in the form, commit and close the connection, and then clear out the entries.

def submit():
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    cursor.execute("INSERT INTO items(name,quantity,price) VALUES (?,?,?)",(item_name.get(),item_quantity.get(),item_price.get()))
    # commit the insert and close the connection so the new record persists
    connection.commit()
    connection.close()
    item_name.delete(0, END)
    item_quantity.delete(0, END)
    item_price.delete(0, END)

Create a button to click to add the record to the database.

submit_btn = Button(window, text="Add Record to Database", command=submit)
submit_btn.grid(row=3, column=0, columnspan=2, pady=2)

Show Records

Create the function to print out the records in the database. This selects all columns, including their original IDs, from the SQLite table. Provide formatting for the display of data, including customizing the price to display in standard United States Dollars (USD).

def query():
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    cursor.execute("SELECT *, oid FROM items")
    records = cursor.fetchall()
    print_records = ''
    for record in records:
        print_records += str(record[0]) + ", " + str(record[1]) + " items, $" + "{:.2f}".format(float(record[2])) + ", ID" + "\t" + str(record[3]) +"\n"
    query_label = Label(window, text=print_records)
    query_label.grid(row=5, column=0, columnspan=2)

Create a button to show the database’s records.

query_btn = Button(window, text="Show Records", command=query)
query_btn.grid(row=4, column=0, columnspan=2, pady=2)

Update a Record

The update record feature runs based on the ID specified in the ‘Select ID’ field.

select_box=Entry(window, width=20)
select_box.grid(row=6, column=1, pady=2, sticky=W)

select_box_label = Label(window, text='Select ID ')
select_box_label.grid(row=6, column=0, pady=2, sticky=E)

Then, I create two functions: one for actually updating the database, and another for creating the separate window where this action takes place. The separate window incorporates many of the same elements we already have in the primary window.

def update():
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    record_id = select_box.get()

    cursor.execute(
        'UPDATE items SET name=?, quantity=?, price=? WHERE oid=?',
        (item_name_editor.get(), item_quantity_editor.get(), item_price_editor.get(), record_id))
    # commit the update and close the connection
    connection.commit()
    connection.close()

def edit():
    global editor
    editor = Tk()
    editor.title("Edit Inventory")
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    record_id = select_box.get()

    cursor.execute("SELECT * FROM items WHERE oid=?",(record_id,))
    records = cursor.fetchall()

    global item_name_editor
    global item_quantity_editor
    global item_price_editor

    item_name_editor = Entry(editor, width=20)
    item_name_editor.grid(row=0, column=1, sticky=W)
    item_quantity_editor = Entry(editor, width=20)
    item_quantity_editor.grid(row=1, column=1, sticky=W)
    item_price_editor = Entry(editor, width=20)
    item_price_editor.grid(row=2, column=1, sticky=W)

    item_name_label_editor = Label(editor, text='Name ')
    item_name_label_editor.grid(row=0, column=0, sticky=E)
    item_quantity_label_editor = Label(editor,  text='Quantity ')
    item_quantity_label_editor.grid(row=1, column=0, sticky=E)
    item_price_label_editor = Label(editor, text ='Price ($) ')
    item_price_label_editor.grid(row=2,column=0, sticky=E)

    for record in records:
        item_name_editor.insert(0, record[0])
        item_quantity_editor.insert(0, record[1])
        item_price_editor.insert(0, record[2])
    save_btn = Button(editor, text="Save Record", command=update)
    save_btn.grid(row=11, column=0, columnspan=2, pady=10, padx=10, ipadx=145)

Create the button for updating records.

edit_btn = Button(window, text="Update Record", command=edit)
edit_btn.grid(row=11, column=0, columnspan=2, pady=2)

Delete a Record

This function runs based on the ID specified in the ‘Select ID’ form.

def delete():
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    cursor.execute("DELETE from items WHERE oid=?",(select_box.get(),))
    # commit the deletion and close the connection
    connection.commit()
    connection.close()

Create the button to remove a record from the database.

delete_btn = Button(window, text="Delete Record", command=delete)
delete_btn.grid(row=12, column=0, columnspan=2, pady=2)

4. Create a calculator button to inform updates.

One feature I wanted was a calculator for the price of a quantity within the total price of an item. For example, if I used 3 of item A, how much of the total price for that inventory would I potentially deduct? This uses the same 'Select ID' field mentioned above.

def calc_price():
    connection = sqlite3.connect("inventory.db")
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM items WHERE oid=?",(select_box.get(),))
    records = cursor.fetchall()
    price_sum = []
    for record in records:
        price_sum.append(round((record[2] / record[1]) * int(price_calc.get()),2))
    global calc_sum_label
    calc_sum_label = Label(window, text=price_sum)
    calc_sum_label.grid(row=9, column=0, columnspan=2, pady=2)

After running the ‘Calculate Price Sum’ button, the output must be cleared each time before making another calculation.

def clear_output():
    # remove the previously displayed price sum label
    calc_sum_label.destroy()

Create the entry and label for the price sum function.

price_calc=Entry(window, width=20)
price_calc.grid(row=7, column=1, pady=2, sticky=W)

price_calc_label = Label(window, text='Quantity for Price Sum ')
price_calc_label.grid(row=7, column=0, pady=2, sticky=E)

Create the 'Calculate Price Sum' button and the 'Clear Output' button.

calculate_price_btn = Button(window, text="Calculate Price Sum", command=calc_price)
calculate_price_btn.grid(row=8, column=0, columnspan=2, pady=2)

clear_output_btn = Button(window, text="Clear Output", command=clear_output)
clear_output_btn.grid(row=10, column=0, columnspan=2, pady=2)

Layout with Tkinter

There are two methods for layout with tkinter: grid and pack. I use the grid method, which allows the app to be designed using column and row placement. I use additional arguments like column span to use more than one column for placement, and sticky to keep items to either the west or east sides of the specified columns.

Transforming Categorical Survey Data with pandas and GeoPy

In this tutorial, I review ways to take raw categorical survey data and create new variables for analysis and visualizations with Python using pandas and GeoPy. I’ll show how to make new pandas columns from encoding complex responses, geocoding locations, and measuring distances.

Here’s the associated GitHub repository for this workshop, which includes the data set and a Jupyter Notebook for the code.
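To give a flavor of the geocoding and distance steps covered, here is a minimal sketch using GeoPy; the place name and reference coordinates are illustrative assumptions rather than values from the workshop data.

from geopy.geocoders import Nominatim
from geopy.distance import geodesic

# geocode an example location name to coordinates
geolocator = Nominatim(user_agent="survey-workshop-example")
location = geolocator.geocode("Oswego, NY")

if location:
    launch_coords = (location.latitude, location.longitude)
    # distance in miles to an illustrative reference point on Lake Ontario
    reference_point = (43.4553, -76.5105)
    print(geodesic(launch_coords, reference_point).miles)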

Thanks to the St. Lawrence Eastern Lake Ontario Partnership for Regional Invasive Species Management (SLELO PRISM), I was able to use boat launch steward data from 2016 for this virtual workshop. The survey data was collected by boat launch stewards around Lake Ontario in upstate New York. Boaters were asked a series of survey questions and their watercrafts were inspected for aquatic invasive species.

This tutorial was originally designed for the Syracuse Women in Machine Learning and Data Science (Syracuse WiMLDS) Meetup group.

Analyze News Headlines with newsgrab and spaCy

Here’s an overview of how to use newsgrab to get news headlines from Google News. Then, the data can be analyzed using the spaCy natural language processing library.

The motivation behind newsgrab was to pull data on New York colleges to compare headlines about how institutions were being affected by COVID-19. I used the College Navigator from the National Center for Education Statistics to get a list of 4-year colleges in New York to use as the search data.

I had trouble finding a clean way to scrape headlines from Google News. My brother Randy helped me use JavaScript and Playwright to write the code for newsgrab.

Run a Search with newsgrab

First, install newsgrab globally through npm from the command line.

npm install -g newsgrab

Run a line with the package name and specify the file path (if outside current working directory) of a line-separated list of desired search terms. For my example, I used the names of New York colleges.

newsgrab ny_colleges.txt

The output of newsgrab is a JSON file called output.json. It follows an array structure along the lines of the sketch below, shown here with placeholder values:
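[
  {
    "results": ["first headline text", "second headline text"],
    "search_term": "college name"
  }
]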


Afterwards, the output can be handled with Python.

Analyze the JSON Data with spaCy

Import the necessary packages for handling the data. These include: json, pandas, matplotlib, seaborn, re, and spaCy. Specific imports are json_normalize from pandas and Counter from collections.

import json
import pandas as pd
from pandas import json_normalize
import matplotlib.pyplot as plt
import seaborn as sb
import re
import spacy
from collections import Counter

Bring in one of the pre-trained models from spaCy. I use the model called en_core_web_sm. There are other options in their docs for English models, as well as those for different languages.

nlp = spacy.load("en_core_web_sm")

Read in the JSON data as a list and then normalize it with pandas. Specify the record path as ‘results’ and the meta as ‘search_term’ to correspond with the JSON array data structure from the output file.

with open('output.json',encoding="utf8") as raw_file1:
    list1 = json.load(raw_file1)

search_data = pd.json_normalize(list1, record_path='results', meta='search_term',record_prefix='results')

Gather the separate outputs through spaCy. I wanted to pull noun chunks, named entities, and tokens from my results column. For the token output, I use token attributes to specify that I want all tokens except stop words and punctuation. Then, each output is put into a column of the main dataframe.

noun_chunks = []
named_entity = []
tokens = []

for doc in nlp.pipe(df['results_lower'].astype('unicode').values, batch_size=50):
    if doc.is_parsed:
        noun_chunks.append([chunk.text for chunk in doc.noun_chunks])
        named_entity.append([ent.text for ent in doc.ents])
        tokens.append([token.text for token in doc if not token.is_stop and not token.is_punct])
df['results_noun_chunks'] = noun_chunks
df['results_named_entities'] = named_entity
df['results_tokens_clean'] = tokens

Process Tokens

Take the tokens column and flatten it into a list. Perform some general data cleaning like removing special characters and taking out line breaks and the remnants of ampersands. Then, use the counter module to get a frequency count of each of the words in the list.
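A minimal sketch of that flattening step, assuming the tokens column created above (the special-character cleanup is omitted here):

string_list_of_words = [word for token_list in df['results_tokens_clean'] for word in token_list]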

word_frequency = Counter(string_list_of_words)
A raw output from the counter in collections shows words and their associated frequency in the text.
Raw output from the counter module shows tokens and their associated value counts in the total text.

Before analyzing the list, I also remove the tokens for my list of original search terms to keep it more focused on the terms outside of these. Then, I create a dataframe of the top results and plot those with seaborn.

A horizontal countplot shows descending value counts for the top tokens found in the text.
A countplot shows all keyword tokens with value counts over 21 for the college news headline data.

Process Noun Chunks

Perform some cleaning to separate the noun chunks lists per each individual search term. I remove excess characters after converting the output to strings, and then use the explode function from pandas to separate them.

Then, create a variable for the value count of each of the noun chunks, turn that into a dictionary, then map it to the dataframe for the following result.

A pandas dataframe shows news headlines, noun chunks, and separated noun segments and value counts.
A dataframe shows headlines, search terms, noun chunks, and new columns for separated noun chunks and associated value counts.

Then, I sort the values in a new dataframe in descending order, remove duplicates, and narrow down to the top 20 noun chunks with frequencies above 10 to graph in a countplot.

A horizontal countplot shows descending value counts for the top noun chunks found in the text.
A countplot shows all noun chunks with value counts over 9 for the college news headline data.

Process Named Entities

Cleaning the named entity outputs for each headline is nearly the same in process as cleaning the noun chunks. The lists are converted to strings, are cleaned, and use the explode function to separate individually. The outputs for named entities can be customized depending on desired type.

After separating the individual named entities, I use spaCy to identify the type of each and create a new column for these.

named_entity_type = []

for doc in nlp.pipe(named['named_entity'].astype('unicode').values, batch_size=50):
    if doc.is_parsed:
        named_entity_type.append([ent.label_ for ent in doc.ents])

named['named_entities_type'] = named_entity_type

Then, I get the value counts for the named entities and append these to a dictionary. I map the dictionary to the named entity column, and put the result in a new column.

As seen in the snippet of the full dataframe below, the model for identifying named entity values and types is not always accurate. There is documentation for training spaCy’s models for those interested in increased accuracy.

A pandas dataframe shows news headlines, named entities, and separated named entities, named entity type, and value counts.
A dataframe shows headlines, search terms, named entities, and new columns for separated named entities, their type, and associated value counts.

From the dataframe, I narrow down the entity types to exclude cardinal and ordinal types to take out any numbers that may have high frequencies within the headlines. Then, I get the top named entity types with frequencies over 6 to graph.

A horizontal countplot shows descending value counts for the top non-numerical named entities found in the text.
A countplot shows all non-numerical named entities with value counts over 6 for the college news headline data.

For full details and cleaning steps to create the visualizations above, please reference below for the associated gist from Github.

Additional Resources

Natural Language Processing with Python and spaCy by Yuli Vasiliev

Natural Language Processing with spaCy in Python by Taranjeet Singh

Mapping Song Lyric Locations in Python

Here’s an overview of how to map the coordinates of cities mentioned in song lyrics using Python. In this example, I used Lana Del Rey’s lyrics for my data and focused on United States cities. The full code for this is in a Jupyter Notebook on my GitHub under the lyrics_map repository.

A Lana Del Rey album booklet on a map
A map with Lana Del Rey’s Lust for Life album booklet.

Gather Bulk Song Lyrics Data

First, create an account with Genius to obtain an API key. This is used for making requests to scrape song lyrics data from a desired artist. Store the key in a text file. Then, follow the tutorial steps from this blog post by Nick Pai and reference the API key text file within the code.

You can customize the code to cater to a certain artist and number of songs. To be safe, I put in a request for lyrics from 300 songs.

Find Cities and Countries in the Data

After getting the song lyrics in a text file, open the file and use geotext to grab city names. Append these to a new pandas dataframe.

from geotext import GeoText

places = GeoText(content)
cities_from_text = places.cities
city_mentions = pd.DataFrame(cities_from_text, columns=['city'])

Use GeoText to gather country mentions and put these in a column. Then, clean the raw output and create a new dataframe querying only on the United States.

Personally, I focus only on United States cities to reduce errors from geotext reading common words such as ‘Born’ as foreign city names.

A three column dataframe shows city and two country columns.
The results from geotext city and country mentions in a dataframe, with a cleaned country column.
f = lambda x: GeoText(x).country_mentions
origin = city_mentions['city'].apply(f)
city_mentions['country_raw'] = origin

fn = lambda x: list(x)[0]
city_mentions['country'] = city_mentions['country_raw'].apply(fn)

city_mentions = city_mentions[city_mentions['country'] == 'US']

Afterwards, remove the country columns and manually clean the city data. I removed city names that seemed inaccurate.

city_mentions.drop(columns=['country_raw', 'country'], inplace=True)

cities_to_remove = ['Paris','Mustang','Palm','Bradley','Sunset','Pontiac','Green','Paradise',

city_mentions = city_mentions[~city_mentions['city'].isin(cities_to_remove)]

In my example, I corrected Newport and Venice to include ‘Beach’. I understand this can be cumbersome with larger datasets, but I did not find it imperative to automate this task for my example.

city_mentions = city_mentions.replace(to_replace ='Newport', value ='Newport Beach')
city_mentions = city_mentions.replace(to_replace ='Venice', value ='Venice Beach')

Next, save the value counts for each city and convert them into a dataframe to be used later for the map. Reset the index as well to have the two columns as city and mentions.

city_val_counts = city_mentions['city'].value_counts()
city_counts = pd.DataFrame(city_val_counts)

city_counts = city_counts.reset_index()
city_counts.columns = ['city', 'mentions']
A two column dataframe shows cities and number of mentions.
A pandas dataframe shows city and number of song mentions.

Then, create a list of the unique city values.

unique_list = (city_mentions['city'].unique().tolist())

Geocode the City Names

Use GeoPy to geocode the cities from the unique list, which pulls associated coordinates and location data. The user agent needs to be specified to avoid an error. Create a dataframe from this output.

from geopy.geocoders import Nominatim
from geopy.exc import GeocoderTimedOut

chrome_user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36"
geolocator = Nominatim(timeout=10, user_agent=chrome_user_agent)

#geocode each unique city and keep the results that return a location
lat_lon = []
for city in unique_list:
    try:
        location = geolocator.geocode(city)
        if location:
            lat_lon.append(location)
    except GeocoderTimedOut as e:
        print("Error: geocode failed on input %s with message %s" %
              (city, e))

city_data = pd.DataFrame(lat_lon, columns=['raw_data','raw_data2'])
city_data = city_data[['raw_data2', 'raw_data']]

This yields one column as the latitude and longitude and another with comma separated location data.

A two column dataframe showing coordinates and location data such as city, county, zip code and state
The raw output of GeoPy’s geocode function in a pandas dataframe, showing the coordinates and associated location fields in a list.

Reduce the Geocode Data to Desired Columns

I cleaned my data to have only city names and associated coordinates. The output from GeoPy allows for more information such as county and state, if desired.
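
For example, a rough sketch for pulling the state out of the raw location data could reuse the same string split approach (the index position is an assumption and shifts depending on the address format returned):

#grab the state from the comma separated address (index position is an assumption)
city_data['state'] = city_data['raw_data'].astype(str).str.split(',').str[-2].str.strip()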

To split the location data (raw_data) column, split it on the comma and create a new column (city) from the first indexed object.

city_data['city'] = city_data['raw_data'].str.split(',').str[0]
A three column dataframe shows two columns of geocoded output and one for city names.
A dataframe with the outputs from GeoPy geocoder with one new column for string split city names.

Then, convert the coordinates column (raw_data2) into a string type to remove the parentheses and finally split on the comma.

#change the coordinates to a string
city_data['raw_data2'] = city_data['raw_data2'].astype(str)

#split the coordinates using the comma as the delimiter
city_data[['lat','lon']] = city_data.raw_data2.str.split(",",expand=True,)

#remove the parentheses
city_data['lat'] = city_data['lat'].map(lambda x:x.lstrip('()'))
city_data['lon'] = city_data['lon'].map(lambda x:x.rstrip('()'))

Convert the latitude and longitude columns to floats, since Plotly needs numeric values for plotting.

city_data = city_data.astype({'lat': 'float64', 'lon': 'float64'})

Next, drop all the unneeded columns.

city_data.drop(['raw_data2', 'raw_data'], axis = 1, inplace=True)

Drop any duplicates and end up with a clean set of city, latitude, and longitude.
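
A minimal sketch of that step:

#drop duplicate rows so each city appears only once
city_data = city_data.drop_duplicates().reset_index(drop=True)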

A three column dataframe shows city, latitude, and longitude.
The cleaned dataframe for the city, latitude, and longitude.

Create the Final Merged DataFrame and Map

Merge the city coordinates dataframe and city mentions dataframe using a left join on city names.

merged = pd.merge(city_data, city_counts, on='city', how='left')
A four column dataframe shows city names, latitude, longitude, and number of mentions
The final merged dataframe with city, latitude, longitude, and number of song mentions.

Create an account with Mapbox to obtain an API key for plotting the song lyric locations in a Plotly Express bubble map. Alternatively, it is possible to generate the map without an API key if you have Dash installed. Customize the map for visibility by adjusting variables such as the color scale, the zoom extent, and the data that appears when hovering over a point.

import plotly.express as px

#read in the Mapbox access token (file name here is illustrative)
px.set_mapbox_access_token(open('mapbox_token.txt').read())

fig = px.scatter_mapbox(merged, lat='lat', lon='lon', color='mentions', size='mentions',
                  color_continuous_scale=px.colors.sequential.Agsunset, size_max=40, zoom=3)

#add a title to the map layout
fig.update_layout(title={
        'text': 'US Cities Mentioned in Lana Del Rey Songs',
        'xanchor': 'center',
        'yanchor': 'top'})

#save graph as html
with open('plotly_graph.html', 'w') as f:
    f.write(fig.to_html(include_plotlyjs='cdn'))

Improving Visualizations of Hierarchical Qualitative Data


Visualizing qualitative data can be difficult if care is not taken with its hierarchical characteristics. Variables representing levels of feeling can be presented along a horizontal range to improve comprehension. The online bank, Simple, includes a poll in its newsletter to account holders and often asks for levels of confidence with financial topics. Here’s how to present hierarchical qualitative data in a few different ways based on visualizations from Simple’s monthly newsletter.

To represent qualitative data, careful consideration should be given to:

  • Graph Type
  • Logical Order of Data
  • Color Scheme

Original Graphs

Graph 1

In September, Simple’s poll question was: “How confident do you feel making big purchases in today’s financial environment?” Here is the visualization that accompanied it.

A pie graph created by Simple bank shows levels of confidence for account holder confidence making big purchases in today's financial climate.
Simple’s pie chart of its September survey results for: “How confident do you feel making big purchases in today’s financial environment?”

Although the legend is presented in a sensible high-to-low order, this graph is pretty confusing. The choice of a pie chart muddles the range of emotions being presented. The viewer’s eye, if moving clockwise, hits ‘Not at all Confident’ at about the same time as ‘Very Confident’. The color palette has no inherent significance for the survey responses and does not follow an easily understood color spectrum from high to low.

Graph 2

In November, Simple’s poll question was: “How do you feel about the money you’ll be spending this holiday season?” Below is the graph that illustrated these results.

A bar chart created by Simple bank shows percentages for how account holders feel about their holiday spending.
Simple’s bar chart of November survey results for: “How do you feel about the money you’ll be spending this holiday season?”

Simple’s graph shows various emotions, but does not show them in any particular order, whether by percentage or type of feeling. Similar to the pie chart, the color palette does not have any particular significance.

Improved Graphs

Using Python and matplotlib’s horizontal stacked bar chart, I created different representations of the survey data for big purchase confidence and feelings about holiday spending. A bar chart presents results for viewers to read logically from left to right.

Graph 1

A horizontal bar chart shows Simple's survey results from high to low confidence levels for making big purchases in today's financial climate.
A horizontal stacked bar chart shows a variation of Simple’s September survey results.

I associated the levels of confidence with a green to red spectrum to signify the range of positive to negative feelings. Another variation could have been a monochrome spectrum where a dark shade moving to lighter shades would signify decreasing confidence.
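
As a sketch of that variation, the category_colors line in the plotting code further down could swap the red-to-green colormap for a reversed monochrome one (matplotlib’s 'Greys_r' is one option):

#reversed grey colormap: dark for high confidence, fading lighter as confidence decreases
category_colors = plt.get_cmap('Greys_r')(
    np.linspace(0.15, 0.85, data.shape[1]))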

Graph 2

A horizontal stacked bar chart shows a range of emotions for holiday spending.
A horizontal stacked bar chart shows a variation of Simple’s November survey results.

I arranged the emotions from negative to positive so they form a spectrum. The color palette reflects the movement from troubled to excited by shifting from red to green.


The survey data, as mentioned, comes from Simple’s monthly newsletter.

This article from matplotlib on discrete distribution provided the base for these graphs. The main distinction is that I only included one bar to achieve the singular spectrum of survey results. I found that variations of tree maps and waffle plots did not divide sections horizontally into rectangles as well as the stacked bar plot does.


Visual #1 – September Survey Data

import numpy as np
import matplotlib.pyplot as plt

category_names1 = ['very \nconfident', 'somewhat \nconfident', 'mixed \nfeelings', 'not really \nconfident', 'not at all \nconfident']
results1 = {'': [14,16,30,19,21]}

def survey1(results, category_names):

    labels = list(results.keys())
    data = np.array(list(results.values()))
    data_cum = data.cumsum(axis=1)
    category_colors = plt.get_cmap('RdYlGn_r')(
        np.linspace(0.15, 0.85, data.shape[1]))

    fig, ax = plt.subplots(figsize=(12, 4))
    ax.set_xlim(0, np.sum(data, axis=1).max())

    for i, (colname, color) in enumerate(zip(category_names, category_colors)):
        widths = data[:, i]
        starts = data_cum[:, i] - widths
        ax.barh(labels, widths, left=starts, height=0.5,
                label=colname, color=color)
        xcenters = starts + widths / 2

        r, g, b, _ = color
        text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
        for y, (x, c) in enumerate(zip(xcenters, widths)):
            ax.text(x, y, str(int(c))+'%', ha='center', va='center',
                    color=text_color, fontsize=20, fontweight='bold',
                   fontname='Gill Sans MT')
    ax.legend(ncol=len(category_names), bbox_to_anchor=(0.007, 1),
              loc='lower left',prop={'family':'Gill Sans MT', 'size':'15'})
    return fig, ax

survey1(results1, category_names1)

plt.suptitle(t ='How confident do you feel making big purchases in today\'s financial environment?', x=0.515, y=1.16, 
    fontsize=22, style='italic', fontname='Gill Sans MT')
#plt.savefig('big_purchase_confidence.jpeg', bbox_inches = 'tight')

Visual #2 – November Survey Data

category_names2 = ['in a pickle','worried','fine','calm','excited']
results2 = {'': [14,32,16,29,9]}

def survey2(results, category_names):

    labels = list(results.keys())
    data = np.array(list(results.values()))
    data_cum = data.cumsum(axis=1)
    category_colors = plt.get_cmap('RdYlGn')(
        np.linspace(0.15, 0.85, data.shape[1]))

    fig, ax = plt.subplots(figsize=(10.5, 4))
    ax.set_xlim(0, np.sum(data, axis=1).max())

    for i, (colname, color) in enumerate(zip(category_names, category_colors)):
        widths = data[:, i]
        starts = data_cum[:, i] - widths
        ax.barh(labels, widths, left=starts, height=0.5,
                label=colname, color=color)
        xcenters = starts + widths / 2

        r, g, b, _ = color
        text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
        for y, (x, c) in enumerate(zip(xcenters, widths)):
            ax.text(x, y, str(int(c))+'%', ha='center', va='center',
                    color=text_color, fontsize=20, fontweight='bold', fontname='Gill Sans MT')
    ax.legend(ncol=len(category_names), bbox_to_anchor=(- 0.01, 1),
              loc='lower left', prop={'family':'Gill Sans MT', 'size':'16'})
    return fig, ax

survey2(results2, category_names2)
plt.suptitle(t ='How do you feel about the money you\'ll be spending this holiday season?', x=0.509, y=1.1, fontsize=22,
            style='italic', fontname='Gill Sans MT')
#plt.savefig('holiday_money.jpeg', bbox_inches = 'tight')