Mapping Song Lyric Locations in Python

Here’s an overview of how to map the coordinates of cities mentioned in song lyrics using Python. In this example, I used Lana Del Rey’s lyrics for my data and focused on United States cities. The full code for this is in a Jupyter Notebook on my GitHub under the lyrics_map repository.

A map with Lana Del Rey’s Lust for Life album booklet.

Gather Bulk Song Lyrics Data

First, create an account with Genius to obtain an API key. This is used for making requests to scrape song lyrics data from a desired artist. Store the key in a text file. Then, follow the tutorial steps from this blog post by Nick Pai and reference the API key text file within the code.

You can customize the code for a particular artist and number of songs. To be safe, I put in a request for lyrics from 300 songs.
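For reference, here is a minimal sketch of one way to pull bulk lyrics with the lyricsgenius package; the linked tutorial may structure this differently, and the file name genius_token.txt is simply a stand-in for wherever you stored your key.

import lyricsgenius

# read the Genius API key saved in a text file
with open('genius_token.txt') as f:
    token = f.read().strip()

genius = lyricsgenius.Genius(token)

# pull up to 300 songs for the artist and save the collected lyrics to a file
artist = genius.search_artist('Lana Del Rey', max_songs=300, sort='popularity')
artist.save_lyrics()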

Find Cities and Countries in the Data

After getting the song lyrics in a text file, open the file and use geotext to grab city names, then load them into a new pandas dataframe.

from geotext import GeoText
import pandas as pd

places = GeoText(content)
cities_from_text = places.cities
city_mentions = pd.DataFrame(cities_from_text, columns=['city'])

Use GeoText to gather country mentions and put these in a column. Then, clean the raw output and create a new dataframe filtered to the United States.

Personally, I focus only on United States cities to reduce errors from geotext reading common words such as ‘Born’ as foreign city names.

The results from geotext city and country mentions in a dataframe, with a cleaned country column.
f = lambda x: GeoText(x).country_mentions
origin = city_mentions['city'].apply(f)
city_mentions['country_raw'] = origin

fn = lambda x: list(x)[0]
city_mentions['country'] = city_mentions['country_raw'].apply(fn)

city_mentions = city_mentions[city_mentions['country'] == 'US']

Afterwards, remove the country columns and manually clean the city data. I removed city names that seemed inaccurate.

city_mentions.drop(columns=['country_raw', 'country'], inplace=True)

cities_to_remove = ['Paris','Mustang','Palm','Bradley','Sunset','Pontiac','Green','Paradise',
    'Mansfield','Eden','Crystal','Monroe','Columbia','Laredo','Joplin','Adrian',
    'York','Golden','Oklahoma','Kansas','Coachella','Kokomo','Woodstock']

city_mentions = city_mentions[~city_mentions['city'].isin(cities_to_remove)]

In my example, I corrected Newport and Venice to include ‘Beach’. I understand this can be cumbersome with larger datasets, but I did not find it necessary to automate this task for my example.

city_mentions = city_mentions.replace(to_replace ='Newport', value ='Newport Beach')
city_mentions = city_mentions.replace(to_replace ='Venice', value ='Venice Beach')

Next, store the value counts for each city in a dataframe to be used later for the map. Reset the index as well so the two columns are city and mentions.

city_val_counts = city_mentions['city'].value_counts()
city_counts = pd.DataFrame(city_val_counts)

city_counts = city_counts.reset_index()
city_counts.columns = ['city', 'mentions']
A pandas dataframe shows city and number of song mentions.

Then, create a list of the unique city values.

unique_list = (city_mentions['city'].unique().tolist())

Geocode the City Names

Use GeoPy to geocode the cities from the unique list, which pulls associated coordinates and location data. The user agent needs to be specified to avoid an error. Create a dataframe from this output.

from geopy.geocoders import Nominatim
from geopy.exc import GeocoderTimedOut

chrome_user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36"
geolocator = Nominatim(timeout=10, user_agent=chrome_user_agent)

lat_lon = []
for city in unique_list: 
    try:
        location = geolocator.geocode(city)
        if location:
            lat_lon.append(location)
    except GeocoderTimedOut as e:
        print("Error: geocode failed on input %s with message %s"%
             (city, e))

city_data = pd.DataFrame(lat_lon, columns=['raw_data','raw_data2'])
city_data = city_data[['raw_data2', 'raw_data']]

This yields one column with the latitude and longitude and another with comma-separated location data.

The raw output of GeoPy’s geocode function in a pandas dataframe, showing the coordinates and associated location fields in a list.

Reduce the Geocode Data to Desired Columns

I cleaned my data to have only city names and associated coordinates. The output from GeoPy allows for more information such as county and state, if desired.

To split the location data (raw_data) column, convert it to a string, split it on the comma, and create a new column (city) from the first element.

city_data['city'] = city_data['raw_data'].str.split(',').str[0]
A dataframe with the outputs from GeoPy geocoder with one new column for string split city names.

Then, convert the coordinates column (raw_data2) to a string, split it on the comma, and strip the parentheses.

#change the coordinates to a string
city_data['raw_data2'] = city_data['raw_data2'].astype(str)

#split the coordinates using the comma as the delimiter
city_data[['lat','lon']] = city_data.raw_data2.str.split(",",expand=True,)

#remove the parentheses
city_data['lat'] = city_data['lat'].map(lambda x:x.lstrip('()'))
city_data['lon'] = city_data['lon'].map(lambda x:x.rstrip('()'))

Convert the latitude and longitude columns back to floats, which is the data type Plotly expects.

city_data = city_data.astype({'lat': 'float64', 'lon': 'float64'})

Next, drop all the unneeded columns.

city_data.drop(['raw_data2', 'raw_data'], axis = 1, inplace=True)

Drop any duplicates and end up with a clean set of city, latitude, and longitude.

city_data.drop_duplicates(keep=False,inplace=True)
The cleaned dataframe for the city, latitude, and longitude.

Create the Final Merged DataFrame and Map

Merge the city coordinates dataframe and city mentions dataframe using a left join on city names.

merged = pd.merge(city_data, city_counts, on='city', how='left')
The final merged dataframe with city, latitude, longitude, and number of song mentions.

Create an account with Mapbox to obtain an API key for plotting the song lyric locations in a Plotly Express bubble map. Alternatively, the map can be generated without an API key if you have Dash installed. Customize the map for visibility by adjusting variables such as the color scale, the zoom extent, and the data that appears when hovering over points.

import plotly.express as px

px.set_mapbox_access_token(open("mapbox_token.txt").read())
fig = px.scatter_mapbox(merged, lat='lat', lon='lon', color='mentions', size='mentions',
                        color_continuous_scale=px.colors.sequential.Agsunset, size_max=40, zoom=3,
                        hover_data=['city'])
fig.update_layout(
    title={
        'text': 'US Cities Mentioned in Lana Del Rey Songs',
        'y':0.95,
        'x':0.5,
        'xanchor': 'center',
        'yanchor': 'top'})
fig.show()

#save graph as html
with open('plotly_graph.html', 'w') as f:
    f.write(fig.to_html(include_plotlyjs='cdn'))

Improving Visualizations of Hierarchical Qualitative Data


Visualizing qualitative data can be difficult if its hierarchical characteristics are not taken into account. Variables representing levels of feeling can be presented in a horizontal range to improve comprehension. The online bank Simple includes a poll in its newsletter to account holders and often asks about levels of confidence with financial topics. Here’s how to present hierarchical qualitative data in a few different ways, based on visualizations from Simple’s monthly newsletter.

To represent qualitative data, careful consideration should be given to:

  • Graph Type
  • Logical Order of Data
  • Color Scheme

Original Graphs

Graph 1

In September, Simple’s poll question was: “How confident do you feel making big purchases in today’s financial environment?” Here is the visualization that accompanied it.

Simple’s pie chart of its September survey results for: “How confident do you feel making big purchases in today’s financial environment?”

Although the legend is presented in a sensible high-to-low order, this graph is pretty confusing. The choice of a pie chart muddles the range of emotions being presented: the viewer’s eye, moving clockwise, hits ‘Not at all Confident’ at about the same time as ‘Very Confident’. The color palette also has no inherent significance for the survey responses and does not follow an easily understood high-to-low color spectrum.

Graph 2

In November, Simple’s poll question was: “How do you feel about the money you’ll be spending this holiday season?” Below is the graph that illustrated these results.

Simple’s bar chart of November survey results for: “How do you feel about the money you’ll be spending this holiday season?”

Simple’s graph shows various emotions, but does not show them in any particular order, whether by percentage or type of feeling. Similar to the pie chart, the color palette does not have any particular significance.

Improved Graphs

Using Python and matplotlib’s horizontal stacked bar chart, I created different representations of the survey data for big purchase confidence and feelings about holiday spending. A bar chart presents results for viewers to read logically from left to right.

Graph 1

A horizontal stacked bar chart shows a variation of Simple’s September survey results.

I associated the levels of confidence with a green-to-red spectrum to signify the range of positive to negative feelings. Another variation could have been a monochrome spectrum where a dark shade moving to lighter shades would signify decreasing confidence.

Graph 2

A horizontal stacked bar chart shows a variation of Simple’s November survey results.

I arranged the emotions from negative to positive so they read as a spectrum. The color palette reflects the movement from troubled to excited by shifting from red to green.

References

The survey data, as mentioned, comes from Simple’s monthly newsletter.

This article from matplotlib on discrete distribution provided me with the base for these graphs. The main distinction is that I only included one bar to achieve the singular spectrum of survey results. I found variations of tree maps and waffle plots did not divide sections horizontally in rectangles as well as the stacked bar plot would.

Code

Visual #1 – September Survey Data

import numpy as np
import matplotlib.pyplot as plt

category_names1 = ['very \nconfident', 'somewhat \nconfident', 'mixed \nfeelings', 'not really \nconfident', 'not at all \nconfident']
results1 = {'': [14,16,30,19,21]}

def survey1(results, category_names):

    labels = list(results.keys())
    data = np.array(list(results.values()))
    data_cum = data.cumsum(axis=1)
    category_colors = plt.get_cmap('RdYlGn_r')(
        np.linspace(0.15, 0.85, data.shape[1]))

    fig, ax = plt.subplots(figsize=(12, 4))
    ax.invert_yaxis()
    ax.xaxis.set_visible(False)
    ax.set_xlim(0, np.sum(data, axis=1).max())

    for i, (colname, color) in enumerate(zip(category_names, category_colors)):
        widths = data[:, i]
        starts = data_cum[:, i] - widths
        ax.barh(labels, widths, left=starts, height=0.5,
                label=colname, color=color)
        xcenters = starts + widths / 2

        r, g, b, _ = color
        text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
        for y, (x, c) in enumerate(zip(xcenters, widths)):
            ax.text(x, y, str(int(c))+'%', ha='center', va='center',
                    color=text_color, fontsize=20, fontweight='bold',
                   fontname='Gill Sans MT')
    ax.legend(ncol=len(category_names), bbox_to_anchor=(0.007, 1),
              loc='lower left',prop={'family':'Gill Sans MT', 'size':'15'})
    ax.axis('off')
    return fig, ax

survey1(results1, category_names1)

plt.suptitle(t ='How confident do you feel making big purchases in today\'s financial environment?', x=0.515, y=1.16, 
    fontsize=22, style='italic', fontname='Gill Sans MT')
#plt.savefig('big_purchase_confidence.jpeg', bbox_inches = 'tight')
plt.show()

Visual #2 – November Survey Data

category_names2 = ['in a pickle','worried','fine','calm','excited']
results2 = {'': [14,32,16,29,9]}


def survey2(results, category_names):

    labels = list(results.keys())
    data = np.array(list(results.values()))
    data_cum = data.cumsum(axis=1)
    category_colors = plt.get_cmap('RdYlGn')(
        np.linspace(0.15, 0.85, data.shape[1]))

    fig, ax = plt.subplots(figsize=(10.5, 4))
    ax.invert_yaxis()
    ax.xaxis.set_visible(False)
    ax.set_xlim(0, np.sum(data, axis=1).max())

    for i, (colname, color) in enumerate(zip(category_names, category_colors)):
        widths = data[:, i]
        starts = data_cum[:, i] - widths
        ax.barh(labels, widths, left=starts, height=0.5,
                label=colname, color=color)
        xcenters = starts + widths / 2

        r, g, b, _ = color
        text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
        for y, (x, c) in enumerate(zip(xcenters, widths)):
            ax.text(x, y, str(int(c))+'%', ha='center', va='center',
                    color=text_color, fontsize=20, fontweight='bold', fontname='Gill Sans MT')
    ax.legend(ncol=len(category_names), bbox_to_anchor=(-0.01, 1),
              loc='lower left', prop={'family':'Gill Sans MT', 'size':'16'})
    ax.axis('off')
    return fig, ax


survey2(results2, category_names2)
plt.suptitle(t ='How do you feel about the money you\'ll be spending this holiday season?', x=0.509, y=1.1, fontsize=22,
            style='italic', fontname='Gill Sans MT')
#plt.savefig('holiday_money.jpeg', bbox_inches = 'tight')
plt.show()

Spotify Web API: How to Pull and Clean Top Song Data using Python


I used the Spotify Web API to pull the top songs from my personal account. I’ll go over how to get a user’s fifty top tracks from Spotify using spotipy, clean the data, and produce visualizations in Python.

Top 50 Spotify Songs

Top 50 songs from my personal Spotify account, extracted using the Spotify API.
| # | Song | Artist | Album | Popularity |
|---|------|--------|-------|------------|
| 1 | Borderline | Tame Impala | Borderline | 77 |
| 2 | Groceries | Mallrat | In the Sky | 64 |
| 3 | Fading | Toro y Moi | Outer Peace | 48 |
| 4 | Fanfare | Magic City Hippies | Hippie Castle EP | 57 |
| 5 | Limestone | Magic City Hippies | Hippie Castle EP | 59 |
| 6 | High Steppin' | The Avett Brothers | Closer Than Together | 51 |
| 7 | I Think Your Nose Is Bleeding | The Front Bottoms | Ann | 43 |
| 8 | Die Die Die | The Avett Brothers | Emotionalism (Bonus Track Version) | 44 |
| 9 | Spice | Magic City Hippies | Modern Animal | 42 |
| 10 | Bleeding White | The Avett Brothers | Closer Than Together | 53 |
| 11 | Prom Queen | Beach Bunny | Prom Queen | 73 |
| 12 | Sports | Beach Bunny | Sports | 65 |
| 13 | February | Beach Bunny | Crybaby | 51 |
| 14 | Pale Beneath The Tan (Squeeze) | The Front Bottoms | Ann | 43 |
| 15 | 12 Feet Deep | The Front Bottoms | Rose | 49 |
| 16 | Au Revoir (Adios) | The Front Bottoms | Talon Of The Hawk | 50 |
| 17 | Freelance | Toro y Moi | Outer Peace | 57 |
| 18 | Spaceman | The Killers | Day & Age (Bonus Tracks) | 62 |
| 19 | Destroyed By Hippie Powers | Car Seat Headrest | Teens of Denial | 51 |
| 20 | Why Won't They Talk To Me? | Tame Impala | Lonerism | 59 |
| 21 | Fallingwater | Maggie Rogers | Heard It In A Past Life | 71 |
| 22 | Funny You Should Ask | The Front Bottoms | Talon Of The Hawk | 48 |
| 23 | You Used To Say (Holy Fuck) | The Front Bottoms | Going Grey | 47 |
| 24 | Today Is Not Real | The Front Bottoms | Ann | 41 |
| 25 | Father | The Front Bottoms | The Front Bottoms | 43 |
| 26 | Broken Boy | Cage The Elephant | Social Cues | 60 |
| 27 | Wait a Minute! | WILLOW | ARDIPITHECUS | 80 |
| 28 | Laugh Till I Cry | The Front Bottoms | Back On Top | 47 |
| 29 | Nobody's Home | Mallrat | Nobody's Home | 56 |
| 30 | Apocalypse Dreams | Tame Impala | Lonerism | 60 |
| 31 | Fill in the Blank | Car Seat Headrest | Teens of Denial | 56 |
| 32 | Spiderhead | Cage The Elephant | Melophobia | 57 |
| 33 | Tie Dye Dragon | The Front Bottoms | Ann | 47 |
| 34 | Summer Shandy | The Front Bottoms | Back On Top | 43 |
| 35 | At the Beach | The Avett Brothers | Mignonette | 51 |
| 36 | Motorcycle | The Front Bottoms | Back On Top | 41 |
| 37 | The New Love Song | The Avett Brothers | Mignonette | 42 |
| 38 | Paranoia in B Major | The Avett Brothers | Emotionalism (Bonus Track Version) | 49 |
| 39 | Aberdeen | Cage The Elephant | Thank You Happy Birthday | 54 |
| 40 | Losing Touch | The Killers | Day & Age (Bonus Tracks) | 51 |
| 41 | Four of a Kind | Magic City Hippies | Hippie Castle EP | 46 |
| 42 | Cosmic Hero (Live at the Tramshed, Cardiff, Wa... | Car Seat Headrest | Commit Yourself Completely | 34 |
| 43 | Locked Up | The Avett Brothers | Closer Than Together | 49 |
| 44 | Bull Ride | Magic City Hippies | Hippie Castle EP | 49 |
| 45 | The Weight of Lies | The Avett Brothers | Emotionalism (Bonus Track Version) | 51 |
| 46 | Heat Wave | Snail Mail | Lush | 60 |
| 47 | Awkward Conversations | The Front Bottoms | Rose | 42 |
| 48 | Baby Drive It Down | Toro y Moi | Outer Peace | 47 |
| 49 | Your Love | Middle Kids | Middle Kids EP | 29 |
| 50 | Ordinary Pleasure | Toro y Moi | Outer Peace | 58 |

Using Spotipy and the Spotify Web API

First, I created an account with Spotify for Developers and registered an application from the dashboard. This provides a client ID and client secret for the application, to be used when making requests to the API.

Next, from the application page, in ‘Edit Settings’, under Redirect URIs, I add http://localhost:8888/callback. This will come in handy later when logging in to a specific Spotify account to pull data.

Then, I write the code to make the request to the API. This will pull the data and put it in a JSON file format.

I import the following libraries:

  • Python’s os library to set the client ID, client secret, and redirect URI as environment variables, which temporarily stores the credentials for the code to use.
  • Python’s json library to encode the data.
  • Spotipy to provide an authorization flow for logging in to a Spotify account and obtaining the current top tracks for export.

import os
import json
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
import spotipy.util as util

Next, I define the client ID and secret as the values assigned to my application by the Spotify API. Then, I set the environment variables to include the client ID, client secret, and the redirect URI.

cid ="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" 
secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

os.environ['SPOTIPY_CLIENT_ID']= cid
os.environ['SPOTIPY_CLIENT_SECRET']= secret
os.environ['SPOTIPY_REDIRECT_URI']='http://localhost:8888/callback'

Then, I work through the authorization flow from the Spotipy documentation. The first time this code is run, the user will have to provide their Spotify username and password when prompted in the web browser.

username = ""
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret) 
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
scope = 'user-top-read'
token = util.prompt_for_user_token(username, scope)

if token:
    sp = spotipy.Spotify(auth=token)
else:
    print("Can't get token for", username)

In the results section, I specify the information to pull. The arguments I provide indicate 50 songs as the limit, the index of the first item to return, and the time range. The time range options, as specified in Spotify’s documentation, are:

  • short_term : approximately last 4 weeks of listening
  • medium_term : approximately last 6 months of listening
  • long_term : last several years of listening

For my query, I decided to use the medium term argument because I thought that would give the best picture of my listening habits for the past half year. Lastly, I create a list to append the results to and then write them to a JSON file.

if token:
    sp = spotipy.Spotify(auth=token)
    results = sp.current_user_top_tracks(limit=50, offset=0, time_range='medium_term')
    # wrap the results in a list and write them to a JSON file
    top_songs = [results]
    with open('top50_data.json', 'w', encoding='utf-8') as f:
        json.dump(top_songs, f, ensure_ascii=False, indent=4)
else:
    print("Can't get token for", username)

After compiling this code into a Python file, I run it from the command line. The output is top50_data.json, which will need to be cleaned before creating visualizations.

Cleaning JSON Data for Visualizations

The top song data JSON file output is nested according to different categories, as seen in the sample below.

 "artists": [
                    {
                        "external_urls": {
                            "spotify": "https://open.spotify.com/artist/5PbpKlxQE0Ktl5lcNABoFf"
                        },
                        "href": "https://api.spotify.com/v1/artists/5PbpKlxQE0Ktl5lcNABoFf",
                        "id": "5PbpKlxQE0Ktl5lcNABoFf",
                        "name": "Car Seat Headrest",
                        "type": "artist",
                        "uri": "spotify:artist:5PbpKlxQE0Ktl5lcNABoFf"
                    }
                ],
                "disc_number": 1,
                "duration_ms": 303573,
                "explicit": true,
                "href": "https://api.spotify.com/v1/tracks/5xy3350chgFfFcdTET4xz3",
                "id": "5xy3350chgFfFcdTET4xz3",
                "is_local": false,
                "name": "Destroyed By Hippie Powers",
                "popularity": 51,
                "preview_url": "https://p.scdn.co/mp3-preview/cd1a18f3f7c8ada17bb54c55524ef42e80719d1f?cid=39e9cdce36dc45e589ce5b564c0594a2",
                "track_number": 3,
                "type": "track",
                "uri": "spotify:track:5xy3350chgFfFcdTET4xz3"
            },

Before cleaning the JSON data and creating visualizations in a new file, I import json, pandas, matplotlib, and seaborn. Next, I load the JSON file with the top 50 song data.

import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

with open('top50_data.json') as f:
  data = json.load(f)

To start, I pull the full list of track items from the loaded data. Next, I create empty lists to hold the specific pieces of JSON data. Using a loop, I access each item of interest for analysis and append it to the corresponding list.

list_of_results = data[0]["items"]
list_of_artist_names = []
list_of_artist_uri = []
list_of_song_names = []
list_of_song_uri = []
list_of_durations_ms = []
list_of_explicit = []
list_of_albums = []
list_of_popularity = []

for result in list_of_results:
    result["album"]
    this_artists_name = result["artists"][0]["name"]
    list_of_artist_names.append(this_artists_name)
    this_artists_uri = result["artists"][0]["uri"]
    list_of_artist_uri.append(this_artists_uri)
    list_of_songs = result["name"]
    list_of_song_names.append(list_of_songs)
    song_uri = result["uri"]
    list_of_song_uri.append(song_uri)
    list_of_duration = result["duration_ms"]
    list_of_durations_ms.append(list_of_duration)
    song_explicit = result["explicit"]
    list_of_explicit.append(song_explicit)
    this_album = result["album"]["name"]
    list_of_albums.append(this_album)
    song_popularity = result["popularity"]
    list_of_popularity.append(song_popularity)

Then, I create a pandas DataFrame, name each column and populate it with the above lists, and export it as a CSV for a backup copy.

all_songs = pd.DataFrame(
    {'artist': list_of_artist_names,
     'artist_uri': list_of_artist_uri,
     'song': list_of_song_names,
     'song_uri': list_of_song_uri,
     'duration_ms': list_of_durations_ms,
     'explicit': list_of_explicit,
     'album': list_of_albums,
     'popularity': list_of_popularity
     
    })

all_songs.to_csv('top50_songs.csv')

Using the DataFrame, I create two visualizations. The first is a count plot using seaborn to show how many top songs came from each artist represented in the top 50 tracks.

# read the saved CSV back in as the working dataframe (top50) for the plots
top50 = pd.read_csv('top50_songs.csv')

descending_order = top50['artist'].value_counts().sort_values(ascending=False).index
ax = sb.countplot(y = top50['artist'], order=descending_order)

sb.despine(fig=None, ax=None, top=True, right=True, left=False, trim=False)
sb.set(rc={'figure.figsize':(6,7.2)})

ax.set_ylabel('')    
ax.set_xlabel('')
ax.set_title('Songs per Artist in Top 50', fontsize=16, fontweight='heavy')
sb.set(font_scale = 1.4)
ax.axes.get_xaxis().set_visible(False)
ax.set_frame_on(False)

y = top50['artist'].value_counts()
for i, v in enumerate(y):
    ax.text(v + 0.2, i + .16, str(v), color='black', fontweight='light', fontsize=14)
    
plt.savefig('top50_songs_per_artist.jpg', bbox_inches="tight")
A countplot shows the number of songs per artist in the top 50 tracks from greatest to least.

The second graph is a seaborn box plot showing the popularity of songs for each artist represented.

popularity = top50['popularity']
artists = top50['artist']

plt.figure(figsize=(10,6))

ax = sb.boxplot(x=popularity, y=artists, data=top50)
plt.xlim(20,90)
plt.xlabel('Popularity (0-100)')
plt.ylabel('')
plt.title('Song Popularity by Artist', fontweight='bold', fontsize=18)
plt.savefig('top50_artist_popularity.jpg', bbox_inches="tight")
A boxplot shows the different levels of song popularity per artist in the top 50 Spotify tracks.

Further Considerations

For future interactions with the Spotify Web API, I would like to complete requests that pull top song data for each of the three term options and compare them. This would give a comprehensive view of listening habits and could lead to pulling further information from each artist.
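As a rough sketch of that idea, the same request could be repeated for each time range and the results compared; this assumes the authenticated sp client from the code above.

# pull top tracks for all three time ranges and compare them
time_ranges = ['short_term', 'medium_term', 'long_term']
top_songs_by_range = {}

for time_range in time_ranges:
    results = sp.current_user_top_tracks(limit=50, offset=0, time_range=time_range)
    top_songs_by_range[time_range] = [item['name'] for item in results['items']]

# songs that appear in the top 50 for every time range
consistent_favorites = set.intersection(*(set(songs) for songs in top_songs_by_range.values()))
print(consistent_favorites)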

Data Exploration with Student Test Scores


I explored a set of student test scores from Kaggle for my Udacity Data Analyst Nanodegree program. The data consists of 1000 entries for students with the following categories: gender, race/ethnicity, parental level of education, lunch assistance, test preparation, math score, reading score, writing score. My main objective was to explore trends through the stages of univariate, bivariate, and multivariate analysis.

Preliminary Data Cleaning

For this project, I used the numpy, pandas, matplotlib.pyplot, and seaborn libraries. The original data has all test scores as integer data types. I added a column for a combined average of the math, reading, and writing scores and three columns for the scores converted into letter grades (a sketch of these added columns appears after the data preview below).

# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

%matplotlib inline
labels = ['gender', 'race/ethnicity', 'par_level_educ', 'lunch', 'test_prep', 'math', 'reading', 'writing']
tests = pd.read_csv('StudentsPerformance.csv', header=0, names=labels)
tests.info()
Output of info() for student test scores.
tests.head(10)
Output of head() for student test scores.
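For context, here is a rough sketch of how those columns could be added. The avg_score name matches the column used in the plots below; the letter-grade column names and grading cutoffs are assumptions for illustration.

# combined average of the three subject scores
tests['avg_score'] = tests[['math', 'reading', 'writing']].mean(axis=1)

# convert a numeric score into a letter grade (cutoffs assumed)
def letter_grade(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    elif score >= 70:
        return 'C'
    elif score >= 60:
        return 'D'
    else:
        return 'F'

for subject in ['math', 'reading', 'writing']:
    tests[subject + '_grade'] = tests[subject].apply(letter_grade)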

Univariate Analysis

Histograms provide a sense of the spread of test scores across subjects. Count plots provide counts for test preparation course attendance and parental level of education.

plt.figure(figsize=[10,4])
plt.subplot(1, 3, 1)
plt.hist(data=tests, x='math', bins=20)
plt.title('Math')
plt.xlim(0,100)
plt.ylim(0,160)
plt.ylabel('Number of Students', fontsize=12)
plt.subplot(1, 3, 2)
plt.hist(data=tests, x='reading', bins=20)
plt.title('Reading')
plt.xlim(0,100)
plt.ylim(0,160)
plt.subplot(1, 3, 3)
plt.hist(data=tests, x='writing', bins=20)
plt.title('Writing')
plt.xlim(0,100)
plt.ylim(0,160)
plt.suptitle('Test Scores', fontsize=16, y=1.0);
Histograms showing the spread of student test scores across all topics.
ed_order = ['some high school', 'high school', 'some college', 
            'associate\'s degree', 'bachelor\'s degree', 'master\'s degree']
base_color = sb.color_palette()[9]
sb.countplot(data=tests, x='par_level_educ', color=base_color, order=ed_order)
n_points = tests.shape[0]
cat_counts = tests['par_level_educ'].value_counts()
locs, labels = plt.xticks()
for loc, label in zip(locs, labels):
    count = cat_counts[label.get_text()]
    pct_string = count
    plt.text(loc, count-35, pct_string, ha='center', color='black', fontsize=12)
plt.xticks(rotation=25)
plt.xlabel('')
plt.ylabel('')
plt.title('Parental Education Level of Student Test Takers');
A count plot shows the parental education levels of student test takers.

Bivariate Analysis

Violin plots illustrate average test scores and test preparation course attendance. Box plots provide a visual representation of the quartiles within each subject area. I sorted parental level of education from the lowest to highest level captured by the data.

base_color=sb.color_palette()[0]
g = sb.violinplot(data=tests, y='test_prep', x='avg_score', color=base_color)
plt.xlabel('')
plt.ylabel('')
plt.title('Average Test Scores and Preparation Course Completion', fontsize=14)
g.set_yticklabels(['Did Not Complete', 'Completed Course'], fontsize=12);
Violin plots that show average student test scores based on level of test preparation.
ed_order = ['some high school', 'high school', 'some college', 
            'associate\'s degree', 'bachelor\'s degree', 'master\'s degree']
sb.boxplot(data=tests, x='reading', y='par_level_educ', order=ed_order, palette="Blues")
plt.xlabel('')
plt.ylabel('')
plt.title('Reading Scores and Parental Level of Education', fontsize=14);
Box plots that show reading scores across varying levels of parental education.

Multivariate Analysis

A swarm plot explores average test scores, parental level of education, and test preparation course attendance. Box plots show test scores for each subject, divided by gender and test preparation course attendance.

ed_order = ['some high school', 'high school', 'some college', 
            'associate\'s degree', 'bachelor\'s degree', 'master\'s degree']
sb.swarmplot(data=tests, x='par_level_educ', y='avg_score', hue='test_prep', order=ed_order, edgecolor='black')
legend = plt.legend(loc=6, bbox_to_anchor=(1.0,0.5))
plt.xticks(rotation=15)
plt.xlabel('')
plt.ylabel('')
legend.get_texts()[0].set_text('Did Not Complete')
legend.get_texts()[1].set_text('Completed')
plt.ylim(0,110)
plt.title('Average Test Scores by Parental Level of Education and Test Preparation Course Participation');
A swarm plot that shows student test scores, test preparation level, and the highest levels of parental education.
plt.figure(figsize=[15,4])
plt.subplot(1, 3, 1)
g = sb.boxplot(data=tests, x='test_prep', y='math', hue='gender')
plt.title('Math')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course'])
plt.subplot(1,3,2)
g = sb.boxplot(data=tests, x='test_prep', y='reading', hue='gender')
plt.title('Reading')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course'])
plt.subplot(1,3,3)
g = sb.boxplot(data=tests, x='test_prep', y='writing', hue='gender')
plt.title('Writing')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course']);
Multivariate box plots showing test scores in Math, Reading, and Writing based on student gender and test preparation level.

Data Visualizations for Spending Habits


I am not a huge fan of bank-generated visuals to analyze my spending habits. My bank breaks up expenses into murky categories such as bills and utilities, shopping, other, and uncategorized. As a result, I began tracking all my expenses in a spreadsheet to better capture the data. This includes the month, vendor, amount, and expense category for every purchase. To explore this data, I used a heat map and a waffle chart.

Here’s a look at how to use Python to create heat maps and waffle charts for spending habit data.

Setup

Libraries and packages I utilized for my spending data include pandas, NumPy, matplotlib, seaborn, and pywaffle. I load my data from a CSV and turn it into a dataframe using pandas.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
from pywaffle import Waffle
%matplotlib inline

df = pd.read_csv('2018_expenses.csv')

Heat Map of Number of Purchases

Heat maps display numerical trends using a sequential scale of color intensity. Below, I graph the number of purchases across spending categories and organize the information by month.

I customize the graph to include annotations on each block and color the cells using the seaborn color palette ‘rocket’, reversed in shading by adding ‘_r’. An additional argument (‘annot_kws’) adjusts the text size of the annotations. Other features clean up the axes labels and name the tick labels accordingly.

ct_counts=df.groupby(['category','month']).size()
ct_counts=ct_counts.reset_index(name='count')
ct_counts=ct_counts.pivot(index='month', columns='category', values='count')
plt.figure(figsize=(12, 6))
ax=sb.heatmap(ct_counts, annot=True, cmap='rocket_r', annot_kws={"size":14})
plt.title('Number of Monthly Expenses', fontsize=16)
plt.xticks(rotation=0)
plt.xlabel('')
plt.ylabel('')
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
ax.set_xticklabels(['Business', 'Education', 'Entertainment', 'Food', 'Health', 'Other', 'Transportation'])
ax.set_yticklabels(['April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'], rotation=0);

The heat map shows my most frequent purchases are on food, entertainment, other, and transportation. Categories such as business and health have the lowest frequency of purchases. Across months, purchases are fairly similar with only between 1 and 9 purchases per spending category.

Waffle Chart of Total Spending

Waffle charts display values in congregated squares and are an alternative to pie charts. Formatting options allow for a varied number of columns and rows. I stuck with ten columns and ten rows, with each square representing one percent of the total. This allows for a simple breakdown of where my money went. I created a dictionary of values for this example, rather than using my pandas dataframe.

data = {'Education':63, 'Entertainment':9, 'Other':8, 'Health':7, 'Food':6, 'Transportation':4, 'Business':3}

fig = plt.figure(
    FigureClass=Waffle,
    rows=10,
    values=data,
    colors=('indianred', 'lightsalmon', 'peachpuff', 'darkkhaki',
            'cadetblue', 'paleturquoise', 'lightsteelblue'),
    title={'label': 'Total Spending by Category', 'loc': 'center'},
    labels=["{0} ({1}%)".format(k, v) for k, v in data.items()],
    legend={'loc': 'upper left', 'bbox_to_anchor': (1.1, 1)}
)
fig.set_tight_layout(False)
plt.show();

The waffle chart shows most of my spending was on education and the other categories were all under 10% of total expenses.

Cleaning Data with Pandas

A project for my Udacity Data Analyst Nanodegree Program involved wrangling messy data using pandas. Although my coursework reviewed data cleaning methods, I revisited documentation for specific functions. Here’s a breakdown of the steps I used with pandas to clean the data and complete the assignment.

The examples from my assignment involve a collection of WeRateDogs™ data retrieved from Twitter.

Import Libraries:

Import pandas, NumPy, and Python’s regular expression operations library (re).

import pandas as pd
import numpy as np
import re

Import Files:

Use read_csv to load the files you wish to clean.

twt_arc = pd.read_csv('twitter_archive.csv')
img_pred = pd.read_csv('image_predictions.csv')
twt_counts = pd.read_csv('tweet_counts.csv')

Create Copies:

Create copies of the original files using copy before cleaning just in case you need to restore some of the original contents.

twt_arc_clean = twt_arc.copy()
img_pred_clean = img_pred.copy()
twt_counts_clean = twt_counts.copy()

Merge Data:

Combine specific files using the merge function.

In this example, the main data is in the Twitter archive file. I perform a left merge to maintain the original contents of this file and add the image prediction and tweet count data where the original tweet IDs align.

df1 = pd.merge(twt_arc_clean, img_pred_clean, how='left')
df2 = pd.merge(df1, twt_counts_clean, how='left')

Drop Columns:

Remove unwanted columns using the drop function. List the columns to remove and specify the axis as ‘columns’.

The Twitter data includes mostly individual tweets, but some of the data is repeated in the form of retweets.

First, I make sure the data only includes tweets where the ‘retweeted_status_id’ was null using the isnull function. Then, I drop the columns related to retweets.

df2_clean = df2[df2['retweeted_status_id'].isnull()]

df2_clean = df2_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 
                          'retweeted_status_id','retweeted_status_user_id',                                                        
                          'retweeted_status_timestamp'], axis='columns')

Change Data Types:

Use astype by listing the preferred data type as the argument.

The Tweet IDs were uploaded as integers, so I convert them to objects.

df2_clean.tweet_id = df2_clean.tweet_id.astype(object)

Use to_datetime to convert a column to datetime by entering the selected column as the argument.

Time stamps were objects instead of datetime objects. I create a new column called ‘time’ and delete the old ‘timestamp’ column.

df2_clean['time'] = pd.to_datetime(df2_clean['timestamp'])

df2_clean = df2_clean.drop('timestamp', axis='columns')

Replace Text:

Use the str.replace function on the text column and list the old value to replace followed by the new value.

Text entries for this data set had the shortened spelling of ampersand instead of the symbol itself.

df2_clean['text'] = df2_clean['text'].str.replace('&amp;', '&')

Combine and Map Columns:

First, create a new column by selecting the applicable columns of the dataframe to combine, choosing a separator for the combined contents, and joining the row values as strings.

Next, use unique to verify all the possible combinations to re-map from the result.

Then, use map to replace row entries with preferred values.

In this case, I had four columns called ‘doggo’, ‘floofer’, ‘pupper’, and ‘puppo’ that indicate whether or not a tweet contains these words. I combine them into a single ‘dog_type’ column. Then, I map the values to shorter versions of the combined column entries.

df2_clean['dog_type'] = df2_clean[df2_clean.columns[6:10]].apply(lambda x:                                                                    
                ','.join(x.dropna().astype(str)), axis=1)

df2_clean['dog_type'].unique()

df2_clean['dog_type'] = df2_clean.dog_type.map({'None,None,None,None': np.nan, 
                'doggo,None,None,None':'doggo',
                'None,None,None,puppo':'puppo', 
                'None,None,pupper,None':'pupper',
                'None,floofer,None,None':'floofer', 
                'doggo,None,None,puppo':'doggo/puppo',
                'doggo,floofer,None,None':'doggo/floofer', 
                'doggo,None,pupper,None':'doggo/pupper'})

Remove HTML Tags:

Write a function to remove HTML tags using re. Compile the tag pattern ‘<.*?>’, and use sub to replace any matched tags with an empty string.

def remove_html_tags(text):
    clean = re.compile('<.*?>')
    return re.sub(clean, '', text)

df2_clean['source'] = df2_clean['source'].apply(remove_html_tags)

 

Project Overview: FoodPact

Logo for the FoodPact program.

A few months ago I began a project with my brother to create a calculator for the environmental footprint of food. It’s called FoodPact to merge food and ecological impact. It’s a work in progress and I’m excited to share the code for it.

Data sources to inform the calculator include:

  • Water footprint data for crops from a 2011 study by M.M. Mekonnen and A. Y. Hoekstra.
  • Greenhouse gas emissions data from Business for Social Responsibility (BSR) and Environmental Protection Agency (EPA) documents on transport via boat, rail, and freight.
  • Food waste data from the United States Department of Agriculture (USDA) Economic Research Service (ERS)
  • Global food import data from the USDA Foreign Agricultural Service’s Global Agricultural Trade System (GATS).
  • Country centroid data from a President and Fellows of Harvard College 2015 data file.
  • US city locations from SimpleMaps.

We used a Bootstrap theme from Bootswatch for the web application’s layout and Flask as the microframework.

Python packages used in the program include:

  • Pandas to create more refined dataframes for use within the application
  • NumPy for equations
  • geopy for calculating great circle distance between latitudes and longitudes
  • Matplotlib and pyplot for creating graphs

The whole point of the program is to take a user’s location, food product, and the product’s country of origin to generate the estimated distance the food traveled, the approximate amount of carbon dioxide that travel generated, and the water requirements for the product.

Conversions include: cubic meters of water to gallons, metric tons of crops to pounds, and grams of carbon dioxide per kilometer to pounds per mile.
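As a rough illustration of the core calculation, here is a sketch of the great-circle distance and unit conversions; the function and variable names are illustrative and not taken directly from the FoodPact code.

from geopy.distance import great_circle

# standard conversion factors
CUBIC_METERS_TO_GALLONS = 264.172              # 1 cubic meter of water ≈ 264.172 US gallons
METRIC_TONS_TO_POUNDS = 2204.62                # 1 metric ton ≈ 2,204.62 pounds
G_PER_KM_TO_LB_PER_MILE = 1.60934 / 453.592    # grams per kilometer -> pounds per mile

def travel_footprint(user_coords, origin_coords, grams_co2_per_km):
    """Estimate the distance traveled (miles) and CO2 emitted (pounds) for one shipment."""
    distance_miles = great_circle(user_coords, origin_coords).miles
    co2_pounds = distance_miles * grams_co2_per_km * G_PER_KM_TO_LB_PER_MILE
    return distance_miles, co2_pounds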

Selected graphics from FoodPact project:

One limitation of the calculator is that the carbon dioxide values assume the full trip is made by ship, train, or truck alone, rather than a combination of the three methods. Emissions refer to the amount generated by shipping a twenty-foot equivalent unit (TEU) container full of the food product across the world. The country of origin considers the centroid and not the exact location of food production. Similarly, the list of cities displays the five most populated cities in the given state. The only exception is New York, for which I considered New York City close enough in latitude and longitude to account for Brooklyn, Queens, Manhattan, the Bronx, and Staten Island.

The data referenced in the calculator is meant to give a relative idea of the inputs required to generate and transport food products to give perspective to consumers. Ideally, the calculator will encourage conversations about the food system and inspire people to reduce their personal food waste.

Reducing Plastic Use


Assorted plastic trash on the beach at Pelican Cove Park in Rancho Palos Verdes, CA, 2017.

In the spirit of this year’s Earth Day theme (‘End Plastic Pollution’), I researched the fate of plastic. The Environmental Protection Agency (EPA) prepared a report on 2014 municipal waste stream data for the United States. Plastic products were either recycled, burned for energy production, or sent to landfills. I used pandas to look at the data and Matplotlib to create a graph, comparing the percentages for each fate across the categories of total plastics, containers and packaging, durable goods, and nondurable goods (a sketch of this approach follows the figure below).

Percentages of total plastics and plastic types that get recycled, burned for energy, or sent to a landfill, according to the EPA.
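Here is a hedged sketch of the kind of pandas and Matplotlib workflow that produces a chart like this; the CSV file name and column names are placeholders, and the actual EPA values live in the repository linked at the end of this post.

import pandas as pd
import matplotlib.pyplot as plt

# hypothetical file holding the EPA percentages, one row per plastic category
plastics = pd.read_csv('epa_2014_plastics.csv', index_col='category')
# assumed columns: 'recycled', 'combusted_for_energy', 'landfilled' (percent of each category)

plastics.plot(kind='bar', figsize=(10, 5))
plt.ylabel('Percent of category')
plt.title('Fate of Plastics in the 2014 U.S. Municipal Waste Stream (EPA)')
plt.xticks(rotation=15)
plt.tight_layout()
plt.show()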

The EPA data shows a majority of plastic products reported in the waste stream were sent to landfills. Obviously, not all plastic waste actually reaches a recycling facility or landfill. Roadsides, waterways, and beaches are all subject to plastic pollution. Decreasing personal use of plastic products can help reduce the overall production of waste.

Here are some ideas for cutting back on plastic use:

  • Bring reusable shopping bags to every store.
    • Utilize cloth bags for all purchases.
    • Opt for reusable produce bags for fresh fruit and vegetables instead of store-provided plastic ones.
  • Ditch party plasticware.
    • Buy an assortment of silverware from a thrift store for party use.
    • Snag a set of used glassware for drinks instead of buying single-use plastic cups.
  • Use Bee’s Wrap instead of plastic wrap.
    • Bee’s Wrap is beeswax covered cloth for food storage. It works exactly the same as plastic wrap, but it can be used over and over.
  • Choose glassware instead of plastic zip-locked bags for storing food.
    • Glass containers like Pyrex can be used in place of single-use plastic storage bags.
  • Say ‘no’ to plastic straws.
    • Get in the habit of refusing a straw at restaurants when you go out.
    • Bring a reusable straw made out of bamboo, stainless steel, or glass to your favorite drink spot.

 

To check out the code for the figure I created, here’s the repository for it.

How Does Environmental Science Relate to Computer Programming?


This is a question I have received quite frequently in recent weeks. Computer programming languages can be used to make scientific analysis much easier, and this applies directly to environmental science because of the wealth of data within ecological studies. Coding offers a way for scientists to automate repetitive tasks, freeing up time for other work, and it can result in new software that can be utilized by scientists across disciplines.

Statistics and environmental science go hand in hand. Science experiments involve a natural order of determining a hypothesis, establishing test methods, collecting data, analyzing data, and drawing conclusions.

The data analysis portion is where statistical models are important. Oftentimes, scientists want to know whether the results of their experiments hold statistical significance, meaning the trends observed in the data are not simply an artifact of chance or of mistakes in the experiment’s design or execution. Programming languages such as Python help scientists carry out this statistical analysis in code. Packages like NumPy provide a basis for computational analysis, and libraries like SciPy offer modules such as scipy.stats for hypothesis tests, including t-tests and Analysis of Variance (ANOVA) on numerical data and the chi-square test for categorical data. In R, packages such as car provide ANOVA tables, and base R includes functions such as t.test for analyzing data. Both languages also offer packages for creating graphs and visuals to display analytical results, such as Matplotlib in Python and ggplot2 in R.
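As a small illustration of the kind of hypothesis test described above, here is a sketch of a two-sample t-test with scipy.stats; the data is randomly generated for demonstration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(loc=7.1, scale=0.4, size=30)  # e.g., soil pH measurements at site A
site_b = rng.normal(loc=6.8, scale=0.4, size=30)  # e.g., soil pH measurements at site B

# two-sample t-test: are the group means significantly different?
t_stat, p_value = stats.ttest_ind(site_a, site_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")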

Sections of environmental science, such as conservation biology, can benefit from programming because of different computer models. As a college student, the first software I was introduced to that was created specifically for use in conservation science was a population viability analysis (PVA) software called Vortex. Population viability measures the likelihood of a group of organisms to thrive or decline under a certain set of circumstances. The Vortex software allows users to adjust the circumstances for populations in areas such as genetic diversity, number of organisms, and mortality rate. I used the software in a classroom setting while studying in Peru, and I performed various tests to see what factors would be detrimental to the population of a theoretical species. This tool is one of many that can be of assistance for environmental science professionals who can use PVA to inform management decisions for threatened species.

A treeline in the lowland Amazon rain forest along the Madre de Dios River in Peru, taken February 2015. Advancements in computer programming can provide tools to help increase scientific understanding of biologically rich areas, such as the Amazon.

Within the field of environmental science, computer programming can be a great advantage because it allows scientists to analyze data in efficient ways that can make everyday tasks easier. The utilization of programming languages and modeling software offers opportunities to put computers to use where humans would have otherwise performed repetitive tasks. This can provide scientists with more time to make discoveries and inform decisions to make the world a better place.