To get some web scraping practice, I wanted to obtain a large list of animal names. Colorado State University has a list of links to this data as a part of the Warren and Genevieve Garst Photographic Collection. The data is stored in table format. Here’s how to scrape hypertext data from HTML tables using Beautiful Soup.
Inspect the Data
First, visit the web page and inspect the data you would like to scrape. On Windows, this means either right-clicking the desired element and selecting ‘Inspect’ or pressing Ctrl+Shift+I to open the browser’s developer tools.
After inspecting the element, we see that it is in an HTML table and each row holds an entry for an animal name.
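Here’s a rough sketch of what that scraping step might look like with requests and Beautiful Soup; the URL is a placeholder and the table structure is assumed, so adjust the selectors to match the actual page.
import requests
import pandas as pd
from bs4 import BeautifulSoup

# Placeholder URL; substitute the actual collection page
url = 'https://example.colostate.edu/garst-collection'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Grab every row from the first table on the page
rows = soup.find('table').find_all('tr')
entries = [row.get_text() for row in rows]

# Store the raw text in a single-column dataframe for cleaning
df = pd.DataFrame(entries, columns=['raw'])
df.head()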
Based on the output, I want to refine the dataframe so each row entry can be split into two columns. First, I remove the rows at index locations 0 through 2.
Then, I strip the escape characters from the front and end of each cell entry using the lstrip() and rstrip() functions. Finally, I split the remaining column into ‘Animal’ and ‘Genus Species’ columns using str.split().
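A minimal sketch of those cleaning steps, assuming the scraped text sits in a one-column dataframe named df and that a newline separates the animal name from the genus species within each entry:
# Remove the rows at index locations 0 through 2
df = df.drop([0, 1, 2]).reset_index(drop=True)

# Strip the escape characters from the front and end of each cell entry
df['raw'] = df['raw'].str.lstrip('\n').str.rstrip('\n')

# Split the remaining text into 'Animal' and 'Genus Species' columns
df[['Animal', 'Genus Species']] = df['raw'].str.split('\n', n=1, expand=True)
df = df.drop('raw', axis='columns')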
The United States Department of Agriculture PLANTS database provides general information about plant species across the country. Given three states, I wanted to visualize which plant families are present in each and which state holds the most species in each family. To accomplish this task, I used Python’s pandas, matplotlib, and seaborn libraries for analysis.
Initial Setup
Before beginning, I import pandas, matplotlib, and seaborn.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
Gathering Data
I pulled data sets from the USDA website for New York, Idaho, and California. The default encoding is in Latin-1 for exported text files. When importing into pandas, the encoding must be specified to work properly.
I double-check that the files have loaded correctly into dataframe format using head().
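The import step might look like the following; the file names are placeholders for the exported USDA text files.
# Encoding must be specified because the USDA exports use Latin-1
ny_list = pd.read_csv('NY_plant_list.txt', encoding='latin-1')
ca_list = pd.read_csv('CA_plant_list.txt', encoding='latin-1')
id_list = pd.read_csv('ID_plant_list.txt', encoding='latin-1')

ny_list.head()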
Cleaning Data
The major point of interest in the imported dataframe is the ‘Family’ column. I create a new dataframe grouped by this column, returning the count for each row.
ny_fam = ny_list.groupby('Family').count()
Next, I remove the unwanted columns. I’ve chosen only to keep the ‘Symbol’ column as a representation of count because this variable is required for every plant instance.
ny_fam_1 = ny_fam.drop(['Synonym Symbol', 'Scientific Name with Author', 'National Common Name'], axis=1)
Then, I change the column name from ‘Symbol’ to ‘{State} Count’ so the dataframes can be merged later without confusion.
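For New York, that rename mirrors the California and Idaho code below:
ny_fam_1 = ny_fam_1.rename(columns = {"Symbol":"NY Count"})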
I complete the same process for the California and Idaho data.
ca_fam = ca_list.groupby('Family').count()
ca_fam_1 = ca_fam.drop(['Synonym Symbol', 'Scientific Name with Author', 'National Common Name'], axis=1)
ca_fam_1 = ca_fam_1.rename(columns = {"Symbol":"CA Count"})
id_fam = id_list.groupby('Family').count()
id_fam_1 = id_fam.drop(['Synonym Symbol', 'Scientific Name with Author', 'National Common Name'], axis=1)
id_fam_1 = id_fam_1.rename(columns = {"Symbol":"ID Count"})
Reset the index to prepare the data frames for outer merges based on column names. After the groupby() and count() step, ‘Family’ is set as the index by default. To discourage any unwanted changes, I create a copy of each data frame as I go.
To preserve all the plant species regardless of presence in each individual state, I perform outer merges. This will allow for areas without data to be filled with zeros after the family counts are combined.
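A sketch of the reset, copy, and merge steps, using the dataframes above (the intermediate and merged names here are my own):
# 'Family' became the index during groupby().count(), so move it back to a column
ny_fam_2 = ny_fam_1.reset_index().copy()
ca_fam_2 = ca_fam_1.reset_index().copy()
id_fam_2 = id_fam_1.reset_index().copy()

# Outer merges keep every family that appears in any of the three states
all_fam = ny_fam_2.merge(ca_fam_2, on='Family', how='outer')
all_fam = all_fam.merge(id_fam_2, on='Family', how='outer')

# Families missing from a state get a count of zero
all_fam = all_fam.fillna(0)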
I add a column to aid in visualizations, using a function that returns the state with the highest count of each plant family based on the existing count columns.
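A sketch of that helper, assuming the merged dataframe and count column names from the previous step:
def most_present(row):
    # Return the state with the highest count for this plant family
    # (ties go to the first state listed)
    counts = {'NY': row['NY Count'], 'CA': row['CA Count'], 'ID': row['ID Count']}
    return max(counts, key=counts.get)

all_fam['Highest Count'] = all_fam.apply(most_present, axis=1)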
There are many factors that play into plant family diversity. The comparison of plant families in New York, California, and Idaho was purely out of curiosity. Further investigations should take into account each state’s ecosystem types and land usage and ownership that may influence species diversity.
It can be difficult to transparently present key insights in a world saturated with fake news and click-bait. Study results are often distilled into share-worthy titles that get read and taken as gospel without anyone actually reading further, so be conscious of how you choose to share data findings. Here are a few ways to stay humble while summarizing your data, based on a headline from the Girlgaze newsletter about an article from Gay Times.
Avoid sweeping generalizations that encourage assumptions about entire populations. This headline makes a claim about the sexual identity of all young people, rather than the small portion of youth actually surveyed. A quick change would be to add ‘Surveyed’ after ‘Youth’ to reinforce that this was a limited sample (‘over 2,000 adults‘).
Name the party or parties responsible for sponsoring the collection of data. This communication successfully names the commissioner of the study early in the description. This allows the audience to have full transparency about involved parties who may influence a study’s outcomes.
Avoid the use of definitive language. The last sentence in this quick summary is too precise in saying ‘clear indication‘ and should state what the ‘much more fluid perspective‘ is being compared to. Studies use a sample from a population to provide meaningful insights; samples are not meant to determine a definitive stance for every member of a population. Surveys can only account for a limited set of variables, and language such as ‘clear indication‘ implies the researchers have explored every possible source of bias.
Other ways to communicate transparently with an audience include sharing links to raw data, naming potential sources of error, and making suggestions for future method improvements. Providing an audience with every opportunity to explore your data and understand methods empowers people to consume insights responsibly.
I explored a set of student test scores from Kaggle for my Udacity Data Analyst Nanodegree program. The data consists of 1000 entries for students with the following categories: gender, race/ethnicity, parental level of education, lunch assistance, test preparation, math score, reading score, writing score. My main objective was to explore trends through the stages of univariate, bivariate, and multivariate analysis.
Preliminary Data Cleaning
For this project, I used the numpy, pandas, matplotlib.pyplot, and seaborn libraries. The original data has all test scores as integer data types. I added a column for the combined average of the math, reading, and writing scores and three columns for the test scores converted into letter grades.
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
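The added columns might be built like this; the dataframe and score column names match the plotting code below, while the letter-grade cutoffs are an assumption.
# Combined average of the three subject scores
tests['avg_score'] = tests[['math', 'reading', 'writing']].mean(axis=1)

# Convert a numeric score to a letter grade (cutoffs are assumed)
def letter_grade(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    elif score >= 70:
        return 'C'
    elif score >= 60:
        return 'D'
    return 'F'

for subject in ['math', 'reading', 'writing']:
    tests[subject + '_grade'] = tests[subject].apply(letter_grade)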
Histograms provide a sense of the spread of test scores across subjects. Count plots provide counts for test preparation course attendance and parental level of education.
Bivariate count plots of student test scores across parental levels of education.
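A minimal sketch of the subject-score histograms described above, using the same tests dataframe:
plt.figure(figsize=[15, 4])
for i, subject in enumerate(['math', 'reading', 'writing']):
    plt.subplot(1, 3, i + 1)
    # 5-point bins across the 0-100 score range
    plt.hist(tests[subject], bins=range(0, 105, 5), color=sb.color_palette()[0])
    plt.title(subject.capitalize())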
Bivariate Analysis
Violin plots illustrate average test scores and test preparation course attendance. Box plots provide a visual representation of the quartiles within each subject area. I sorted parental level of education from the lowest to the highest level captured by the data.
base_color=sb.color_palette()[0]
g = sb.violinplot(data=tests, y='test_prep', x='avg_score', color=base_color)
plt.xlabel('')
plt.ylabel('')
plt.title('Average Test Scores and Preparation Course Completion', fontsize=14)
g.set_yticklabels(['Did Not Complete', 'Completed Course'], fontsize=12);
Violin plots that show average student test scores based on level of test preparation.
ed_order = ['some high school', 'high school', 'some college',
'associate\'s degree', 'bachelor\'s degree', 'master\'s degree']
sb.boxplot(data=tests, x='reading', y='par_level_educ', order=ed_order, palette="Blues")
plt.xlabel('')
plt.ylabel('')
plt.title('Reading Scores and Parental Level of Education', fontsize=14);
Box plots that show reading scores across varying levels of parental education.
Multivariate Analysis
A swarm plot explores average test scores, parental level of education, and test preparation course attendance. Box plots show test scores for each subject, divided by gender and test preparation course attendance.
ed_order = ['some high school', 'high school', 'some college',
'associate\'s degree', 'bachelor\'s degree', 'master\'s degree']
sb.swarmplot(data=tests, x='par_level_educ', y='avg_score', hue='test_prep', order=ed_order, edgecolor='black')
legend = plt.legend(loc=6, bbox_to_anchor=(1.0,0.5))
plt.xticks(rotation=15)
plt.xlabel('')
plt.ylabel('')
legend.get_texts()[0].set_text('Did Not Complete')
legend.get_texts()[1].set_text('Completed')
plt.ylim(0,110)
plt.title('Average Test Scores by Parental Level of Education and Test Preparation Course Participation');
A swarm plot that shows student test scores, test preparation level, and the highest levels of parental education.
plt.figure(figsize=[15,4])
plt.subplot(1, 3, 1)
g = sb.boxplot(data=tests, x='test_prep', y='math', hue='gender')
plt.title('Math')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course'])
plt.subplot(1,3,2)
g = sb.boxplot(data=tests, x='test_prep', y='reading', hue='gender')
plt.title('Reading')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course'])
plt.subplot(1,3,3)
g = sb.boxplot(data=tests, x='test_prep', y='writing', hue='gender')
plt.title('Writing')
plt.xlabel('')
plt.ylabel('')
plt.ylim(0,110)
g.set_xticklabels(['Did Not Complete', 'Completed Course']);
Multivariate box plots showing test scores in Math, Reading, and Writing based on student gender and test preparation level.
Recently, I had to brush up on statistics terms for a data analyst exam. I had trouble pulling together old course notes to create a quick, cohesive study guide. Below, overarching concepts are from the test’s public posting and my notes are derived from a quantitative statistics textbook.
Central Tendency
Mean = the average of a distribution
Median = a distribution’s midpoint
Mode = the value that occurs most often in a distribution
Variability
How spread out the data in a distribution are, also known as spread
Five-number summary: Minimum, Q1, Median (Q2), Q3, Maximum
Represented graphically through boxplots
Summarized through quartiles:
Q1: median of all values to left of Q2
Q2: median (50th percentile) of all values in distribution
Q3: median of all values to right of Q2
Variance: s² = Σ(x_i − x̄)² / (n − 1), where x̄ is the mean and n is the number of values
Standard deviation: s, the square root of the variance s² (see the short numpy sketch after this list)
Normal Distribution: bell-curve distribution of data
Hypothesis Testing: Examine evidence against a null hypothesis; hypotheses refer to populations or models, not to a particular outcome
Compare claims
Null hypothesis: statement challenged in significance testing
Example: There is not a difference between means.
Alternative hypothesis: statement suspected as true instead of the null hypothesis
Example: The means are not the same.
Reject or fail to reject the null hypothesis based on the p-value.
p-value: the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed
Smaller p-values signify stronger evidence against the null hypothesis in question. Often, an alpha level of 0.05 is used: a result at least as extreme as the one observed would be expected only 5 times out of every 100 if the null hypothesis were true.
Statistical Significance: Achieved when the p-value is less than or equal to alpha.
Probability: The proportion of times an outcome would occur over many repeated trials.
Correlation
A measure of the linear relationship between two quantitative variables, based on direction and strength.
Examples: strong, weak, or no correlation; positive or negative
Represented by r
r = (1/(n − 1)) × Σ [ ((x_i − x̄)/s_x) × ((y_i − ȳ)/s_y) ], where s_x and s_y are the standard deviations of x and y
Regression
Simple linear: statistical model where the means of y occur on a line when plotted against x for one explanatory variable
Multiple linear: statistical model with more than one explanatory variable
Parametric statistics: used with numerical data; assume the data follow a normal distribution.
Nonparametric statistics: used with ordinal or categorical data; do not assume a normal distribution.
Analysis of Variance (ANOVA)
One-way: Compare population means based on one independent variable
Two-way: Compare population means classified by two independent variables
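To make a few of these definitions concrete, here is a short numpy sketch with made-up numbers:
import numpy as np

values = np.array([2, 4, 4, 4, 5, 5, 7, 9])   # arbitrary example data

mean = values.mean()
median = np.median(values)
variance = values.var(ddof=1)   # ddof=1 gives the sample variance s^2
std_dev = values.std(ddof=1)    # square root of the sample variance

# Correlation r between two quantitative variables
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
r = np.corrcoef(x, y)[0, 1]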
Source:
Moore, D. S., McCabe, G. P., & Craig, B. A. (2012). Introduction to the Practice of Statistics (7th ed.). New York: W.H. Freeman and Company.
I am not a huge fan of bank-generated visuals to analyze my spending habits. My bank breaks up expenses into murky categories such as bills and utilities, shopping, other, and un-categorized. As a result, I began tracking all my expenses in a spreadsheet to better capture data. This includes month, vendor, amount, and expense category for every purchase. To explore this data, I used a heat map and a waffle chart.
Here’s a look at how to use Python to create heat maps and waffle charts for spending habit data.
Setup
Libraries and packages I used for my spending data include pandas, NumPy, matplotlib, seaborn, and PyWaffle. I load my data from a CSV file and turn it into a dataframe using pandas.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
from pywaffle import Waffle
%matplotlib inline
df = pd.read_csv('2018_expenses.csv')
Heat Map of Number of Purchases
Heat maps display numerical trends using a sequential scale of color intensity. Below, I graph number of purchases across spending categories and organize the information by month.
I customize the graph to show annotations on each block and color the cells using the seaborn color palette ‘rocket’, reversed in shading by adding ‘_r’. An additional argument (‘annot_kws’) adjusts the text size of the annotations. Other tweaks clean up the axis labels and set the tick labels accordingly.
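A rough sketch of that heat map call, assuming ‘month’ and ‘category’ columns in the spreadsheet data:
# Count purchases per month and category, then pivot for the heat map
purchase_counts = df.groupby(['month', 'category']).size().unstack(fill_value=0)

plt.figure(figsize=[10, 6])
g = sb.heatmap(purchase_counts, annot=True, fmt='d', cmap='rocket_r',
               annot_kws={'size': 12})
g.set_xlabel('')
g.set_ylabel('')
plt.title('Number of Purchases by Category and Month', fontsize=14);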
The heat map shows my most frequent purchases are on food, entertainment, other, and transportation. Categories such as business and health have the lowest frequency of purchases. Across months, purchases are fairly similar with only between 1 and 9 purchases per spending category.
Waffle Chart of Total Spending
Waffle charts display values as colored squares in a grid and are an alternative to pie charts. Formatting options allow for a varied number of columns and rows. I stuck with ten columns and ten rows, with each square representing one percent of the total. This allows for a simple breakdown of where my money went. I created a dictionary of values for this example, rather than using my pandas dataframe.
data={'Education':63, 'Entertainment':9, 'Other':8, 'Health':7, 'Food':6, 'Transportation':4, 'Business':3}
fig = plt.figure(
    FigureClass=Waffle,
    rows=10,
    values=data,
    colors=('indianred', 'lightsalmon', 'peachpuff', 'darkkhaki', 'cadetblue',
            'paleturquoise', 'lightsteelblue'),
    title={'label': 'Total Spending by Category', 'loc': 'center'},
    labels=["{0} ({1}%)".format(k, v) for k, v in data.items()],
    legend={'loc': 'upper left', 'bbox_to_anchor': (1.1, 1)})
fig.set_tight_layout(False)
plt.show();
The waffle chart shows most of my spending was on education and the other categories were all under 10% of total expenses.
A project for my Udacity Data Analyst Nanodegree Program involved wrangling messy data using pandas. Although my coursework reviewed data cleaning methods, I revisited documentation for specific functions. Here’s a breakdown of the steps I used with pandas to clean the data and complete the assignment.
The examples from my assignment involve a collection of WeRateDogs™ data retrieved from Twitter.
Import Libraries:
Import pandas, NumPy, and Python’s regular expression operations library (re).
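In code, that’s:
import pandas as pd
import numpy as np
import re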
In this example, the main data is in the Twitter archive file. I perform a left merge to maintain the original contents of this file and add the image prediction and tweet count files where the original tweet IDs align.
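A sketch of that merge, with assumed dataframe names for the three files joined on tweet_id:
# Keep every row of the Twitter archive; add predictions and counts where IDs match
df = twitter_archive.merge(image_predictions, on='tweet_id', how='left')
df = df.merge(tweet_counts, on='tweet_id', how='left')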
Remove unwanted columns using the drop function. List the columns to remove and specify the axis as ‘columns’.
The Twitter data includes mostly individual tweets, but some of the data is repeated in the form of retweets.
First, I make sure the data only includes tweets where the ‘retweeted_status_id’ was null using the isnull function. Then, I drop the columns related to retweets.
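A sketch of the filtering and drop steps; ‘retweeted_status_id’ comes from the archive, and the other retweet column names are assumptions.
# Keep only original tweets (rows with no retweeted_status_id)
df = df[df['retweeted_status_id'].isnull()]

# List the retweet-related columns to remove and specify the axis as 'columns'
df = df.drop(['retweeted_status_id', 'retweeted_status_user_id',
              'retweeted_status_timestamp'], axis='columns')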
First, create a new column: select the data frame and the columns to combine, choose a separator for the combined contents, and join the column rows as strings.
Next, use unique to verify all the possible combinations to re-map from the result.
Then, use map to replace row entries with preferred values.
In this case, I had four columns, ‘doggo’, ‘floofer’, ‘pupper’, and ‘puppo’, that indicate whether or not a tweet contains these words. I combine them into a single ‘dog type’ column, then map the values to shorter versions of the combined column entries.
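A sketch of those three steps, assuming each of the four columns holds either the stage name or the string ‘None’; the final mapping is illustrative and would be built from the unique() output.
# Join the four dog-stage columns into one string per row
df['dog_type'] = df[['doggo', 'floofer', 'pupper', 'puppo']].apply(
    lambda row: '-'.join(row), axis=1)

# Verify all the possible combinations to re-map
df['dog_type'].unique()

# Map the combined strings to shorter preferred values
df['dog_type'] = df['dog_type'].map({
    'None-None-None-None': 'none',
    'doggo-None-None-None': 'doggo',
    'None-floofer-None-None': 'floofer',
    'None-None-pupper-None': 'pupper',
    'None-None-None-puppo': 'puppo',
}).fillna('multiple')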
A few months ago, I began a project with my brother to create a calculator for the environmental footprint of food. It’s called FoodPact, a name that merges food and ecological impact. It’s a work in progress, and I’m excited to share the code for it.
Data sources to inform the calculator include:
Water footprint data for crops from a 2011 study by M.M. Mekonnen and A. Y. Hoekstra.
Greenhouse gas emissions data from Business for Social Responsibility (BSR) and Environmental Protection Agency (EPA) documents on transport via boat, rail, and freight.
Food waste data from the United States Department of Agriculture (USDA) Economic Research Service (ERS)
The program takes a user’s location, a food product, and the product’s country of origin, then generates the estimated distance the food traveled, the approximate amount of carbon dioxide that travel generated, and the water required to produce the product.
Conversions include cubic meters of water to gallons, metric tons of crops to pounds, and grams of carbon dioxide per kilometer to pounds per mile.
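A sketch of those conversions as helper functions; the factors are standard unit conversions, and the function names are not taken from the FoodPact code.
def cubic_meters_to_gallons(m3):
    return m3 * 264.172            # 1 cubic meter ≈ 264.172 US gallons

def metric_tons_to_pounds(tons):
    return tons * 2204.62          # 1 metric ton ≈ 2,204.62 pounds

def g_per_km_to_lb_per_mile(g_km):
    # grams per kilometer -> grams per mile -> pounds per mile
    return g_km * 1.60934 / 453.592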
Selected graphics from FoodPact project:
United States food import data for 2017 for food categories reported in millions of dollars.
United States imports in 2017 from top 10 countries and cost in millions of dollars.
Water footprint for global crop categories for blue and total water input in metric tons of water to produce 1 ton of crops.
Food categories and corresponding food waste on the consumer level in 2010 in billions of pounds.
One limitation of the calculator is that the carbon dioxide values assume the entire trip is made by ship, train, or truck rather than a combination of the three modes. Emissions refer to the amount required to ship a twenty-foot equivalent unit (TEU) container full of the food product across the world. The country of origin is represented by its centroid rather than the exact location of food production. Similarly, the list of cities displays the five most populated cities in a given state. The only exception is New York, for which I considered New York City close enough in latitude and longitude to account for Brooklyn, Queens, Manhattan, the Bronx, and Staten Island.
The data referenced in the calculator is meant to give a relative idea of the inputs required to generate and transport food products to give perspective to consumers. Ideally, the calculator will encourage conversations about the food system and inspire people to reduce their personal food waste.
Open access research gives the public an opportunity to learn from and use data for free, but it is not yet common. For researchers outside of academia, pulling together useful data can be difficult because of accessibility barriers.
About two months ago, I began looking for data to create a model of biological inputs and energy requirements in the United States food system. Open data resources such as FAOSTAT, the Economic Research Service, and Bureau of Transportation Statistics provided helpful figures on land use, food imports, and food transportation values. Aside from these resources, a lot of information I wanted to reference in building a model came from scientific papers that require journal subscriptions or charge a per-article fee.
Three articles that may have been helpful in my research illustrate the cost of access:
Upon closer investigation, Appetite claims it ‘supports open access’ but charges authors $3000 to make articles available to everyone, according to publisher Elsevier. Clearly, providing affordable open access options doesn’t seem like a priority for publishers.
There may have been useful data in the articles mentioned above. However, I won’t find out because I’m sticking with open access resources for my food systems project.
Public government databases are great, but specific scientific studies may hold more value for independent researchers. Journals like PLOS ONE lead the way in open access articles for those looking for specific research to complement information from public databases. A 2016 article by Paul Basken in The Chronicle of Higher Education called ‘As an Open-Access Megajournal Cedes Some Ground, a Movement Gathers Steam’ shows a rise in open access papers, but I got the figures via Boston College because the article itself is ‘premium content for subscribers.’
Rise of published open access articles over time between 2008 and 2015. Data from: Basken, P. 2016. As an open-access megajournal cedes some ground, a movement gathers steam. The Chronicle for Higher Education, 62(19), 5-5.
Charging fees for accessibility can create an elitist barrier between academia and those who want to learn more about certain topics. I’m not proposing that everyone would take advantage of open access research articles if there were cheaper publishing options, or no access fees. If more studies were open access, it would create more opportunities for members of the public to digest scientific studies on their own terms.
There’s immense value in the open-source, collaborative culture of the tech community that I hope spills over into academia. I’m optimistic about a continued increase in open access publications in the science community. For now, I’m looking forward to creating open source projects that take advantage of public data.
Data analysis and unidentified flying object (UFO) reports go hand-in-hand. I attended a talk by author Cheryl Costa who analyzes records of UFO sightings and explores their patterns. Cheryl and her wife Linda Miller Costa co-authored a book that compiles UFO reports called UFO Sightings Desk Reference: United States of America 2001-2015.
Records of UFO sightings are considered citizen science because people voluntarily report their experiences. This is similar to wildlife sightings recorded on websites like eBird that help illustrate bird distributions across the world. People report information about UFO sighting events including date, time, and location.
Night sky along the roadside outside Wayquecha Biological Field Station in Peru, taken April 2015.
Cheryl spoke about gathering data from two main online databases, MUFON (Mutual UFO Network) and NUFORC (National UFO Reporting Center). NUFORC’s database is public, and reports can be sorted by date, UFO shape, and state. MUFON’s database requires a paid membership to access the majority of its data. This talk was not a session to discuss conspiracy theories, but a chance to look at trends in citizen science reports.
The use of data analysis on UFO reports requires careful consideration of potential bias and reasonable explanations for numbers in question. For example, a high volume of reports in the summer could be because more people are spending time outside and would be more likely to notice something strange in the sky.
This talk showed me that conclusions may be temptingly easy to draw when looking at UFO data as a whole, but speculations should be met with careful criticism. The use of the scientific method when approaching ufology, or the study of UFO sightings, seems key for a field often met with overwhelming skepticism.
I have yet to work with any open-source data on UFO reports, but this talk reminded me of the importance of a methodical approach to data analysis. Data visualization for any field of study starts with asking questions, being mindful of outside factors, and being able to communicate messages within large data sets to any audience.