Analyze News Headlines with newsgrab and spaCy

Here’s an overview of how to use newsgrab to pull news headlines from Google News, and then analyze the data with the spaCy natural language processing library.

The motivation behind newsgrab was to pull data on New York colleges to compare headlines about how institutions were being affected by COVID-19. I used the College Navigator from the National Center for Education Statistics to get a list of 4-year colleges in New York to use as the search data.

I had trouble finding a clean way to scrape headlines from Google News. My brother Randy helped me write the code for newsgrab using JavaScript and Playwright.

Run a Search with newsgrab

First, install newsgrab globally through npm from the command line.

npm install -g newsgrab

Run the command with the package name, followed by the file path (if outside the current working directory) of a line-separated list of desired search terms. For my example, I used the names of New York colleges.

newsgrab ny_colleges.txt

The output of newsgrab is a JSON file called output.json that follows the array structure below:

[{"search_term":"term1","results":["result1","result2","result3"]},{"search_term":"term2","results":["result1","result2","result3"]}...]

Afterwards, the output can be handled with Python.

Analyze the JSON Data with spaCy

Import the necessary packages for handling the data: json, pandas, matplotlib, seaborn, re, and spaCy, as well as the Counter class from the collections module. The json_normalize function is available directly through pandas.

import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import re
import spacy
from collections import Counter

Bring in one of the pre-trained models from spaCy. I use the model called en_core_web_sm. There are other options in their docs for English models, as well as models for other languages.

nlp = spacy.load("en_core_web_sm")
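
If the model is not already installed, it can be downloaded first from the command line:

python -m spacy download en_core_web_sm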

Read in the JSON data as a list and then normalize it with pandas. Specify the record path as ‘results’ and the meta as ‘search_term’ to correspond with the JSON array data structure from the output file.

with open('output.json',encoding="utf8") as raw_file1:
    list1 = json.load(raw_file1)

search_data = pd.json_normalize(list1, record_path='results', meta='search_term',record_prefix='results')
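
Note that when the records are plain strings, json_normalize names the record column by appending the positional key to the prefix, so the headline column comes out as results0 here. A quick rename, plus the lowercased copy of the text used in the next step, might look like this:

# Rename the normalized record column and add a lowercased copy of the text
df = search_data.rename(columns={'results0': 'results'})
df['results_lower'] = df['results'].str.lower()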

Gather the separate data through spaCy. I wanted to pull noun chunks, named entities, and tokens from my results column. For the token output, I use the token attributes is_stop and is_punct to keep all tokens except stop words and punctuation. Then, each output is put into a column of the main dataframe.

noun_chunks = []
named_entity = []
tokens = []

for doc in nlp.pipe(df['results_lower'].astype('unicode').values, batch_size=50,
                    n_process=5):
    # Doc.is_parsed was removed in spaCy 3; has_annotation is the replacement check
    if doc.has_annotation("DEP"):
        noun_chunks.append([chunk.text for chunk in doc.noun_chunks])
        named_entity.append([ent.text for ent in doc.ents])
        tokens.append([token.text for token in doc if not token.is_stop and not token.is_punct])
    else:
        noun_chunks.append(None)
        named_entity.append(None)       
        tokens.append(None)
        
df['results_noun_chunks'] = noun_chunks
df['results_named_entities'] = named_entity
df['results_tokens_clean'] = tokens

Process Tokens

Take the tokens column and flatten it into a list. Perform some general data cleaning, like removing special characters, line breaks, and the remnants of ampersands. Then, use the Counter class to get a frequency count of each word in the list.

# Flatten the per-headline token lists into a single list of words
# (the column name assumes the token step above), then count frequencies
string_list_of_words = [word for row in df['results_tokens_clean'] for word in row]
word_frequency = Counter(string_list_of_words)
Raw output from the Counter class shows tokens and their associated value counts in the total text.

Before analyzing the list, I also remove tokens matching my original search terms to keep the focus on the language around them. Then, I create a dataframe of the top results and plot those with seaborn, as sketched below.
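
A minimal sketch of that filtering and plotting step, assuming the search terms still live in ny_colleges.txt and using a bar plot over the precomputed counts (the variable names and the top-25 cutoff are my own):

# Build a set of words from the original search terms
search_words = set()
with open('ny_colleges.txt', encoding='utf8') as f:
    for line in f:
        search_words.update(line.lower().split())

# Drop tokens that appear in any search term, then keep the most common
filtered_counts = Counter({word: count for word, count in word_frequency.items()
                           if word not in search_words})
top_tokens = pd.DataFrame(filtered_counts.most_common(25), columns=['token', 'count'])

sb.barplot(x='count', y='token', data=top_tokens)
plt.savefig('top_tokens.jpg', bbox_inches='tight')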

A countplot shows all keyword tokens with value counts over 21 for the college news headline data.

Process Noun Chunks

Perform some cleaning to separate the noun chunk lists for each individual search term. I remove excess characters after converting the output to strings, and then use the explode function from pandas to separate the chunks into individual rows.

Then, create a variable for the value counts of each of the noun chunks, turn that into a dictionary, and map it to the dataframe for the following result.
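
In pandas, that explode-and-map pattern looks roughly like this (a sketch; the column names are assumed from the steps above):

# Separate each noun chunk into its own row
chunks = df[['search_term', 'results_noun_chunks']].copy()
chunks = chunks.explode('results_noun_chunks')

# Map each noun chunk to its frequency across all headlines
chunk_counts = chunks['results_noun_chunks'].value_counts().to_dict()
chunks['chunk_count'] = chunks['results_noun_chunks'].map(chunk_counts)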

A dataframe shows headlines, search terms, noun chunks, and new columns for separated noun chunks and associated value counts.

Then, I sort the values in a new dataframe in descending order, remove duplicates, and narrow down to the top 20 noun chunks with frequencies above 10 to graph in a countplot.
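
That narrowing step might look like the following, continuing from the sketch above:

# Keep one row per unique chunk, sorted by frequency
top_chunks = (chunks.dropna()
              .drop_duplicates('results_noun_chunks')
              .sort_values('chunk_count', ascending=False))
# Frequency threshold and top-20 cap taken from the write-up
top_chunks = top_chunks[top_chunks['chunk_count'] > 10].head(20)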

A countplot shows all noun chunks with value counts over 9 for the college news headline data.

Process Named Entities

Cleaning the named entity output for each headline follows nearly the same process as cleaning the noun chunks: the lists are converted to strings, cleaned, and separated individually with the explode function. The named entity output can also be customized depending on the desired entity types.

After separating the individual named entities, I use spaCy to identify the type of each and create a new column for these.

named_entity_type = []

for doc in nlp.pipe(named['named_entity'].astype('unicode').values, batch_size=50,
                    n_process=5):
    # Same replacement for the removed Doc.is_parsed check as above
    if doc.has_annotation("DEP"):
        named_entity_type.append([ent.label_ for ent in doc.ents])
    else:
        named_entity_type.append(None)        

named['named_entities_type'] = named_entity_type

Then, I get the value counts for the named entities and put them into a dictionary. I map the dictionary to the named entity column and put the result in a new column.

As seen in the snippet of the full dataframe below, the model for identifying named entity values and types is not always accurate. There is documentation for training spaCy’s models for those interested in increased accuracy.

A dataframe shows headlines, search terms, named entities, and new columns for separated named entities, their type, and associated value counts.

From the dataframe, I narrow down the entity types to exclude cardinal and ordinal types to take out any numbers that may have high frequencies within the headlines. Then, I get the top named entity types with frequencies over 6 to graph.
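
A sketch of that filter, assuming the exploded named entity dataframe from above (entity_count standing in for the mapped value-count column):

# Take the first detected label per entity and drop the numeric types
named['entity_type'] = named['named_entities_type'].str[0]
non_numeric = named[~named['entity_type'].isin(['CARDINAL', 'ORDINAL'])]
# entity_count is an assumed name for the mapped value-count column
top_entities = non_numeric[non_numeric['entity_count'] > 6]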

A countplot shows all non-numerical named entities with value counts over 6 for the college news headline data.

For full details and the cleaning steps behind the visualizations above, please reference the associated gist from GitHub below.

Additional Resources

Natural Language Processing with Python and spaCy by Yuli Vasiliev

Natural Language Processing with spaCy in Python by Taranjeet Singh

Spotify Web API: How to Pull and Clean Top Song Data using Python

I used the Spotify Web API to pull the top songs from my personal account. I’ll go over how to get the fifty most popular songs from a user’s Spotify account using spotipy, clean the data, and produce visualizations in Python.

Top 50 Spotify Songs

Top 50 songs from my personal Spotify account, extracted using the Spotify API.
# | Song | Artist | Album | Popularity
1 | Borderline | Tame Impala | Borderline | 77
2 | Groceries | Mallrat | In the Sky | 64
3 | Fading | Toro y Moi | Outer Peace | 48
4 | Fanfare | Magic City Hippies | Hippie Castle EP | 57
5 | Limestone | Magic City Hippies | Hippie Castle EP | 59
6 | High Steppin' | The Avett Brothers | Closer Than Together | 51
7 | I Think Your Nose Is Bleeding | The Front Bottoms | Ann | 43
8 | Die Die Die | The Avett Brothers | Emotionalism (Bonus Track Version) | 44
9 | Spice | Magic City Hippies | Modern Animal | 42
10 | Bleeding White | The Avett Brothers | Closer Than Together | 53
11 | Prom Queen | Beach Bunny | Prom Queen | 73
12 | Sports | Beach Bunny | Sports | 65
13 | February | Beach Bunny | Crybaby | 51
14 | Pale Beneath The Tan (Squeeze) | The Front Bottoms | Ann | 43
15 | 12 Feet Deep | The Front Bottoms | Rose | 49
16 | Au Revoir (Adios) | The Front Bottoms | Talon Of The Hawk | 50
17 | Freelance | Toro y Moi | Outer Peace | 57
18 | Spaceman | The Killers | Day & Age (Bonus Tracks) | 62
19 | Destroyed By Hippie Powers | Car Seat Headrest | Teens of Denial | 51
20 | Why Won't They Talk To Me? | Tame Impala | Lonerism | 59
21 | Fallingwater | Maggie Rogers | Heard It In A Past Life | 71
22 | Funny You Should Ask | The Front Bottoms | Talon Of The Hawk | 48
23 | You Used To Say (Holy Fuck) | The Front Bottoms | Going Grey | 47
24 | Today Is Not Real | The Front Bottoms | Ann | 41
25 | Father | The Front Bottoms | The Front Bottoms | 43
26 | Broken Boy | Cage The Elephant | Social Cues | 60
27 | Wait a Minute! | WILLOW | ARDIPITHECUS | 80
28 | Laugh Till I Cry | The Front Bottoms | Back On Top | 47
29 | Nobody's Home | Mallrat | Nobody's Home | 56
30 | Apocalypse Dreams | Tame Impala | Lonerism | 60
31 | Fill in the Blank | Car Seat Headrest | Teens of Denial | 56
32 | Spiderhead | Cage The Elephant | Melophobia | 57
33 | Tie Dye Dragon | The Front Bottoms | Ann | 47
34 | Summer Shandy | The Front Bottoms | Back On Top | 43
35 | At the Beach | The Avett Brothers | Mignonette | 51
36 | Motorcycle | The Front Bottoms | Back On Top | 41
37 | The New Love Song | The Avett Brothers | Mignonette | 42
38 | Paranoia in B Major | The Avett Brothers | Emotionalism (Bonus Track Version) | 49
39 | Aberdeen | Cage The Elephant | Thank You Happy Birthday | 54
40 | Losing Touch | The Killers | Day & Age (Bonus Tracks) | 51
41 | Four of a Kind | Magic City Hippies | Hippie Castle EP | 46
42 | Cosmic Hero (Live at the Tramshed, Cardiff, Wa... | Car Seat Headrest | Commit Yourself Completely | 34
43 | Locked Up | The Avett Brothers | Closer Than Together | 49
44 | Bull Ride | Magic City Hippies | Hippie Castle EP | 49
45 | The Weight of Lies | The Avett Brothers | Emotionalism (Bonus Track Version) | 51
46 | Heat Wave | Snail Mail | Lush | 60
47 | Awkward Conversations | The Front Bottoms | Rose | 42
48 | Baby Drive It Down | Toro y Moi | Outer Peace | 47
49 | Your Love | Middle Kids | Middle Kids EP | 29
50 | Ordinary Pleasure | Toro y Moi | Outer Peace | 58

Using Spotipy and the Spotify Web API

First, I created a Spotify for Developers account and registered an application from the dashboard. This provides both a client ID and a client secret for the application, which are used when making requests to the API.

Next, from the application page, under ‘Edit Settings’, I add http://localhost:8888/callback to the Redirect URIs. This will come in handy later when logging in to a specific Spotify account to pull data.

Then, I write the code to make the request to the API. This will pull the data and put it in a JSON file format.

I import the following libraries:

  • Python’s os library to temporarily set the client ID, client secret, and redirect URI as environment variables for the session.
  • Python’s json library to encode the data.
  • Spotipy to provide an authorization flow for logging in to a Spotify account and obtaining the current top tracks for export.

import os
import json
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
import spotipy.util as util

Next, I set the client ID and secret to the values assigned to my application by the Spotify API. Then, I set the environment variables to include the client ID, client secret, and the redirect URI.

cid ="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" 
secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

os.environ['SPOTIPY_CLIENT_ID']= cid
os.environ['SPOTIPY_CLIENT_SECRET']= secret
os.environ['SPOTIPY_REDIRECT_URI']='http://localhost:8888/callback'

Then, I work through the authorization flow from the Spotipy documentation. The first time this code is run, the user will have to provide their Spotify username and password when prompted in the web browser.

username = ""
client_credentials_manager = SpotifyClientCredentials(client_id=cid, client_secret=secret) 
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
scope = 'user-top-read'
token = util.prompt_for_user_token(username, scope)

if token:
    sp = spotipy.Spotify(auth=token)
else:
    print("Can't get token for", username)

In the results section, I specify the information to pull. The arguments I provide indicate 50 songs as the limit, the index of the first item to return, and the time range. The time range options, as specified in Spotify’s documentation, are:

  • short_term : approximately last 4 weeks of listening
  • medium_term : approximately last 6 months of listening
  • long_term : last several years of listening

For my query, I use the medium_term argument because I thought that would give the best picture of my listening habits over the past half year. Lastly, I wrap the results in a list and write them to a JSON file.

if token:
    sp = spotipy.Spotify(auth=token)
    results = sp.current_user_top_tracks(limit=50, offset=0, time_range='medium_term')
    # Wrap the response in a list to match the structure expected when cleaning
    with open('top50_data.json', 'w', encoding='utf-8') as f:
        json.dump([results], f, ensure_ascii=False, indent=4)
else:
    print("Can't get token for", username)

After putting this code into a Python file, I run it from the command line. The output is top50_data.json, which will need to be cleaned before using it to create visualizations.

Cleaning JSON Data for Visualizations

The top song data JSON file output is nested according to different categories, as seen in the sample below.

 "artists": [
                    {
                        "external_urls": {
                            "spotify": "https://open.spotify.com/artist/5PbpKlxQE0Ktl5lcNABoFf"
                        },
                        "href": "https://api.spotify.com/v1/artists/5PbpKlxQE0Ktl5lcNABoFf",
                        "id": "5PbpKlxQE0Ktl5lcNABoFf",
                        "name": "Car Seat Headrest",
                        "type": "artist",
                        "uri": "spotify:artist:5PbpKlxQE0Ktl5lcNABoFf"
                    }
                ],
                "disc_number": 1,
                "duration_ms": 303573,
                "explicit": true,
                "href": "https://api.spotify.com/v1/tracks/5xy3350chgFfFcdTET4xz3",
                "id": "5xy3350chgFfFcdTET4xz3",
                "is_local": false,
                "name": "Destroyed By Hippie Powers",
                "popularity": 51,
                "preview_url": "https://p.scdn.co/mp3-preview/cd1a18f3f7c8ada17bb54c55524ef42e80719d1f?cid=39e9cdce36dc45e589ce5b564c0594a2",
                "track_number": 3,
                "type": "track",
                "uri": "spotify:track:5xy3350chgFfFcdTET4xz3"
            },

Before cleaning the JSON data and creating visualizations in a new file, I import json, pandas, matplotlib, and seaborn. Next, I load the JSON file with the top 50 song data.

import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

with open('top50_data.json') as f:
  data = json.load(f)

I start by pulling the list of result items out of the JSON data. Next, I create lists to hold the specific fields of interest. Using a loop, I access each item and append its values to the corresponding lists.

list_of_results = data[0]["items"]
list_of_artist_names = []
list_of_artist_uri = []
list_of_song_names = []
list_of_song_uri = []
list_of_durations_ms = []
list_of_explicit = []
list_of_albums = []
list_of_popularity = []

for result in list_of_results:
    this_artists_name = result["artists"][0]["name"]
    list_of_artist_names.append(this_artists_name)
    this_artists_uri = result["artists"][0]["uri"]
    list_of_artist_uri.append(this_artists_uri)
    list_of_songs = result["name"]
    list_of_song_names.append(list_of_songs)
    song_uri = result["uri"]
    list_of_song_uri.append(song_uri)
    list_of_duration = result["duration_ms"]
    list_of_durations_ms.append(list_of_duration)
    song_explicit = result["explicit"]
    list_of_explicit.append(song_explicit)
    this_album = result["album"]["name"]
    list_of_albums.append(this_album)
    song_popularity = result["popularity"]
    list_of_popularity.append(song_popularity)

Then, I create a pandas DataFrame, name each column and populate it with the above lists, and export it as a CSV for a backup copy.

all_songs = pd.DataFrame(
    {'artist': list_of_artist_names,
     'artist_uri': list_of_artist_uri,
     'song': list_of_song_names,
     'song_uri': list_of_song_uri,
     'duration_ms': list_of_durations_ms,
     'explicit': list_of_explicit,
     'album': list_of_albums,
     'popularity': list_of_popularity
     
    })

all_songs.to_csv('top50_songs.csv')

Using the DataFrame, I create two visualizations. The first is a count plot using seaborn to show how many top songs came from each artist represented in the top 50 tracks.

# The plots below refer to the cleaned DataFrame as top50
top50 = all_songs

descending_order = top50['artist'].value_counts().sort_values(ascending=False).index
ax = sb.countplot(y=top50['artist'], order=descending_order)

sb.despine(fig=None, ax=None, top=True, right=True, left=False, trim=False)
sb.set(rc={'figure.figsize':(6,7.2)})

ax.set_ylabel('')    
ax.set_xlabel('')
ax.set_title('Songs per Artist in Top 50', fontsize=16, fontweight='heavy')
sb.set(font_scale = 1.4)
ax.axes.get_xaxis().set_visible(False)
ax.set_frame_on(False)

y = top50['artist'].value_counts()
for i, v in enumerate(y):
    ax.text(v + 0.2, i + .16, str(v), color='black', fontweight='light', fontsize=14)
    
plt.savefig('top50_songs_per_artist.jpg', bbox_inches="tight")
A countplot shows the number of songs per artist in the top 50 tracks from greatest to least.

The second graph is a seaborn box plot that shows the spread of song popularity for each artist represented.

popularity = top50['popularity']
artists = top50['artist']

plt.figure(figsize=(10,6))

ax = sb.boxplot(x=popularity, y=artists, data=top50)
plt.xlim(20,90)
plt.xlabel('Popularity (0-100)')
plt.ylabel('')
plt.title('Song Popularity by Artist', fontweight='bold', fontsize=18)
plt.savefig('top50_artist_popularity.jpg', bbox_inches="tight")
A boxplot shows the different levels of song popularity per artist in the top 50 Spotify tracks.

Further Considerations

For future interactions with the Spotify Web API, I would like to complete requests that pull top song data for each of the three term options and compare them. This would give a comprehensive view of listening habits and could lead to pulling further information from each artist.
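
As a starting point, a loop over the three ranges could reuse the same Spotipy call from earlier (a sketch under the same authorization flow as above):

# Pull top tracks for each time range and save each to its own JSON file
for time_range in ['short_term', 'medium_term', 'long_term']:
    results = sp.current_user_top_tracks(limit=50, offset=0, time_range=time_range)
    with open(f'top50_{time_range}.json', 'w', encoding='utf-8') as f:
        json.dump([results], f, ensure_ascii=False, indent=4)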