Sentiment Analysis of Product Reviews with Python Using NLTK

Here is a brief overview of how to use the Python package Natural Language Toolkit (NLTK) for sentiment analysis with Amazon food product reviews. This is a basic way to use text classification on a dataset of words to help determine whether a review is positive or negative. The following is a snippet of a more comprehensive tutorial I put together for a workshop for the Syracuse Women in Machine Learning and Data Science group.

Data

The data for this tutorial comes from the Grocery and Gourmet Food Amazon reviews set from Jianmo Ni found at Amazon Review Data (2018). Out of the review categories to choose from, this set seemed like it would have a diverse range of people’s sentiment about food products. The data set itself is fairly large, so I use a smaller subset of 20,000 reviews in the example below.

A preview of the full Grocery and Gourmet Food reviews data set from Amazon shows the available data features.

Steps to clean the main data using pandas are detailed in the Jupyter Notebook. The reviews are categorized on an overall rating scale of 1 to 5, with 1 being the lowest approval and 5 being the highest. I split the data so that reviews rated 1 or 2 are labeled as negative and those rated 4 or 5 are labeled as positive. I omit ratings of 3 for this exercise because they could lean either negative or positive.
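
As a rough sketch of that labeling step (the actual cleaning lives in the notebook; the dataframe name raw_reviews is an assumption for illustration):

# Hypothetical sketch of the labeling described above; raw_reviews is an assumed name for a
# dataframe of reviews with 'overall' and 'reviewText' columns.
labeled = raw_reviews[raw_reviews['overall'] != 3].copy()   # drop neutral 3-star reviews
labeled['reaction'] = labeled['overall'].apply(lambda r: 'negative' if r <= 2 else 'positive')
labeled[['overall', 'reviewText', 'reaction']].to_csv('data/combined_reviews.csv', index=False)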

Prepare Data for Classification

Import the necessary packages. The steps below assume the data has already been cleaned using pandas.

import pandas as pd
import random
import string
import nltk
from nltk.tokenize import WhitespaceTokenizer
from nltk.corpus import stopwords
from nltk import classify
from nltk import NaiveBayesClassifier

Load the cleaned data from a CSV file in the data folder using pandas.

reviews = pd.read_csv('data/combined_reviews.csv')

The main cleaned dataframe has three columns: overall, reviewText, and reaction. The overall column holds the numeric review rating, the reviewText column holds the product reviews as strings, and the reaction column is marked ‘positive’ or ‘negative’. Each row represents an individual review.

The cleaned pandas dataframe shows the three columns for overall rating, review text, and reaction type for the product reviews.

Reduce the main pandas dataframe to a smaller group using the pandas sample method and a lambda function on the reaction column. I use an even split of 20,000 reviews (10,000 per sentiment).

sample_df = reviews.groupby('reaction').apply(lambda x: x.sample(n=10000)).reset_index(drop = True)

Use this sample dataframe to create a list for each sentiment type. Use the loc indexer from pandas to select the entries that have ‘positive’ or ‘negative’ in the reaction column, respectively. Then, use the tolist() method to convert the reviewText column of each to a list.

pos_df = sample_df.loc[sample_df['reaction'] == 'positive']
pos_list = pos_df['reviewText'].tolist()

neg_df = sample_df.loc[sample_df['reaction'] == 'negative']
neg_list = neg_df['reviewText'].tolist()

With these lists, use the lower() function and list comprehension to make each review lowercase. This reduces variance by collapsing different capitalizations of the same word into a single form.

pos_list_lowered = [word.lower() for word in pos_list] 
neg_list_lowered = [word.lower() for word in neg_list]

Join each list into a single string to more easily separate the words and prepare for further cleaning. For this text classification, we will consider the frequency of words in each type of review.

pos_list_to_string = ' '.join([str(elem) for elem in pos_list_lowered])  
neg_list_to_string = ' '.join([str(elem) for elem in neg_list_lowered])

To eliminate noise in the data, stop words (examples: ‘and’, ‘how’, ‘but’) should be removed, along with punctuation. Use NLTK’s built-in function for stop words to specify a variable for both stop words and punctuation.

stop = set(stopwords.words('english') + list(string.punctuation))

Create a variable for the tokenizer. Tokenizing separates the text into individual words based on a chosen rule. In this example, I use a whitespace tokenizer, which means words are split on whitespace.

tokenizer = WhitespaceTokenizer()

Use list comprehension on the positive and negative word lists to tokenize any word that is not a stop word or a punctuation item.

filtered_pos_list = [w for w in tokenizer.tokenize(pos_list_to_string) if w not in stop] 

filtered_neg_list = [w for w in tokenizer.tokenize(neg_list_to_string) if w not in stop]

Strip any leftover punctuation that was attached to the words themselves.

filtered_pos_list2 = [w.strip(string.punctuation) for w in filtered_pos_list]
filtered_neg_list2 = [w.strip(string.punctuation) for w in filtered_neg_list]

As an optional sidebar, use NLTK's FreqDist class to check some of the most common words and the number of times they appear in the respective reviews.

fd_pos = nltk.FreqDist(filtered_pos_list2) 
fd_neg = nltk.FreqDist(filtered_neg_list2)
A list shows individual words pulled from positive food product reviews and their relative frequency in the sample set.
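
For example, FreqDist's most_common method (inherited from Python's Counter) prints the top entries:

print(fd_pos.most_common(10))   # ten most frequent words in positive reviews
print(fd_neg.most_common(10))   # ten most frequent words in negative reviews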

Create a function to build the feature dictionaries for text classification. It maps each word in a string to True so the words can be used as features; the sentiment labels are attached in the next step.

def word_features(words):
    return dict([(word, True) for word in words.split()])

Label the sets of word features and combine into one set to be split for training and testing for sentiment analysis.

positive_features = [(word_features(f), 'pos') for f in filtered_pos_list2]
negative_features = [(word_features(f), 'neg') for f in filtered_neg_list2]

labeledwords = positive_features + negative_features

Randomly shuffle the list of labeled features before training the classifier to reduce the likelihood of bias toward a given feature label.

random.shuffle(labeledwords)

Training and Testing the Text Classifier for Sentiment

Create a training set and a test set from the list. Then call NLTK's Naïve Bayes classifier and train it on the training set for sentiment analysis.

train_set, test_set = labeledwords[2000:], labeledwords[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)

Calculate the accuracy of the model.

print(nltk.classify.accuracy(classifier, test_set))

Provide some test example reviews for proof of concept and print the results.

print(classifier.classify(word_features('I hate this product, it tasted weird')))
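
The same check can be run on a positive-sounding example:

print(classifier.classify(word_features('This tea is delicious and I would buy it again')))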

Use NLTK to show the most informative features of the text classifier. This lists the word features most strongly tied to one label and shows how much more likely each is to appear in a positive review than a negative one, or vice versa.

classifier.show_most_informative_features(15)
Output from NLTK’s most informative features for the Naïve Bayes Classifier shows a list of words and the likelihood of their occurrence in each review classification.

Further Steps

This was an overview of sentiment analysis with NLTK. There are opportunities to increase the accuracy of the classification model. One example would be to use part-of-speech tagging to train the model using descriptive adjectives or nouns. Another idea to pursue would be to use the results of the frequency distribution and select the most common positive and negative words to train the model.
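
As a rough sketch of the part-of-speech idea (not part of the original tutorial; nltk.pos_tag also requires the averaged_perceptron_tagger resource to be downloaded):

# Hypothetical sketch: keep only adjectives (tags starting with 'JJ') before building feature sets.
pos_adjectives = [w for w, tag in nltk.pos_tag(filtered_pos_list2) if tag.startswith('JJ')]
neg_adjectives = [w for w, tag in nltk.pos_tag(filtered_neg_list2) if tag.startswith('JJ')]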

The full GitHub repository tutorial for this can be found here.

Analyze News Headlines with newsgrab and spaCy

Here’s an overview of how to use newsgrab to get news headlines from Google News. Then, the data can be analyzed using the spaCy natural language processing library.

The motivation behind newsgrab was to pull data on New York colleges to compare headlines about how institutions were being affected by COVID-19. I used the College Navigator from the National Center for Education Statistics to get a list of 4-year colleges in New York to use as the search data.

I had trouble finding a clean way to scrape headlines from Google News. My brother Randy helped me write the code for newsgrab using JavaScript and Playwright.

Run a Search with newsgrab

First, install newsgrab globally through npm from the command line.

npm install -g newsgrab

Run the command with the package name, followed by the file path (if outside the current working directory) of a line-separated list of desired search terms. For my example, I used the names of New York colleges.

newsgrab ny_colleges.txt

The output of newsgrab is a JSON file called output.json that follows the array structure below:

[{"search_term":"term1","results":["result1","result2","result3"]},{"search_term":"term2","results":["result1","result2","result3"]}...]

Afterwards, the output can be handled with Python.

Analyze the JSON Data with spaCy

Import the necessary packages for handling the data. These include: json, pandas, matplotlib, seaborn, re, and spaCy. Also import the Counter class from collections; the pandas json_normalize function used later is called directly as pd.json_normalize.

import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import re
import spacy
from collections import Counter

Bring in one of the pre-trained models from spaCy. I use the model called en_core_web_sm. There are other options in their docs for English models, as well as those for different languages.

nlp = spacy.load("en_core_web_sm")

Read in the JSON data as a list and then normalize it with pandas. Specify the record path as ‘results’ and the meta as ‘search_term’ to correspond with the JSON array data structure from the output file.

with open('output.json',encoding="utf8") as raw_file1:
    list1 = json.load(raw_file1)

search_data = pd.json_normalize(list1, record_path='results', meta='search_term',record_prefix='results')

Gather the separate pieces of data through spaCy. I pull noun chunks, named entities, and tokens from my results column. For the token output, I use the token attributes is_stop and is_punct to keep all tokens except stop words and punctuation. Then, each output is put into a column of the main dataframe.

noun_chunks = []
named_entity = []
tokens = []

# Note: 'df' refers to the normalized headline dataframe (search_data above) after additional
# cleaning steps in the full gist, which add a lowercased 'results_lower' column.
for doc in nlp.pipe(df['results_lower'].astype('unicode').values, batch_size=50,
                    n_process=5):
    if doc.is_parsed:  # deprecated in newer spaCy versions, which prefer doc.has_annotation("DEP")
        noun_chunks.append([chunk.text for chunk in doc.noun_chunks])
        named_entity.append([ent.text for ent in doc.ents])
        tokens.append([token.text for token in doc if not token.is_stop and not token.is_punct])
    else:
        noun_chunks.append(None)
        named_entity.append(None)       
        tokens.append(None)
        
df['results_noun_chunks'] = noun_chunks
df['results_named_entities'] = named_entity
df['results_tokens_clean'] = tokens

Process Tokens

Take the tokens column and flatten it into a list. Perform some general data cleaning, like removing special characters, line breaks, and the remnants of ampersands. Then, use the Counter class to get a frequency count of each of the words in the list.
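
A minimal sketch of that flattening and cleaning, assuming the token column created earlier; the variable names follow the snippet below and the regex is illustrative:

# Hypothetical sketch: flatten the per-headline token lists and strip stray characters.
flat_tokens = [t for row in df['results_tokens_clean'].dropna() for t in row]
string_list_of_words = [re.sub(r'[^a-z0-9]+', '', t.lower()) for t in flat_tokens]
string_list_of_words = [t for t in string_list_of_words if t and t != 'amp']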

word_frequency = Counter(string_list_of_words)
Raw output from the Counter class shows tokens and their associated value counts in the total text.

Before analyzing the list, I also remove any tokens that come from my original search terms, to keep the focus on the words outside of these. Then, I create a dataframe of the top results and plot those with seaborn, as sketched below.
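
A rough sketch of that filtering and plotting, under the assumption that the search terms come from the search_term column and string_list_of_words is the cleaned token list from above:

# Hypothetical sketch: drop words that appear in the original search terms, then plot the top tokens.
search_words = set(w for term in search_data['search_term'].unique() for w in term.lower().split())
keywords = [w for w in string_list_of_words if w not in search_words]
top = [w for w, c in Counter(keywords).most_common(20)]
sb.countplot(y=[w for w in keywords if w in top], order=top)
plt.show()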

A countplot shows all keyword tokens with value counts over 21 for the college news headline data.

Process Noun Chunks

Perform some cleaning to separate the noun chunk lists for each individual search term. I remove excess characters after converting the output to strings, and then use the explode function from pandas to separate the chunks into individual rows.

Then, take the value counts of the noun chunks, turn them into a dictionary, and map that dictionary back onto the dataframe for the following result.

A dataframe shows headlines, search terms, noun chunks, and new columns for the separated noun chunks and associated value counts.
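
In code, those two steps might look roughly like this; the column and variable names here are illustrative rather than taken from the original gist:

# Hypothetical sketch: clean the chunk lists, explode to one chunk per row, then map value counts.
chunks = df[['search_term', 'results_noun_chunks']].copy()
chunks['noun_chunk'] = (chunks['results_noun_chunks'].astype(str)
                        .str.strip('[]').str.replace("'", "", regex=False).str.split(', '))
chunks = chunks.explode('noun_chunk')
chunk_counts = chunks['noun_chunk'].value_counts().to_dict()
chunks['chunk_count'] = chunks['noun_chunk'].map(chunk_counts)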

Then, I sort the values in a new dataframe in descending order, remove duplicates, and narrow down to the top 20 noun chunks with frequencies above 10 to graph in a countplot.

A countplot shows all noun chunks with value counts over 9 for the college news headline data.

Process Named Entities

Cleaning the named entity output for each headline is nearly the same process as cleaning the noun chunks. The lists are converted to strings, cleaned, and separated into individual rows with the explode function. The named entity output can also be filtered depending on the desired entity types.

After separating the individual named entities, I use spaCy to identify the type of each and create a new column for these.

named_entity_type = []

# 'named' is the dataframe of individual named entities after the explode step described above.
for doc in nlp.pipe(named['named_entity'].astype('unicode').values, batch_size=50,
                    n_process=5):
    if doc.is_parsed:
        named_entity_type.append([ent.label_ for ent in doc.ents])
    else:
        named_entity_type.append(None)        

named['named_entities_type'] = named_entity_type

Then, I get the value counts for the named entities and append these to a dictionary. I map the dictionary to the named entity column, and put the result in a new column.

As seen in the snippet of the full dataframe below, the model for identifying named entity values and types is not always accurate. There is documentation for training spaCy’s models for those interested in increased accuracy.

A dataframe shows headlines, search terms, named entities, and new columns for the separated named entities, their type, and associated value counts.

From the dataframe, I narrow down the entity types to exclude cardinal and ordinal types, which takes out any numbers that may have high frequencies within the headlines. Then, I get the top named entities with frequencies over 6 to graph, as sketched below.
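
A rough sketch of that filtering, with entity_count standing in for the assumed column of mapped value counts from the previous step:

# Hypothetical sketch: drop numeric entity types, then keep entities that appear more than 6 times.
# 'entity_count' is an assumed column name holding the mapped value counts described above.
named_types = named.explode('named_entities_type')
non_numeric = named_types[~named_types['named_entities_type'].isin(['CARDINAL', 'ORDINAL'])]
top_entities = non_numeric[non_numeric['entity_count'] > 6].drop_duplicates(subset='named_entity')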

A countplot shows all non-numerical named entities with value counts over 6 for the college news headline data.

For full details and the cleaning steps used to create the visualizations above, please see the associated gist from GitHub below.

Additional Resources

Natural Language Processing with Python and spaCy by Yuli Vasiliev

Natural Language Processing with spaCy in Python by Taranjeet Singh