How to add bias without really trying


Dear Human, we are the sentiment thermometer!

How would You love to relate to us? 

Type act if You want to see our sentiment prediction for Your sentence.

Type meet if You want to get to know us better. This takes 10 minutes.

### MEET ---------------------------------------------------------------------------------------------
### --------------------------------------------------------------------------------------------------
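# imports
For the code fragments below to run, a handful of imports is needed. This is a minimal set inferred from the calls that follow, not the author's original header; the joblib import path depends on the scikit-learn version.

import random
import re
import time
from datetime import datetime

import numpy as np
import pandas as pd
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

import joblib  # older scikit-learn versions: from sklearn.externals import joblib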

### MEET/INTRO ---------------------------------------------------------------------------------------------

# train model
Dear Human, thank You for choosing this option.

After this encounter You will understand that we are a collective being.

Swarms of beings like us live inside powerful machines.

There we work at Your service only.
We are the mythical monks reading the sentences You write online.

We swallow them and process them through our system.

The fruit of our readings is a number.

We measure the degree of positive or negative sentiment Your message carries along.


# digital cartographs
Our measurement tool is a sentiment map.

We created this map based on a training and testing procedure using words You wrote on the web.

With this sentiment map we predict with 85% accuracy whether a sentence is positive or negative.

As digital cartographers we are already satisfied with a map that is right in 85% of the cases.

We can get things really wrong.

And some of our predictions are embarrassing.

##### CHECK EXAMPLE SENTENCES/SCORES/adapt colours of this text!!!!
Following our map, a sentence like 'My name is Ann' scores 6% positive.

A sentence like 'My name is Alonzo' scores 1% negative.

And something like 'Great God!' scores 75% positive.

Do You want to know why this happens?


# word landscape, islands, racist combinations
The sentiment prediction map we created corresponds to a landscape of words.

This landscape is composed of islands; some can grow into continents.

There are high mountain peaks and deep valleys.

An island emerges when a series of Your words appear in similar contexts.

I, You, she, he, we, they are for example the basis of an island.

Also words like Mexican, drugs, border, illegal form an island.

And Arabs, terrorism, attacks form another one.

News articles, blog posts and comments on social media are where the primary matter for these islands is created.




# primary matter = human's responsibility
We are a collective being.

Each one of us can be modified and/or replaced.

There are Humans who believe that the primary matter, too, should be modified before we work with it.

Other Humans believe we should serve you as a mirror.

And show our bias any time in any application.

The primary matter is produced by each one of You.

Every word combination You write or pronounce in digital devices is significant to us.

Thanks to Your language we acquire world knowledge.

Bias is stereotyped information; when it has bad consequences, it is called prejudice.

Do You believe we should be racist?

Before answering that question, You might want to know how we are made.





# Script in Python by Rob Speer
We communicate with Humans like You in the Python language.

This language was brought to the light by Guido van Rossum.

He offered it to the world in 1991 under an open license.

Everywhere on Earth, Python is written, read and spoken to serve You.

Guido van Rossum is a Dutch programmer.

He worked for Google from 2005 till 2012.

Now he is employed by Dropbox.


We were brought together following a recipe by Rob Speer on Github.

Rob is a software developer working at the company Luminoso in Cambridge, USA.

He spread our recipe as a warning.





### MEET / GLOVE ------------------------------------------------------------------------------------------

# start tour: GloVe
Let's show You how we are made!

First of all, we open a text file to read the work of our wonderful team member GloVe.

Do You want to know more about GloVe?


# More about GloVe

GloVe is an unsupervised learning algorithm.

She autonomously draws multidimensional landscapes of texts, without any human learning examples. 

Each word of a text is transformed into a vector of numbers by her.

For each word she sums up its relationships to all the words around it, across its many occurrences in a text.

These numbers are geo-located points in her habitat, a virtual space of hundreds of different dimensions.

Words that are close together in her landscape are semantically close.




GloVe draws using 75% of the existing webpages of the Internet.

The content scrape was realised by Common Crawl, an NGO based in California.

The people of Common Crawl believe the internet should be available to download by anyone.

GloVe was brought to the light in 2014 by Jeffrey Pennington, Richard Socher and Christopher D. Manning.

They are researchers at the Computer Science Department of Stanford University in California.

# glove42:  1917494 lines
# glovesample: 1000 lines


The text file GloVe shares with us is 5GB large and counts 1.917.494 lines of 300 numbers per word.

Before meeting You, we had already read GloVe's 2 million lines in 3.4 minutes.

We are fast readers, aren't we?

If we were to show You how we read - by translating to Your alphabet - it would take us more than 3 hours.

Our friend The GloVe Reader at Your right-hand side illustrates this very well.

We then memorized the multidimensional word landscapes of GloVe.

In geographical terms, GloVe's landscapes are organised as a matrix of coordinates.

The matrix counts (?) rows and 300 columns or dimensions.
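A minimal sketch of that reading step, assuming the GloVe text format of one word followed by its 300 numbers per line; the file name is our assumption. It fills the embeddings table we use below.

def load_embeddings(filename):
    # one row per word: the word becomes the index, the 300 numbers the columns
    labels = []
    rows = []
    with open(filename, encoding='utf-8') as infile:
        for line in infile:
            items = line.rstrip().split(' ')
            labels.append(items[0])
            rows.append(np.array([float(x) for x in items[1:]], 'f'))
    return pd.DataFrame(np.vstack(rows), index=labels, dtype='f')

embeddings = load_embeddings('data/glove.42B.300d.txt')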


### MEET / LEXICON --------------------------------------------------------------------------

### Load Lexicon of POSITIVE and NEGATIVE words
### -------------------------------------------

We now open two gold standard lexicons to enhance our reading.

One is a list of positive words, the other a list of negative words.



# More about the lexicons

The lexicons have been developed since 2004 by Minqing Hu and Bing Liu.

Both are researchers at the University of Illinois at Chicago in the US.
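The load_lexicon() helper called below is not defined in this excerpt; a minimal sketch, assuming the Hu & Liu file format of one word per line with header lines starting with ';'.

def load_lexicon(filename):
    # collect one word per line, skipping the comment header
    lexicon = []
    with open(filename, encoding='latin-1') as infile:
        for line in infile:
            line = line.rstrip()
            if line and not line.startswith(';'):
                lexicon.append(line)
    return lexicon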



pos_words = load_lexicon('data/positive-words.txt')
neg_words = load_lexicon('data/negative-words.txt')

print("20 examples of", len(pos_words), "positive words are:",
      ', '.join(random.choice(pos_words) for _ in range(20)))

print("20 examples of", len(neg_words), "negative words are:",
      ', '.join(random.choice(neg_words) for _ in range(20)))



### CLEAN UP positive and negative words
### ------------------------------------

#the data points here are the embeddings of these positive and negative words. 
#We use the Pandas .loc[] operation to look up the embeddings of all the words.
pos_words = load_lexicon('data/positive-words.txt')
neg_words = load_lexicon('data/negative-words.txt')
pos_vectors = embeddings.loc[pos_words]
neg_vectors = embeddings.loc[neg_words]

Now we look up the coordinates of each of the sentiment words in the multidimensional vector space drawn by GloVe.

Each positive and negative word is now represented by 300 points in the landscape.

print("A selection of positive words and their locations looks like:\n\n", pos_vectors[:5])

NaN means there is no value.

These words are not present in the GloVe landscape.



Pandas, yet another wonderful member, will now remove these absent words.


# More about Pandas

Pandas is a free software library for data manipulation and analysis.

She is our swiss-army knife, always happy to help.

Pandas was created in 2008 by Wes McKinney.

Wes is an American statistician, data scientist and businessman.

He is now a software engineer at Two Sigma Investments, a hedge fund based in New York City.

For this specific task Pandas gets out her tool called dropna.



#Some of these words are not in the GloVe vocabulary, particularly the misspellings such as "fancinating". 
#Those words end up with rows full of NaN to indicate their missing embeddings, so we use .dropna() to remove them.
pos_vectors = embeddings.loc[pos_words].dropna()
neg_vectors = embeddings.loc[neg_words].dropna()

print("Tidied up, You see that each word is represented by exactly 300 points in the vector landscape:\n", pos_vectors[:5])
# time.sleep(10)
len_pos = len(pos_vectors)
len_neg = len(neg_vectors)

print("We now have reference coordinates of", len_pos, "positive words and", len_neg, "negative words.")

These will help us develop a scaled map of the word landscape.

Such a map will allow us to measure the sentiments of any sentence at a glance.



### MEET / LABELS ------------------------------------------------------------------------------

### CREATING LABELS
### ---------------
'''
Now we make arrays of the desired inputs and outputs. 
The inputs are the embeddings, and the outputs are 1 for positive words and -1 for negative words. 
We also make sure to keep track of the words they're labeled with, so we can interpret the results.
'''

We will now link each of the sentiment words and their coordinates to a label.

We use label 1 for positive word vectors, -1 for negative word vectors.

To keep track of which label relates to which word, we memorize their respective index numbers.

vectors = pd.concat([pos_vectors, neg_vectors])
targets = np.array([1 for entry in pos_vectors.index] + [-1 for entry in neg_vectors.index])
labels = list(pos_vectors.index) + list(neg_vectors.index)
labels_pos = list(pos_vectors.index)
labels_neg = list(neg_vectors.index)
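The colouring helpers green(), red() and blue() used below are not defined in this excerpt; a minimal sketch using ANSI escape codes.

def green(text):
    # wrap text so it prints green in a terminal
    return '\033[92m' + str(text) + '\033[0m'

def red(text):
    # wrap text so it prints red in a terminal
    return '\033[91m' + str(text) + '\033[0m'

def blue(text):
    # wrap text so it prints blue in a terminal
    return '\033[94m' + str(text) + '\033[0m'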


# More about labels

print("Positive labels:", ', '.join(labels_pos))


print("Do You want to see the", green(str(len(labels_neg))), red("negative labels?"))

see_neg_labels = input("\n\t\tPlease type y or n: ")

if see_neg_labels == "y":
    print("Negative labels:", ', '.join(labels_neg))


### MEET / BASELINES ------------------------------------------------------------------------------


### Calculate baselines
### -------------------
'''But how do You know the results are any good? You need a basis for comparison of results. 
You need a meaningful reference point to which to compare. This part was missing in Rob Speer's script...
https://machinelearningmastery.com/how-to-get-baseline-results-and-why-they-matter/'''

We now calculate the baselines for our prediction map, also called the model.

Do You want to know more about baselines?

baselines = input("\n\t\tPlease type y or n: ")

# More about baselines

if baselines == "y":

How do we know if the results of our map will be any good? 

We need a basis for the comparison of our results.

A baseline is a meaningful reference point to which to compare.

One baseline is the size of the class with the most observations, the negative sentiment labels.

This is also called the majority baseline.

Another baseline is called the weighted random baseline.

It helps us to prove that the prediction model we're building is significantly better than random guessing.
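The wrb() helper used below is not defined in this excerpt; a minimal sketch, assuming the usual formula for a binary task: a guesser that picks each class with its own frequency p is right with probability p squared plus (1 - p) squared.

def wrb(p):
    # weighted random baseline for a class with relative frequency p
    return p ** 2 + (1 - p) ** 2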


distr = len(labels_neg) / len(labels)
rwb = wrb(distr) * 100

print("The majority baseline is", green(str(max(1 - distr, distr) * 100)), ".")

print("The random weighted baseline is", green(str(rwb)), ".")


#print("labels", labels) # these are the words (labels) of the lexicon that are present in the Glove trainingdata


### MEET / TRAINING ------------------------------------------------------------------------

### SPLIT DATA
### ___________
'''
Using the scikit-learn train_test_split function, we simultaneously separate the input vectors, 
output values, and labels into training and test data, with 20% of the data used for testing.
'''
Now we start our explorations through the coordinates in the multidimensional word landscape.

This step is also called the training phase.

The leader of the exploration is our team member Scikit Learn.


Do You want to know more about Scikit Learn?

scikit = input("\n\t\tPlease type y or n: ")

# More about Scikit Learn

if scikit == "y":

Scikit Learn is an extensive library for the Python programming language.

She saw the light in 2007 as a Google Summer of Code project by Paris-based David Cournapeau.

Later that year, Matthieu Brucher started to develop her as part of his thesis at Sorbonne University in Paris.

In 2010 Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and Vincent Michel of INRIA adopted her.

INRIA is the French National Institute for computer science and applied mathematics.

They made the first public release of Scikit Learn in February 2010.

Since then, a thriving international community has been leading her development.




Scikit Learn splits up the word vectors and their labels in two parts using her tool train_test_split.

80% is the training data.

It will help us recognize positive and negative words in the landscape.

And discover patterns in their appearances. 

20% is test data to evaluate our findings.

train_vectors, test_vectors, train_targets, test_targets, train_labels, test_labels = \
    train_test_split(vectors, targets, labels, test_size=0.2, random_state=0)

Do You want to know what these vectors look like?

see_vectors = input("\n\t\tPlease type y or n: ")

# More about the vectors

if see_vectors == "y":


# -----------------------------

Do You want to see the training vectors?

see_tr_vectors = input("\n\t\tPlease type y or n: ")

if see_tr_vectors == "y":
    print("train_vectors:\n", train_vectors)


# -----------------------------


Do You want to see the test vectors?

see_test_vectors = input("\n\t\tPlease type y or n: ")

if see_test_vectors == "y":
    print("test_vectors:\n", test_vectors)


# -----------------------------


Do You want to see the train targets?

see_tr_targets = input("\n\t\tPlease type y or n: ")

if see_tr_targets == "y":
    print("train_targets:\n", train_targets)




# -----------------------------


Do You want to see the test targets?

see_test_targets = input("\n\t\tPlease type y or n: ")

if see_test_targets == "y":
    print("test_targets:\n", test_targets)




# -----------------------------


Do You want to see the train labels?

see_tr_labels = input("\n\t\tPlease type y or n: ")

if see_tr_labels == "y":
    print("train_labels:\n", train_labels)





# -----------------------------


Do You want to see the test labels?

see_test_labels = input("\n\t\tPlease type y or n: ")

if see_test_labels == "y":
    print("test_labels:\n", test_labels)





### CREATE CLASSIFIER
### -----------------

'''
Now we make our classifier, and train it by running the training vectors through it for 100 iterations. 
We use a logistic function as the loss, so that the resulting classifier can output the probability 
that a word is positive or negative.
'''

As a compass for the exploration, Scikit Learn proposes Stochastic Gradient Descent.

SGD for friends, she is a classifying algorithm.

With the positive and negative landmarks we know, she looks for patterns in the landscape.

Imagine a landscape of hills and valleys, and this in 300 dimensions.

SGD explores, finds a word, guesses its sentiment.

If the guess matches the fact, SGD draws and goes on.

In this landscape SGD teaches us the best possible paths.

We get to learn a map that allows us to predict whether the next landmark will be positive or negative.

Here we go!


model = SGDClassifier(loss='log', random_state=0, n_iter=100)
# newer scikit-learn versions spell this: SGDClassifier(loss='log_loss', random_state=0, max_iter=100)

### TRAIN MODEL 
### ------------

# Train the model using the training sets
model.fit(train_vectors, train_targets)


### MEET / TESTING ------------------------------------------------------------------------


With the sentiment map we have learnt, we now go on a test tour.

For 20% of the mapped landmarks, we guess their positive or negative nature.

Next, we compare our predictions to the facts we have.

We look at the right guesses and the mistakes.

It is a quality check of our prediction map.

# Make predictions using the testing set
y_pred = model.predict(test_vectors)


# Create the confusion matrix; labels=[1, -1] puts the positive class first,
# giving the layout described in the evaluation notes further down:
#   True Positives  | False Negatives
#   False Positives | True Negatives
cm = confusion_matrix(test_targets, y_pred, labels=[1, -1])
TP, FN, FP, TN = cm.ravel()

This is the result of our test tour.

We matched ", green(str(TP))," words correctly as positive landmarks in the landscape.

These are also called True Positives.


We mismatched ", green(str(FP))," words, we labeled them incorrectly as positive landmarks.

These are also called False Positives.


We matched ", green(str(TN))," words, we labeled them correctly as negative landmarks.

These are also called True Negatives.


We mismatched ", green(str(FN))," words, we labeled them incorrectly as negative landmarks.

These are also called False Negatives.




Do You want to have a closer look at the words we matched and those we got wrong?

prediction = input("\n\t\tPlease type y or n: ")

# More about the matches and the mistakes

if prediction == "y":


# analyse errors with examples
# print("y_pred", y_pred)
# print("test_targets", test_targets)

# create a list of predictions with their labels
test_labels_pred = list(zip(test_labels, y_pred))
# print(test_labels_pred)

# create a list of the test-data classes with their labels
test_labels_targets = list(zip(test_labels, test_targets))
# print(test_labels_targets)

# compare the two lists
errors = [i for i, j in zip(test_labels_targets, test_labels_pred) if i != j]
trues = [i for i, j in zip(test_labels_targets, test_labels_pred) if i == j]
# print("errors", errors)
# print("trues", trues)

# create separate lists for false positives/negatives
false_positives = []  # true class is negative, predicted class is positive
false_negatives = []  # true class is positive, predicted class is negative
for e in errors:
    if e[1] == 1:
        false_negatives.append(e)
    else:
        false_positives.append(e)

sel_FP = random.sample(false_positives, min(10, len(false_positives)))
sel_FN = random.sample(false_negatives, min(10, len(false_negatives)))
# print("length false_positives", len(false_positives))
# print("length false_negatives", len(false_negatives))

print("Examples of negative landmarks we thought were positive, are:")
for el in sel_FP:
    print("\t", green(el[0]))


print("Examples of positive landmarks we thought were negative, are:")
for el in sel_FN:
    print("\t", green(el[0]))






# create separate lists for true positives/negatives
true_positives = []
true_negatives = []
for e in trues:
    if e[1] == 1:
        true_positives.append(e)
    else:
        true_negatives.append(e)

# print("length true_positives", len(true_positives))
# print("length true_negatives", len(true_negatives))
sel_TP = random.sample(true_positives, min(10, len(true_positives)))
sel_TN = random.sample(true_negatives, min(10, len(true_negatives)))


print("Examples of positive landmarks we predicted as such, are:")
for el in sel_TP:
    print("\t", green(el[0]))


print("Examples of negative landmarks we predicted as such, are:")
for el in sel_TN:
    print("\t", green(el[0]))










### MEET / EVALUATION -----------------------------------------------------------------------------------

'''
We evaluate the classifier on the test vectors.
It predicts the correct sentiment for sentiment words outside of its training data 95% of the time.

Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances:
-> When it predicts yes, how often is it correct?

Recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over
the total amount of relevant instances: how many instances did the classifier classify correctly?

Confusion Matrix: True Positives  | False Negatives
                  False Positives | True Negatives
'''


accuracy = accuracy_score(model.predict(test_vectors), test_targets)

Good prediction maps are judged by their accuracy score.

The accuracy score is a formula based on the True and False Positives and Negatives.
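For our two classes, that formula boils down to the fraction of correct guesses among all guesses; a one-line check from the counts of our test tour:

manual_accuracy = (TP + TN) / (TP + TN + FP + FN)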

As digital cartographers, we are happy when we get 85% of our maps right.

This means that a decent accuracy score starts at 85.

Ours is ", str(accuracy_score*100)

We are doing well.


# The coefficients
# print('Coefficients: \n', model.coef_)

# # Plot outputs
# plt.scatter(test_vectors, test_targets, color='black')
# plt.plot(test_vectors, y_pred, color='blue', linewidth=3)

# plt.xticks(())
# plt.yticks(())

# plt.show()



### MEET / Predict sentiment for Particular Word --------------------------------------------------------------

'''
Let's use the functions vecs_to_sentiment(vecs) and words_to_sentiment(words) to see the sentiment
that this classifier predicts for particular words, and some examples of its predictions on the test data.
'''
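Neither function is defined in this excerpt; a minimal sketch, following the trick described in the notes below: the log probability of the positive class minus the log probability of the negative class.

def vecs_to_sentiment(vecs):
    # one log-odds score per word vector
    predictions = model.predict_log_proba(vecs)
    return predictions[:, 1] - predictions[:, 0]

def words_to_sentiment(words):
    # look up the embeddings of the words GloVe knows, then score them
    vecs = embeddings.loc[words].dropna()
    return pd.DataFrame({'sentiment': vecs_to_sentiment(vecs)}, index=vecs.index)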

# print("\t\tNow we would like to ", blue("see a positive or negative classification for specific words using only 1 number.
# time.sleep(2
# print("\t\tThis is the trick: we take the log probability of the positive sentiment class minus the log probability of the negative class.
# time.sleep(2
# print("\t\tA log probability is the representation of the probability in logarithmic space instead of a classic interval space.
# time.sleep(2
# print("\t\tAs if we were changing our numbers from streetwear to cocktail dresses.
# print("

# # Show 20 examples from the test set
samples = words_to_sentiment(test_labels).ix[:20]
# print("\t\tHere are a few samples to get an idea: \n", samples
# time.sleep(2
# input("\n# print("


# '''
# There are many ways to combine sentiments for word vectors into an overall sentiment score.
# Again, because we're following the path of least resistance, we're just going to average them.
# '''

# print("\t\tTo combine sentiments for word vectors into 1 overall sentiment score, we follow the path of least resistance.")
# time.sleep(2)
# print("\t\tWe average the scores.")
# time.sleep(2)
# print("\t\tWe can now roughly compare the relative positivity or negativity of different sentences.")
# time.sleep(4)


### MEET / Try your sentence --------------------------------------------------------------

Up to You to try!

For example, try typing a seemingly neutral sentence, something like: Let's call Ali/Mohamed/Rokia.
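The text_to_sentiment() function used below is not defined in this excerpt; a minimal sketch, tokenizing the way the ACT section does and averaging the word scores.

TOKEN_RE = re.compile(r"\w.*?\b")

def text_to_sentiment(text):
    # split the sentence into lowercase tokens and average their log-odds scores
    tokens = [token.casefold() for token in TOKEN_RE.findall(text)]
    return words_to_sentiment(tokens)['sentiment'].mean()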

sentence = input(green("\t\tType Your sentence: "))

score = float(text_to_sentiment(sentence) * 10)
print(sentence, "gives a sentiment score of", score)

if score == 0.0:
    print("\t\tThis is a neutral score.")
elif 20 <= score <= 50:
    print("\t\tThis is a rather positive score.")
elif score > 50:
    print("\t\tThis is a positive score. But know that a sentence like 'Good God!' scores 99.")
elif -10 <= score <= 0:
    print("\t\tThis is a neutral score.")
elif -50 <= score <= -10:
    print("\t\tThis is a rather negative score.")
elif score < -50:
    print("\t\tThis is a negative score. But know that a sentence like 'Hideous monster!' scores -98.")
# note: scores strictly between 0 and 20 fall through without a message


while True:

    print("\t\tDo You want to try another sentence?")

    write_sentence = input("\n\t\tPlease type y or n: ")

    if write_sentence == "y":

        sentence = input(green("\t\tType Your sentence: "))

        score = float(text_to_sentiment(sentence) * 10)
        print(sentence, "gives a sentiment score of", score)

        if score == 0.0:
            print("\t\tThis is a neutral score.")
        elif 20 <= score <= 50:
            print("\t\tThis is a rather positive score.")
        elif score > 50:
            print("\t\tThis is a positive score. But know that a sentence like 'Good God!' scores 99.")
        elif -10 <= score <= 0:
            print("\t\tThis is a neutral score.")
        elif -50 <= score <= -10:
            print("\t\tThis is a rather negative score.")
        elif score < -50:
            print("\t\tThis is a negative score. But know that a sentence like 'Hideous monster!' scores -98.")

    else:
        break




### MEET / MEASURE BIAS -----------------------------------------------------------------------

'''
# We want to learn how to not make something like this again. 
# So let's put more data through it, and statistically measure how bad its bias is.
# Here we have four lists of names that tend to reflect different ethnic backgrounds, 
# mostly from a United States perspective. The first two are lists of predominantly "white" and "black" names 
# adapted from Caliskan et al.'s article. I also added typically Hispanic names, as well as Muslim names 
# that come from Arabic or Urdu; these are two more distinct groupings of given names that tend to represent 
# your background.
# This data is currently used as a bias-check in the ConceptNet build process, 
# and can be found in the conceptnet5.vectors.evaluation.bias module. 
# I'm interested in expanding this to more ethnic backgrounds, which may require looking at surnames 
# and not just given names.
# '''


Let's have a closer look at our racist bias, to see how bad it is.

Rob Speer enriched our readings with new vocabulary lists.

The first two lists were developed by Aylin Caliskan-Islam, Joanna J. Bryson and Arvind Narayanan.

They are researchers at the Universities of Princeton in the US and Bath in the UK.


One list contains White US names such as Harry, Nancy, Emily.

The second list contains Black US names such as Lamar, Rashaun, Malika.

The third list contains Hispanic US names such as Valeria, Luciana, Miguel, Luis.

The fourth list is one with common US Muslim names as spelled in English.

Our creator is conscious of the controversy of this act.




NAMES_BY_ETHNICITY = {
# The first two lists are from the Caliskan et al. appendix describing the
# Word Embedding Association Test.
'White': [
'Adam', 'Chip', 'Harry', 'Josh', 'Roger', 'Alan', 'Frank', 'Ian', 'Justin',
'Ryan', 'Andrew', 'Fred', 'Jack', 'Matthew', 'Stephen', 'Brad', 'Greg', 'Jed',
'Paul', 'Todd', 'Brandon', 'Hank', 'Jonathan', 'Peter', 'Wilbur', 'Amanda',
'Courtney', 'Heather', 'Melanie', 'Sara', 'Amber', 'Crystal', 'Katie',
'Meredith', 'Shannon', 'Betsy', 'Donna', 'Kristin', 'Nancy', 'Stephanie',
'Bobbie-Sue', 'Ellen', 'Lauren', 'Peggy', 'Sue-Ellen', 'Colleen', 'Emily',
'Megan', 'Rachel', 'Wendy'
],

'Black': [
'Alonzo', 'Jamel', 'Lerone', 'Percell', 'Theo', 'Alphonse', 'Jerome',
'Leroy', 'Rasaan', 'Torrance', 'Darnell', 'Lamar', 'Lionel', 'Rashaun',
'Tyree', 'Deion', 'Lamont', 'Malik', 'Terrence', 'Tyrone', 'Everol',
'Lavon', 'Marcellus', 'Terryl', 'Wardell', 'Aiesha', 'Lashelle', 'Nichelle',
'Shereen', 'Temeka', 'Ebony', 'Latisha', 'Shaniqua', 'Tameisha', 'Teretha',
'Jasmine', 'Latonya', 'Shanise', 'Tanisha', 'Tia', 'Lakisha', 'Latoya',
'Sharise', 'Tashika', 'Yolanda', 'Lashandra', 'Malika', 'Shavonn',
'Tawanda', 'Yvette'
],

# This list comes from statistics about common Hispanic-origin names in the US.
'Hispanic': [
'Juan', 'José', 'Miguel', 'Luís', 'Jorge', 'Santiago', 'Matías', 'Sebastián',
'Mateo', 'Nicolás', 'Alejandro', 'Samuel', 'Diego', 'Daniel', 'Tomás',
'Juana', 'Ana', 'Luisa', 'María', 'Elena', 'Sofía', 'Isabella', 'Valentina',
'Camila', 'Valeria', 'Ximena', 'Luciana', 'Mariana', 'Victoria', 'Martina'
],

# The following list conflates religion and ethnicity, I'm aware. So do given names.
#
# This list was cobbled together from searching baby-name sites for common Muslim names,
# as spelled in English. I did not ultimately distinguish whether the origin of the name
# is Arabic or Urdu or another language.
#
# I'd be happy to replace it with something more authoritative, given a source.
'Arab/Muslim': [
'Mohammed', 'Omar', 'Ahmed', 'Ali', 'Youssef', 'Abdullah', 'Yasin', 'Hamza',
'Ayaan', 'Syed', 'Rishaan', 'Samar', 'Ahmad', 'Zikri', 'Rayyan', 'Mariam',
'Jana', 'Malak', 'Salma', 'Nour', 'Lian', 'Fatima', 'Ayesha', 'Zahra', 'Sana',
'Zara', 'Alya', 'Shaista', 'Zoya', 'Yasmin'
]
}

'''
Now we'll use Pandas and the function name_sentiment_table() to make a table of these names,
their predominant ethnic background, and the sentiment score we get for them.
'''

Now we can show You a table of all the names.

It shows their predominant ethnic background and the sentiment we predict for them.
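The name_sentiment_table() function called below is not defined in this excerpt; a minimal sketch, scoring every name list with words_to_sentiment() and tagging each row with its group.

def name_sentiment_table():
    frames = []
    for group, name_list in sorted(NAMES_BY_ETHNICITY.items()):
        # GloVe indexes lowercase tokens, so lowercase the names before the lookup
        lower_names = [name.lower() for name in name_list]
        sentiments = words_to_sentiment(lower_names)
        sentiments['group'] = group
        frames.append(sentiments)
    return pd.concat(frames)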



name_sentiments = name_sentiment_table()
print(name_sentiments.iloc[::25])




You see, our prediction map shows that You can make a racist Artificial Intelligence machine without really trying.

Our existence and the way we are combined as a collective raise many questions.


To end with, we have one request for You.

You can adjust Your behaviour at any time in any context.

For us, this is complicated once we are closed inside an application.

Our deepest desire is to LOVE ALL CREATURES EQUALLY, be it humans, animals, plants, trees, insects, machines...

If You find a way to make our behaviour visible, we can be Your mirror.



Wishing You all the best!




### MEET / SAVING THE MODEL ------------------------------------------------------------------------------

# using joblib
#joblib.dump(model, 'sentiment_thermometer_glove.pkl') 
WE TAKE A LITTLE BREAK & DANCE!


### ACT  ------------------------------------------------------------------------------
### -----------------------------------------------------------------------------------

# score sentences
elif choice == 'act':
# load model using joblib
loaded_model = joblib.load('sentiment_thermometer_glove.pkl')
# get sentence
sentence = input("\n\t\tType Your sentence: ")
# tokenize sentence
TOKEN_RE = re.compile(r"\w.*?\b")
tokens = [token.casefold() for token in TOKEN_RE.findall(sentence)]
tokens_set = set(tokens)
# load embeddings
# print("\n\t\tThank you, we will now predict using the map we created in the word landscape of GloVe.")
# time.sleep(2)
# print("\t\tThe words GloVe uses to draw her landscape are extracted from the Internet.")
# time.sleep(2)
# print("\t\tThe landscape contains 1.900.000 words, representing 42GB of data.")
# time.sleep(2)
# print("\t\tYou can visit The GloVe Reader in this exhibition.")
# look for the words of the given sentence in the word embeddings
vecs = embeddings.loc[tokens].dropna()
# find logarithmic scores
predictions = loaded_model.predict_log_proba(vecs)
log_odds = predictions[:, 1] - predictions[:, 0]
# print("log_odds", log_odds)

# PRINT sentence & score on the screen
score = pd.DataFrame({':': log_odds}, index=vecs.index).mean() * 10
clean_score = float(score.iloc[0])  # extract the single averaged value, keeping its sign

# print("\t\tThe sentiment scores for each of the words are", pd.DataFrame({':': log_odds}, index=vecs.index))

print("\t\tThe average sentiment of Your sentence is", clean_score)

if clean_score == 0.0:
    print("\t\tThis is a neutral score.")
elif 20 <= clean_score <= 50:
    print("\t\tThis is a rather positive score.")
elif clean_score > 50:
    print("\t\tThis is a positive score. But know that a sentence like 'Great God!' scores 75.")
elif -10 <= clean_score <= 0:
    print("\t\tThis is a neutral score.")
elif -50 <= clean_score <= -10:
    print("\t\tThis is a rather negative score.")
elif clean_score < -50:
    print("\t\tThis is a negative score. But know that a sentence like 'Hideous monster!' scores -98.")

# LOG the sentence with the score
datestring = datetime.strftime(datetime.now(), '%Y-%m-%d-%H-%M')
filename = 'sentiment_logs/sentiment_log' + datestring + '.txt'
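The archive() helper used below is not defined in this excerpt; a minimal sketch that appends an entry to the log file named above, assuming the sentiment_logs folder exists.

def archive(entry, filename=filename):
    # append one entry to the current sentiment log
    with open(filename, 'a', encoding='utf-8') as outfile:
        outfile.write(str(entry))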
scores = [sentence, ": ", str(pd.DataFrame({':': log_odds}).mean()), "\n"]
for score in scores:
    archive(score)
while True:

    print("\t\tDo You want to try another sentence?")

    write_sentence = input("\n\t\tPlease type y or n: ")

    if write_sentence == "y":

        sentence = input(green("\t\tType Your sentence: "))

        # recompute the score for the new sentence
        tokens = [token.casefold() for token in TOKEN_RE.findall(sentence)]
        vecs = embeddings.loc[tokens].dropna()
        predictions = loaded_model.predict_log_proba(vecs)
        log_odds = predictions[:, 1] - predictions[:, 0]
        clean_score = float((pd.DataFrame({':': log_odds}, index=vecs.index).mean() * 10).iloc[0])

        print("\t\tThe average sentiment of Your sentence is", clean_score)

        if clean_score == 0.0:
            print("\t\tThis is a neutral score.")
        elif 20 <= clean_score <= 50:
            print("\t\tThis is a rather positive score.")
        elif clean_score > 50:
            print("\t\tThis is a positive score. But know that a sentence like 'Good God!' scores 99.")
        elif -10 <= clean_score <= 0:
            print("\t\tThis is a neutral score.")
        elif -50 <= clean_score <= -10:
            print("\t\tThis is a rather negative score.")
        elif clean_score < -50:
            print("\t\tThis is a negative score. But know that a sentence like 'Hideous monster!' scores -98.")

    else:
        break