Algolit extensions
Pads
general pad: http://pad.constantvzw.org/public_pad/neural_networks_maisondulivre
Pad for the algolit extensions: http://pad.constantvzw.org/public_pad/neural_networks_algolit_extensions
logistics (tech materials): http://pad.constantvzw.org/public_pad/neural_networks_maisondulivre_logistics
workshop: http://pad.constantvzw.org/public_pad/neural_networks_maisondulivre_workshop
Racist AI: http://pad.constantvzw.org/public_pad/neural_networks_algolit_extensions_racistAI
Catalog: http://pad.constantvzw.org/public_pad/neural_networks_maisondulivre_catalogue
Catalog css: http://pad.constantvzw.org/public_pad/neural_networks_maisondulivre_catalogue_css
Option:
present the different steps of the NN process as an itinerary through the exhibition
Algolit extensions that we looked at
word2vec - http://www.algolit.net/scripts/word2vec_annotated/
*visualizations
*a word-swarm gif producer, to visualize how the training process develops, using the training batches as they are made
*> Olivier's demo gif
*similar words
*extended the example script so that it returns the words most similar to a particular word from the training set (see the sketch below)
*The dictionary with (word, count) items that is created along the way is also quite nice
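A minimal sketch of the similar-words extension, here using gensim's word2vec (4.x API) instead of the annotated tensorflow script; the file name 'mytext.txt' and the query word 'human' are placeholders:

    from gensim.models import Word2Vec

    # one tokenized sentence per line of the training text
    sentences = [line.lower().split() for line in open('mytext.txt')]
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=5)

    # the words most similar to a particular word from the training set
    for word, similarity in model.wv.most_similar('human', topn=10):
        print(word, similarity)

    # gensim also keeps the (word, count) dictionary mentioned above:
    # model.wv.get_vecattr(word, 'count') for each word in model.wv.index_to_key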
making one-hot-vectors
> script Gijs
> script Hans
> exercise: doing it by hand
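A minimal sketch of the by-hand exercise in plain Python; the sentence is made up for illustration:

    # one dimension per word in the vocabulary, a single 1 at the word's index
    vocabulary = sorted(set('the cat sat on the mat'.split()))

    def one_hot(word):
        vector = [0] * len(vocabulary)
        vector[vocabulary.index(word)] = 1
        return vector

    for word in vocabulary:
        print(word, one_hot(word))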
a way to generate vectors with features
-> check other vector methods???
softmax exercise
annotated Python code example from Wikipedia
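A sketch of that softmax exercise in numpy, following the formula softmax(x)_i = exp(x_i) / sum_j exp(x_j):

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))  # subtracting the max keeps exp() from overflowing
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])
    print(softmax(scores))         # probabilities between 0 and 1
    print(softmax(scores).sum())   # that sum to 1.0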
possible extensions to explore
word2vec/GloVe visualizations
slide 35 in this slideshow https://cs224d.stanford.edu/lectures/CS224d-Lecture2.pdf
how are visual relationships created, and what do these distances mean if we retrace them back to words?
2d visualizations of multi-dimensional spaces
responding to Hans's comments on how one dimension is selected to create an image
can we create an image for every dimension and compare them?
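A minimal sketch of such a 2d projection, here with PCA from scikit-learn (the t-SNE links further down do the same job non-linearly); the vectors are random placeholders standing in for trained word embeddings:

    import numpy as np
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    vectors = np.random.randn(50, 100)   # 50 "words" with 100 dimensions each
    points = PCA(n_components=2).fit_transform(vectors)

    plt.scatter(points[:, 0], points[:, 1])
    plt.savefig('projection.png')

    # to make an image per single dimension instead, plot a histogram of
    # vectors[:, d] for each dimension d and compare them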
statistical formula exercises
implementing a statistical formula in Python, to see what is needed to calculate a probability and what happens statistically (norms, maximizing, etc.); see the sketch below
perhaps we could do the softmax? (sketched above under the softmax exercise)
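A minimal sketch of such a probability calculation, separate from the softmax: normalizing raw word counts into a distribution (the counts are made up):

    counts = {'the': 120, 'cat': 7, 'sat': 3}
    total = sum(counts.values())

    # dividing every count by the total is the normalization step
    probabilities = {word: count / total for word, count in counts.items()}
    print(probabilities)
    print(sum(probabilities.values()))   # 1.0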
exploring the relation between vectors and graphs
how do word vectors translate into graphs?
how are the numbers represented as coordinates on a graph?
How are the distances calculated? And how are they categorized as being "this is the male/female distance"?
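A sketch of the usual distance calculation between word vectors, cosine similarity, with placeholder vectors; the "male/female distance" in the analogy examples is not a category but a difference vector, as in vector('king') - vector('man') + vector('woman') ≈ vector('queen'):

    import numpy as np

    def cosine_similarity(a, b):
        # the cosine of the angle between the two vectors
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 4.0, 6.5])
    print(cosine_similarity(a, b))   # close to 1.0: almost the same direction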
vector multiplication
see course 3
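A minimal numpy sketch of the vector multiplication at work in word2vec: multiplying a one-hot vector with the weight matrix selects exactly one row, and that row is the word's embedding:

    import numpy as np

    vocabulary_size, embedding_size = 5, 3
    weights = np.random.randn(vocabulary_size, embedding_size)

    one_hot = np.zeros(vocabulary_size)
    one_hot[2] = 1                 # the one-hot vector of word nr. 2

    print(one_hot @ weights)       # vector-matrix multiplication...
    print(weights[2])              # ...gives the same result as looking up row 2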
from text to multi-dimensional spaces
can we make a counting exercise, where we count different dimensions? And take a moment for one dimension?
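A minimal sketch of such a counting exercise: every context word becomes a dimension, and we count how often it occurs within a window around each word (the sentence and window size are made up):

    from collections import Counter

    text = 'the cat sat on the mat and the dog sat on the cat'.split()
    window = 2
    counts = {}

    for i, word in enumerate(text):
        context = text[max(0, i - window):i] + text[i + 1:i + 1 + window]
        counts.setdefault(word, Counter()).update(context)

    print(counts['cat'])   # one (dimension, count) pair per context word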
References:
for inspiration: a NN implemented in JavaScript (TensorFlow playground):
https://github.com/tensorflow/playground/blob/master/src/nn.ts
- the installation "Painted by Numbers" by Konrad Becker and Felix Stalder, http://world-information.net/virtual/painted-by-numbers/. "Painted by Numbers" compiles interviews with researchers, activists and artists into six themes (rationality, prediction, power, regulation, politics and culture) that shed light on the algorithmic strategies at work and propose alternative visions to the ambient passivity.
- https://gist.github.com/rspeer/ef750e7e407e04894cb3b78a82d66aed#file-how-to-make-a-racist-ai-without-really-trying-ipynb
showing racist bias in datasets
Enron mail correspondence, art project:
- http://www.newmuseum.org/exhibitions/view/sam-lavigne-and-tega-brain-the-good-life
- http://rhizome.org/editorial/2017/sep/26/guys-with-spikes/
Google's own platform for NLP ML tools
https://cloud.google.com/natural-language/docs/getting-started
Visualization of word embeddings:
http://nlp.yvespeirsman.be/blog/visualizing-word-embeddings-with-tsne/
https://github.com/JasonKessler/scattertext -> A tool for finding distinguishing terms in small-to-medium-sized corpora, and presenting them in a sexy, interactive scatter plot with non-overlapping term labels.
Visualization of high-dimensional data:
http://projector.tensorflow.org/
https://experiments.withgoogle.com/ai/visualizing-high-dimensional-space
Installing tensorflow v 0.12.1 (in a virtual environment):
*run this command to see all the versions of tensorflow:
$ curl -s https://storage.googleapis.com/tensorflow |xmllint --format - |grep whl
*for mac:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.12.1-py2-none-any.whl
*for linux, choose whichever wheel fits your system best, then export TF_BINARY_URL in the same way, with https://storage.googleapis.com/tensorflow/ in front of the path:
* linux/cpu/tensorflow-0.12.1-cp27-none-linux_x86_64.whl
* linux/cpu/tensorflow-0.12.1-cp34-cp34m-linux_x86_64.whl
* linux/cpu/tensorflow-0.12.1-cp35-cp35m-linux_x86_64.whl
**run:
* $ sudo pip install --upgrade $TF_BINARY_URL
Selected literature on word embeddings:
https://aclweb.org/anthology/D/D15/
https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/ a very nice article going through the history of word embeddings and drawing a parallel with the philosophy of linguistics
https://groups.google.com/forum/#!forum/word2vec-toolkit
https://arxiv.org/pdf/1301.3781.pdf word2vec academic paper
http://blog.aylien.com/overview-word-embeddings-history-word2vec-cbow-glove/
https://www.deeplearningweekly.com/blog/demystifying-word2vec mentions Spotify using "song embeddings", an abstracted form of word embeddings