How I Learned to Stop Worrying and Love the Algorithm.
-
-
This track is co-organised in close collaboration with the Training Common Sense project and will partially overlap with it. ( http://pad.constantvzw.org/p/commonsense )
- Language practices are often ritualistic, which makes them predictable and open to automation. Legal, administrative or political language has magic-like qualities: its words create realities or make things happen. Does it retain this quality when it gets automated and becomes algorithmically reproducible? What qualitative change do such language practices undergo when automated?
-
- This project is about constructing our own text generators. We look at existing text generators, their code and how language has been modelled in them. An interesting example is SCIgen, a generator of academic computer science articles (http://pdos.csail.mit.edu/scigen/). Another example is the Dada Engine http://dev.null.org/dadaengine/, used for postmodern articles on http://www.elsewhere.org/pomo/ and erotic texts on http://xwray.com/fiftyshades. http://botpoet.com/ shows automated poetry made by a range of text generators. Based on the methods used in these text generators, we will develop our own generators and/or explore their uses. This project allows creative coding, but people without a coding background can also participate, e.g. by constructing corpora of texts, drafting templates and text structures for these generators, or developing projects with them. The Dada Engine can even be used without knowledge of a programming language.
-
- The text generators can be used directly for literary texts or as a building block for artistic and activist interventions in political, administrative and social practices: from political speech writing and the automated filing of all sorts of requests, through automated artistic responses on social media, to a qualitative upgrade of the noble art of spamming. When you have to talk as expected and behave normally, why not let the computer do it for you?
More extensive version:
Language practices are to a large extent ritualistic. That aspect also makes them predictable and open to automation. An interesting question is what happens to such language practices when they do get automated. Before automation they have certain magic-like qualities, in the sense that words can create realities or make things happen. Do they retain this quality when they become algorithmically reproducible? What qualitative change do such language practices undergo when automated?
- Do they become nonsense? A famous case is the Sokal hoax article that got published in an academic journal despite being deliberate nonsense. Its aim was to show that in post-modern language it was impossible to discern meaningful content from nonsense. Nowadays articles on computer science or mathematics can also be produced automatically, and they sometimes get published in academic journals.
- In fact, a lot of legal and administrative language practices are already automated, e.g. when you buy something on the internet, sign a license agreement, and so on. The financial markets thrive on automated trade contracting, as in flash trading, and examples of automated administrative practices can be found in all sorts of e-government. So what happens when you start to automate your side of the relation and automatically produce grant and tender applications, requests for permits, and so on?
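To give a first idea of what automating your own side of the relation could look like, here is a minimal sketch in Python that mail-merges a stock request letter over a list of cases. The template text, field names and cases are invented for illustration and do not come from any real procedure or form:

    from string import Template

    # A purely illustrative request template; the wording and fields are placeholders.
    REQUEST = Template(
        "Dear $authority,\n\n"
        "With reference to $regulation, I hereby request $object.\n"
        "I trust you will handle this request within the legal term.\n\n"
        "Sincerely,\n$applicant\n"
    )

    # Invented example cases; in practice this list could be generated or scraped.
    requests = [
        {"authority": "City Planning Office", "regulation": "the zoning regulations",
         "object": "a copy of the zoning file for my street", "applicant": "A. Citizen"},
        {"authority": "Ministry of the Interior", "regulation": "the freedom of information act",
         "object": "access to all documents held about me", "applicant": "A. Citizen"},
    ]

    for fields in requests:
        print(REQUEST.substitute(fields))
        print("-" * 40)

The same loop could just as well write the letters to files or feed them into an e-government form, which is where the artistic or activist intervention would start.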
Ideological language is also very ritualistic. In Everything Was Forever, Until It Was No More: The Last Soviet Generation, Alexei Yurchak wrote about the production of political speeches, which were constructed from a set of citations of earlier official texts and developed into a speech industry of its own. Any meaningful content was avoided in favour of a hegemony of the form. We can ask how far the hegemony of the frame creates a similar situation in some of our current political speech. Can we investigate this by automating it? What will an attempt to automate this form of speech teach us about this language practice? And in reverse, does the normalisation of language allow its automation, and what effect does that have on the language practice itself?
This project is about constructing our own text generator(s). The idea is to look at several existing text generators, their code and how language has been modelled in them. An interesting example is SCIgen, a generator of academic computer science articles. On http://pdos.csail.mit.edu/scigen/ you can try it out and find links to its code and to other text generators. Another example is the Dada Engine (http://dev.null.org/dadaengine/), used for postmodern articles on http://www.elsewhere.org/pomo/ and erotic texts on http://xwray.com/fiftyshades. http://botpoet.com/ shows automated poetry made by a range of text generators. Further examples can be found at http://thatsmathematics.com/mathgen/, http://projects.haykranen.nl/markov/, http://rubberducky.org/cgi-bin/chomsky.pl, https://twitter.com/letkanyefinish, ...
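As a small illustration of one way language gets modelled in such generators, here is a minimal sketch of a Markov-chain text generator in Python, the technique behind http://projects.haykranen.nl/markov/. The corpus file name and the order of two words are arbitrary choices for the sketch, not part of any of the tools listed above:

    import random
    from collections import defaultdict

    def build_model(text, order=2):
        """Map each sequence of `order` words to the words that follow it in the corpus."""
        words = text.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
        return model

    def generate(model, order=2, length=50):
        """Walk the chain: start from a random key and keep sampling a plausible next word."""
        key = random.choice(list(model.keys()))
        output = list(key)
        for _ in range(length):
            choices = model.get(tuple(output[-order:]))
            if not choices:  # dead end: jump to a random key and continue
                choices = model[random.choice(list(model.keys()))]
            output.append(random.choice(choices))
        return " ".join(output)

    if __name__ == "__main__":
        corpus = open("corpus.txt").read()  # any plain-text corpus we assemble ourselves
        print(generate(build_model(corpus)))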
Based on the methods used in these text generators and other methods proposed in the literature, we can try to develop our own generators and/or explore their uses. This project has a strong coding part, but people without a coding background can also participate, e.g. by constructing corpora of texts, drafting templates and text structures for use with these generators, or developing uses and projects around such text generators. It is also possible to use the Dada Engine without knowledge of a programming language.
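For the template-drafting side, a recursive grammar of the kind used by the Dada Engine or SCIgen can be sketched in a few lines of Python. This is not the Dada Engine's own file format, only a toy illustration of the idea of expanding placeholders until a finished sentence remains:

    import random

    # Toy grammar: each symbol maps to a list of templates that may contain <SYMBOLS>.
    GRAMMAR = {
        "SENTENCE": ["The <CONCEPT> of <CONCEPT> is always already <QUALITY>.",
                     "We must <VERB> the <QUALITY> <CONCEPT>."],
        "CONCEPT":  ["discourse", "algorithm", "bureaucracy", "spectacle"],
        "QUALITY":  ["ritualistic", "predictable", "hegemonic", "automated"],
        "VERB":     ["deconstruct", "automate", "normalise"],
    }

    def expand(symbol="SENTENCE"):
        """Pick one template for the symbol and recursively fill in its <PLACEHOLDERS>."""
        template = random.choice(GRAMMAR[symbol])
        while "<" in template:
            start = template.index("<")
            end = template.index(">", start)
            template = template[:start] + expand(template[start + 1:end]) + template[end + 1:]
        return template

    if __name__ == "__main__":
        for _ in range(3):
            print(expand())

Non-coders can contribute here by writing and refining the templates and word lists; the expansion code itself stays the same.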
The results of experiments in automated text generation are a form of artistic research in themselves and can raise a lot of questions about the status of text and language. Further artistic use of text generators can be aimed directly at literary texts (cf. http://botpoet.com), automated theatre, … They can also be used as a building block for artistic and activist interventions in political, administrative and social practices: from the automated filing of all sorts of requests, through automated artistic responses on social media, to a qualitative upgrade of the noble art of spamming. They can be integrated with other code, such as text analysis of social media to guide responses or text-to-speech for automated speeches, …
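As a hint of such an integration, here is a minimal sketch that feeds generated text into text-to-speech, assuming the third-party pyttsx3 library is installed (pip install pyttsx3). The placeholder sentence and the speech rate are invented for the example; any of the generators sketched above could stand in for generate_speech():

    import pyttsx3

    def generate_speech():
        # placeholder: plug in the Markov or grammar generator sketched above
        return "Comrades, the tasks set before us remain, as always, historically inevitable."

    engine = pyttsx3.init()           # use the system's default speech engine
    engine.setProperty("rate", 150)   # words per minute; slower sounds more solemn
    engine.say(generate_speech())
    engine.runAndWait()               # block until the speech has been delivered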
These wider uses are probably too ambitious to develop at Relearn. More realistic goals are to develop a simple framework that many people can use, to run some basic experiments, and to develop and share ideas for further uses.