An interactive installation that visualises how a machine can learn to ‘read’ and ‘write’
READ/WRITE/REWRITE was an interactive installation, exhibited at Typojanchi 2017 in Seoul, South Korea, that visualises how a machine can learn to ‘read’ and ‘write’ by applying machine learning to natural language in the form of written text.
Ongoing research in machine learning was transformed into a tangible interactive installation able to reconfigure itself through different contexts and contents.
With this project we explored how principles of machine learning could be applied from the perspective of the cultural field: the perspective of artists and designers.
Computational models for natural language primarily regard language as a sequence of symbols; the meaning of a word can then only be described as a product of its context, that is, alongside which other words it appears. This is in contrast to Chomsky’s linguistic theory, which holds that the principles underlying the structure of language are biologically determined in the human mind and hence genetically transmitted. The key difference between the perception of language by machine and by man lies in how language is embodied: for machines, language is a process of computational operations on symbols stored in computer memory; for man, language is an experience of the body. Here lies the origin of Read/Write/Rewrite:
If both machine and man read and write the same word, do they mean the same thing?
The work visualises how machine learning algorithms organise roughly 3 million English words on the basis of their meaning and semantics. The end result of this process is a landscape of words, in which words with a similar context are placed in close proximity. The word contexts are learned from a large body of publicly available digital texts, such as news and Wikipedia articles.
What is the meaning of a word? We start from the idea that natural language is a product of the human and its environment: for man, every word can be associated with sensory experiences. A word is a complex association of actions, memories and so on.
Most software, however, runs on computers without any sensors, so natural language is reduced to written words only. To a machine, a word is just a series of symbols; thus the meaning of a word can only be explained through other words.
Turning words into space
A word embedding is a numerical representation of a set of words in which each word is a point in a high-dimensional space. Such an embedding can be constructed using a neural network and a large body of text. The neural network is trained to guess the blinded center word in a short text fragment by repeatedly feeding it fragments from example texts. When the network fails to guess the blinded word correctly, it updates the representations of all the words in the fragment so that it becomes more likely to guess the word right the next time. By applying these guess-and-update steps a great many times, all the word-points in the space organise into an embedding that captures semantic qualities.
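As a minimal sketch of this guess-and-update loop, the toy Python example below trains a tiny embedding with plain numpy. The corpus, dimensions and learning rate are placeholders for illustration only; the installation’s actual model was trained on a far larger body of news and Wikipedia text.

    import numpy as np

    # Toy corpus; a placeholder for the large text corpus used in the work.
    corpus = "the cat sat on the mat while the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, D, window, lr = len(vocab), 16, 2, 0.05

    rng = np.random.default_rng(0)
    W_in = rng.normal(scale=0.1, size=(V, D))   # the word-points being learned
    W_out = rng.normal(scale=0.1, size=(D, V))  # weights used to guess the blinded word

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for epoch in range(200):
        for center in range(window, len(corpus) - window):
            # The fragment: surrounding words, with the center word blinded.
            ctx = [idx[corpus[center + o]] for o in range(-window, window + 1) if o != 0]
            target = idx[corpus[center]]
            h = W_in[ctx].mean(axis=0)   # combine the context vectors
            p = softmax(h @ W_out)       # guess a probability for every vocabulary word
            err = p.copy()
            err[target] -= 1.0           # how wrong the guess was
            # Update: nudge the representations so the blinded word
            # becomes more likely next time.
            W_out -= lr * np.outer(h, err)
            grad_h = W_out @ err
            for c in ctx:
                W_in[c] -= lr * grad_h / len(ctx)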
Once a word embedding is found, we can query the model to find words that are close to a given word.
A striking quality of word embeddings is that they somehow learn word analogies. For example, we can find a direction by taking the line between the words Italy and Rome. If we now start from France instead of Italy and follow that same direction, we end up at the word Paris. The language model has implicitly learned a country-to-capital relationship, just from reading a lot of text. It is surprisingly good at some relations, such as directions, gender-based word forms and the stereotypical food of a given country; in certain other respects it is clearly limited.
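Assuming a trained model stored in gensim’s KeyedVectors format (the file name below is hypothetical, and the text does not say which toolkit was used), such neighbour and analogy queries look roughly like this:

    from gensim.models import KeyedVectors

    # Load previously trained word vectors; the path is hypothetical.
    wv = KeyedVectors.load("embedding.kv")

    # Words closest to a given word in the embedding space.
    print(wv.most_similar("coffee", topn=5))

    # Analogy: Rome - Italy + France ~ Paris, i.e. follow the
    # country-to-capital direction starting from France.
    print(wv.most_similar(positive=["Rome", "France"], negative=["Italy"], topn=1))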
We use Word2vec, a group of related models used to produce word embeddings. These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words. Word2vec takes a large corpus of text as input and produces a vector space, typically of several hundred dimensions, in which each unique word in the corpus is assigned a corresponding vector. The vectors are positioned such that words sharing common contexts in the corpus lie in close proximity to one another in the space.
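The text does not specify which Word2vec implementation was used; as one common option, training an embedding with the gensim library looks roughly like this (the corpus file name and parameter values are placeholders):

    from gensim.models import Word2Vec

    # One tokenised sentence per line; the file name is hypothetical.
    with open("corpus.txt", encoding="utf-8") as f:
        sentences = [line.split() for line in f]

    model = Word2Vec(
        sentences,
        vector_size=300,  # dimensionality of the embedding space
        window=5,         # how many surrounding words count as context
        min_count=5,      # ignore very rare words
        workers=4,
    )

    model.wv.save("embedding.kv")  # the learned word vectors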
Since Word2vec constructs such high-dimensional spaces, and we cannot really look beyond three dimensions, we have to find a way to fit all that information into two dimensions. To ‘flatten’ the high-dimensional space into two dimensions, we used the t-SNE algorithm.
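A minimal sketch of this flattening step, here using scikit-learn’s t-SNE implementation (the toolkit, parameters and file name are assumptions, not taken from the text):

    import numpy as np
    from gensim.models import KeyedVectors
    from sklearn.manifold import TSNE

    wv = KeyedVectors.load("embedding.kv")  # hypothetical file from the training step
    words = wv.index_to_key[:5000]          # a subset; millions of points is too many for t-SNE
    vectors = np.array([wv[w] for w in words])

    # t-SNE flattens the ~300-dimensional points into two dimensions while
    # trying to keep nearby points nearby: this yields the landscape of words.
    coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(vectors)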
The installation
The installation takes the shape of a four-sided cube. The various visual perspectives make it possible to explore, in different manners, the word similarities and interrelations that are encoded in the language model. The examples in this section are simplified illustrations of the core visualisation principles applied to each side of the cube.
The installation aimed to make abstract concepts such as machine learning and neural networks accessible and understandable for a wider audience, while addressing themes such as design, the future of typography, and context versus meaning. Machine learning principles will be applied more and more, and it is important to give insight into how they work, as a counter-voice to ‘empty’ terms such as Artificial Intelligence. By addressing this, the project aims to load these themes with cultural significance.
Interaction
The interaction of the work is based on visitor proximity. When nobody is near, the work shows and ‘works’ on the organisation of the word data; in this mode, words are visualised as anonymous data points. As a visitor approaches the cube, the work switches to a language appropriate to man. By moving closer to the cube, the visitor zooms in on the language model. Through three visual perspectives it is possible to explore the word similarities and interrelations that are encoded in the language model.
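Purely as an illustration (the actual sensors, thresholds and mode names are not described in the text), this proximity-based mode switching could be sketched like this:

    def select_mode(distance_m: float) -> str:
        """Pick a display mode from visitor distance; thresholds are hypothetical."""
        if distance_m > 3.0:
            return "working"   # nobody near: words shown as anonymous data points
        if distance_m > 1.5:
            return "reading"   # a visitor approaches: switch to readable words
        return "zoomed"        # up close: zoom in on the language model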
We wrote a more technical article about the machine learning behind the work, which can be found on Medium.
About Typojanchi
Typojanchi 2017 – the 5th International Typography Biennale – was a combination of exhibitions, talks and publications with contributions from local and international artists, designers and typographers. The theme was Body and Typography. In comparison to most art biennials and art fairs, Typojanchi is less self-regarding: it tries to interpret the current era and its socio-cultural environments. Together, its components create a vast range of intersections between visual languages and perspectives: literature, music, film, city, politics and economy.
Location
Seoul, South Korea
Typojanchi Director
Ahn Byunghak
Curators
An Hyoijn, Kim Namoo
Media/Technical support
Multi Tech
Construction, Build up
Gom Design