One of the core projects of RNDR is OPENRNDR, an open-source framework for creative coding written in Kotlin and Java 8. With over six years of development behind it, the framework simplifies writing real-time, interactive audio-visual software.

The framework allows for the creation of real-time audio-visual applications that run on Windows, macOS and Linux-based platforms. OPENRNDR is designed and developed with two goals in mind: prototyping and the development of robust, performant audio-visual applications.

Ease of use is a first-class citizen. We have developed extensive documentation, tutorials and examples to provide an easy way to get you started.
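A minimal sketch, following the getting-started template from the OPENRNDR guide (exact API names may differ between framework versions), looks like this:

```kotlin
import org.openrndr.application
import org.openrndr.color.ColorRGBa

// A minimal OPENRNDR program: opens a window and draws a circle every frame.
// Based on the getting-started template; API details may vary by version.
fun main() = application {
    configure {
        width = 768
        height = 576
    }
    program {
        extend {
            drawer.clear(ColorRGBa.PINK)
            drawer.fill = ColorRGBa.WHITE
            drawer.circle(drawer.bounds.center, 100.0)
        }
    }
}
```

The `extend` block is the draw loop: it runs once per frame, and `drawer` provides the hardware-accelerated drawing API.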

Visit the project website or get the source code on GitHub.

We build on proven open source and open standards:

  • Hardware accelerated rendering using OpenGL
  • Positional audio using OpenAL
  • Hardware accelerated computation using OpenCL
  • Video playback using FFmpeg


OPENRNDR is the core software technology behind a number of commissioned interactive media installations, both temporary and permanent:

Running on OPENRNDR

Willem-II passage

A passage with LED-embedded walls in Tilburg that changes according to the local weather and the patterns of passers-by.

Architecture by Civic Architects, media design by LUSTlab


A bin-packing algorithm running on OPENRNDR, part of the visual identity of the Museum of the Future 2017, based on the adaptability, scalability, resilience and recursiveness of elements.
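The identity's actual algorithm is not published; as a hypothetical illustration of the general technique, a simple first-fit shelf packer in Kotlin might look like this:

```kotlin
// Hypothetical illustration only: a basic first-fit shelf packer,
// not the actual algorithm used for the Museum of the Future identity.
data class Rect(val x: Int, val y: Int, val w: Int, val h: Int)

class ShelfPacker(private val binWidth: Int) {
    private var shelfY = 0   // top of the current shelf
    private var shelfH = 0   // height of the current shelf
    private var cursorX = 0  // next free x position on the shelf
    val placed = mutableListOf<Rect>()

    fun pack(w: Int, h: Int): Rect {
        if (cursorX + w > binWidth) { // shelf full: open a new one below
            shelfY += shelfH
            shelfH = 0
            cursorX = 0
        }
        val r = Rect(cursorX, shelfY, w, h)
        cursorX += w
        shelfH = maxOf(shelfH, h)
        placed += r
        return r
    }
}
```

Each rectangle is placed left to right on the current "shelf"; when a rectangle no longer fits, a new shelf opens beneath the tallest item of the previous one.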


A single Kinect sensor can map a space around three meters wide. n-Track fuses data from multiple depth sensors, merging each Kinect's observable space into one large space in which people can be tracked.
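The merge step can be sketched, hypothetically (n-Track's actual implementation is not public), as transforming each sensor's local coordinates into a shared world frame using the sensor's known pose:

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical sketch of the merge step: each sensor has a known pose
// (position and rotation) in the shared world frame, and the points it
// observes are transformed into that frame before being combined.
data class Point(val x: Double, val y: Double)
data class SensorPose(val x: Double, val y: Double, val angle: Double)

fun toWorld(p: Point, pose: SensorPose): Point {
    val c = cos(pose.angle)
    val s = sin(pose.angle)
    // rotate by the sensor's orientation, then translate by its position
    return Point(c * p.x - s * p.y + pose.x, s * p.x + c * p.y + pose.y)
}

fun merge(observations: Map<SensorPose, List<Point>>): List<Point> =
    observations.flatMap { (pose, points) -> points.map { toWorld(it, pose) } }
```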


You control the control room. In Hyperlocator you get a sense of the city in a single space. Who is observing whom? Every day a new hyperlocation is created, exploring a new space and unlocking its local palette.

Concept and design by LUSTlab


The installation runs on a single computer that drives eight HD projectors. The tracking setup consists of four small computers, each connected to two Kinect sensors; their data is sent to a processing unit that fuses it into a single observation of the space.

Design by LUST, implementation by LUSTlab


Camera Postura matches your body language to scenes in a movie. Imagine searching for film scenes similar to Rocky's victory dance at the top of the steps just by raising your arms. Each pose results in different matched scenes.

Concept and design by LUSTlab

Icon viewer

A dynamic and interactive implementation of the famous ASCII icons for Karel Martens, running on OPENRNDR. 

Application design by LUSTlab


Visualisations of how a machine can learn to “read and write” using machine learning applied to natural language. The data set visualised consists of 4 million English words, which are grouped together over thousands of steps.


Reversed Streetview

Creating a new type of map by scraping Google Street View images of a city and stitching them back together into tubes that reveal the city through the eye of the car.