Rodrigo Carvalho [VISIOPHONE] is a designer and interactive new media artist from Porto, Portugal. His work on live visuals, coding and interactive art spans a range of outputs, from screen-based digital work and interactive installations to audiovisual live acts and interactive visuals for stage performance.
His research focuses on the real-time relations between sound, image and movement in audiovisual interactive spaces.
Anna Monteverdi: Can you briefly describe your experience in the field of interactivity and performance/installations? Why did you start working in this field?
Rodrigo Carvalho: I started in this field around 2008/2009, after a Master's degree in Digital Arts at Universitat Pompeu Fabra in Barcelona. Since then, I have been exploring real-time audiovisuals and interactivity for performances and installations.
My background is in graphic design, but after some time working as a graphic designer I felt a bit bored and needed something more. Around 2006/2007 I attended the OFFF Festival and watched a couple of presentations that changed everything and led me to think, "This is what I want to do". In no particular order, the presentations were: Robert Hodgin showing his work with particles; Casey Reas talking about Processing; an AV performance by Golan Levin and Zach Lieberman; and a Raster-Noton live AV showcase.
After that, I started to dig in and learn about the interactive art field. I was living in Madrid at the time, so I started attending events at Medialab-Prado (http://medialab-prado.es/). I also remember visiting an epic, huge exhibition at the Reina Sofía called "Máquinas y Almas" (Machines and Souls), curated by José Luis de Vicente (http://www.museoreinasofia.es/sites/default/files/exposiciones/folletos/2008011-fol_es-001-maquinas-y-almas.pdf).
After some digging I discovered the Master's in Digital Arts at Universitat Pompeu Fabra, and I moved to Barcelona in 2008.
Anna Monteverdi: Can you tell us something about the relationship between music, interaction and performance, both in your own works and in others'?
Rodrigo Carvalho: This is an interesting topic, as I am currently writing my PhD thesis on the relations between Sound, Visuals and Movement in real-time systems, in which I try to catalogue and systematize all the possible relations between these three domains.
My focus is on the processes of transformation and interaction that occur across the domains (movement into sound, sound into visualizations, movement into visualizations, etc.) in each system. Each of these domains can be an input and/or an output of data in the system, and different articulations between the three can be explored, determining how the data flows through the system, the type of interaction and the expressivity of the artwork. Some examples of interactions between Sound/Visuals/Movement:
(from my own projects) Floating Satellites (2014), where data from the satellites' movement is used to generate sound and visuals. (Movement to Sound and Visual)
http://visiophone-lab.com/wp/?portfolio=floating-satellites
With Oui (2015), where a device on stage (a concealed Wiimote sending rotation and accelerometer data) is used to send inputs to the sound, and the visuals then react to the sound.
(Movement to Sound. Sound to Visual)
http://visiophone-lab.com/wp/?portfolio=with-oui-2015
(from projects by others)
Manual Input Sessions (Lieberman and Levin, 2004), where hand gestures create visual shapes that are then translated into sound. (Movement to Visual. Visual to Sound)
http://www.flong.com/projects/mis/
Cloud Piano (Bowen, 2014), where images of clouds in the sky are mapped to piano keys: in real time a camera captures pictures of the sky, and a robotic device presses the corresponding keys on the piano. (From Visual to Movement, to Sound)
http://www.dwbowen.com/cloud-piano/
Soft Revolvers (Bleau, 2014), where motion data from spinning tops is used to generate sound, and the LED lights' color and intensity react to the sound. (From Movement to Sound. From Sound to Visual)
http://www.myriambleau.com/soft_revolvers.html
I come from a visual background (graphic design and visual arts), so my main focus is on the visual output. I don't make music (and I don't dance either!), so I often collaborate with others for the sound and movement inputs.
Anna Monteverdi: You have a very interesting blog, SVM, in which you document experimentation in the field of interactivity and performance/installations. What are your ideas about this vast panorama of digital art?
Rodrigo Carvalho: The SVM blog started as a notebook for my PhD research, where in each post I collect projects with similar relations. It works as an informal list, with the main goal of compiling and organizing interactive audiovisual projects. It is divided into six main categories, grouping projects with similar characteristics and technologies; the main focus is on the transmutability of digital data and the input/output relations between the Sound, Movement and Visual domains in interactive systems.
The main categories are: (1) Motion Sculptures, where motion is transformed into visual shapes and forms; (2) Sound Sculptures, where sound is mapped into three-dimensional physical shapes; (3) Graphic Sound Visualizers, where sound is visualized graphically; (4) Shaping Sound, where sound is generated from shapes and graphical features; (5) Movement Sonification, where sound is generated from movement; and (6) Kinetic Structures, where movement is generated by setting in motion digitally controlled mechanisms and performing robotic agents.
Later on, other categories were added, covering not only transformations of data but also topics that in some way connect or overlap with audiovisual interactive systems: Early Computer Graphics for the pioneers of computer art; Color Organs for visual music and color organs; Sensitive Interfaces for experimental interfaces for audiovisual creation; and Immersive Audiovisual Environments for responsive and immersive spaces. The goal was to group under the same categories projects that share similar interactions between the Sound, Visual and Movement domains.
Anna Monteverdi: Can you tell us something specific about your astonishing embodied project "WARNING: A WEARABLE ELECTRONIC DRESS", in which the interaction happens through a costume?
Rodrigo Carvalho: "WARNING: A WEARABLE ELECTRONIC DRESS PROTOTYPE" was an exploratory project made by me and Kristen Weller (costume designer). The goal was to explore wearable interactive technologies for stage performance. The output was a costume prototype that could act as an extension of the performer's body and as an expressive tool. The costume was composed of a shape-shifting collar, controlled by servo motors that react to a proximity sensor, and an interface painted on the costume's fabric that allows real-time interaction with the sound.
The costume's fabric had six black areas painted with conductive paint (http://www.bareconductive.com). These areas work as an interface made of capacitive sensors, where the human body's capacitance is used as input. When the performer touches these conductive areas, an Arduino detects the variations in the capacitive sensors' values, which are then translated into MIDI messages and sent to Ableton Live, where a previously prepared setup triggers notes, controls audio channel volumes and modulates filters over a background soundtrack.
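As a rough illustration of that pipeline (a minimal sketch, not the project's actual code), an Arduino program could read one conductive pad with the CapacitiveSensor library and emit MIDI note on/off bytes over serial, with a serial-to-MIDI bridge such as Hairless MIDI forwarding them to Ableton Live; the pins, threshold and note number below are hypothetical and would need calibration against the fabric:

```cpp
// Minimal sketch for one conductive-paint pad, assuming Paul Badger's
// CapacitiveSensor library; pins, threshold and note are placeholders.
#include <CapacitiveSensor.h>

CapacitiveSensor pad(4, 2);   // send pin 4, sense pin 2 (to the painted area)
const long THRESHOLD = 1000;  // calibrate against fabric, paint and body
const byte NOTE = 60;         // middle C, mapped to a clip/filter in Ableton
bool touched = false;

void setup() {
  Serial.begin(115200);       // a serial-to-MIDI bridge forwards this to Ableton
}

void loop() {
  long reading = pad.capacitiveSensor(30);     // average of 30 samples
  if (!touched && reading > THRESHOLD) {       // finger down -> note on
    touched = true;
    Serial.write(0x90); Serial.write(NOTE); Serial.write(127);
  } else if (touched && reading < THRESHOLD) { // finger up -> note off
    touched = false;
    Serial.write(0x80); Serial.write(NOTE); Serial.write((byte)0);
  }
  delay(10);
}
```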
On the inside, the collar had four servo motors attached to the collar's articulations. The rotation of the four servo arms, controlled by an Arduino, defined the shape of the collar. An ultrasonic sensor placed on the performer's costume was used as input to interact with the collar: when a certain proximity is detected, the servos react by moving and/or changing their motion pattern, modifying the collar's shape.
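The collar logic could be sketched along the same lines (again with hypothetical pins and angles, not the original firmware): an HC-SR04-style ultrasonic sensor is polled, and when something comes within the trigger distance, each servo arm is swept to a new angle:

```cpp
// Hypothetical sketch of the collar behavior: four servos change the
// collar's shape when the ultrasonic sensor detects something close.
#include <Servo.h>

const int TRIG_PIN = 7, ECHO_PIN = 8;     // HC-SR04-style sensor
const int SERVO_PINS[4] = {3, 5, 6, 9};   // one pin per collar articulation
const float TRIGGER_CM = 40.0;            // placeholder proximity threshold
Servo servos[4];

float readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000); // returns 0 on timeout (no echo)
  return us / 58.0;                         // ~58 us of round trip per cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  for (int i = 0; i < 4; i++) servos[i].attach(SERVO_PINS[i]);
}

void loop() {
  float d = readDistanceCm();
  if (d > 0 && d < TRIGGER_CM) {
    // Proximity detected: move each arm to a new angle, reshaping the collar.
    for (int i = 0; i < 4; i++) servos[i].write(random(30, 150));
  }
  delay(100);
}
```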
Project Link: http://visiophone-lab.com/wp/?portfolio=wearable-dress-prototype
Anna Monteverdi: Lev Manovich speaks about the new software culture: can you describe, from your point of view, what it means?
Rodrigo Carvalho: Yes, software is everywhere, shaping our daily lives and influencing every decision we make. (A little off topic, but Kevin Slavin's talk "How algorithms shape our world" is very interesting on this subject: https://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world)
In my work I am interested not so much in the software that is available as in creating my own tools. That is what attracted me the most when I discovered the world of Processing, Max, openFrameworks, Arduino, etc.
I was no longer limited to the functions and parameters designed by the creators of a specific piece of software or hardware; for each project I could create a tool made specifically for it.
Anna Monteverdi: As an interaction designer for installations and performance, what are the best and most up-to-date technical solutions for the stage today? And do you think the Kinect is the best and most practical solution for interactive performance/dance?
Rodrigo Carvalho: I guess solutions for interaction in performance and dance vary depending on the situation. One option is motion tracking with cameras, where we have different possibilities at different price ranges. The Kinect is the easy and cheap solution, but its tracking area is a bit small (around 4 meters wide, maximum). I have used the Kinect on different projects, like Breakdown (http://visiophone-lab.com/wp/?portfolio=breakdown) or Dancing with Swarming Particles (http://visiophone-lab.com/wp/?portfolio=dancing-swarming-particles).
Apart from the Kinect there are other 3D cameras with similar functionality, like the Asus Xtion, the Intel RealSense, the recent Orbbec, or the Leap Motion (which only tracks hands and gestures). A good option for working with this type of camera and easily extracting and broadcasting motion data is the NI-MATE software (https://ni-mate.com/).
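NI-MATE typically broadcasts the tracked skeleton as OSC messages over UDP. Here is a minimal sketch of the receiving side (in C++ with the oscpack library; the joint address "/Left_Hand" and port 7000 are assumptions that depend on how the tracker's OSC output is configured):

```cpp
// Minimal OSC receiver for skeleton data (C++ / oscpack).
// Assumes messages of the form "/Left_Hand x y z" with float coordinates;
// the address format and port depend on the tracker's OSC settings.
#include <cstring>
#include <iostream>
#include "osc/OscReceivedElements.h"
#include "osc/OscPacketListener.h"
#include "ip/UdpSocket.h"

class JointListener : public osc::OscPacketListener {
protected:
  void ProcessMessage(const osc::ReceivedMessage& m,
                      const IpEndpointName&) override {
    try {
      if (std::strcmp(m.AddressPattern(), "/Left_Hand") == 0) {
        osc::ReceivedMessage::const_iterator arg = m.ArgumentsBegin();
        float x = (arg++)->AsFloat();
        float y = (arg++)->AsFloat();
        float z = (arg++)->AsFloat();
        // Feed the joint position into the visuals or sound engine here.
        std::cout << "left hand: " << x << " " << y << " " << z << "\n";
      }
    } catch (osc::Exception& e) {
      std::cerr << "malformed OSC message: " << e.what() << "\n";
    }
  }
};

int main() {
  JointListener listener;
  UdpListeningReceiveSocket socket(
      IpEndpointName(IpEndpointName::ANY_ADDRESS, 7000), &listener);
  socket.Run(); // blocks, dispatching incoming packets to the listener
  return 0;
}
```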
Besides the 3D cameras there are also infra-red ones (as was used in most cases before the Kinect era). These allow tracking a much bigger area, but they are much more difficult to set up, and skeleton tracking is harder to achieve.
Then there are other, much more expensive options from the motion capture industry, like the Vicon system (https://www.vicon.com/) or similar, used for example in "Shadow – Elevenplay x Rhizomatiks" (http://www.creativeapplications.net/maxmsp/shadow-elevenplay-x-rhizomatiks/).
There are also laser systems used for motion tracking, like the one used in Deep Space at the Ars Electronica Futurelab (http://www.aec.at/c/en/deepspace-anatta/).
Apart from motion tracking there are other strategies, like putting sensors on the performer's body. There is a huge list of sensors we can use on the performer's body to interact: orientation, flex, force, heart rate, or muscle sensors, among many others. A good example is the work of Marco Donnarumma (http://marcodonnarumma.com/works/#performance) with biophysical sensors.
Another strategy is the use of "normal" devices (consumer electronics like smartphones, game controllers and others). Smartphones can be very useful in many situations; they are powerful devices with many integrated technologies (orientation sensors, GPS, Bluetooth and WiFi communication, etc.).
As an example, in "Ad Mortuos" we used an iPhone on the performer's body. The iPhone broadcast its gyroscope and accelerometer data, and we used it to interact with the visuals and sound, so that the visuals followed the angle of the performer's body orientation (see the sketch below).
(http://visiophone-lab.com/wp/?portfolio=ad-mortuos)
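The mapping itself can be quite simple. A minimal, self-contained C++ sketch of the idea (the values and names are hypothetical, not the Ad Mortuos code): derive a tilt angle from the accelerometer's gravity vector and smooth it before handing it to the visuals, so the image follows the body without jittering:

```cpp
// Sketch of mapping accelerometer data to a rotation angle for visuals.
#include <cmath>
#include <iostream>

// Tilt angle (radians) of the (x, y) acceleration vector relative to "down";
// with the phone at rest this vector is dominated by gravity.
double tiltAngle(double ax, double ay) {
  return std::atan2(ax, ay);
}

// Exponential smoothing: alpha in (0, 1], smaller = smoother but slower.
double smooth(double previous, double current, double alpha = 0.15) {
  return previous + alpha * (current - previous);
}

int main() {
  // Stand-in samples; in practice these arrive over the network (e.g. OSC)
  // from the phone at 30-100 Hz.
  const double samples[][2] = {{0.02, 9.78}, {1.21, 9.60}, {2.55, 9.31}};
  double angle = 0.0;
  for (const auto& s : samples) {
    angle = smooth(angle, tiltAngle(s[0], s[1]));
    std::cout << "rotation for visuals: " << angle << " rad\n";
  }
  return 0;
}
```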
Another example was "With Oui" (http://visiophone-lab.com/wp/?portfolio=with-oui-2015), where we used a Nintendo Wiimote attached to the ceiling in the middle of the stage. The dancers manipulated the device during the performance, and it broadcast orientation data that interacted with the real-time sound and visuals.
Other available options are, for example, the Curie-enabled wristbands from Intel (https://www.youtube.com/watch?v=AN4XHaBBVww), or brain interfaces like the one used in "Biomediation" by Joao Beira and Yago de Quay (https://vimeo.com/90687541).
Regarding software/hardware, my usual weapons of choice are Processing (for visuals and processing data), Quartz Composer (for visuals), Max (mostly for processing data and for bridging different software and devices), and Arduino for building interfaces. But I really believe this is a personal choice, and there are many other good options, like TouchDesigner, openFrameworks, Cinder, and many more.
Software coming from the VJ world can also be very useful and is becoming more and more flexible: for example VDMX (the one I use), Resolume, or MadMapper, among many others.
To learn more about the use of technology in stage performance I really recommend watching:
– Klaus Obermaier's talk at Resonate 2013 (https://vimeo.com/73663471)
– Kirk Woolford's talk at the Corporeal Computing Conference 2013 (https://vimeo.com/77526865)
– Frieder Weiss at Scope Session 2012 (https://vimeo.com/52208994)
– Mark Coniglio at Phoenix Leicester 2016 (https://vimeo.com/157646215)
Some further reading:
– Scott deLahunta’s thesis “Shifting Interfaces: art research at the intersections of live performance and technology” (http://www.sdela.dds.nl/)
– Marc Downie's thesis "Choreographing the Extended Agent: Performance Graphics for Dance Theater" (http://www.media.mit.edu/cogmac/prosem2007/downie_proposal.pdf)
Anna Monteverdi: Do you have experience with video mapping? What are your ideas about this art form applied to the theatre, and not only in urban contexts?
Rodrigo Carvalho: I have done some explorations with video mapping, but I don't have much experience in that particular field. I think it has huge potential for designing the theatrical space and for creating illusions and immersivity.
It is a very strong tool for merging the virtual with the physical world, constructing new narratives, and exploring the audience's perception and states of consciousness. Some examples of video mapping for stage performance/theatre:
“SIM / NEBULA”, The Macula (https://vimeo.com/138894725)
“Visions of America Ameriques”, Refik Anadol (http://www.refikanadol.com/works/visions-of-america-ameriques/)
“3D Embodied”, Joao Beira (https://vimeo.com/68168265)
“Nikola Tesla in Sound and Light”, Marco Tempest (https://vimeo.com/42402467)
Anna Monteverdi: What are your favorite artists/projects in the field of interaction?
Rodrigo Carvalho: I will divide this answer into 3 different topics.
1 – Historical artists
Manfred Mohr (http://www.emohr.com/ww4_out.html)
Larry Cuba (https://www.youtube.com/watch?v=HcvN1dt0yJo)
Rutt Etra by Steve Rutt & Bill Etra (https://youtu.be/De4DWMyQBfU)
Myron Krueger (https://www.youtube.com/watch?v=dmmxVA5xhuo)
David Rokeby (http://www.davidrokeby.com/vns.html)
2 – The ones that influenced me the most when I started to get interested in the field, and that pushed me to learn and explore more:
Robert Hodgin (http://roberthodgin.com/)
Memo Akten (http://memo.tv/)
Zach Lieberman (http://thesystemis.com/)
Golan Levin (http://www.flong.com/)
John Maeda (http://www.maedastudio.com/)
Klaus Obermaier's Apparition (http://www.exile.at/apparition/video.html)
United Visual Artists (https://uva.co.uk/)
Universal Everything (http://universaleverything.com/)
Kyle McDonald (http://kylemcdonald.net/)
AntiVJ (http://antivj.com/)
1024architecture (http://www.1024architecture.net/)
3 – Recent favorite projects
“Onion Skin”, Olivier Ratsi (http://www.ratsi.com/works/echolyse/onion-skin/)
“Reading My body”, VTOL (http://vtol.cc/filter/works/reading-my-body)
“Wave is my Nature”, VTOL (http://vtol.cc/filter/works/wave-is-my-nature)
“Luxate” Joao Beira (https://vimeo.com/147072640)
"Drones vs BodyTracking experiments", Daito Manabe/Rhizomatiks (https://www.youtube.com/watch?v=SPcZmDTW8KU)
“Pathfinder”, Christian Mio Loclair, (http://waltzbinaire.com/work/pathfinder/)
“Drawing Operations”, Sougwen Chung (http://sougwen.com/Drawing-Operations-D-O-U-G)