Resonance Ecology
Flute: Linda Jenkins, Anne Maker
Clarinet: Brooke Miller, Justin Sales
Piano: Garrison Gerard
Violin: Mia Detwiler
Viola: Kathleen Crabtree
Cello: Colin Stokes
Double Bass: Kory Reeder, Connor Simmons
Performed in the Merrill Ellis Intermedia Theater
Resonance Ecology is a system that combines and processes field recordings from Patagonia, Iceland, and Texas. These disparate locations are woven together into a new, surreal ecosystem as the piece moves from one place to another, highlighting (dis)connections between the locations' weather, culture, and topography. Each realization of the piece is different, bringing the listener on a unique journey through real and imagined soundscapes that may never be heard in the same way again.
The piece is essentially a system of systems—the live electronics are constantly analyzing the materials of the performers, who are listening and responding to the sounds around them, which in turn triggers different video recordings and visual effects. All of these systems are interacting in a complex web of reaction and connection.
Technical Information
Because of the flexible nature of the piece, the technical requirements will vary with the degree to which each system is employed. Exact materials and deployment details will differ from venue to venue, but these instructions provide a blueprint for the technical requirements of the work.
Equipment:
Computer (ideally with 32 GB of RAM)
Software
Max/MSP (Patch downloaded from: https://www.garrisongerard.com/resonance-ecology)
SpatGRIS (Download: http://gris.musique.umontreal.ca/)
If using ambisonics or a large number of speakers, software such as Dante Virtual Soundcard should be used.
If using video or lighting, a program such as QLab should be used to receive OSC information from the Max/MSP patch and to organize the video projection and lighting cues. A sample QLab session can be downloaded at: https://www.garrisongerard.com/resonance-ecology
Audio Interface
As many inputs as needed (one per instrument, unless a large ensemble of more than eight performers is used, in which case as many inputs as are needed to reasonably mic the ensemble)
As many outputs as speakers—minimum of two
As many microphones as performers (one per instrument, unless a large ensemble of more than eight performers is used, in which case as many microphones as are needed to reasonably mic the ensemble)
One XLR Cable per Microphone
One to five screens and projection surfaces
Signal Flow and Setting up the Max/MSP Patch:
Audio Settings: Input should be your audio interface; output should be "BlackHole 128ch," the virtual audio driver that routes sound from Max/MSP to SpatGRIS.
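If it is unclear whether BlackHole is installed and visible to the system, the device list can be checked from a scripting environment. A minimal sketch, assuming Python with the sounddevice package (neither is required by the piece itself):

    # List audio devices and flag the BlackHole virtual driver.
    import sounddevice as sd

    for index, device in enumerate(sd.query_devices()):
        if "blackhole" in device["name"].lower():
            print(f"Device {index}: {device['name']} "
                  f"({device['max_output_channels']} outputs)")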
There are two main signal flows within the Max/MSP Patch: One for the field recordings and one for the live instrumental material.
The field recordings begin in a polybuffer~ that is loaded when the patch is opened. When a location node is triggered, a message is sent to "poly~ locationRev" containing all of the information on which sounds from the polybuffer~ to play and when to begin playing them. From the poly~, sound is sent to the mc.dac~, which routes it through SpatGRIS to the speakers in the performance space.
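Schematically, a location node picks a handful of recordings associated with that place and schedules their onsets. The sketch below only illustrates that idea; the names, voice count, and delay range are hypothetical, and the real logic lives inside "poly~ locationRev":

    # Schematic illustration of a location-node trigger (hypothetical values).
    import random

    def trigger_location(buffer_pool, max_voices=4):
        """Choose which field recordings to play and when to start them."""
        chosen = random.sample(buffer_pool, k=min(max_voices, len(buffer_pool)))
        # Pair each polybuffer~ index with an onset delay in milliseconds.
        return [(index, random.uniform(0, 5000)) for index in chosen]

    # Example: a node selecting from the buffers tagged with its location.
    print(trigger_location(buffer_pool=[12, 13, 14, 15, 16]))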
The live instrument sound comes in through the first eight inputs of your audio device (this can be changed if different channels need to be used). Each channel is processed and then sent to the mc.dac~.
Analysis:
Every sound is sent through a stereo send to an analysis engine. The analysis engine uses multiple zsa.descriptors~ objects and outlier detectors to determine rates of change in noisiness, amplitude, spectral standard deviation, and other parameters.
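As a rough illustration of the outlier detection described above, the sketch below flags sudden jumps in the rate of change of a single descriptor stream using a running z-score. The window size and threshold are illustrative assumptions; the actual engine runs inside the Max/MSP patch:

    # Flags unusually fast changes in one descriptor (e.g., noisiness).
    from collections import deque
    import statistics

    def make_outlier_detector(window=50, threshold=3.0):
        deltas = deque(maxlen=window)
        last = None

        def update(value):
            nonlocal last
            if last is None:
                last = value
                return False
            delta = value - last
            last = value
            is_outlier = False
            if len(deltas) >= 5:
                mean = statistics.fmean(deltas)
                stdev = statistics.pstdev(deltas)
                is_outlier = stdev > 0 and abs(delta - mean) > threshold * stdev
            deltas.append(delta)
            return is_outlier

        return update

    detect = make_outlier_detector()
    # True marks a jump in the rate of change (the last value here).
    for v in [0.10, 0.11, 0.10, 0.12, 0.11, 0.10, 0.11, 0.90]:
        print(detect(v))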
Video Projection and Lighting:
Video projection and lighting are both optional. If employing software such as QLab that can receive OSC data, a session should be used with cues matching the locations in the Max/MSP patch.
By default, the patch sends the following OSC messages:
/cue/*/go
/cue/*/opacity xyz
* = the location node that is triggered; in the default patch this is 1-30, but it can be higher if more location nodes are created. xyz = the randomized opacity value (see below).
The opacity message randomly sets the transparency of each video cue within a range of 0.1-0.5. As such, each location's video should be given the corresponding cue number if using software such as QLab; other actions such as fades and lighting cues can then "auto-continue" from the video cue.
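For testing video cues without running the full patch, the same messages can be sent from any OSC-capable tool. A minimal sketch, assuming Python with the python-osc package and QLab listening on its default port 53000:

    # Reproduce the patch's OSC output for a single location trigger.
    import random
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 53000)  # machine running QLab

    location = 7  # a location-node number (1-30 in the default patch)
    client.send_message(f"/cue/{location}/go", [])
    # Opacity is randomized in the 0.1-0.5 range, as in the default patch.
    client.send_message(f"/cue/{location}/opacity", random.uniform(0.1, 0.5))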
Setting up SpatGRIS for Spatialization:
The system for Resonance Ecology is designed to run with any number of speakers. The patch sends location data to SpatGRIS using ControlGRIS plugins within Max/MSP, and SpatGRIS then handles routing sound to the proper speakers. For this to work correctly, the speaker array of the performance space must be entered into SpatGRIS. This is accomplished by launching the "Speaker Setup Edition" within the SpatGRIS program.
Use the “Cube” algorithm for proper spatialization (even if the speakers are arranged in a dome).
If using a stereo setup, any speaker preset can be loaded in SpatGRIS; then use the stereo reduction feature (see below).
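When measuring a hall, speaker positions are often easiest to record as x/y/z distances from the listening position, while a speaker setup is entered as angles. A general spherical-coordinate conversion is sketched below; the axis conventions are an assumption and should be checked against the SpatGRIS documentation:

    # Convert a measured speaker position (meters, listener at the origin)
    # to azimuth/elevation/distance. Assumed axes: x = right, y = front, z = up.
    import math

    def to_azimuth_elevation(x, y, z):
        distance = math.sqrt(x * x + y * y + z * z)
        azimuth = math.degrees(math.atan2(x, y))  # 0 = front, positive to the right
        elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
        return azimuth, elevation, distance

    # Example: a speaker 2 m in front, 1 m to the right, 1 m above ear level.
    print(to_azimuth_elevation(1.0, 2.0, 1.0))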