I’ve decided to write up the technical aspects of Superheroes Don’t Give a Shit!, as documentation of the project, and for the enjoyment of my geekier friends. I hope that the performance came across as very audience-centric and performer-driven rather than relying on the spectacle of technical tricks to make it work. That being said, the setup was reasonably complex for a small black box theatre, and it’s interesting for me to discuss how it was done.
To start with, there was nobody behind the scenes, and all technical control was down to the actors in the presence of the audience. We had the lighting computer and the main sound/video computer both in the main room. There was a secondary computer in the healing school room, running the ‘Confucius Quest’ game.
There were projectors placed in the main room and healing school, as well as a large panel television in the telekinesis school room. All three were running from the main video computer, with the healing school projector being switched back and forth to the secondary computer as necessary. There were speakers in each room, all linked to the main computer, with an additional speaker running from the secondary ‘Confucius’ computer to run the sound effects as though they were coming from the screen location rather than the projector location.
Because the audience was split into three groups that rotated between the schools, we used the main computer to play prerecorded video announcements on a timer (using QLab), simultaneously to all screens and all speakers. These announced the ends of ‘classes’ so that the audience could rotate to the next school at the same time without too much difficulty. The ‘surprise’ news flash that happened during the break was run in the same way, with a timer in QLab popping it up throughout the building, so people were able to watch it wherever they happened to be during the break.
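The timing logic behind those announcements is simple enough to sketch. This is a minimal illustration of the schedule, not the show's real cue list – the class and changeover lengths here are made-up values:

```python
from datetime import timedelta

# Hypothetical rotation schedule, mirroring the wait-then-go
# structure of the timed QLab cues described above.
CLASS_LENGTH = timedelta(minutes=12)   # assumed length of each 'class'
CHANGEOVER = timedelta(minutes=2)      # assumed time for the audience to move

def announcement_offsets(num_rotations=3):
    """Return offsets (from show start) at which the prerecorded
    'end of class' announcement should fire on every screen."""
    offsets = []
    t = timedelta(0)
    for _ in range(num_rotations):
        t += CLASS_LENGTH
        offsets.append(t)
        t += CHANGEOVER
    return offsets

print([str(o) for o in announcement_offsets()])
# → ['0:12:00', '0:26:00', '0:40:00']
```

In QLab itself this is just a chain of wait cues and ‘Go’s, but the arithmetic is the same.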
The emergency audiovisual announcement following that was made through the sound-reactive animation properties in Resolume. The school logo was cut into two separate elements – Alfie’s head and the background logo.
His head was then animated to resize (within certain parameters) according to the audio signal coming from the real Alfie’s head mic – which was sent to all speakers as well. Resolume was activated automatically by QLab using MIDI signals mapped to Resolume columns. A similar trick was used for the phone call segment, in which the two school logos appear and the heads of the schools’ headmasters are animated with their speech. This wouldn’t have been possible without QLab running Resolume via MIDI to time the change between live performer and prerecorded audio (Alfie live through his head mic, Ronald’s responses prerecorded). One ‘Go’ in QLab triggered a Resolume column with Ronald’s head animated and Alfie’s not, as well as Ronald’s audio. As soon as the audio finished, it would automatically switch back to the column with Alfie’s head animated and Ronald’s not – meaning I only had to cue the ends of Alfie’s lines.
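The MIDI mapping itself is just note messages. Here's a sketch of the idea – the note numbers and column names are illustrative, not the show's actual mapping:

```python
# Each Resolume column is mapped to a MIDI note, and a QLab MIDI cue
# 'plays' that note to trigger the column. Note numbers here are made up.
NOTE_ON = 0x90  # status byte for note-on, channel 1

COLUMN_NOTES = {
    "alfie_talking": 60,   # column with Alfie's head animated
    "ronald_talking": 61,  # column with Ronald's head + prerecorded audio
}

def trigger_column(column):
    """Return the 3-byte MIDI note-on message that fires a column."""
    return bytes([NOTE_ON, COLUMN_NOTES[column], 127])

# Cueing the end of one of Alfie's lines switches to Ronald's column:
msg = trigger_column("ronald_talking")
print(msg.hex())  # → '903d7f'
```

In practice QLab sends this for you from a MIDI cue; the point is just that one three-byte message per ‘Go’ is all the glue there is.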
Another use of Resolume was for the sequence in which Alfie appears on the screen, flying through space. The video probably deserves its own post about how we created it, but in the end I split the After Effects project into two layers – the main animation of Alfie flying through space, and the coloured glows created by the schools sending him energy. These glows had to respond to audience interaction, so they could not be prerecorded, and were rendered as a video with an alpha channel for layering on top in Resolume. I was then able to control the strength of the glow in response to the audience by using sliders in TouchOSC on my iPad (placed on top of the keyboard, which I was playing at the same time).
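Under the hood, each TouchOSC slider just sends a small OSC packet with a float in it. This sketch shows the shape of that packet – the address string is an assumption for illustration, not the mapping I actually used:

```python
import struct

def osc_message(address, value):
    """Build a minimal OSC packet: address, type tag ',f', float32 arg."""
    def pad(b):  # OSC strings are null-terminated and padded to 4 bytes
        return b + b'\x00' * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b',f') + struct.pack('>f', value)

def glow_strength(slider):
    """Clamp the iPad slider input to the 0-1 opacity range."""
    return max(0.0, min(1.0, slider))

# Hypothetical address for a Resolume layer-opacity parameter:
packet = osc_message('/layer2/video/opacity/values', glow_strength(0.75))
```

One packet like this per slider movement, sent over wifi to the video computer, and Resolume maps it straight onto the glow layer's opacity.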
After the preview run, the photographer for the show, Carmen So, suggested that we incorporate a group photo with the audience into the performance. I loved the idea and decided it would have to happen, in spite of having only one night to make it work before the show opened the next day. The light in the room was nowhere near enough for a group photo with the whole audience – the show was lit by a limited number of LEDs. It would have to be done with flashes (luckily I had two with me just in case – I like lighting), so I spent the night finding a good position for them and the audience. I placed the camera in the centre of the projection screen pointing into the middle of the room, with one flash aimed directly at the screen from the side to use it as a large reflector for fill light. The main light came from a bare flash up by the roof on the opposite side – hidden next to a speaker and pointing at the centre of the room, where I marked out the group position on the floor. The camera was set on a tripod with the position taped, so that it could be brought out quickly into position just for the photo.
This is one of the test shots once I had got everything in place. In the background Ray and Michelle are fixing the lighting – which is a whole other story.
The group photo was to take place after the opening ceremony, just before the audience was split into three – it was then displayed during the five minute recess after the three group rotations – nothing fancy with instant display over wifi, just transferring the file and displaying it later. With more time and patience I’d have shot tethered into Lightroom, automatically applying a preset to the RAW file, exporting it to a directory and having a Quartz composition in QLab pick up the photo from that folder and display it immediately. However, it certainly wasn’t worth me doing that when immediately seeing the photo would only have delayed people from going in the right directions – it worked well to display it during the break. Here’s one of the photos:
I managed to fit in some silliness with the camera timer and getting in the way of the lens before the proper photo.
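For anyone curious about the tethered version I didn't build: the only custom piece would have been the 'watch the export folder and grab the newest photo' step. A minimal sketch, with an assumed folder layout:

```python
from pathlib import Path

# Sketch of the folder-watching step: Lightroom (hypothetically) exports
# processed JPEGs into a directory, and the display side polls for the
# most recent one to put on screen.
def newest_photo(folder, exts=('.jpg', '.jpeg', '.png')):
    """Return the most recently modified image in the export folder,
    or None if nothing has been exported yet."""
    photos = [p for p in Path(folder).iterdir()
              if p.suffix.lower() in exts]
    return max(photos, key=lambda p: p.stat().st_mtime, default=None)
```

A Quartz composition (or anything else) polling this once a second would pick the photo up within moments of export – but as I said, manual transfer during the break did the job fine.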
Confucius Quest deserves a little more discussion as well, because if I did it again I would definitely find another way of making it (and running it). The game has a reasonably simple structure of binary choices, which the audience gets to decide on. There are four different endings and five or six different ways to get the necessary information to win.
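Structurally, the whole game is just a small tree of binary choices. This sketch shows the shape of it – the scene names are made up, and the real game had five or six winning paths rather than this toy version's one:

```python
# Illustrative game tree: each node offers a lit-bulb / dark-bulb
# choice, and leaves are endings (four in the real game).
TREE = {
    "start":  ("temple", "market"),
    "temple": ("ending_wisdom", "riddle"),
    "market": ("riddle", "ending_lost"),
    "riddle": ("ending_win", "ending_trap"),
}

def play(choices):
    """Walk the tree with a list of True (lit) / False (dark) choices."""
    node = "start"
    for lit in choices:
        node = TREE[node][0 if lit else 1]
        if node not in TREE:  # reached an ending
            break
    return node

print(play([True, False, True]))  # → 'ending_win'
```

A structure this small is trivial in any medium – which is exactly why PowerPoint could get away with it.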
The game was built in (horror) PowerPoint – mainly due to time constraints and the limitations of the second computer. If I did it again I would probably use a locally hosted HTML site (Alfie actually suggested this to me – great idea), but then I would have to find a better way of displaying it to the projector (virtual camera software plus Resolume, I think). As it was, PowerPoint 2008 was sufficient, as well as providing some wonderfully blocky/laggy transitions and retro sound effects. The choices were just invisible buttons that I was manually pressing in response to audience choices. The audience choices took place via a bunch of switches, wired to lightbulbs in a board (hence the choice between an illuminated or dark lightbulb in the options picture).
I don’t have a photo of the finished board – after a lot of late nights the photo documentation went out of the window a bit – I hope we’ll have some photos of it from the show when we get those back. With a little more budget I’d have liked to do this directly with an Arduino, so people could play the game without me counting the lightbulbs to click – just have it directly trigger the answer in the computer. I’d also like to make the game a little more complex, but it worked out perfectly for the amount of time we had planned, and the audience generally finished the game in about six or seven minutes.
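For what it's worth, the logic the Arduino would replace is just a majority count over the switch board. A sketch of that decision step (the tie-breaking rule here is my own assumption – the real rule would be up to whoever wires the board):

```python
# What I was doing by eye: counting lit lightbulbs and turning the
# tally into the audience's answer for the current binary choice.
def audience_choice(switch_states):
    """Majority vote over the switch board: True = lit = option A.
    Ties fall through to option A in this sketch."""
    lit = sum(switch_states)
    return "A" if lit >= len(switch_states) / 2 else "B"

print(audience_choice([True, True, False, True, False]))  # → 'A'
```

An Arduino reading the switches over its digital pins could run exactly this and send the result straight to the game, cutting out the human lightbulb-counter.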