Cutting video and sound is by far one of my favorite things to do. During three years working at an editing company, I learned to make decisions based on rhythm and on asking the question, “What is this shot about?” A 30-second brand spot boiled down to the frame – nickel-and-diming fractions of a second to maximize time – while a feature was about stepping back and driving the emotional arc. In either case, there were countless hours in the studio and email exchanges spanning months over the slightest decision made by the editor and director.
During one of our sleepless sessions, I googled “VR editing,” having no idea what that even meant. The first and only relevant search result was Jessica Brillhart’s Medium posts, which completely blew my mind. Her thoughts on making bets on the user’s gaze seemed like an extension of traditional editing techniques, but her first post’s claim – “We are the builders of worlds, the makers of storytellers. What an amazing concept” – pushed against everything I thought I knew about making content. She was attempting to formalize research in uncharted editing territory.
I went to Jessica’s talk at Tribeca Interactive the following month, and based on what I saw and learned there, I shot a 360 video on a trip to Israel that summer. I cut it up, experimenting with fade transitions, speed changes and reversals, jump cuts, a mashed-up soundscape, and weird workarounds to create shots that were flipped both vertically and horizontally (this was before skybox plugins). Here’s the result (skip the first uneventful minute or so):
After making this, I realized that spinning around in a chair with a Cardboard up to your face, for fear of missing the action, is probably a really annoying and intense way to take in content. I experimented further with shooting on a 180 camera instead and projecting the footage onto one half of a giant sphere helmet, as well as with having the user’s blinks trigger edits. I’d like to further explore the relationship between VR editing and blinking, especially since the theory came up in class that edits actually trigger blinks. Once headsets integrate gaze detection, it would be interesting to experiment with new ways of navigating content that way.
For now, I’d like to explore more of this 3D space between the spherical 360 layer and the headset display by bringing the footage into Unity and stepping into an avatar that can rhythmically move through multiple worlds. As an extension of asking what the shot is about, I want to ask, “What is this world about?”
Here’s a 360 hyperimage I pulled from the video above during class, served locally through Python into my browser:
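For anyone curious how the “locally through Python” part can work: a minimal sketch using only the standard library’s `http.server`, assuming the 360 image just needs to be served as a static file that a browser-based viewer then loads. The function name and port are my own placeholders, not anything from a class setup.

```python
import http.server
import socketserver
import threading


def serve_directory(port: int = 0) -> socketserver.TCPServer:
    """Serve the current directory over HTTP in a background thread.

    With port=0 the OS picks a free port; the chosen port is
    available afterwards as httpd.server_address[1].
    """
    handler = http.server.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(("", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd


if __name__ == "__main__":
    # Run from the folder containing the hyperimage, then open
    # http://localhost:8000/ in the browser.
    server = serve_directory(8000)
    print("Serving on http://localhost:8000/ (Ctrl+C to stop)")
    threading.Event().wait()  # keep the main thread alive
```

Anything fancier (an equirectangular viewer, WebVR) would sit on top of this as plain HTML/JS files in the same directory.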