Open Source Cinema


Concept / Background:

VR has so far largely been treated as an extension of film – a solitary experience offered up at exclusive festivals by acclaimed directors with a point of view. But by extending current platforms, such as gaming and social media, we could cultivate virtual communities that teleport to connect with one another and conjure up real-world tools, impossible spaces, and even bodies to inhabit.

I’ve been spending a lot of time in the hyperreal landscape of VRChat as an exploration of social VR. After a few hours there, I started feeling its effects on at least part of my conscious and subconscious mind.

I looked down at my legs at home one day and felt a bizarre, numb, disconnected sensation that these legs were not actually “real” and were somehow replaceable (or interchangeable with literally anything else). There were also dreams that felt like they took place in VR. I found some interesting threads from the last few months that seem to correlate:

If my VRChat experiences have already begun affecting how I relate to my own body, I wonder how continued use would influence our notions of identity, which could perhaps become more fluid or multidimensional. And by eliminating the barriers of “real space”, our virtual interactions could change the nature of our personal connections in the digital universe.

What I made:

I built a world called MaiSpace, my first virtual place and home, and uploaded it to VRChat, a current social VR platform. Using the VRChat SDK for Unity, I experimented with mechanics for navigating between three levels, including a moving chair and a teleportation portal. Each level reveals itself as a new layer, with the last being a giant sphere that encompasses all.

Here is a link to an edit of me running around by myself in my space (more documentation of social interactions in there to follow):


In an attempt to think through possible ways in which we might search through these worlds and their atmospheric elements, I went back to my initial idea that worlds could function as containers. I could imagine machine learning algorithms identifying objects, music search algorithms scraping through similar soundscapes, and certain triggers or functions structuring types of experiences, such as games or expeditions, “hang out” zones, mystery narratives, work areas, or other functional spaces. Looking through the built-in VRChat “Scene Descriptor” in Unity, there is a list of “Dynamic Materials” the program automatically generates, which already includes elements like 3D objects, materials, and sound. This would still rely on search terms made up of words, but perhaps advancements in object detection and AI could help categorize the contents of worlds more efficiently.
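As a thought experiment, the container idea could be sketched as a tiny inverted index: each world lists its contents (objects, materials, sounds), and a search term maps back to the worlds that contain it. The world data and field names below are invented for illustration – this is not VRChat’s actual data format.

```python
# Sketch: indexing worlds by their contents so they can be found by
# search term. All world data here is made up for illustration.

from collections import defaultdict

worlds = {
    "MaiSpace": {"objects": ["chair", "portal", "sphere"],
                 "materials": ["glass"], "sounds": ["ambient pad"]},
    "NightBar": {"objects": ["bar", "stool"],
                 "materials": ["neon"], "sounds": ["jazz"]},
}

def build_index(worlds):
    """Map each lowercase word in a world's content tags to the world names."""
    index = defaultdict(set)
    for name, contents in worlds.items():
        for tags in contents.values():
            for tag in tags:
                for word in tag.lower().split():
                    index[word].add(name)
    return index

def search(index, term):
    """Return a sorted list of worlds whose contents mention the term."""
    return sorted(index.get(term.lower(), set()))

index = build_index(worlds)
```

In a real system the tags wouldn’t be hand-written – they’d come from object detection or audio analysis – but the lookup step would look much like this.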

I also started playing around with the third-person “streaming camera” function that was just added in the latest update. It would be an amazing experiment to cut together a sort of experience film from my friends hanging out in there, with multiple cameras / perspectives to record from.

I’m looking forward to continuing down this path and creating more impossible spaces, and of course fixing this one up for public access. With most of my family overseas, my ultimate dream for VR has been to meet my mum for a drink on the beach in Tel Aviv, our place of refuge. Whether this happens through a crazy realtime Google Earth VR SDK / API / etc, a 360 volumetric webcam, or an abstract space on a social platform, it feels like I’m a headset and a PC laptop away from achieving at least a rough, elementary version of that.

mapping hyper worlds, realities, and evolving identities

It feels as though I’m taking a naïve approach to social VR, focusing on its potential for positive transformation and brushing past the fact that my experience there is heavily peppered with deplorable commentary aimed at my feminine voice and avatar, and that I was faced with a Russian-speaking Hitler and an army of Nazis just yesterday at a virtual bar.

But I’m hanging onto my mission to explore how we can shape our experience to enable more connection, shared stories, and common language for an inclusive global community.

I’ve been perusing VRChat Reddit posts during some desperate attempts to debug, and have been coming across some interesting themes.

How to look for and find worlds is a common thread, with people describing environments they want to share with others. Here’s one attempt to formalize this process using search terms: (spider man)

Another hugely prevalent situation is people building up the courage to voice themselves through the freedom of an anonymous community:


[Discussion] I just used VR chat to practice my trans voice. self.VRchat

“Hi! I’m transgender (male-to-female) but am not yet presenting as female in public. I’ve practiced my girl voice in private, but never with others around to hear it. Recently my group of gaming buddies decided to try out this program, mostly to explore memes and other dumb stuff. I picked out a cute anime girl avatar and, after hanging around silently while my friends explored the server we joined, I muted myself in the discord call to talk only in-game. From there I used my girl voice to talk to people for the rest of the night. I am not a social person, so I didn’t talk much, but it was a nice thing to try. Most people didn’t comment on my voice at all, though I got one “oh shit it’s a real girl” and one “it’s a trap” (which doesn’t bother me too much) so I’m not really sure how well I passed, but it was fun to try. 🙂

This probably won’t be the last time I do this. I imagine others have used this game to express themselves in ways they can’t in real life? Feel free to discuss!”


and also, finding common language:


“I’m french and I understand english very well but when it comes to having an oral conversation, it’s like I know 4 words of english. VRChat helped me practising my oral english and i’m a bit better so far! I’m very glad it helped you and don’t worry about the “it’s a trap” thing it’s mostly a meme I’ve seen a lot of people say it to real girls just for fun.”


I see a breakdown of identity happening, at least on the surface layer, and on a deeper layer, a discovery of identity and a search for community and acceptance. In the future, I don’t think we’ll cling to such a strong sense of “this is who I am”, but rather have a more fluid sense of our potential selves and our spaces, like this commercial about Las Vegas:

You don’t have to watch a movie to feel what it might be like to be the character; you can be the character if you desire. You don’t have to travel to the place; it can be brought into your space. I went to a talk this past week during which the VR/AR creator for the NYTimes brought up the complexities of bringing AR objects into an audience’s personal “safe” space as something to consider.


Elements of Representation

Some explorations from this week:

– VRChat SDK for Unity

– Twitch Users on VRChat

– Scanning myself into my World

I started looking into the VRChat Developer tools and downloaded the SDK here:

I loaded up an example scene for “actions” in VRChat to see what does what and found some great tools to get started with – many of which I recognized from other user-generated VRChat rooms I visited. I’m particularly interested in the “teleportPlayer” function and elements like “VideoSync” and “VRCPortalMaker”, which asks for arguments like “Room ID”, “Search Term”, and “Tag”. I’m not quite sure how the PortalMaker works yet, but it seems like an interesting system:

<still for actions>

<still for PortalMaker>

I also jumped into VRChat/Twitch land as username: “maizsakat” and here are some highlights of what I gathered (some deep convos about watching “Ready Player One” in VR, the convergence of “IRL” events and scheduling VRChat appearances, a procession of Mariachi cats traversing worlds together):

<still of Mariachi cats>

Twitch was essentially acting as a bridge between the VRChat world and my real-world living room screen, and I really enjoyed being able to feed back messages in realtime to the user/performer during their experience.

I started thinking about avatars as they relate to representation and/or anonymity, and thought it could be interesting to create a custom VRChat avatar from a real scan of myself. I used the Structure Sensor app to do a rough scan and imported it into my prototype world:

<still of Unity World with scan>

I’m excited to dig deeper into this process and hone in on what I’d like to make of these tools. It seems that there’s a turnaround time for VRChat to approve user-generated worlds to become public and searchable, but if possible, I’d like to attend the VRC developers meetup in VRC this Sunday, so I hope to find out more there.

Transforming Setting

Cutting video and sound is, by far, one of my favorite things to do. During three years working at an editing company, I learned to make decisions based on rhythm and on asking the question, “what is this shot about?” A 30-second brand spot boiled down to the frame – “nickel-and-diming” fractions of a second to maximize time – while a feature was about stepping back and driving the emotional arc. In either case, there were countless hours in the studio and email exchanges spanning months upon months over the slightest decision made by the editor and director.

During one of our sleepless sessions, I googled “VR editing”, having no idea what that even meant. The first and only relevant search result I got was Jessica Brillhart’s Medium posts, which completely blew my mind. Her thoughts on making bets on the user’s gaze seemed like an extension of traditional editing techniques, but her first post, stating, “We are the builders of worlds, the makers of storytellers. What an amazing concept”, pushed against everything I thought I knew about making content. She was attempting to formalize research in uncharted editing territory.

I went to Jessica’s talk at Tribeca Interactive the following month, and based on what I saw and learned about there, I shot a 360 video on a trip to Israel that summer.  I cut it up – experimenting with fade transitions, speed and reverse, jump cuts, a mashed up soundscape, and weird workarounds to create shots that were both vertically and horizontally flipped (pre skybox plugins). Here’s the result (skipping the first uneventful minute or so):

After having made this, I realized that spinning around in a chair for fear of missing out on the action, with a Cardboard up to your face, is probably a really annoying and intense way to take in content. I experimented further with shooting on a 180° camera instead and projecting the footage onto one half of a giant sphere helmet, as well as having the user’s blinking trigger edits. I’d like to further explore the relationship between VR editing and blinking, especially since the theory came up in class that edits actually trigger blinks. Once headsets are integrated with gaze detection, it would be interesting to experiment with new ways of navigating content in that way.
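The blink-triggered-edit idea can be sketched simply: given blink timestamps from an (assumed) gaze-tracking headset and a list of planned cut points, snap each cut to the nearest blink within a small tolerance, so the edit lands while the viewer’s eyes are closed. The function and its numbers are hypothetical, just to make the logic concrete.

```python
# Sketch: cutting on the viewer's blinks. Times are in seconds.
# Blink timestamps would come from a gaze-tracking headset (assumed).

def snap_cuts_to_blinks(cut_points, blinks, tolerance=0.15):
    """Move each planned cut to the nearest blink, if one is close enough."""
    snapped = []
    for cut in cut_points:
        nearest = min(blinks, key=lambda b: abs(b - cut), default=None)
        if nearest is not None and abs(nearest - cut) <= tolerance:
            snapped.append(nearest)  # cut lands during the blink
        else:
            snapped.append(cut)      # no blink nearby: keep the planned cut
    return snapped
```

A realtime version would invert this – waiting for a blink event before committing a queued edit – but the matching logic would be the same.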

For now, I’d like to explore more of this 3D space between the spherical 360 layer and the headset display by bringing this into Unity and stepping into an avatar that can rhythmically move through multiple worlds. As an extension of asking what the shot is about, I want to question, “what is this world about?”

Here’s a 360 hyperimage I pulled from the video above during class, running locally through Python and into my browser:
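For reference, a minimal version of that local setup might look like the following, using only the Python standard library. It assumes the hyperimage and a viewer page (e.g. an index.html that loads it into a 360 viewer) sit in the current directory; the port number is arbitrary.

```python
# Sketch: serve the current directory locally so a browser can load
# the 360 image and its viewer page. Standard library only.

import http.server
import socketserver

PORT = 8000  # arbitrary choice

def serve():
    """Serve the current directory at http://localhost:8000 (blocks until Ctrl-C)."""
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        print(f"Serving at http://localhost:{PORT}")
        httpd.serve_forever()

# serve()  # uncomment to run, then open http://localhost:8000 in a browser
```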


Social Cinema & Virtual Universes

As we move beyond the screen and into space, media might expand around some key features that VR affords us: embodiment, and teleportation through telepresence. There will be multiple universes of virtual worlds, and we’ll be able to tell and share a story in real time as we’re experiencing it. We could shape-shift to embody characters and jump through portals together via a giant map that navigates our virtual universe. We could conjure up screens and browsers, which would be like an AR layer within a virtual world. We could actualize our own virtual ideas in virtual reality, creating VR in VR, just as we produce strokes in Tiltbrush. This excerpt on “post-symbolic communication” in virtual reality comes to mind as a direction we might be looking towards.

In thinking about how media or cinema in the future could be unitized, I break it down into “worlds” that make up a system of parallel virtual universes, with the “camera” being the first-person view of an avatar. Worlds are containers for all elements, including physics, objects, skyboxes, avatars, animated sequences, AI systems, and portals. By embodying an avatar, we can create our own experiential narratives, unique to the way we live our virtual lives. Through portals, we can traverse space and go on “world-hopping” journeys that produce narratives. We can each summon virtual cameras that record whatever we want – everyone can potentially be the camera operator, and everyone else an actor.
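The worlds-as-containers idea could be sketched as a small data structure: each world holds its elements plus a set of labeled portals, and a “world-hopping” journey is simply a walk across that portal graph. All names and fields below are invented for illustration.

```python
# Sketch: worlds as containers linked by portals; a journey is a walk
# across the portal graph. Data is made up for illustration.

from dataclasses import dataclass, field

@dataclass
class World:
    name: str
    objects: list = field(default_factory=list)
    portals: dict = field(default_factory=dict)  # label -> destination world name

def world_hop(worlds, start, labels):
    """Follow a sequence of portal labels; return the names of worlds visited."""
    path = [start]
    current = worlds[start]
    for label in labels:
        destination = current.portals[label]
        path.append(destination)
        current = worlds[destination]
    return path

worlds = {
    "MaiSpace": World("MaiSpace", ["chair"], {"beach": "TelAvivBeach"}),
    "TelAvivBeach": World("TelAvivBeach", ["sea"], {"home": "MaiSpace"}),
}
```

The path returned by `world_hop` is, in a sense, the raw material of a narrative: the sequence of worlds an avatar passed through.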

The ability to record each other adds another dimension of experiential media that can be shared in VR or exported to the real world. We can livestream our “stories”, whether candid or performative, broadcasting to a network of “followers” just as we do on mobile social media platforms. This sharing of experiences, and their ability to feed back in realtime, both expands and closes the loop of experience. Today, Twitch is a platform that allows gamers to build a community converging around an individual gamer’s persona and experience of a game, thereby creating a “channel”.

VRChat is an available tool and platform that currently allows us to upload virtual worlds to a server for anyone (with the proper hardware) to access and experience. I want to experiment with uploading a cluster of public worlds within VRChat that are geo-spatially connected through portals, and allow anyone to visit and interact within them.