About maï

Concept / Background:

VR has so far largely been treated as an extension of film – a solitary experience offered up at exclusive festivals by acclaimed directors with a point of view. But by extending current platforms, such as gaming and social media, we could cultivate virtual communities that can teleport to connect with one another, and conjure up real-world tools, impossible spaces, and even bodies to inhabit.

I’ve been spending a lot of time in the hyperreal landscape of VRChat as an exploration of social VR. After a few hours there, I started feeling its effects on at least part of my conscious and subconscious mind.

I looked down at my legs at home one day and felt a bizarre, numb, disconnected sensation that these legs were not actually “real” and were somehow replaceable (or interchangeable for another pair of literally anything else). There were also dreams that felt like they took place in VR.  I found some interesting threads from the last few months that seem to correlate:


If my VRChat experiences have already begun affecting how I relate to my own body, I wonder how continued use would influence our notion of identity, which could perhaps become more fluid or multidimensional. And in eliminating the barriers of “real space”, our virtual interactions could change the nature of our personal connections in the digital universe.

What I made:

I built a world called MaiSpace, my first virtual place and home, and uploaded it to VRChat, a current social VR platform. Using the VRChat SDK for Unity, I experimented with mechanics for navigating between three levels, including a moving chair and a teleportation portal. Each level reveals itself as a new layer – with the last being a giant sphere that encompasses all.

Here is a link to an edit of me running around by myself in my space (more documentation of social interactions in there to follow):


In an attempt to think through possible ways in which we might search through these worlds and their atmospheric elements, I went back to my initial idea that worlds could function as containers. I could imagine machine learning algorithms identifying objects, music search algorithms scraping through similar soundscapes, and certain triggers or functions structuring types of experiences, such as games or expeditions, “hang out” zones, mystery narratives, work areas, or other functional spaces. Looking through the built-in VRChat “Scene Descriptor” in Unity, I found a list of “Dynamic Materials” the program automatically generates, which already includes elements like 3D objects, materials, and sound. This would still rely on search terms made up of words, but perhaps advancements in object detection and AI could help categorize the contents of worlds more efficiently.
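As a thought experiment, the container idea could be sketched as a simple keyword search over per-world metadata. Everything below – world names, fields, contents – is invented for illustration; it isn’t anything the VRChat SDK actually exposes:

```python
# Toy index of worlds and the elements they contain. Every name and
# field here is hypothetical, standing in for metadata that object
# detection or audio analysis might one day generate automatically.
worlds = [
    {"name": "MaiSpace",
     "objects": ["chair", "portal", "sphere"],
     "sounds": ["ambient hum"],
     "type": "hang out"},
    {"name": "Mystery Mansion",
     "objects": ["door", "candle"],
     "sounds": ["creaking floor"],
     "type": "mystery narrative"},
]

def search_worlds(term, worlds):
    """Return names of worlds whose objects, sounds, or type match the term."""
    term = term.lower()
    hits = []
    for w in worlds:
        haystack = w["objects"] + w["sounds"] + [w["type"]]
        if any(term in item.lower() for item in haystack):
            hits.append(w["name"])
    return hits

print(search_worlds("chair", worlds))    # → ['MaiSpace']
print(search_worlds("mystery", worlds))  # → ['Mystery Mansion']
```

Word-based search like this is exactly the limitation mentioned above; richer detection would just mean richer metadata feeding the same kind of index.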

I also started playing around with the third-person “streaming camera” function that was just added in the latest update. It would be an amazing experiment to cut together a sort of experiential film of my friends hanging out in there, recorded from multiple cameras and perspectives.

I’m looking forward to continuing down this path and creating more impossible spaces, and of course fixing this one up for public access. With most of my family overseas, my ultimate dream for VR has been to be able to meet my mum for a drink on the beach in Tel Aviv, our place of refuge. Whether this happens through a crazy realtime Google Earth VR SDK / API / etc or 360 volumetric webcam or abstract space on a social platform, it feels like I’m a headset and PC laptop away from achieving at least a rough, elementary version of that.

mapping hyper worlds, realities, and evolving identities

It feels as though I’m taking a naïve approach to social VR, focusing on its potential for positive transformation and brushing past the fact that my experience there is heavily peppered with deplorable commentary aimed at my feminine voice and avatar, and that I was faced with a Russian-speaking Hitler and an army of Nazis just yesterday at a virtual bar.

But I’m hanging onto my mission to explore how we can shape our experience to enable more connection, shared stories, and common language for an inclusive global community.

I’ve been perusing VRChat Reddit posts during some desperate attempts to debug, and have come across some interesting themes.

How to look for and find worlds is a common thread, with people describing environments they want to share with others. Here’s one attempt to formalize this process using search terms:

https://steamcommunity.com/app/438100/discussions/11/1693785035831241515/ (spider man)

Another hugely prevalent situation is people building up the courage to voice themselves through the freedom of an anonymous community:


[Discussion] I just used VR chat to practice my trans voice. self.VRchat

“Hi! I’m transgender (male-to-female) but am not yet presenting as female in public. I’ve practiced my girl voice in private, but never with others around to hear it. Recently my group of gaming buddies decided to try out this program, mostly to explore memes and other dumb stuff. I picked out a cute anime girl avatar and, after hanging around silently while my friends explored the server we joined, I muted myself in the discord call to talk only in-game. From there I used my girl voice to talk to people for the rest of the night. I am not a social person, so I didn’t talk much, but it was a nice thing to try. Most people didn’t comment on my voice at all, though I got one “oh shit it’s a real girl” and one “it’s a trap” (which doesn’t bother me too much) so I’m not really sure how well I passed, but it was fun to try. 🙂

This probably won’t be the last time I do this. I imagine others have used this game to express themselves in ways they can’t in real life? Feel free to discuss!”


and also, finding common language:


“I’m french and I understand english very well but when it comes to having an oral conversation, it’s like I know 4 words of english. VRChat helped me practising my oral english and i’m a bit better so far! I’m very glad it helped you and don’t worry about the “it’s a trap” thing it’s mostly a meme I’ve seen a lot of people say it to real girls just for fun.”


I see a breakdown of identity happening, at least on the surface layer, and on a deeper layer, a discovery of identity and a search for community and acceptance. In the future, I don’t think we’ll cling to such a strong sense of “this is who I am”, but rather have a more fluid sense of our potential selves and our spaces, like this commercial about Las Vegas:

You don’t have to watch a movie to feel what it might be like to be the character, you can be the character if you desire. You don’t have to travel to the place, it can be brought into your space. I went to a talk this past week during which the VR/AR creator for the NYTimes brought up the complexities of bringing AR objects into an audience’s personal “safe” space as something to consider.


Elements of Representation

Some explorations from this week:

– VRChat SDK for Unity

– Twitch Users on VRChat

– Scanning myself into my World

I started looking into the VRChat Developer tools and downloaded the SDK here: https://docs.vrchat.com/docs

I loaded up an example scene for “actions” in VRChat to see what does what, and found some great tools to get started with, many of which I recognized from other user-generated VRChat rooms I visited. I’m particularly interested in the “teleportPlayer” function and elements like “VideoSync” and “VRCPortalMaker”, which asks for arguments like “Room ID”, “Search Term”, and “Tag”. I’m not quite sure how the PortalMaker works yet, but it seems like an interesting system:

<still for actions>

<still for PortalMaker>

I also jumped into VRChat/Twitch land as username: “maizsakat” and here are some highlights of what I gathered (some deep convos about watching “Ready Player One” in VR, the convergence of “IRL” events and scheduling VRChat appearances, a procession of Mariachi cats traversing worlds together):

<still of Mariachi cats>

Twitch was essentially acting as a bridge between the VRChat world and my real-world living room screen, and I really enjoyed being able to feed back messages in real time to the user/performer during their experience.

I started thinking about avatars as they relate to representation and/or anonymity, and thought it could be interesting to try to create a custom VRChat avatar from a real scan of myself. I used the Structure Sensor app to do a rough scan, and imported it into my prototype world:

<still of Unity World with scan>

I’m excited to dig deeper into this process and home in on what I’d like to make of these tools. It seems that there’s a turnaround time for VRChat to approve user-generated worlds to become public and searchable, but if possible, I’d like to attend the VRC developers meetup in VRC this Sunday, so I hope to find out more there.

Transforming Setting

Cutting video and sound is, by far, one of my favorite things to do. During three years working at an editing company, I learned to make decisions based on rhythm, asking the question, “what is this shot about?” A 30-second brand spot boiled down to the frame, “nickel-and-diming” fractions of a second to maximize time, while a feature was about stepping back and driving the emotional arc. In either case, there were countless hours in the studio and email exchanges spanning months over the slightest decision made by the editor and director.

During one of our sleepless sessions, I googled “VR editing”, having no idea what that even meant. The first and only relevant search result was Jessica Brillhart’s Medium posts, which completely blew my mind. Her thoughts on making bets on the user’s gaze seemed like an extension of traditional editing techniques, but her first post, stating, “We are the builders of worlds, the makers of storytellers. What an amazing concept”, pushed against everything I thought I knew about making content. She was attempting to formalize research in uncharted editing territory.

I went to Jessica’s talk at Tribeca Interactive the following month, and based on what I saw and learned about there, I shot a 360 video on a trip to Israel that summer.  I cut it up – experimenting with fade transitions, speed and reverse, jump cuts, a mashed up soundscape, and weird workarounds to create shots that were both vertically and horizontally flipped (pre skybox plugins). Here’s the result (skipping the first uneventful minute or so):

After having made this, I realized that spinning around in a chair for fear of missing out on the action, with a Cardboard up to your face, is probably a really annoying and intense way to take in content.  I experimented further with shooting with a 180 camera instead and projecting it on one half of a giant sphere helmet, as well as having the user’s blinking trigger edits.  I’d like to further explore the relationship between VR editing and blinking, especially since the theory came up in class that edits actually trigger blinks. When the headsets are integrated with gaze detection, it would be interesting to experiment with new ways of navigating content in that way.
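The blink-triggered idea could be prototyped offline by snapping each planned cut point to the viewer’s nearest recorded blink, so the edit lands while the eyes are closed. This is a minimal sketch; all timestamps are invented for illustration:

```python
# Snap planned cut points to the nearest recorded blink, so each cut
# happens while the viewer's eyes are closed. Times are in seconds
# and purely illustrative.
def snap_cuts_to_blinks(planned_cuts, blinks):
    """For each planned cut time, return the nearest blink time."""
    return [min(blinks, key=lambda b: abs(b - cut)) for cut in planned_cuts]

print(snap_cuts_to_blinks([5.0, 12.3], [4.8, 9.1, 12.5]))  # → [4.8, 12.5]
```

With real gaze/blink detection in the headset, the same logic could run live instead of against a recorded list.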

For now, I’d like to explore more of this 3D space between the spherical 360 layer and the headset display by bringing this into Unity and stepping into an avatar that can rhythmically move through multiple worlds. As an extension of asking what the shot is about, I want to question, “what is this world about?”

Here’s a 360 hyperimage I pulled from the video above during class, running locally through Python into my browser:
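For reference, the local setup can be as simple as Python’s built-in static file server; the image filename below is hypothetical:

```python
# Serve the directory containing the 360 still so the browser can
# load it locally, e.g. at http://localhost:8000/hyperimage.jpg
# ("hyperimage.jpg" is a hypothetical filename).
import http.server
import socketserver

PORT = 8000

def serve(port=PORT):
    """Serve the current directory over HTTP until interrupted."""
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", port), handler) as httpd:
        print(f"Serving at http://localhost:{port}/")
        httpd.serve_forever()

if __name__ == "__main__":
    serve()
```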


Social Cinema & Virtual Universes

As we move beyond the screen and into space, media might expand around some key features that VR affords us – embodiment and teleportation through telepresence.  There will be multiple universes of virtual worlds, and we’ll be able to tell and share a story in real-time as we’re experiencing it.  We could shape-shift to embody characters and jump through portals together through a giant map that navigates our virtual universe.  We could conjure up screens and browsers, which would be like an AR layer within a virtual world. We could actualize our own virtual ideas in virtual reality, creating VR in VR, just as we produce strokes in Tiltbrush.  This excerpt on “Post-symbolic communication” in Virtual Reality comes to mind as a direction that we might be looking towards.

In thinking about how media or cinema in the future could be unitized, I break it down to “worlds” that make up a system of parallel virtual universes, with the “camera” being the first-person view of an avatar. Worlds are a container for all elements, including physics, objects, skybox, avatars, animated sequences, AI systems, and portals. By embodying an avatar, we can create our own experiential narratives, unique to the way we live our virtual lives. Through portals, we can traverse space and go on “world-hopping” journeys that produce narratives. We can each summon virtual cameras that enable us to record whatever we want: everyone can potentially be the camera operator and everyone else an actor.
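The “worlds as containers linked by portals” framing above could be sketched as a toy data model; this is purely illustrative, with invented names, and not how VRChat actually represents worlds:

```python
# Toy model: worlds contain their elements, and portals link worlds
# into a traversable universe. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Portal:
    destination: str  # name of the world this portal leads to

@dataclass
class World:
    name: str
    objects: list = field(default_factory=list)
    avatars: list = field(default_factory=list)
    portals: list = field(default_factory=list)

def world_hop(start, universe, max_steps):
    """Follow the first portal in each world, recording the journey."""
    journey = [start]
    current = universe[start]
    for _ in range(max_steps):
        if not current.portals:
            break
        nxt = current.portals[0].destination
        journey.append(nxt)
        current = universe[nxt]
    return journey

universe = {
    "MaiSpace": World("MaiSpace", portals=[Portal("Skybox Bar")]),
    "Skybox Bar": World("Skybox Bar"),
}
print(world_hop("MaiSpace", universe, 3))  # → ['MaiSpace', 'Skybox Bar']
```

The journey list is the narrative artifact here: a world-hopping route is itself a recordable, shareable story.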

The ability to record each other adds another dimension of experiential media that can be shared in VR or exported to the real world. We can livestream our “stories”, whether candid or performative, broadcasting to a network of “followers” just as we do on mobile social media platforms. This sharing of experiences, and the realtime feedback it enables, both expands and closes the loop of experience. Today, Twitch is a platform that allows gamers to build a community that converges around an individual gamer’s persona and experience of a game, thereby creating a “channel”.

VRChat is an available tool and platform that currently allows us to upload virtual worlds to a server, for anyone (with the proper hardware) to access and experience. I want to experiment with uploading a cluster of public worlds that are geo-spatially connected through portals within VRChat, and allow anyone to visit and interact within them.

Video Book

For my final, I deconstructed an old project from exactly 10 years ago to create a video album of a documentary that never was. I wanted to capture the feeling of being in college at that time: the music we listened to with the aux cord in the car, our flip phones, the quality of the video, all evoking the nostalgic atmosphere of a bunch of 19-year-olds having fun, pre-social media.

Over the break I plan on getting these photos printed and putting them in a real photo album. Here it is so far:

Video Album

For my final project I would like to expand on one of my projects from the past few weeks.

One of the ideas I had was to continue experimenting with playing videos from still photos, as an exploration of storing content and memories in physical spaces rather than in digital folders on a desktop. I’d like to continue de-archiving the hard drive I found with footage from 10 years ago to create a physical photo album that houses all these memories in a nonlinear way (or perhaps somewhat linear, since the pages of a photo album are usually turned from left to right). I would like to finesse the way the video starts playing on the actual developed photos (fading up, mapping accurately, etc.), and perhaps use virtual buttons to begin playback or sound.

I’d ultimately like this to be something like an interactive documentary, the adventure of a few wide-eyed 18-year-old college kids traveling down to this forgotten town of Centralia in Pennsylvania in 2007, shooting social media style “stories” on a camcorder with our Magnetic Fields and Beirut songs playing throughout our car rides. I believe there are only 7 inhabitants left in the town today, but there was also a time capsule that the town buried in 1966 that was uncovered after we shot all this footage in 2014, so in a way I think the format of this video content could mimic the idea of preserving lost memories of this town in physical artifacts.

Another separate idea I wanted to expand on is the 360 video orb. I’d ideally like to have the orb floating in space so that when the user touches it, it expands to a larger semi-translucent 360 video sphere that encompasses the user’s immediate area. For this next week though, this might be a bit too tricky: in my previous experiments trying to do this with Vuforia, the tracking kept breaking, and between my personal device and what’s available at the ER, I don’t have access to a device that can run ARKit. I would also ideally prefer to build this for the HoloLens, since the effect of looking around and feeling immersed in this sphere would be much more effective without holding up a screen on a device to view it.

Midterm in progress

I started collaborating with Terrick for our midterm, and so far we have a general geometrical sketch of our world laid out. We really want to work with videos and make an on-rails experience that slowly travels through a video corridor alongside a cat. At the end of the corridor, we want to push past a 360 video sphere into an actual skybox.

Of The Four Types of Stories in VR, this experience would be Ghost without Impact, as the user is led through this hallway with the ability to look around, but without any local or global agency.

In hindsight, we were likely inspired by Passage.

Mood Board

This week I started exploring different tools in Tiltbrush to see if I could create dream-like structures to be able to walk through. I like the aesthetic of warm colors that pop and lights emitted in strange and captivating spaces; I’m very inspired by the works of artists such as Pierre Huyghe and Philippe Parreno for their use of lighting, Pipilotti Rist for color and video textures, and Ryan Trecartin for general experimental narratives.

I want to explore what editing in VR means by using a soundtrack as a base off which to create events, or perhaps revealing parts of the world to guide the experience. The goal would be to create an environment that doesn’t make much sense linearly, but evokes a chaotic hum with a playground feel, with objects and structures that emit nostalgic content.