Keeping in line with my video theme, I wanted to start exploring 360 video in AR to try to evoke a sensation of being elsewhere. I had some 360 footage I shot in the desert of Israel, and I used it as the texture on a sphere with inverted normals in Unity, with some opacity so the real-world environment shows through it.
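In case it's useful to anyone trying this, the inside-out sphere doesn't need a custom model. Here's a rough sketch (not my exact script; the component name and setup are just illustrative) of flipping a sphere mesh at runtime: negate the normals and reverse each triangle's winding order so Unity renders the inner faces, where the 360 video texture ends up visible.

```csharp
using UnityEngine;

// Hypothetical sketch: turn a sphere inside out so a 360 video
// mapped onto it is visible from the center.
[RequireComponent(typeof(MeshFilter))]
public class InvertSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point every normal inward.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse winding order so the inner faces aren't backface-culled.
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            int tmp = tris[i];
            tris[i] = tris[i + 1];
            tris[i + 1] = tmp;
        }
        mesh.triangles = tris;
    }
}
```

Attach this to the default Unity sphere with the video material on it, and the footage plays on the inside surface.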
I initially wanted to make it a giant 360 video that surrounds the viewer by using the ground plane, but apparently my iPhone 6 doesn’t support “SmartTerrain”. So I kept my image target, but I’m glad I was pushed to explore virtual buttons: adding a rotating animation to the sphere made the experience better, since the viewer can “spin” the video a full 360 degrees around the Y axis to see the whole environment without having to move all around it. I tried to apply extended tracking to a giant sphere that surrounds the viewer, but it kept breaking, so the end result is a scaled-down version, which has a different effect, kind of like a snow globe or crystal ball.
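For reference, here's roughly how a Vuforia virtual button can drive that spin. This is a hedged sketch, not my actual setup (I used an animation): it assumes the `IVirtualButtonEventHandler` interface from the Vuforia version I was working with, whose exact callback signatures vary between releases, and the names `sphere` and `degreesPerSecond` are just illustrative.

```csharp
using UnityEngine;
using Vuforia;

// Hypothetical sketch: spin the video sphere around the Y axis
// while a Vuforia virtual button on the image target is pressed.
public class SpinOnVirtualButton : MonoBehaviour, IVirtualButtonEventHandler
{
    public Transform sphere;             // the 360 video sphere, set in the Inspector
    public float degreesPerSecond = 30f;

    private bool spinning;

    void Start()
    {
        // Register for press/release events from the child virtual button.
        // (API names vary by Vuforia version.)
        GetComponentInChildren<VirtualButtonBehaviour>().RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)  { spinning = true; }
    public void OnButtonReleased(VirtualButtonBehaviour vb) { spinning = false; }

    void Update()
    {
        if (spinning)
            sphere.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f, Space.World);
    }
}
```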
Ultimately I would love to try to have a live 360 video feed shown in this way, so the viewer can feel more connected in realtime to another location.
Augmentable Object Fantasy
The biggest obstacle I encounter every day when I’m trying to get out the door is getting dressed. This has been a problem in my life ever since I can remember, and so if I could augment any object, it would be an entire closet system that has all my clothing items and accessories scanned and integrated with an AR mirror. I realize this isn’t the most original idea; there are probably many companies developing this already, but I first came across the idea in “Clueless”, and have been dreaming about it ever since.
To expand on this idea, I think it would be amazing to have an augmentable wardrobe: clothing items that fully allow you to express yourself in AR. People could follow or subscribe to your “AR channel” and see your crazy outfit, hairstyle, or anything really.
I expanded on my world from last week, tweaking some of my design choices and adding interactive sound. I changed the material of the floor and walls to be all glass instead, as I found the Ocean material to be a bit too dark and distracting. I wanted to make different sound tracks start playing throughout different locations in the scene, so I added sound cues to the three different thresholds of the doors. I made sure the First Person pawn starts off facing these doors, which are aligned and meant to draw the user in, straight ahead. I personally don’t love when an experience starts off with crucial elements being in the “curiosity zone” (behind the user), so I also placed the pawn pretty far back against the front facing wall. Here’s my walkthrough of the scene!
I’ve been really interested in integrating video content into this spectrum of extended reality, so I decided to use still images that would come to life in AR to play a video recording I took almost exactly 10 years ago – to imbue memories into physical photographs as a sort of time capsule. At the time, I was a sophomore undergraduate student at NYU, and I went with a group of my friends on a road trip around Pennsylvania for spring break. I had a video art assignment I had to fulfill, and while I didn’t end up finishing the project, I was left with a hard drive I uncovered recently, which had a wealth of these recordings that feel like a mix between homemade videos and snapchat stories.
There are two image targets; one is a still of a title sequence, which explains the narrative a bit:
“Centralia: An ancient municipality located in the outskirts of society in rural Pennsylvania. Population: 9 due to coal mining fire circa 1962. The remaining 991 residents (approx.) are expected to return in 2016 to open a buried time capsule.”
The other image target is a rough trailer I had made at the time. The audio didn’t come through on this screen recording (I’ll have to re-upload it), but it’s meant to be a dream-like glimpse into this time and space.
I linked the video player within Unity to each of the image targets using simple planes. Initially, the videos started playing automatically as soon as I hit play, so it was a bit tricky to figure out how to make the video start playing only when the image target was detected in order to create the illusion that the photo became animated. I still need to work on making it stop playing when the image target is no longer detected.
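The play-on-detect part came down to hooking into Vuforia's trackable events instead of letting the `VideoPlayer` auto-play. Below is a rough sketch rather than my exact script (the component name is made up, and the `Status` values are from the Vuforia version of that era): a component on the image target that starts the video when tracking is found, and pauses it when the target is lost, which is also where the stop behavior I still need to work on would go.

```csharp
using UnityEngine;
using UnityEngine.Video;
using Vuforia;

// Hypothetical sketch: play the video on the plane only while
// the Vuforia image target is actually being tracked.
public class PlayVideoOnTarget : MonoBehaviour, ITrackableEventHandler
{
    public VideoPlayer videoPlayer;   // the VideoPlayer on the plane, set in the Inspector

    private TrackableBehaviour trackable;

    void Start()
    {
        // Make sure the video doesn't auto-play before the target is found.
        videoPlayer.playOnAwake = false;

        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED
                  || newStatus == TrackableBehaviour.Status.TRACKED
                  || newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (found)
            videoPlayer.Play();    // photo "comes alive" when the target appears
        else
            videoPlayer.Pause();   // or Stop(), to rewind when the target is lost
    }
}
```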
I definitely want to keep going with this and incorporate all this footage into a larger mixed reality project, perhaps to explore relationships with time, memories, and archival content within extended reality.
Here’s a few stills from my floating Unreal nightclub!
Three women dance in absolute sync to the Donna Summer track, “I Feel Love” on a water dance floor under a series of doorways, while fireworks go off into the water ceiling and floating windows illuminate different colors through the thick fog.
Between the Starter Content and Example Content packages, I had a lot to play around with, and the only thing I made was my hip-hop dancing lady in Mixamo/Fuse. This was the first time I worked with particle systems, and I adjusted the smoke one to give it a more foggy effect, which helped bring out the lights more.
The Lab was extremely immersive physically, to the point where certain characters made me step back a bit, and one popped out in a way that was too scary and real. The bow and arrow game was great with the haptic feedback, and I liked how the experience played with scale.
VRChat!!! completely blew my mind. Unlike The Lab, it was more emotionally immersive; it’s a game-changer for me. I’ve always wanted to try social VR and have been really interested in its future applications, and the first time trying it was beyond everything I had imagined and hoped it’d be. My sense of embodiment wasn’t perfect, but good enough to feel present and lose track of time in this vortex. The subject of anonymity is particularly interesting, as it’s not actually truly anonymous with its use of real mics (although I’m sure users could blend their voices). It was jarring and somewhat terrifying at the beginning, but it started to feel somehow empowering in a strange way: encountering trolls and knowing that they could say whatever they wanted without being able to actually see me or physically affect me. I hear and read a lot about the potential harm and detrimental dynamics of social VR harassment, and yet I came out of the experience feeling more in control somehow. And then there were users who were actually great to talk to and explore other worlds with. Even in my memory, these experiences and encounters feel very real, and whether that’s something scary or exciting, it’s clear after trying VRChat that very soon, these virtual spaces will be populated with exponentially more users.
I want to be up here with you when it’s pink and it’s orange and cooling from the sinking sun. It’s too chilly to know but the days will be long and the night will be warm and the ice cream truck somewhere around with that song. I won’t tell you to look at the stars because the Chrysler’s brighter, the Empire State, and FDR drive. The first thing I remember, I was up there on that really tall building, couldn’t fall asleep, looking out this way to the Queens sky. The sun came up right around here, but it probably moved further East by now. I still can’t sleep, but out here barely mind January winds. My toes are cold but I barely mind knowing that May is so soon, and the rosé’s really too sweet because it was cheap.
I had never been that close to free, alone up here, an island of fireworks, between that job and the next. But still, being up here with you, it would be magic and cute, we can get drunk we can dance and fall asleep. The little speaker going and the blanket laid out, I know there’s Shelter Island out there, but this is ours. And it’ll be gone by the time it’s overrun.
For this first assignment, I didn’t deviate too far from this course’s title: MAGIC WINDOWS
I thought it would be interesting to try to alter the perception of space to teleport from within the inside of a room. I tried changing the scenery of the outside world by projecting a live video feed from Tokyo onto the inside of a window at my friend’s place in the East Village. After spending all of January hanging around Shibuya, this is the place I wanted to be teleported to the most, so I found this live webcam feed of Shibuya crossing:
and taped white paper onto the glass to project onto:
After tweaking around in MadMapper for a bit:
I also thought it would be cool to project the actual street view in the East Village back onto the window, so you could still get the real view of the street while having the window fully covered up and private. I hooked up a webcam to look onto the street corner, and since the night scene matched the lighting on the inside, it looked kind of uncanny; there was some pixelation because I had to blow up the video feed in MadMapper to fit the perspective, but I rather like the effect:
In thinking about integrating these two ideas and about possible futures of mixed reality, I started mocking up what it might look like to have someone call from overseas and have the room transform to that place’s time and atmosphere. Though the clip below doesn’t fully illustrate it, the idea is that when my mum in Tokyo calls, the window would change to the Shibuya crossing scene to heighten the sense of connection – a magic window 🙂
Finally, for a GIF that best fits me, here’s a representation of my beloved cats Uni & Schnitzel in my bed:
So happy I finally know how to set up the Oculus on my own! This is something I’ve seen done many times but had been too intimidated to try, so it feels like a breakthrough! The Rift was surprisingly easy to set up with Haiyi from class; we never got the remote to work, but the controllers were working fine. At one point, the MSI restarted without warning, and when we booted everything back up, I realized there was no way to skip ahead through the experience to get back to the crash point, which would’ve been useful.
I tried a few experiences including “Dear Angelica”, “Henry”, and Google Earth VR, which were all very different stylistically. The storyline in “Dear Angelica” was so heartfelt, with a subject matter that’s highly personal and relatable at the same time. The aesthetic reminds me of Tilt Brush strokes, and it was nice to float in space among words. The elements would also fade away if I got too close, which felt really elegant and natural. The tone reminds me of “Notes On Blindness”, both in its narration and illumination of certain parts of the scenes. “Henry”, on the other hand, felt a bit more like a kid’s film adapted into a VR movie, and I didn’t leave the experience feeling like I fully understood why I was there. Google Earth VR seems like a natural extension of the browser version: soaring through the tops of buildings was really cool, and the point of view while floating felt hyperreal or game engine-like, especially with the sound effects. Scanning the Earth is conceptually a crazy idea, and beyond its entertainment value I’m sure there are a ton of practical applications for it. For me though, it holds up as an experience on its own, and the ability to click through to different cities felt like a narrative in its own way.
I read somewhere that VR experiences are stored and last in your memory in the same way that real memories are, and whether or not that’s true, “Dear Angelica” and Google Earth VR are definitely lasting in my mind in a visceral way. Overall, it was really nice to see a range of experiences, and I feel so much more excited about trying out more, now that setting it up isn’t such a scary thought!
For the Unreal exercise, I wanted to have a woman dancing in a nightclub, so I made my character in Mixamo and exported her T-pose and two different animations – ninja idle, and hip-hop dancing.
I ended up spending a ton of time building the scene itself and learning how the materials and lighting work together:
Everything was working fine until I tried importing my character and changing its material settings in Blueprint, at which point Unreal kept crashing while compiling the changes. Lowering the Engine Scalability settings seemed to get me further, and I was able to load the two FBX animations into the Sequencer, with a crossfade between them to make the transition smoother.
I definitely want to delve further into Unreal Engine since this Blueprint format seems rather approachable. I didn’t get to try scanning with the iPad, but I would like to explore that, as well as scanning objects into Unreal and making them move using keyframes in the Sequencer.
For our After Effects project, Nico and I decided to combine our passions for casinos and cats to create a narrative around the question of what my dear cats Uni & Schnitzel are up to while I’m at ITP all day and night. The cats have a cardboard box that I often find them hanging out in when I get home, so we were inspired to use the box as a portal to a parallel universe where the cats are expert gamblers in Las Vegas.
We initially wanted the cats to win a bunch of money and bring other animals back to my room for a giant party, but after having shot a bunch of assets of the cats, we realized animating a ton of animals might be beyond our capacity. We decided to have the cats come back to my room as lions instead – this ended up being such a happy accident as a solution to a problem; I think it works so much better narratively for the cats to come back as lions.
We added a track by Empire of the Sun called “Walking on a Dream”, and it synced up perfectly with the mood we were going for.
I’m really happy we got to fully delve into After Effects, and that we decided to take on the more challenging route of animating real animals rather than use standard animation-ready assets. It definitely ended up being a lot more work in the end, so perhaps what we ended up with doesn’t feel like it reflects all the days and hours we put into it, but as a result, I feel very comfortable using basic After Effects, and pursuing its tools further.
I love our piece so much and so do my friends! Here it is below: