Live Web

Live Web midterm idea

I want to explore concepts we touched on with the Citizen app, where anyone can stream out their perspective and essentially become a camera. To take it a step further, users following someone else’s stream would be able to affect the streamer’s environment by sending a message, ideally voiced aloud in a localized spot within the virtual environment. To do this, I plan to stream a video feed of a VR headset out over Twitch, and use the Twitch API with Socket.io for Unity. The environment will be a simple space with a mirror in it, and the streaming user in the headset (most likely myself) will embody an avatar, seen through the mirror.

key things I’m going for:

  • a sense of virtual copresence through voice
  • share someone’s virtual perspective synchronously, and be able to affect that environment via text-to-speech
  • explore any behavioral shifts that might occur from having chat texts voiced out loud

voice chat

Below is a link to my rather basic voice chat running on Node.js on DigitalOcean. I took the example chat application and added speech synthesis with the Web Speech API to make the text input “talk”. I wanted to make the voice change randomly, but sadly couldn’t get it to work!

http://104.248.52.112:8080/index.html
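For what it’s worth, the random voice probably failed because of a well-known Web Speech API gotcha: `speechSynthesis.getVoices()` returns an empty array until the browser has loaded its voices and fired the asynchronous `voiceschanged` event. A minimal sketch of picking a random voice after that event (the utterance text and wiring here are illustrative, not taken from my chat app):

```javascript
// Pure helper: pick a random entry from a voices array (null if empty).
function pickRandomVoice(voices) {
  if (!voices || voices.length === 0) return null;
  return voices[Math.floor(Math.random() * voices.length)];
}

// Browser-only wiring, guarded so the helper is reusable elsewhere.
// Voices load asynchronously, so read them only after voiceschanged fires.
if (typeof window !== 'undefined' && 'speechSynthesis' in window) {
  window.speechSynthesis.addEventListener('voiceschanged', () => {
    const voice = pickRandomVoice(window.speechSynthesis.getVoices());
    const utterance = new SpeechSynthesisUtterance('hello from chat'); // placeholder text
    if (voice) utterance.voice = voice;
    window.speechSynthesis.speak(utterance);
  });
}
```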

Citizen app and a self-portrait

portrait of a DSPS (delayed sleep phase syndrome) growl:

http://p.maispace.space/testfbx/webgl_loader_fbx.html

I downloaded and tried out an app called Citizen, a synchronous platform that shows its users real-time crime and emergency activity in any given neighborhood. Users can either search for incidents near their current location, or look up a neighborhood for a list of recent or even “trending” activity. The incidents seem to be generated from police reports and 911 calls; on their site they write, “Citizen monitors a variety of public data sources using proprietary technology, allowing us to provide real-time alerts for crime and other emergency incidents”.

There are a few interesting features on the app, including a chat for each incident, as well as video streaming capabilities. There are chats both for specific incidents, which can add a level of user input and information (or judgmental commentary), and for neighborhoods in general, which can create a stronger sense of community (or animosity). The location-based video function feels similar to Snap Map, where Snapchat stories with geotags are visually represented on an actual map; some of my friends have found practical uses for this while researching upcoming vacations.

A few months back, I had the thought that sometime in the future we might be able to switch our perspectives between different people’s cameras, once everyone wears a device that can capture 360 video and feed it into our eyes. I was sitting on a fire escape watching a row of fire trucks turn the corner and disappear, but I could hear them stop just up the street. It would be incredibly convenient, but also terrifying, to be able to tune into the perspective of some pedestrian up the street in order to know what’s going on.

Anyhow, the video function on this emergency-information app definitely seems like a step toward user-generated live media with a focus on our real physical community and surroundings, which is cool. I also like that the live video is used more as an outward-facing tool, rather than like the oftentimes insular Instagram stories, which rely heavily on the front-facing camera.

screengrabs here!