mai scanning

Despite multiple attempts, this was the best scan I was able to get. The Structure Sensor kept having issues around the second turn at the back of my right leg, so parts of the arm and leg are textured with the wooden floor. Unfortunately, the ER didn't have any LED lights available at the time, which I'm sure would've improved the results. Next time!

voice chat

Below is a link to my rather basic voice chat running on Node.js on DigitalOcean. I took the example chat application and added speech synthesis with the Web Speech API to make the text input "talk". I wanted to make the voice change randomly, but sadly couldn't get it to work!

http://104.248.52.112:8080/index.html
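For reference, the speaking part only takes a few lines on the client. Here's a minimal sketch, assuming the stock Socket.IO example chat app (the 'chat message' event name comes from that example):

// Client side: when a chat message arrives, hand it to the
// Web Speech API so the browser reads it aloud.
const socket = io();

socket.on('chat message', (msg) => {
  const utterance = new SpeechSynthesisUtterance(msg);
  window.speechSynthesis.speak(utterance);
});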

Hello chat

For my first assignment I tried to make a speaking chat thread, where users could type in a message and the text would speak for itself. I wanted to create a hypothetical space, or voice, for The Difference, where two unidentified text producers (human or machine) could communicate in real time to reveal a narrative. Though this particular work would be antithetical to the author's intent, I thought it could be interesting to create the environment all the same, with the voice constantly changing.

I tried to have each chat entry voiced in a randomized voice, so as not to assign any particular voice or characteristics, but got stuck in the process. One of the issues: when I tried to store the array returned by window.speechSynthesis.getVoices() in a variable, nothing would come back.
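My best guess at the culprit: Chrome populates the voice list asynchronously, so calling getVoices() on page load returns an empty array, and the list only fills in once the voiceschanged event fires. A sketch of the workaround, with a randomized voice per message (speakRandom is just a name I made up):

// getVoices() returns [] until the browser has finished loading its
// voice list, so cache it whenever the voiceschanged event fires.
let voices = [];

function loadVoices() {
  voices = window.speechSynthesis.getVoices();
}

loadVoices(); // some browsers (e.g. Firefox) populate the list right away
window.speechSynthesis.onvoiceschanged = loadVoices;

// Speak a chat entry in a voice picked at random from whatever loaded.
function speakRandom(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  if (voices.length > 0) {
    utterance.voice = voices[Math.floor(Math.random() * voices.length)];
  }
  window.speechSynthesis.speak(utterance);
}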

Self Portrait Avatars

Making my avatar with Fuse was an emotionally taxing process. First off, I started over multiple times because I was indecisive about whether to go for one of the more realistic models or one of the animated ones. I finally settled on an animated one, and ended up spending far too much time trying to get the facial features right, only to realize I could actually rotate the avatar and look at it from the side, which, of course, looked crazy.

After trying to fix the face from all angles, I realized I was never going to get the eyes right, or the nose, or the lips for that matter. I thought about going for a more abstract representation, but it was past the point of no return. In hindsight, I wish I had gone with the more realistic model to see if it could produce a more accurate likeness. As a cop-out, I changed my skin color, hair, and body metalness (which I realized much later in the process that I could change) to push the avatar from eerie over to scary.

So here's my Fuse avatar, which sort of vaguely resembles me, but not quite:

My Bitmoji, on the other hand, was much easier. Having preset options, as opposed to a seemingly infinite combination of sliders, was much simpler. And in the end, I think the Bitmoji looks much more like me, perhaps because it's so abstracted:

discussion thoughts based on readings:

  • What happens when someone's preferred avatar is a representation that is traumatizing for someone else (e.g. a Hitler avatar)? And if that representation is illegal in certain countries, how do we regulate a global community of players (and should we)?
  • Ethical issues around commenting on the physical appearance of avatars: do harassment laws apply?
  • Will gender fluidity in games influence cultural expressions of identity?

Citizen app and a self portrait

portrait of a DSPS (delayed sleep phase syndrome) growl:

http://p.maispace.space/testfbx/webgl_loader_fbx.html

I downloaded and tried out an app called Citizen, which is a synchronous platform that shows its users real-time crime or emergency activity in any given neighborhood. Users can either search for incidents near their current location, or look up a neighborhood for a list of recent or even "trending" activity. The incidents seem to be generated from police reports and 911 calls; on their site they write, "Citizen monitors a variety of public data sources using proprietary technology, allowing us to provide real-time alerts for crime and other emergency incidents".

There are a few interesting features on the app, including a chat for each incident, as well as video streaming capabilities. There are chats both for specific incidents, which can add a layer of user input and information (or judgmental commentary), and for neighborhoods in general, which can create a stronger sense of community (or animosity). The location-based video function feels similar to Snap Map, where geotagged Snapchat stories are visually represented on an actual map; some of my friends have found practical applications for this while researching upcoming vacations.

A few months back, I had the thought that some time in the future we might be able to switch our perspective between different people's cameras, once everyone wears a device that can capture 360 video and feed it into our eyes. I was sitting on a fire escape watching a row of fire trucks turn the corner and disappear, but I could hear them stop just up the street. It would be incredibly convenient, but also terrifying, to be able to tune into the perspective of some pedestrian up the street in order to know what's going on.

Anyhow, the video function on this emergency-information app definitely seems like a step toward user-generated live media with a focus on our real physical community and surroundings, which is cool. I also like that the live video is used more as a tool with an outward emphasis, rather than the often insular Instagram stories, which rely heavily on the front-facing camera.

screengrabs here!