Hello Computer

Hello Virtual Reality

For my final project, I wanted to explore the idea of disembodied voices in our current social environments: chats that amount to chatter through shared media and collective perspectives. The earliest computer voice I encountered was probably the answering machine on our landline, which guided us through our voicemail messages. In the future, I anticipate spending more and more time in a headset, so I wanted to find a bridge between “real” reality and virtual reality by bringing this historic voicemail interaction into a speculative environment. I also wanted to explore different ways we can create feelings of co-presence in virtual scenes.

What I ended up with is an interface that lets people view my stream in real time and send me messages by commenting on my channel. Their messages generate message objects in my scene, which they can then see and hear in the same stream.


Twitch is a streaming platform predominantly used by gamers to follow each other’s gameplay and comment in real time. I decided to use it as a streaming tool to share my perspective from my headset on my own Twitch channel. Using the Twitch API with Socket.IO for Unity, I set up my project to instantiate a falling sphere every time someone commented on my channel. I then used the IBM Watson Text to Speech SDK to say “new message” with each new sphere that appeared in my scene, and to read out the chat message when I collided with it. I streamed my game view (my headset display) directly to my Twitch channel using OBS.
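My setup lives in Unity, but the core of the interaction can be sketched outside it: Twitch chat is delivered as IRC-style lines over a WebSocket connection, and each incoming message just needs its username and text pulled out before it triggers a new object in the scene. Here is a minimal JavaScript sketch of that parsing step; `parseTwitchChat` is a hypothetical helper name, not part of any Twitch SDK.

```javascript
// Sketch, assuming Twitch chat arrives as raw IRC lines over a WebSocket
// (e.g. from wss://irc-ws.chat.twitch.tv). A chat message looks like:
//   ":nick!nick@nick.tmi.twitch.tv PRIVMSG #channel :hello there"
// This hypothetical helper extracts the pieces needed to spawn a
// message object and hand its text to a text-to-speech call.
function parseTwitchChat(line) {
  const match = line.match(/^:(\w+)!\S+ PRIVMSG #(\w+) :(.*)$/);
  if (!match) return null; // not a chat message (e.g. a PING keep-alive)
  return { user: match[1], channel: match[2], text: match[3] };
}
```

In the Unity version, the equivalent of this parse result is what decides when to instantiate a sphere and what text the Watson SDK reads back when I collide with it.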


I ran into a few technical hurdles trying to get Socket.IO for Unity to work with the Watson API without breaking my project. After some trial and error, I found a version of a Socket.IO package that was compatible with my other integrations. There was also limited documentation on how to use the Text to Speech function in the Watson SDK for Unity, but I finally got it working with some help.


I definitely want to keep expanding on this project and interaction flow now that I’ve set up the technical framework. There are so many different directions to take it: using the username data from Twitch to assign voices, for example, or having the content of the messages instantiate different objects. While testing the project, I asked my sister, who’s in college three hours away, to send me messages while watching my stream, and it was a really sweet way to connect with each other in real time. I look forward to more experiments!

Hello chat

For my first assignment, I tried to make a speaking chat thread, where users could type in a message and the text would speak for itself. I wanted to create a hypothetical space, or voice, for The Difference, where two unidentified text producers (human or machine) could communicate in real time to reveal a narrative. Though this would be antithetical to the author’s intent for that particular work, I thought it could be interesting to create the environment all the same, with the voice constantly changing.

I tried to make each chat entry speak in a randomized voice, so as not to assign any particular voice or characteristics, but I got stuck in the process. One of the issues: when I tried to store the array returned by window.speechSynthesis.getVoices() in a variable, nothing would come back.
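The empty result is most likely a timing issue rather than a bug in the code: in some browsers (notably Chrome), speechSynthesis.getVoices() returns an empty array until the voice list has loaded, and the browser fires a voiceschanged event once it is ready. A sketch of a fix, where `pickRandomVoice` and `speakInRandomVoice` are hypothetical helper names:

```javascript
// Sketch of a fix, assuming the empty array comes from the browser
// loading voices asynchronously: getVoices() can return [] until the
// voiceschanged event fires, so the list is read (and re-read) there.
let voices = [];

// Return a random voice from the list, or null if none have loaded yet.
function pickRandomVoice(list) {
  if (!list || list.length === 0) return null;
  return list[Math.floor(Math.random() * list.length)];
}

// Speak a chat entry in a randomly chosen voice (browser default
// is used as a fallback while the voice list is still empty).
function speakInRandomVoice(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  const voice = pickRandomVoice(voices);
  if (voice) utterance.voice = voice;
  window.speechSynthesis.speak(utterance);
}

// Guarded so the helpers above can also run outside a browser.
if (typeof window !== "undefined" && window.speechSynthesis) {
  window.speechSynthesis.addEventListener("voiceschanged", () => {
    voices = window.speechSynthesis.getVoices();
  });
  voices = window.speechSynthesis.getVoices(); // may already be populated
}
```

With the list populated in the event handler, each new chat entry can call speakInRandomVoice and get a different voice on each message.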