As a composer, I write socially conscious music that investigates the affective relationships between music and human experience. Each of my compositions is tailored to the concepts of its individual project, but all my music takes its significance from public life, creating opportunities for collective judgment and wonder. The most distinctive aspects of my musical language are a personal blend of microtonality, indie-pop, and experimental multimedia/visual elements.

In my secondary research area, I have published work at the intersection of human-robot interaction and music, designing social-scientific studies and music technology that investigate the group effects of robots in artistic contexts. My primary and secondary fields complement each other, allowing me to demonstrate how music and technology create nuanced networks of meaning. With this collectively constructed approach to meaning-making, I bring the listener along with me as I break new ground within each medium in which I work. Below I describe a research project completed in 2020-2021 in my secondary field:

ROBOT MEDIATION OF PERFORMER-SPECTATOR DYNAMICS IN LIVE-STREAMED PERFORMANCES

Here is a video describing and documenting the project, produced by the Arts, Science + Culture Initiative, which also helped fund this research.

Robots are gradually encroaching on all spheres of human activity, from Alexa in our homes, to sorting robots in Amazon shipping warehouses, to Spotify’s AI-driven playlist assistant. Meanwhile, during the COVID-19 pandemic, performers were isolated from their audiences and brought a very different energy to recording and performing in front of just a camera. Performers who rely on audience feedback to shape their performances, such as cabaret musicians, buskers, drag queens, and improvisers, were especially disadvantaged, since Twitch streams and other virtual performance media limit the in-person feedback an audience can give. One of the initial motivations for this project was to test whether using robot mediation to simulate audience feedback to a performer would change the stakes of performances during quarantine. A physical robot capable of producing feedback through speech and physical gestures may connect performers and audiences in a way that ordinary virtual performances cannot. Situated within a broader history of musical automata, this project investigates the possibility of an “audience” automaton and asks: if robots can form reactive audiences in interactive performance environments, what does that imply about being a human listener?

For the project, I composed a novel 30-minute multimedia work for three improvising musicians, electronics, robot, and interactive audience on Twitch, developed in collaboration with UChicago computer scientist Valerie Zhao and performer/improvisers Zachary Pulse, Zachary, and Lia Kohl. I wrote code in Python, SuperCollider, JavaScript, and HTML that asked the audience questions through the Twitch chat, such as “how is [performer x] doing?” and “how am I making you feel?”, and collected their responses. I designed a series of machine-learning-driven sentiment analysis algorithms that changed both the written musical prompts for the performers and the electronic accompaniment, creating a flexible and responsive musical and theatrical experience shaped by each different audience. Zhao and I then added code commanding a NAO robot to speak the instructions and responses aloud to the performers, as well as to manifest the affect of the audience’s responses with corresponding physical gestures. We also designed and implemented an IRB-approved human-subject study that tested our central question (compared to a chatbot mediator, can a robot mediator enhance the performer-audience connection in a live-streamed, remote performance?) through both qualitative and quantitative analysis of audience feedback surveys following three separate performances.
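To make the pipeline concrete, here is a minimal Python sketch of one feedback cycle: collect chat replies, score their sentiment, and select a prompt for the performers. This is a sketch under stated assumptions, not the project's actual code (which is excerpted in the article's appendices, discussed below): NLTK's VADER stands in for the sentiment analysis, the prompt bank is invented, and the NAO call uses NAOqi's standard ALTextToSpeech service with placeholder connection details.

```python
# Illustrative sketch of the audience-feedback loop; library choices are
# assumptions, not the project's confirmed stack.
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

# Hypothetical prompts; the actual piece used a richer, movement-specific bank.
PROMPTS = {
    "positive": "Lean into the groove; let the audience's warmth drive you.",
    "neutral":  "Sustain a single texture; wait for the room to shift.",
    "negative": "Fragment your material; play against the audience's mood.",
}

def aggregate_sentiment(chat_messages):
    """Score a batch of Twitch chat replies and return an overall label."""
    scores = [analyzer.polarity_scores(m)["compound"] for m in chat_messages]
    mean = sum(scores) / len(scores) if scores else 0.0
    if mean > 0.2:
        return "positive"
    if mean < -0.2:
        return "negative"
    return "neutral"

def next_prompt(chat_messages):
    """Choose the performers' next written prompt from audience sentiment."""
    return PROMPTS[aggregate_sentiment(chat_messages)]

# The chosen prompt could then be spoken by the NAO robot via NAOqi's
# text-to-speech service (robot address and port are placeholders):
#   from naoqi import ALProxy
#   tts = ALProxy("ALTextToSpeech", "nao.local", 9559)
#   tts.say(next_prompt(["that solo was gorgeous!", "feeling a little uneasy"]))
```

In performance, a loop like this would run after each audience question, so every show's prompts and electronics diverge according to that night's chat.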

The results of the human-subject study were published in the peer-reviewed Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI '22). The paper was also accepted for presentation at the conference in Sapporo, Japan. Click the button below to navigate to the paper:

Following this peer-reviewed publication, I also wrote a 10,000-word article that contextualizes the composition and the human-subject study within the musical history of automata and within human-robot interaction, targeted at an audience of both musicologists and HRI designers. The article contains excerpts of the Python and SuperCollider code I wrote in its appendices. Presented to fulfill the requirements of my minor field in Research in Computer Music at the University of Chicago, the article was approved by a committee including Sam Pluta (chair), Jennifer Iverson, and Marc Downie.

ABSTRACT
Live-streamed musical performances, in which performers and audience members occupy different physical spaces, lack the intensity of audience feedback that arises when audiences share a space with performers. In collaboration with the computer scientist Valerie Zhao, I conducted a between-subject study comparing audience experiences with a physical robot mediator versus a text-to-speech chatbot in live-streamed performances of original, interactive experimental music. Building on a rich musicological history of automata and informed by HRI research on the social effects of robots, I analyze the multiple interaction mechanics of my composition and the quantitative and qualitative data of the study. While the analysis revealed no significant differences between the robot and chatbot conditions, it did reveal statistically significant differences in audience engagement across the composition's movements, indicating a preference for the third movement in both conditions. Audience comments also favored the third movement's particular interaction dynamic and underscored the comedic elements of the experience. Drawing on Henri Bergson's theory of laughter as a social function, I explore the implications of these findings for scholars and artists working in human-robot interaction, ultimately offering design suggestions for future performances incorporating robots.
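To illustrate the shape of that quantitative analysis, the sketch below runs two standard nonparametric tests on hypothetical Likert-style ratings: a Mann-Whitney U test between the robot and chatbot conditions, and a Friedman test across the three movements. The data and the specific test choices are illustrative assumptions for ordinal survey responses, not a record of the paper's exact methods.

```python
# Hypothetical between-subject and within-subject comparisons on ordinal
# survey data; all numbers below are fabricated stand-ins.
from scipy.stats import mannwhitneyu, friedmanchisquare

# 1-7 Likert "connection to performers" ratings, one per participant.
robot_ratings   = [5, 6, 4, 7, 5, 6, 5, 4]   # robot-mediator condition
chatbot_ratings = [5, 5, 4, 6, 5, 6, 4, 5]   # chatbot-mediator condition

# Between-subject test: did the robot condition differ from the chatbot?
u_stat, p_cond = mannwhitneyu(robot_ratings, chatbot_ratings,
                              alternative="two-sided")
print(f"condition comparison: U={u_stat}, p={p_cond:.3f}")

# Within-subject test: did engagement differ across the movements?
# Each list position corresponds to the same participant.
mvt1 = [4, 5, 4, 5, 4, 5, 4, 4]
mvt2 = [5, 5, 4, 6, 5, 5, 5, 4]
mvt3 = [6, 7, 6, 7, 6, 7, 6, 6]   # a preferred third movement, as in the study
chi2, p_mvt = friedmanchisquare(mvt1, mvt2, mvt3)
print(f"movement comparison: chi2={chi2:.2f}, p={p_mvt:.3f}")
```

A pattern like the one sketched here (a non-significant condition test alongside a significant movement test) mirrors the headline result: no robot-versus-chatbot difference, but a clear audience preference for the third movement.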

Click the button below to view the 10,000-word article:

View excerpts from the composition MUSICBOT1975 for robot, three improvising performers, and interactive audience below: