
Introduction
Music and body motion are strongly interconnected. As a drummer, I always feel connected to the music through my motion in a performance, and I am fascinated by how a genre or type of music influences body movements. During my studies in the Music, Communication and Technology study program, I had the opportunity to actively participate in the Norwegian Championship of Standstill in 2019 and learn the methods of the whole process, from setting up the motion capture system to choosing the final winner of the competition. The MoCap system outputs a rich, continuous data stream with a large number of data points. My motivation behind the thesis was to combine this data stream with sound in order to hear the participants' movements. Moreover, such an implementation can be used as an instrument for a "standstill performance" and opens a new door to a sonic interaction space with human micromotion.
Research Questions
The thesis is based on one main research question and two sub-questions. With the main research question, I wanted to broadly address the connection between music and the body's involuntary motions.
During the Norwegian Championship of Standstill, the participants are instructed not to move. Based on past studies, there is statistical evidence that musical stimuli affect standing still. Apart from using visualization methods to analyze the micromotion, sonification can be used to listen to the data and find audible patterns. The main objective of this question is to find out what differences can be noticed in the motion while the musical stimulus is played, and whether the sonification can reveal information that was not visible in the statistical studies.
- RQ1: What kind of motion patterns are audible from the standstill competition data?
- RQ2: How can spatial audio be used in the sonification of human standstill?
Building the Application for Sonification
The sonification is applied to data from the 2012 standstill competition. The initial idea was to build a prototype using the 2012 data and then use it to explore the rest of the database, including the data from the other years of the competition. According to Jensenius et al. (2017), around 100 participants joined the study, and the final data set consists of 91 participants. The sessions were held in groups of about 5-17 participants at a time. For the sonification, I decided to use the Max/MSP environment. Given my limited coding experience and the limited time frame, learning a text-based programming language such as SuperCollider or Python felt like an unrealistic goal for this thesis. Max/MSP provides a GUI-based programming environment with a much gentler learning curve and a large, helpful community of users.
Data Set
From the recorded data, two data sets were available for the sonification. The first data set consists of the x, y, z position data of each participant: 273 columns (91 participants × 3 axes) and 35,601 rows of data.
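As a minimal sketch of how such a file could be handled outside Max/MSP, the snippet below loads the position data and extracts the x, y, z columns of one participant. The file name and the assumption that each participant occupies three consecutive columns are illustrative, not the actual layout of the 2012 data files.

```python
import pandas as pd

# Hypothetical file name; the real CSV layout may differ.
positions = pd.read_csv("standstill_2012_positions.csv", header=None)
print(positions.shape)  # expected: (35601, 273) -> 91 participants x 3 axes

def participant_xyz(df, participant_index):
    """Return the x, y, z columns of one participant (0-based index)."""
    start = participant_index * 3
    return df.iloc[:, start:start + 3].to_numpy()

p0 = participant_xyz(positions, 0)  # shape (35601, 3)
```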

The second data set contains each participant's demographic data and consists of quantitative and qualitative variables:
- Group the participant belongs to (A, B, C, D, E, F, G, H, P)
- Participant number
- Age
- Sex
- Height
- Music listening hours per week
- Music performing/production hours per week
- Dance hours per week
- Exercise hours per week

Some measurements were based on a Likert scale (1 to 5): how tiresome the session was experienced to be, whether any motion was experienced, and whether any motion was experienced during the music segment. Two further variables indicate whether the participant had their eyes open or closed and whether they locked their knees during the experiment.
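The sketch below shows how this demographic table could be queried, for example to select participants with a particular condition. The file name and column names are hypothetical; the actual spreadsheet headers may differ.

```python
import pandas as pd

demo = pd.read_csv("standstill_2012_participants.csv")  # hypothetical file name

# e.g. participants in group A who kept their eyes closed (assumed column names)
subset = demo[(demo["group"] == "A") & (demo["eyes"] == "closed")]
print(subset[["participant", "age", "sex", "height"]])
```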
Sonification Strategies
I used parameter-mapping sonification with three different approaches:
- Sonification of individual participant data.
- Sonification of group average position data.
- Using spatial sound with individual position data.
In each approach, the first half of the data (3 min) represents the participants standing in silence and the second half (3 min) standing with the music stimulus. One of the aims was to examine whether the sonification can reveal how the music stimulus affects the motion during standing still. Another aspect was to explore how keeping the eyes open or closed, or locking the knees, can affect the motion. According to Jensenius et al. (2017), there was no statistical evidence that these factors affected the micromotion. The minimum and maximum position values of each axis (x, y, z) were calculated for each participant in Excel. These values were used for scaling the parameter values during the mapping process. In the standstill experiment, data were recorded at 100 Hz, and each session lasted 6 minutes. Listening to the sonification faster than real time can provide better insight into patterns in the data set. The first step was therefore to implement a strategy to read the data from the CSV file with the option to change the reading speed.
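The two preprocessing ideas mentioned here, per-participant min-max scaling (done in Excel in the actual workflow) and reading the 100 Hz data faster than real time, could be sketched as follows. The function names and the choice of a sample-skipping playback strategy are my own illustrations, not the implementation in the Max patch.

```python
import numpy as np

def minmax_scale(values, out_lo=0.0, out_hi=1.0):
    """Scale a column of position data into a target parameter range."""
    lo, hi = values.min(), values.max()
    return out_lo + (values - lo) * (out_hi - out_lo) / (hi - lo)

def playback_indices(n_samples, speed=4.0):
    """Indices to read when playing the 100 Hz data `speed` times faster
    than real time; e.g. speed=4 turns a 6-minute session into 90 seconds."""
    return np.arange(0, n_samples, speed).astype(int)
```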
Displacement of Position
Instead of directly mapping the x, y, z position data, a new variable is defined: the displacement of position. However, the displacement is not divided by time to calculate the rate of change (the quantity of motion, QoM), since that rate would also depend on the data reading speed chosen in the patch.
Figure 3.3 shows the part of the patch that calculates the change of position (displacement). First, the displacement is calculated for each axis: at each moment, the previous position value is subtracted from the current position value, and the absolute value is taken. From these values, the displacement in each plane (XY, YZ, XZ) is derived by summing the displacements of the two corresponding axes. According to the results of Jensenius et al. (2017), most of the motion happens in the XY plane, so the sonification primarily uses the displacement in the XY plane.
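A minimal sketch of this displacement calculation, assuming an (n_samples, 3) array of x, y, z positions for one participant, is given below; it mirrors the logic of the patch rather than reproducing it.

```python
import numpy as np

def axis_displacement(xyz):
    """Absolute frame-to-frame displacement for each axis."""
    return np.abs(np.diff(xyz, axis=0))  # shape (n_samples - 1, 3)

def plane_displacement(xyz):
    """Displacement per plane, obtained by summing the two axes of each plane."""
    dx, dy, dz = axis_displacement(xyz).T
    return {"XY": dx + dy, "YZ": dy + dz, "XZ": dx + dz}
```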

1. Sonification of Individual Participant Data
As presented in Figure 3.4, either the displacement values in the XY plane or the position values of the Z-axis can be selected for the sonification. In the noise section, the selected value (total XY-plane displacement or Z-axis position) is mapped to the amplitude of the noise and the cut-off frequency of its filter.
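The mapping itself can be sketched as a simple normalization followed by scaling into the two synthesis parameters. The output ranges (0..1 for amplitude, 100..5000 Hz for the cut-off) are assumptions for illustration, not the values used in the patch.

```python
def map_to_noise_params(value, v_min, v_max):
    """Map a displacement/position value onto noise amplitude and filter cut-off."""
    norm = (value - v_min) / (v_max - v_min)  # v_min/v_max from the Excel min/max
    amplitude = norm                       # assumed range 0.0 .. 1.0
    cutoff_hz = 100.0 + norm * 4900.0      # assumed range 100 .. 5000 Hz
    return amplitude, cutoff_hz
```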
2. Sonification of Group Average Position Data
Figure 3.4 is an extract from the Max patch that calculates average displacement values for the x, y, and z axes for two participant groups. However, this patch is only compatible with the 2012 standstill data; since the average values depend on the number of participants in a group, further customization is necessary to use it with other standstill competition data sets. The mapping follows a similar approach to the individual participant mapping: the average position values are used to calculate the average displacement in the XY plane, which is mapped to control the noise amplitude and cut-off frequency or the frequency and amplitude of the sine tone. The average Z-axis values can likewise control the parameters of the noise section or the sine tone.
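The group-average variant can be sketched as averaging the x, y, z columns over all participants in a group before computing the XY displacement. The assumption that each participant occupies three consecutive columns is the same illustrative layout as above.

```python
import numpy as np

def group_average_xyz(position_matrix, participant_indices):
    """Average x, y, z over a group of participants.
    `position_matrix` has shape (n_samples, n_participants * 3)."""
    stacked = np.stack([position_matrix[:, i * 3:i * 3 + 3]
                        for i in participant_indices])
    return stacked.mean(axis=0)  # shape (n_samples, 3)
```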
3. Using Spatial Sound with Individual Position Data
The third sonification approach applies spatialization to the position data. The x, y, z position values represent a location in three-dimensional space, and these values are used to "sonify" the motion through the spatial attributes of a sound. The spatialization is implemented with the ICST ambisonics module for Max/MSP, which accepts Cartesian coordinates (x, y, z) or spherical coordinates (azimuth, elevation, distance) and renders the sound output for a speaker system or headphones. In this patch, the position data only controls the spatial parameters of a sine tone.
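For reference, the sketch below shows the kind of Cartesian-to-spherical conversion that such an ambisonics module typically performs internally. Angle conventions differ between tools, so the exact formulas here are illustrative only, not the module's actual implementation.

```python
import numpy as np

def cartesian_to_spherical(x, y, z):
    """Convert a position to (azimuth, elevation, distance), angles in degrees."""
    distance = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arcsin(z / distance)) if distance > 0 else 0.0
    return azimuth, elevation, distance
```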