You don’t need scientific studies to prove that a pumping house track helps keep you going twenty minutes into a spin class, and children will fall asleep to a lullaby while blissfully unaware that melody can impact the frontal lobes of the brain. But could a better understanding of how music works enable us to benefit from it in new ways, beyond the gym and the nursery?
Evidence of music’s impact is easy to find. There are plenty of studies showing, for example, that fast-tempo music helps people exercise for longer and that slow-tempo music helps them sleep.
Here comes the science bit
Why does music have these effects? We do know a few basic things, like the fact that listening to music releases dopamine (Salimpoor et al., 2011). But what is less well understood is – just what is it in music that enables it to generate these impressive results?
This should be fertile territory for researchers. After all, any piece of music can be described precisely according to well-defined properties (rhythm, tempo, pitch, melody) and turned into a series of notations – a score. As Pythagoras said around 2,500 years ago:
There is geometry in the humming of the strings, there is music in the spacing of the spheres.
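Pythagoras’s geometry is quite literal: the pitch of a vibrating string varies inversely with its length, so simple whole-number length ratios produce the consonant intervals. A minimal Python sketch (the 440 Hz base note is an arbitrary choice for illustration):

```python
# Pythagorean intervals: dividing a vibrating string in simple integer
# ratios raises its pitch by a consonant interval. Frequency is inversely
# proportional to string length, so a 3:2 length ratio gives a perfect fifth.

BASE_FREQ = 440.0  # A4, in Hz - an arbitrary reference note

# interval name -> frequency ratio relative to the open string
PYTHAGOREAN_RATIOS = {
    "unison": 1 / 1,
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,
}

def interval_frequency(base_hz: float, ratio: float) -> float:
    """Frequency of the note produced at the given string ratio."""
    return base_hz * ratio

for name, ratio in PYTHAGOREAN_RATIOS.items():
    print(f"{name:>14}: {interval_frequency(BASE_FREQ, ratio):.1f} Hz")
```

Run on A4, the fifth lands at 660 Hz and the octave at 880 Hz – the “geometry in the humming of the strings”.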
Given the innumerable musical pieces out there, researchers should have one giant dataset to play with.
Somehow, it should be possible to figure out how we get from a precisely notated piece of music to its measurable effects on the listener.
The Missing Link
Looking at the research, though, you find that the reasoning is very basic. Centala’s claim that “listening to fast-tempo music delays the onset of neuromuscular fatigue” is typical. “Fast-tempo”? Really? The researchers seem content to take a very basic classification of music and see if it can be associated with an equally basic health outcome. So, fast-tempo music helps you exercise. Slow-tempo music helps you sleep. Is that the best we can do, given the incredible variety and precision with which music can be described?
That would be a great shame, particularly since modern technology offers more and more tools to represent music and analyse its effects – as Jacob Collier, a rising star of the British music scene, demonstrates live when he uses Logic Pro mixing software to build his tracks.
The main reasons for this gap are essentially twofold:
- We know enough to make the everyday things we do with music, like aerobics class playlists, work (although my kids might beg to differ when they see me dancing).
- Even if you could find a clear link between specific music properties and their impact on human health, there doesn’t seem to be any obvious way to use such a link.
In fact, we as a company only became interested in this question by accident. When we installed our BackHug back therapy devices in busy gyms, offices and factories across the UK before the COVID-19 lockdown, quite a few people told us they couldn’t relax on the device because there was too much noise around them. That prompted us to start actively encouraging people to listen to music on their headphones while they were on the device, so they could relax – however noisy the environment.
The first thing we did to encourage this habit was create a music integration widget for the BackHug App, so people could go into their Spotify, Apple, Amazon or Google music to choose a playlist while they were using the device.
The benefits in terms of relaxation were obvious and that got us thinking – could we somehow integrate the way the robotic fingers move during treatment with the music our users listen to? That way they would feel as though the device were listening to the same music as them and moving its fingers in harmony with that music. The proposition seemed simple enough, but how could we realise it in practice? What we rapidly discovered was that we would have to use software to analyse the properties of the music users listened to and find a way to translate them into the movements of the fingers. That’s why we had to confront the very question people who research the benefits of music on health seem to have avoided until now.
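As a thought experiment, the translation could start as simply as mapping two coarse properties – tempo and loudness – onto a stroke rate and a pressure for the fingers. The sketch below is purely hypothetical: the names, ranges and linear mapping are illustrative assumptions, not BackHug’s actual algorithm.

```python
# Hypothetical sketch: map two extracted music properties (tempo in BPM,
# loudness normalised to 0..1) onto massage-finger parameters. The class,
# the clamping ranges and the linear mapping are all invented for
# illustration - this is not how the BackHug device actually works.

from dataclasses import dataclass

@dataclass
class FingerMotion:
    strokes_per_minute: float  # how fast the fingers cycle
    pressure: float            # normalised 0..1

def motion_from_music(tempo_bpm: float, loudness: float) -> FingerMotion:
    """Translate coarse musical properties into a finger-motion setting.

    Tempo drives the stroke rate at half-time, clamped to a comfortable
    range; loudness drives pressure, with a floor so quiet passages
    still register.
    """
    stroke_rate = min(max(tempo_bpm / 2, 20.0), 90.0)
    pressure = 0.3 + 0.7 * max(0.0, min(loudness, 1.0))
    return FingerMotion(stroke_rate, pressure)

# A relaxed 70 BPM track at moderate volume:
print(motion_from_music(70, 0.5))
```

In practice the properties on the left-hand side would come from audio analysis of the user’s playlist rather than being supplied by hand.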
Time for some cognitive musicology
The beginnings of an answer to this question may lie in a little-known, specialised field of music research called “Cognitive Musicology,” which uses software programs to analyse music according to certain properties and then predict the outcomes of listening based on those properties. At times it can sound like science fiction:
Experienced listeners of tonal music expect completions in which the musical forces of gravity, magnetism, and inertia control operations on alphabets in hierarchies of embellishment whose stepwise displacements of auralized traces create simple closed shapes. (Larson, 2004)
It’s hard to understand what Steve Larson, a gifted jazz musician as well as musical software savant, was on about here, but, in simple terms, he is saying that we can employ software to analyse music in terms commonly used to describe physical motion (gravity, inertia) and that the software can listen to the beginning of a piece of music and use those properties to predict the musical sequence that should follow.
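To make the idea concrete, here is a toy reduction of Larson’s three forces over the scale degrees of a major scale: gravity pulls a melody downwards, inertia keeps it moving in its current direction, and magnetism attracts it to the nearest stable tone. The weights are invented for illustration and bear no relation to Larson’s actual model.

```python
# Toy sketch of Larson's "musical forces", reduced to scale-degree
# numbers (1..8 of a major scale). Gravity prefers falling motion,
# inertia rewards continuing the melody's current direction, and
# magnetism pulls towards the stable tones (1, 3, 5, 8). All weights
# below are invented for illustration only.

STABLE = {1, 3, 5, 8}

def predict_next(prev: int, curr: int) -> int:
    """Score each candidate next degree and return the highest scorer."""
    best, best_score = curr, float("-inf")
    for cand in range(1, 9):
        step = cand - curr
        score = 0.0
        score += -0.5 * step                      # gravity: falling preferred
        if (curr - prev) * step > 0:              # inertia: same direction
            score += 1.0
        nearest = min(STABLE, key=lambda s: abs(s - cand))
        score += -0.8 * abs(nearest - cand)       # magnetism: near stability
        score += -0.3 * abs(step)                 # prefer stepwise motion
        if score > best_score:
            best, best_score = cand, score
    return best
```

Given a melody that has just descended from degree 2 to degree 1, this toy model predicts it will stay on the tonic – a crude echo of Larson’s claim that software can anticipate how a musical line “should” continue.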
Although we are not clever enough to follow Larson in his pursuit of stepwise displacement of auralized traces, our intuition is that we could use software to extract properties from music – as Larson does – and map those properties onto particular motions of the fingers of our BackHug device. Cognitive musicology makes extensive use of artificial intelligence and we envisage using AI to analyse the benefits users receive from treatment integrated with the music they listen to, so we can continually refine the integration. Given the size of the dataset represented by the scores of all the music there is in the world, who knows what we might find?
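The refinement loop we envisage could begin as something very simple: track an average relaxation rating for each music-to-motion mapping and prefer the best-rated one – essentially a greedy bandit. Everything below (the mapping names, the ratings) is invented for illustration; a production system would use a far richer model.

```python
# Hypothetical sketch: treat each music-to-motion mapping as an "arm"
# and keep a running average of user relaxation ratings per mapping,
# then greedily prefer the best-rated one. The mapping names and the
# ratings are invented for illustration.

from collections import defaultdict

ratings: dict[str, list[float]] = defaultdict(list)

def record(mapping: str, rating: float) -> None:
    """Store one post-session relaxation rating for a mapping."""
    ratings[mapping].append(rating)

def best_mapping() -> str:
    """Mapping with the highest average rating so far (greedy choice)."""
    return max(ratings, key=lambda m: sum(ratings[m]) / len(ratings[m]))

record("tempo-driven", 6.0)
record("tempo-driven", 7.0)
record("loudness-driven", 5.5)
print(best_mapping())
```

A purely greedy rule would stop exploring under-sampled mappings, so a real system would mix in some exploration; the point is only that per-user feedback can close the loop between music analysis and treatment.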
We totally agree with the common-sense, intuitive view that music has benefits for health, even though we cannot yet fully explain why that is the case. But by embarking on this project of using software to integrate back therapy with music, we may be able to find a new way for your back to relax.