Deconstructing Music

Sound technician working on an open air session at Sònar Festival

Music is like water that slips through your fingers: what is the nature of this ephemeral thing, held to be the most abstract and universal of languages? Is there a mathematical formula that defines its essence? Since antiquity, people have tried to grasp the specific character of this art, found in every culture, without reaching any conclusion; the search continues to this day, and despite the progress of science the questions remain unanswered.

Musicology is the discipline that has traditionally studied music, its evolution and its relation to human beings and society, although over the past few decades other fields have joined in, contributing new approaches and breaking down preconceived notions and mythologised views rooted in a romantic outlook. Music technology, artificial intelligence, neuroscience and studies of musical cognition are among the most innovative of these. At present, one of the most influential experts in artificial intelligence applied to music is the French scientist François Pachet, head of the Spotify Creator Technology Research Lab, where the next generation of artificial-intelligence tools for musicians is being developed.

The first computer composition, the Illiac Suite for string quartet by Lejaren Hiller, premiered in 1956. Pachet, however, has gone a step further and published the first album created entirely through artificial intelligence, Hello World. The participating artists used Flow Machines technology to select a group of songs (such as Beethoven’s Ode to Joy), and an algorithm designed by Pachet learned their style and applied it to new compositions. The surprising result shows that talent is not the exclusive territory of human beings.

In addition, computer resources aimed at composers, of flesh and blood in this case, have proliferated thanks to the emergence of new platforms. There is Freesound, for example, a website where you can sample sounds posted by users; the AcousticBrainz project, which has compiled data from the computerised analysis of more than nine million songs; and Essentia, which offers algorithms for the computerised analysis of audio recordings according to different musical aspects. Researchers at the University of Huddersfield (UK) are developing a system to support artists working in the two most popular environments, Max and SuperCollider, helping them find the sound that best fits their creative needs. This is opening up a field of endless, constantly evolving possibilities. Seen from that standpoint, acousmatic music, which is based upon noises, has become wholly obsolete.

Musical performance itself has been shaken up by computer technologies. That aspect is the prime focus of Elaine Chew, the pianist and multidisciplinary artist who heads the Music Performance and Expressivity Lab at Queen Mary University of London’s Centre for Digital Music. Her COSMOS project explores how musicians define and express the musical structures that, although not inherent in the work, are shaped by each performer in the course of a performance. A recently completed study led by Atau Tanaka of Goldsmiths, University of London, examined body movement and gesture in performance. Tanaka has designed a bracelet that detects the performer’s biosignals and sends them to a computer, which compiles the data for analysis. The aim is to design musical instruments controlled by those gestures by means of electromyography.

Meanwhile, Paulo de Assis, at Belgium’s Orpheus Institute, is analysing the challenges performers face when playing European classical music. He argues that the performer should approach the score not as a fixed object but as a text in constant transformation: “The musician should not perform a score, but rather offer an assembly of different extracts, i.e. different editions of the score, the composer’s notes, critics’ opinions and comments on the work, performances by other musicians, and so on, to create a space of problematisation around the work,” says De Assis.

Interest from the neurosciences

Neuroscience has taken up music in its quest to design therapies that are more effective and less aggressive. Music has been shown to help develop certain cognitive skills and has proven highly effective at speeding up musculoskeletal recovery. Jean-Julien Aucouturier, a researcher at IRCAM (Institut de Recherche et Coordination Acoustique/Musique, the Institute for Research and Coordination in Acoustics/Music at the Pompidou Centre in Paris), is studying how sounds can affect a person’s emotions subconsciously. In his research he has explored the emotional DNA of sounds and created technologies for modifying them to induce a given emotional state. This research has a wide range of applications, such as treatment for people with autism or people who are unable to communicate.

His team has also developed the Angus software, which adds a component of harshness to different types of sound, such as the human voice, a musical instrument or sounds of nature, so that the emotional effect on people can be studied from a neurological standpoint. Also outstanding in the area of brain studies is the work of Fang Liu at the University of Reading, which has shown a direct correlation between autism and congenital amusia, the inability to distinguish or reproduce musical tones, since music and language are processed in overlapping areas of the brain. There is a broad field of study here: although the brain has been shown to have a specialised area for sound perception, no specific region has yet been identified for musical perception. In fact, a team at the École Normale Supérieure de Paris, led by Sam Norman-Haignere, is about to trial a new technique based on magnetic resonance with the aim of discovering and studying that region.

Musicology itself, the most conventional field but no less valid for that, has continued to advance along several parallel lines of study that are contributing new information on the role of music. For example, Álvaro Torrente, director of the Instituto Complutense de Ciencias Musicales, has examined emotions in baroque Italian opera of the 18th century, well known for its aim of “moving the emotions”. Torrente has taken nine hundred operas by three hundred different composers based upon the twenty-seven librettos written by Pietro Metastasio, an 18th-century Italian writer and poet. He aims to create a digital corpus of four thousand arias for computational analysis, searching for connections between specific dramatic situations and given musical characteristics.

From the standpoint of history, the researchers Anna Alberni and Stefano Cingolani, at the University of Barcelona (UB), are beginning a research project on the role of troubadours and minstrels at the court of the Kingdom of Aragon, studying five thousand documents to determine their importance in the development of Catalan poetry. Margarita Díaz-Andreu, also at UB, has examined the cave art that primitive societies produced at locations considered sacred: caves with special acoustic properties, such as echoes or resonance. She aims to analyse the psychological effect such places might have had on humans, in order to explain the relationship between sound, sacred experience and landscape in pre-modern societies.

Another of the more intriguing approaches in current musical studies is found at the École Polytechnique Fédérale in Lausanne. There, Martin Rohrmeier and his team are blending music theory, computer science and cognition to analyse the structural parallels between tonal music compositions and linguistic syntax. “The structure of tonal music works contains elements that cannot be explained in linear terms (where an element depends directly on the preceding element), but that are inserted into each other or are interrelated in a hierarchical way, as in language,” says Rohrmeier.

The CompMusic Project

These are some of the research endeavours presented at a conference held last summer at Pompeu Fabra University, one of the focal points for the study of music from a technological standpoint. Xavier Serra, who organised the European Research Music Conference, which brought together some twenty speakers sponsored by the European Union, directs the CompMusic Project. The project aims to create technologies for musical analysis that take specific cultural characteristics into account, which has led him to study Chinese, Indian and Turkish music.

One of the more interesting findings of his research is that languages can shape people’s musical sensitivity. Tonal languages such as Mandarin, in which a word can have different meanings depending on its intonation, are more musical: “These speakers are naturally more sensitive to tone, to music.” According to Serra, musical creation therefore cannot be separated from the cultural context in which it occurs. For that reason, he maintains that compositions created by algorithms are of no interest: “However good it may be, music made by a computer has no meaning, because it does not communicate anything. Music has a social component; it has to have a composer and a performer. Otherwise, in my opinion, it is not music.” The value of experiments such as Hello World, which is fed by “analogue music”, is therefore relative.

Music has an inherent richness that offers different levels of depth of understanding: being lulled by pop music on the radio is very different from trying to grasp the beauty of Chinese opera. But the fact that music is increasingly immediate and ever-present in our society involves a clear danger, Serra notes: “We are increasingly staying at the most superficial level of listening.” Another problem stems from globalisation: “The Internet and most of the technological developments found in that environment are homogenising and levelling all sorts of musical expression, and, without realising it, we are impoverishing our world’s musical wealth.”
