Musical Topography

In Absence of Zero, Brad Pitts

Andrew Todd Marcus

Final

Joshua Brancazio

In this studio, we turned earthquakes into music in order to give voice to this natural phenomenon. We wanted to give earthquakes a different spin and change how they are perceived. Instead of presenting earthquakes only as natural disasters that hurt people, we created a song that demonstrates a complexity and intrigue that is not normally seen. By turning a series of two thousand earthquakes from a two-week period into individual notes, we created a song that shows the unseen side of earthquakes. The magnitude, along with other values recorded for each earthquake, determines the pitch, duration, and velocity of each note.
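We built the mapping itself as a Max MSP patch rather than code, but the idea can be sketched in a few lines of Python. The specific ranges and fields below (magnitude and depth) are illustrative assumptions, not the exact values we used.

```python
# Illustrative sketch of the earthquake-to-note mapping described above.
# The ranges and the choice of fields (magnitude, depth) are assumptions;
# our actual mapping was built as a Max MSP patch, not Python.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly rescale value from [lo, hi] into [out_lo, out_hi]."""
    value = max(lo, min(hi, value))
    return out_lo + (value - lo) / (hi - lo) * (out_hi - out_lo)

def quake_to_note(magnitude, depth_km):
    pitch    = int(scale(magnitude, 0.0, 9.0, 36, 96))   # MIDI note number
    velocity = int(scale(magnitude, 0.0, 9.0, 40, 127))  # stronger quake = louder note
    duration = scale(depth_km, 0.0, 700.0, 0.1, 2.0)     # seconds
    return pitch, velocity, duration

print(quake_to_note(magnitude=5.8, depth_km=35.0))
```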

"Quake" is a musical composition that captures the intricacy and power of earthquakes. The song begins with the sound of an earthquake, low and rumbling. Notes start to play, scattered and sporadic, but as they increase in volume, they become more continuous, and form a melody. This part of the song shows the calm before the storm, but soon, a distorted version of those notes and disrupts the peace. The distortion is imposing, yet awe inspiring. Eventually, the rhythm tracks comes in, and pull composition together as they give it depth and movement. The rhythms show that there is a pattern to the chaos of earthquakes, but every so often, one of the rhythm tracks will differ, showing that there is entropy within the pattern. Later in the composition, the beats drop out and the bass gets louder to express the raw power and force behind earthquakes.

Our song "Quake" in an expressing of the many facets of eathquakes- the melodic patters that appear in the seemingly random notes show that there is some structure to seismic activity, but the overlying, rumbling, unstoppable rolling earthquake noises in the backround demonstrate that, at their core, earthquakes are a powerful force of nature.

Presentation

Nathaniel Tong, Carlos Alvarenga, and Joshua Brancazio

Process

Carlos Alvarenga

The whole studio was about visualizing data from a musical point of view. At the very beginning we tried to figure out what kind of data would give us an interesting piece of music. We wanted something different, something that nobody would expect to hear as music. We thought that using the different constellations of stars as our data would be a nice approach for the project. At the same time we talked about using crime rates, which was a strong candidate for a while, but then we decided that it would be unique to use real-time data, so that the composition would be even more interesting and complex. Then we came up with the idea of using real-time earthquake data. It was perfect, it fulfilled all the expectations we had, and it is the one we adopted as our group goal.

In order to make progress on the project we had to get used to the different programs, such as Ableton and Max MSP, and to the essential step of turning data into musical notation, which we could then edit in Ableton to add all the effects we wanted. As we searched for real-time data, we realized that it wasn't as easy as we thought it would be, and, even worse, programming a Max patch that could interpret real-time data would take us too long. We were running out of time, so we had to move to something more basic and concise that we could finish in the time we had, and we decided not to use real-time data.

Once we found our data chart, we continued working on the Max patch to turn the data into a MIDI file. Then we got the MIDI file ready to be edited in Ableton. We decided to work on different compositions individually, and at the end we would join them or pick the best one.
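For readers without Max, roughly the same data-to-MIDI step could be done in Python with the mido library. The file names and column names below are placeholders, not our real files; this is only a rough stand-in for what our patch did.

```python
# Hypothetical stand-in for our Max patch: read a table of earthquakes from a
# CSV file and write one MIDI note per row using the mido library.
# File names and column names are placeholders, not our real data.
import csv
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

with open("earthquakes.csv") as f:
    for row in csv.DictReader(f):
        magnitude = float(row["mag"])
        pitch = int(36 + magnitude / 9.0 * 60)       # crude magnitude-to-pitch map
        velocity = int(40 + magnitude / 9.0 * 87)
        track.append(mido.Message("note_on", note=pitch, velocity=velocity, time=0))
        track.append(mido.Message("note_off", note=pitch, velocity=0, time=240))

mid.save("earthquakes.mid")   # this kind of file is what we then edited in Ableton
```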

The first composition we came up with was pretty good; it actually had some rhythm, which is really hard to get when you are using a totally random data set as your musical material. The downside of this prototype was that it was too simple, and there were clearly many things we could improve. It was also too long; we needed to make it shorter and more complex. What we changed was to cut the MIDI file in half and play the two parts at the same time, which made the composition more concise.
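We did the actual cut in Ableton, but the idea of splitting the MIDI file at its midpoint and playing both halves at once could be sketched like this. It is a rough sketch: note pairing, program changes, and tempo events are glossed over, and the file names are hypothetical.

```python
# Rough sketch of "cut the MIDI in half and layer the halves" using mido.
# Our real edit was done by hand in Ableton; this only illustrates the idea.
import mido

def split_and_layer(in_path="quake.mid", out_path="quake_layered.mid"):
    src = mido.MidiFile(in_path)
    track = mido.merge_tracks(src.tracks)

    # Convert delta times to absolute times so we can cut at the midpoint.
    events, now = [], 0
    for msg in track:
        now += msg.time
        events.append((now, msg))
    midpoint = now // 2

    first = [(t, m) for t, m in events if t < midpoint]
    second = [(t - midpoint, m) for t, m in events if t >= midpoint]

    out = mido.MidiFile(ticks_per_beat=src.ticks_per_beat)
    for half in (first, second):
        new_track = mido.MidiTrack()
        prev = 0
        for t, msg in half:
            new_track.append(msg.copy(time=t - prev))  # back to delta times
            prev = t
        out.tracks.append(new_track)
    out.save(out_path)

split_and_layer()
```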

Another thing we changed was the Max patch itself. The first patch let us do what we wanted in the most basic way, but we then improved it to give us more possibilities when editing.

The second composition we came up with was the one that led us to the final piece. The good things about it were its length and the many cool effects it contained. The only thing we added was some real earthquake sound effects, and with that we were finished.

The Process

Gavin Zaentz, Kristopher Aime, and Alexander Skipitaris

Our idea explored internet usage in rural, urban, and suburban areas. We wanted to take that information and give people another way to visualize it, so we turned it into music. Our music was made from three different parts of the internet-usage data: the notes are based on the FIPS code, the volume is based on the age, and the duration is based on the year. Each of those has its own separate track, which allowed us to export them separately and even combine them to make better music. Our final piece is a combination of the music generated from the FIPS codes in rural places and the FIPS codes in urban places. Once we had the Excel file we had to narrow it down from 54 parameters to 5-6. From there we used MaxMSP to create a MIDI file. We then started to split up our sound clips: first we split them by USR (urban, suburban, rural), then sorted by age, and made MIDI files for all of them. We added the Kalimba sound and started mixing and layering the sounds to create something more complex.
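The three mappings can be summarized in a short sketch. The column names and the exact ranges below are assumptions about the spreadsheet, not the precise values we used in MaxMSP.

```python
# Sketch of our three mappings: FIPS code -> pitch, age -> volume, year -> duration.
# Column names and ranges are illustrative assumptions about the Excel export.

def row_to_note(fips, age, year):
    pitch = 36 + int(fips) % 60                       # fold the FIPS code into a 5-octave range
    velocity = max(20, min(127, int(age) + 20))       # older respondents play louder
    duration_beats = 0.25 * (int(year) - 2000 + 1)    # later years hold longer
    return pitch, velocity, duration_beats

# Example: one row, which would later be layered with others in Ableton.
print(row_to_note(fips=25025, age=42, year=2012))
```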

Process

Kate Reed and Jules Gouvin-Moffat

Our studio was Musical Topography, where we essentially created music for specific geography. We started off the studio by looking at projects that use live data, how they portray it, and how we might represent that with music. Once we were inspired to create our own projects, we broke off into groups to decide what kind of data we wanted to use to make music.

We made a list of different data that we could access. We sorted each kind of data into two basic categories: data that is live and changes all the time, and data that is set and doesn't change. Some of the data we were interested in exploring had to do with crime rates, stars, dogs, earthquakes, and the Hubway system. We heavily explored stars and earthquakes; we wondered whether we could turn stars into music, and, for instance, what a constellation might sound like. In the end, we chose a project much closer to home: the Hubway system.

Hubway is Boston's bike-sharing program, and all of its data is open to the public. We decided to take this data and track each bike individually. We wanted to personify the bikes and really see how each one spends its day and where it goes. We chose three different bikes to track, using a whole month of data for each bike and comparing their journeys. We originally planned on having two separate tracking periods, each a year apart, but later decided against that to keep the project simple.

Hubway gives us a lot of data on each bike. The information that's important to our project is the duration of the bike ride, the start date, the start and end stations, and whether the rider is male or female.

We plan on having a circular visualization. Each Hubway station will be evenly spaced around the circle. Each bike will be a dot that travels from station to station inside the circle, leaving a trail behind that looks like a spoke, turning the circle into a wheel. As each bike makes more and more trips, the previous trips will become more opaque, creating a web of spokes. The three bikes we track will each be a slightly different color, and it will be fun to watch them shoot around from station to station.
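We plan to build this visual in our studio tools, but the layout could be prototyped in a few lines of matplotlib, roughly like this. The station count and the example trips are made up purely for illustration.

```python
# Rough matplotlib prototype of the wheel-and-spokes layout described above.
# Station count and the example trips are made up for illustration.
import math
import matplotlib.pyplot as plt

n_stations = 12
angles = [2 * math.pi * i / n_stations for i in range(n_stations)]
xs = [math.cos(a) for a in angles]
ys = [math.sin(a) for a in angles]

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(xs, ys, s=40)                      # stations spaced around the circle
trips = [(0, 5), (5, 9), (9, 2), (2, 7)]      # one bike's trips: (start, end) station
for start, end in trips:
    ax.plot([xs[start], xs[end]], [ys[start], ys[end]], alpha=0.3)  # faint spokes that build into a web

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```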

With the plan in place, we spent the day going through all of the data we have and sorting it. We have three years' worth of data, and it is very overwhelming. I think we successfully combed through the data and now have just what we need. We can bring it into Max and start analyzing it and turning it into music and visuals.

The Hubway data is very difficult to organize; there is so much of it that the computer simply can't handle it all at once, so we spent a lot of today waiting. We also decided to use different data: we are going to map the time of day to the volume of the music, and the duration of the trip to both the duration of the notes and the pitches.
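The mapping we settled on can be sketched as follows. The ranges and caps are illustrative assumptions; the real mapping lives in our Max patch.

```python
# Sketch of the mapping: time of day -> volume, trip duration -> note duration
# and pitch. Ranges and caps are illustrative; the real patch is built in Max.

def trip_to_note(start_hour, duration_seconds):
    velocity = int(30 + (start_hour / 23.0) * 97)               # later in the day = louder
    beats = min(4.0, duration_seconds / 900.0)                  # cap very long rides at 4 beats
    pitch = 48 + int(min(duration_seconds, 3600) / 3600 * 24)   # longer rides sit higher
    return pitch, velocity, beats

print(trip_to_note(start_hour=17, duration_seconds=1200))
```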

We had another slight change of plans and decided to create an individual song for each of the three bikes, as opposed to one really long song. Each bike will have its own song, and then, depending on how the songs work together, we will layer the bike songs on top of one another. We want to explore the concept of layering the bikes and see what emotion that provokes.

Since one of our main goals is to personify the bikes, we are going to use the music to emphasize that. Maybe one bike will be in a minor key because it doesn't get used as much, or one bike might be super low-sounding. It's important to us that the bikes sound different and take on their own personalities.

Once we finished sorting through the data, we put it into Max and got a song for each bike. When we put all three bikes together, it sounded horrible: we hadn't thought to keep the same musical mode for each bike, so none of the sounds matched. We then redid the Max portion for the bikes, keeping the same settings for all of them, which made them much more cohesive. Once we had the MIDI file from Max for all three bikes, we brought it into Ableton. Max saves the output as a MIDI file, which means it saves all of the data about the music except the actual sound of the notes. In Ableton we were able to assign a different instrument to each bike and start hearing what they sounded like together. One thing we found is that having all the bikes going at once is simply too much sound; it worked better when we split the bikes across bass, alto, and treble ranges. We are just starting in on the music aspect, and have lots of exploring to do!
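Splitting the bikes into bass, alto, and treble ranges amounted to transposing each bike's notes into its own register. A rough sketch of that idea is below; the register boundaries are assumptions, since we did the real split by ear in Ableton.

```python
# Rough sketch of moving one bike's notes into a bass / alto / treble register.
# The boundaries are assumptions; the real split was done by ear in Ableton.

REGISTERS = {"bass": (36, 52), "alto": (53, 69), "treble": (70, 86)}

def to_register(pitch, register):
    lo, hi = REGISTERS[register]
    # Transpose by octaves until the pitch falls inside the register.
    while pitch < lo:
        pitch += 12
    while pitch > hi:
        pitch -= 12
    return pitch

bike_notes = [40, 55, 62, 71, 80]
print([to_register(p, "treble") for p in bike_notes])
```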

At this point we were feeling a little directionless and needed a reality check. We had made the music and used the data, but the music didn't sound particularly good, and we didn't know exactly how to proceed. After expressing this to the coaches, we decided to start over.

The reason our music didn't sound good the first time was that our data had no space. This time, instead of layering the tracks on top of each other, we decided that each bike would have an individual track and song, but the composition was still a constant string of notes with no breathing room to allow any thought to develop. We had also lost the concept of personifying the bikes somewhere along the way, and wanted to get back to that idea.

One of the ways we addressed the thickness of the sound was to add space to the data. For instance, after a really long bike ride, we would add a musical rest of the same duration as the ride. We also came up with three different bike personalities: cheerful, sad and alone, and busy. We then added more musical space to match each personality. The sad-and-alone bike has the most space in its data and is in a minor key; the busy bike has no space in its data and sounds stressed; and the cheerful bike has a few rests and is in a major key. We did all the Max work today and got the MIDI file for each bike.
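The rest-insertion and key choice can be sketched as follows. The scales and the long-ride threshold are illustrative assumptions; the real work happened in the Max patch.

```python
# Sketch of the 'breathing room' idea: after a long ride, insert a rest of the
# same length, and pick the scale from the bike's personality. The scales and
# the long-ride threshold are illustrative assumptions.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]   # cheerful bike
C_MINOR = [60, 62, 63, 65, 67, 68, 70]   # sad-and-alone bike

def rides_to_events(durations_seconds, scale, long_ride=1800):
    events = []   # list of (pitch or None for a rest, length in beats)
    for d in durations_seconds:
        pitch = scale[int(d) % len(scale)]
        beats = min(4.0, d / 900.0)
        events.append((pitch, beats))
        if d > long_ride:
            events.append((None, beats))   # rest as long as the ride itself
    return events

print(rides_to_events([600, 2400, 900], C_MINOR))
```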

My partner, Julia, formulated the final music and I made the diagrams to explain our concepts. It was frustrating because I had to make diagrams for every single step of the process, which is a lot of diagrams. While diagrams are not fun to make or satisfying once you’re done with them, I can see that they are helpful in conveying your process to other people.

I made three diagrams today explaining how we found the pitch, volume, and duration for the music. As is evident from the diagrams, there are many steps in getting ready to make the music; the actual making of the music then goes by really fast and is very much an individual thing.

This studio was less about the final project and more about learning the software. I’m much more adept at Max MSP and Ableton now, and glad for it.

Final

Kate Reed and Jules Gouvin-Moffat

The aim of this studio was to represent data in a unique, interesting way. Kate and I loved the idea of using the massive amount of publicly available user information from Hubway, a bike-sharing program in Boston. At first, a basic bicycle likely doesn't seem that interesting to most of us, and that is what served as our inspiration and our challenge for this project: we wanted to give a certain kind of humanity to three randomly chosen bikes. We tracked two data points for these bikes over a month: the time of day each bike was used and the duration of each ride. Then we assigned each bike a personality, which became our main guide later for creating the music. (The three personalities were depressed and lonely, busy, and cheerful.) We used Max MSP to turn the data points into sound information, and Ableton Live to turn the sound information into actual music. Our final, fascinating result for this project is the three ballads of the bikes.

Musical Topography

Andrew Todd Marcus

My project is a music box whose music is based on the topography of a made-up skyline. First, I thought about what this mysterious city was like and what happened in it. I drew some sketches of the city, creating unique buildings and bridges. After creating this city, I made five skylines using six different building heights: a height of one is a rest in the music, and heights two through six are five different notes. From these five skylines of an unknown city, I picked three to use. I then overlapped two skylines at a time, noticing which parts overlap and which parts don't. I took the information I had about each skyline and turned it into a graph. I made seven graphs in all, and I turned each graph into music. Three were the separate skylines: red, blue, and yellow. Another three were pairs of skylines combined: red and yellow, blue and red, and yellow and blue. The last was all the skylines in one graph: red, yellow, and blue, and that became the final music piece. Then I sketched some boxes and modeled a box in Rhino, with a place for the cylinder and a surface for the tines. I laser-cut my pieces, put them together, and attached the tines and the cylinder, which has nails in it. When I turn the cylinder, the nails hit the tines, creating the music of this unknown city.
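The height-to-note rule is simple to write down. The actual pitches are fixed by the tines on the box, so the pentatonic values below are only an assumption for illustration.

```python
# Sketch of the skyline rule: height 1 is a rest, heights 2-6 are five notes.
# The pentatonic pitches are an assumption; the real pitches come from the tines.

HEIGHT_TO_PITCH = {1: None, 2: 60, 3: 62, 4: 64, 5: 67, 6: 69}  # C pentatonic

red_skyline = [3, 3, 5, 1, 6, 2, 4, 1, 5]           # building heights, left to right
melody = [HEIGHT_TO_PITCH[h] for h in red_skyline]  # None marks a rest
print(melody)
```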

Musical Topography Compositions

Andrew Todd Marcus