Summer work!


This summer I’ve focused on two things: music production and guitar playing.

I’m working on several electronic songs that will be released one at a time, hopefully as soon as next week.

I’ve also really focused on guitar playing and getting back into my old practice routine. I’ve done some acoustic live sets too, which has been fantastic. Check out my YouTube channel for my latest uploads!

Academic work: Bachelor Thesis – Conclusion and Thoughts (Pt.15)

I will be posting segments of my bachelor thesis “Out of Touch – The Framework That Is Supposedly Killing Music”. The whole essay can be found at: Link to essay

The main discussion points in the thesis are authenticity, Auto-tune and the obsession with perfection in today’s music.

Conclusion

The development of music and technology can create a world of mixed feelings. On one hand, it creates new genres, new ways of production and unique opportunities that are not possible in an analog environment. The “surgical” control of the mix, sound and feel can be deceiving in many ways, but also extremely useful in many scenarios. On the other hand, the tools and control remove some of the artifacts and personality in the process, either through too many polishing effects or simply through the use of automated and digital tools instead of the “real thing”. The accessibility of these tools is an important factor in their broad use in modern music today.

The questions asked in the hypothesis proved difficult to answer definitively. Instead, new questions surfaced and a new level of understanding arose. The hope was not to give an exact answer, but to shed new light on the confusion and even anger towards using polishing effects and tools in music production.

Maybe we are losing the human touch in music, or at least we are if we judge new methods by old standards such as authenticity. But in 2016, when almost all music is made and played on computers, with a sound that would be described as unnatural and robotic, the framework surrounding authenticity must surely have evolved with the music. Perhaps not in writing, but in our heads. The academic framework surrounding authenticity feels dated and hard to apply to digital music. A song cannot simply be inauthentic because it was made purely on a computer, which, in theory, all new digital and enhanced music would be if we followed the old frameworks from academic work based on Moore and Frith.

Authenticity in music is not a solid thing stuck in time, but it can feel like it when put in a new context. As long as music is evolving, so is authenticity and the framework that surrounds it. The ways of viewing genuineness and authenticity are perhaps dated and stuck in time, which results in a naive and almost condescending attitude towards the digital and polished music that is flourishing today.

The technical control we have today is amazing. The fact that anybody with a computer and a digital audio workstation can produce, mix and master a song is something that could not have been foreseen. In the analysis in this essay I both applied and removed some of the factors that I believe are included in the human touch, to see if the performance was affected. Pitch correcting Dylan was a driving point, because surely that would remove the authenticity from the original song. The problem is that an already authentic performance proved hard to alter, even when an excessive amount of effects was used. It is almost as if the genuineness cuts through all the added filters and manipulations “like a knife”. The same goes for a song that has already been altered, like Believe by Cher. No matter what you do to the song, the “soul” somehow still remains, and perhaps that is where the “problem” lies.

Spectrograms and graphs created by pitch correcting software such as Waves Tune turn out to be excellent material for further analysis. When a track has been manipulated, it is fairly easy to spot using one of the graphs from one of the programs. The same goes for an older, non-manipulated track. The fluctuations and waveforms then look alive and varied, while the manipulated, digitally enhanced track generally looks more linear in comparison. The examples chosen for this essay were in a way extreme, but they still make a strong point about using pictures to analyze music instead of only listening to the track.
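To make that comparison concrete, here is a minimal sketch (my own, not part of the thesis) of how a pitch curve similar to the Waves Tune graphs could be extracted and plotted with the librosa library; the file name “vocal.wav” is a hypothetical isolated vocal track.

```python
import librosa
import matplotlib.pyplot as plt

# Load a hypothetical isolated vocal track (stand-in for the thesis examples).
y, sr = librosa.load("vocal.wav", sr=None, mono=True)

# Estimate the fundamental frequency frame by frame with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# A lively, fluctuating curve suggests a natural take; a flat, step-like
# curve suggests heavy pitch correction.
times = librosa.times_like(f0, sr=sr)
plt.plot(times, f0)
plt.xlabel("Time (s)")
plt.ylabel("Estimated pitch (Hz)")
plt.show()
```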

No one can really say whether the human touch is desired in today’s music. If it is “accidentally” removed by using modern technology, one could simply avoid modern technology altogether and end up with an authentic result. The previously mentioned Jack White did just that, in a way removing any confusion regarding authenticity and “cheats” such as pitch correction. But is forcing an authentic product really authentic? Honoring traditions is a huge part of authenticity, and playing from the heart and soul is another. To actively think about authenticity as a main factor of your work could seem forced and fake. If one wishes for an authentic performance, just playing what you want without worrying about frameworks or other people would seem like the most authentic thing to do, no matter whether effects are used.

Maybe it is not perfection we, the listeners, are looking for, but simply what music has become. The most perfect and polished might not actually be the best. Sometimes it is the uniqueness that is the most attractive part of a piece of music. You might want to hear the person doing their thing, the thing that makes them unique, instead of a perfected, polished product.

Academic work: Bachelor Thesis – Analyzing Pitch 4: Britney Spears (Pt.14)

I will be posting segments of my bachelor thesis “Out of Touch – The Framework That Is Supposedly Killing Music”. The whole essay can be found at: Link to essay

The main discussion points in the thesis are authenticity, Auto-tune and the obsession with perfection in today’s music.

Analyzing pitch: Example 3 Britney

 

The third and final example is an isolated vocal track from Britney Spears. Britney has always been accused of using Auto-tune, especially since she uses a lot of playback because of her on-stage dance routines (Lansky 2014). At first glance it might be hard to see signs of Auto-tune, especially when she is singing in her preferred range. Britney has a slightly limited voice and a fairly low preferred pitch. Her vocal type is Soubrette (3 octaves, 2 notes) and her range is B2 – G5 – Eb6 (F#6) (Criticofmusic 2012). This tells us that she should have problems with higher notes and that there may be pitch correction on those particular sections.

Figure 2.9 Britney Spears ”Circus” 0:16-0:17 in Waves Tune

As seen in figure 2.9, she is always in pitch, but that does not mean she is using Auto-tune. The graph is a bit twitchy and non-linear, which would indicate an authentic performance. Her preferred notes most likely carry no noticeable Auto-tune. Her higher notes are more debatable. The fluctuation is very small and the pitch is very close to the line. There is very little variation on her longer, higher notes as well (figure 2.10), which would indicate some sort of manipulation.

Figure 2.10 Britney Spears ”Circus” 0:32-0:34 in Waves Tune
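To put a rough number on how small that fluctuation is, here is a small sketch (my own, not from the thesis) that measures how far a pitch curve strays from the nearest equal-tempered note, in cents, for two hypothetical snippets of an f0 track such as the one extracted earlier.

```python
import numpy as np

def cents_from_nearest_note(f0_hz):
    """Deviation in cents from the closest equal-tempered note (A4 = 440 Hz)."""
    semitones = 12 * np.log2(np.asarray(f0_hz, dtype=float) / 440.0)
    return 100 * (semitones - np.round(semitones))

# Hypothetical pitch frames: a "twitchy" phrase in the comfortable register
# versus a long, suspiciously still high note.
low_phrase = [246.0, 248.5, 244.8, 249.1, 245.6]   # around B3
high_note  = [784.1, 784.0, 783.9, 784.1, 784.0]   # around G5

for name, f0 in [("low phrase", low_phrase), ("high note", high_note)]:
    spread = cents_from_nearest_note(f0).std()
    print(f"{name}: spread = {spread:.1f} cents")   # a spread near zero hints at correction
```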

In some cases the vibrato looks natural, but polished by automatic pitch correction. A natural vibrato would have less consistency and stray from the center line in different patterns. This is most likely achieved with a mix of digital vibrato and her actual vibrato.

The sound is highly polished and slightly digitalized. The harmony sections are clearly manipulated.

Figure 2.11 Same section as 2.10 but with added vibrato and fluctuation

 

Sources:

Everest, F. Alton & Pohlmann, Ken C. (2009). Master handbook of acoustics. 5. ed. New York: McGraw-Hill

Takesue, Sumy (2014). Music fundamentals: a balanced approach. Second edition. London: Taylor & Francis Ltd

Ternhag, Gunnar (2009). Vad är det jag hör: analys av musikinspelningar. Göteborg: Ejeby

Britney Spears – Circus

Punk/pop gig with female singer 2016-05-27

A couple of pictures from a gig I worked at this weekend: a pop/punk band with a female singer, where the main challenge was to isolate the vocals and really make them cut through the distorted guitars and heavy drums.

This was accomplished with proper microphone placement, lining the bass and using a shelving EQ, while at the same time boosting the vocals at around 800 Hz and 2000 Hz. A very fun and rewarding experience!

Source for EQ picture: https://www.youtube.com/watch?v=6yZrCZ95kmw
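For anyone curious what that kind of vocal boost looks like in practice, here is a rough sketch (not the actual console settings from the gig) of a standard peaking-EQ biquad, based on the well-known RBJ audio EQ cookbook formulas, boosting a hypothetical vocal recording around 800 Hz and 2 kHz.

```python
import numpy as np
import soundfile as sf              # assumed available for WAV read/write
from scipy.signal import lfilter

def peaking_eq(x, fs, freq, gain_db, q=1.0):
    """Apply one peaking-EQ band (RBJ cookbook biquad) to signal x."""
    a_gain = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x, axis=0)

# Hypothetical file names and gain values, purely for illustration.
vocal, fs = sf.read("vocal_take.wav")
vocal = peaking_eq(vocal, fs, 800, 3.0)    # body/presence boost around 800 Hz
vocal = peaking_eq(vocal, fs, 2000, 3.0)   # clarity boost around 2 kHz
sf.write("vocal_eq.wav", vocal, fs)
```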

Academic work: Bachelor Thesis – Analyzing Pitch 3: Cher (Pt.13)

I will be posting segments of my bachelor thesis “Out of Touch – The Framework That Is Supposedly Killing Music”. The whole essay can be found at: Link to essay

The main discussion points in the thesis are authenticity, Auto-tune and the obsession with perfection in today’s music.

Analyzing pitch: Example 2 Cher

The second example is an isolated vocal track from Cher. Compared to the Dylan example, it could almost be mistaken for a MIDI instrument, given the lack of fluctuation and how linear it looks (figure 2.7). It is blatantly obvious that this has been manipulated using one or more pitch-correcting programs. This song was the first official release to use Antares’ Auto-tune software. Interestingly enough, it was not used as its creators intended. Instead it was used as an effect, or almost as an instrument.

Looking at the curve, it is almost perfect and sits very much exactly on the line. When the note changes, the line is almost straight. On certain notes the curve resembles a sine wave, meaning it has almost surely been tampered with and enhanced. This particular section is where the Auto-tune settings are maxed out. This was most likely intentional, and during the chorus of the song it is not as obvious that any manipulation software is being used.

Figure 2.7 Cher ”Believe” 0:13-0:17 in Waves Tune

The curve shows that there is little to no vibrato. This means it was either sung with zero vibrato or that the vibrato has been manipulated with software. It is also extremely linear, meaning the vibrato curve was most likely redone. This statement is based on the “sine curve sections”, where it sways almost perfectly. The effect could also have been achieved using Auto-tune.

The vocals sound robotic, which is most likely a result of software manipulation. This was achieved either with a vocoder or with excessive amounts of Auto-tune or similar software. Since it is known that Cher was using Auto-tune, this is a perfect example of the many different areas in which Auto-tune can be used.

When trying to automatically add vibrato, fluctuations and variation with the program, it simply cannot be done. To attempt to give it the “human touch”, a curve had to be drawn by hand. An example of what this would look like can be seen in figure 2.8. This does not sound natural, and it is easy to tell that it is not an authentic performance.

Figure 2.8 Same section as 2.7 but with added vibrato and fluctuation
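As a toy illustration of what “drawing in” a vibrato amounts to (my own sketch, not the procedure used for figure 2.8), the snippet below takes a perfectly flat, hard-tuned pitch curve and superimposes a sine-shaped vibrato plus a little random drift, measured in cents.

```python
import numpy as np

frame_rate = 100                                 # pitch frames per second
t = np.arange(0, 2.0, 1 / frame_rate)            # a two-second held note
flat_cents = np.zeros_like(t)                     # hard-tuned: 0 cents off the target note

rng = np.random.default_rng(0)
vibrato = 30 * np.sin(2 * np.pi * 5.5 * t)        # ~5.5 Hz vibrato, +/- 30 cents
drift = np.cumsum(rng.normal(0, 0.4, t.size))     # slow random wander, in cents
humanized_cents = flat_cents + vibrato + drift

# The pure sine term is exactly the kind of "sine curve section" that gives the
# manipulation away: it is far too regular compared with a sung vibrato.
```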

Sources:

Everest, F. Alton & Pohlmann, Ken C. (2009). Master handbook of acoustics. 5. ed. New York: McGraw-Hill

Takesue, Sumy (2014). Music fundamentals: a balanced approach. Second edition. London: Taylor & Francis Ltd

Ternhag, Gunnar (2009). Vad är det jag hör: analys av musikinspelningar. Göteborg: Ejeby

Cher – Believe

Lesson 1: String Skipping in different modes

In the video I’m showing two of my absolute favorite and most-used licks!

The first one is a string skipping variation that is mostly in mixolydian mode. The guitar is tuned to Eb and the lick is played from a D major position.

The second one has a strong lydian feel that borders on harmonic minor in the slower sections. The lick is played from a C major position.

The tapped notes vary, but they sound good at the 14th, 15th and 17th frets on the high E string.

Here’s a tab explaining the basics of the licks. Learn the basic string skipping pattern first, then add the lydian and mixolydian variations. Use your imagination to come up with more variations 🙂

Link to video:

Mixolick (tab image)

Audio Engineering – Far Far Away 7-8/5 2016

I had the honor of working with the wonderful people over at Nöjesteatern in Malmö. I handled all the audio and used a relatively simple setup. The digital mixer, a Yamaha M7CL, had 4 wireless mics with backups, two stage mics and an acoustic guitar. I ran all the HDD playback and music through a Microsoft Surface that I synced up with the script of the show. The monitor mix was basically the same, except for some of the harder numbers.

Very fun! Here are a couple of pictures from the show:

Academic work: Bachelor Thesis – Analyzing Pitch: Bob Dylan (Pt.12)

I will be posting segments of my bachelor thesis “Out of Touch – The Framework That Is Supposedly Killing Music”. The whole essay can be found at: Link to essay

The main discussion points in the thesis are authenticity, Auto-tune and the obsession with perfection in today’s music.

Analyzing pitch: Example 1 Bob Dylan

The first example is an isolated vocal track from Bob Dylan. Looking at the curve in figure 2.5 and how it fluctuates, it can be assumed that it is either a natural curve or a skillfully manipulated vibrato added as a post-processing effect. In an obvious case like this, the conclusion is easy to draw, and there are many different ways of establishing the authenticity of the performance.

Looking at the pitch, it is clear that the performance is never in perfect pitch or exactly on a specific note. On the normal, comfortable notes it is slightly above, and on the higher notes it is slightly below. This is completely normal, and it also tells us that the pitch has most likely not been tampered with.

Figure 2.5 Bob Dylan ”Going Going Home” 2:09-2:13 in Waves Tune
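A quick way to check that “slightly above, slightly below” observation numerically (my own sketch, with made-up pitch values) is to look at the signed deviation in cents from the nearest equal-tempered note for each phrase: positive means sharp, negative means flat.

```python
import numpy as np

def signed_cents(f0_hz):
    """Signed deviation in cents from the nearest equal-tempered note (A4 = 440 Hz)."""
    semitones = 12 * np.log2(np.asarray(f0_hz, dtype=float) / 440.0)
    return 100 * (semitones - np.round(semitones))

comfortable_phrase = [221.5, 222.0, 223.1, 221.8]   # around A3, sitting a touch sharp
high_phrase        = [435.0, 436.2, 434.5, 435.8]   # around A4, sitting a touch flat

print(round(np.mean(signed_cents(comfortable_phrase)), 1))   # small positive number
print(round(np.mean(signed_cents(high_phrase)), 1))          # small negative number
```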

The vibrato varies from note to note, which means the performer is most likely using a natural, non-digitalized vibrato. It sways around the center of the note, whether that center is slightly off pitch or on pitch. In an Auto-tune scenario it would only sway around the “on-pitch” position. The vibrato is not fixed and has a different depth and fluctuation throughout the analyzed section.

When listening to the track, there are no signs of anything digitalized throughout the analyzed section. In extreme cases it is possible to hear immediately whether Auto-tune has been used.

As seen in figure 2.6, the vibrato and the fluctuations are completely removed by the pitch correction. The old curve can still be seen behind the new curve as a comparison. After the correction the sound is completely digitalized and “sterile”, but it is still possible to hear that it is Dylan. His voice is recognizable through the correction, and the artifacts that make Dylan “Dylan” are still present even though the pitch is now completely on the note.

Figure 2.6 Same section as 2.5 with excessive amounts of pitch correction
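As a crude stand-in for what an excessive correction setting does (this is not Waves Tune’s actual algorithm, just my own sketch), the snippet below snaps every frame of an f0 track to the nearest equal-tempered note, which flattens the vibrato and fluctuations in exactly the way figure 2.6 illustrates.

```python
import numpy as np

def hard_tune(f0_hz):
    """Snap every pitch frame to the nearest equal-tempered note (A4 = 440 Hz)."""
    semitones = 12 * np.log2(np.asarray(f0_hz, dtype=float) / 440.0)
    return 440.0 * 2 ** (np.round(semitones) / 12)

natural = [218.0, 221.5, 224.0, 219.7, 222.3]   # a wavering phrase around A3
print(hard_tune(natural))                        # every frame becomes exactly 220 Hz
```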

Sources:

Everest, F. Alton & Pohlmann, Ken C. (2009). Master handbook of acoustics. 5. ed. New York: McGraw-Hill

Takesue, Sumy (2014). Music fundamentals: a balanced approach. Second edition. London: Taylor & Francis Ltd

Ternhag, Gunnar (2009). Vad är det jag hör: analys av musikinspelningar. Göteborg: Ejeby

Academic work: Bachelor Thesis – Analyzing Pitch 2 (Pt.11)

I will be posting segments of my bachelor thesis “Out of Touch – The Framework That Is Supposedly Killing Music”. The whole essay can be found at: Link to essay

The main discussion points in the thesis are authenticity, Auto-tune and the obsession with perfection in today’s music.

Methods for pitch analysis

To illustrate what can be achieved with these three programs, I have chosen three examples of music with isolated vocal tracks to analyze. To help establish whether or not a song uses Auto-tune or pitch correction, I will be using three main focus points. The three most famous pitch correction programs offer much more than just automatic tuning. They also have tools for drawing graphs, time stretching and painting melodies for further analysis. As mentioned before, pitch is a combination of different factors in sound, and these programs help identify the pitch and give the user the ability to manipulate it freely. This method is based partly on scientific material from “Master Handbook of Acoustics” (Everest and Pohlmann 2009), basic music theory found in “Music Fundamentals” (Takesue 2014) and the book “Vad är det jag hör” (Ternhag 2009).

The software used in the analysis is Waves Tune, mainly because it gives a solid and easy-to-read representation of the graphs and curves. Both Auto-tune and Melodyne provide similar curves and graphs, but with a more cluttered and harder-to-read result. The main points used in the analysis are the following:

  • Pitch (Takesue 2014)
    • How close to the desired note is the achieved pitch?

According to their manuals, this is what the software is mainly used for: making sure the singer or instrument is placed correctly in pitch.

  • Vibrato (Everest and Pohlmann 2009)
    • Is the vibrato natural? Does it sway only from the middle? Is it fixed, meaning that there is no variation?

The vibrato can be both created and erased using pitch correction software. The amount of “humanity” can be regulated from inside the software, giving the curves a more non-linear shape.

  • Sound (Ternhag 2009)
    • Is the sound natural, or is it obvious that it is manipulated?

The use of pitch correction software tends to give the sound and voice an unnatural feel. This is especially easy to recognize if an excessive amount is used. If pitch correction is only used on certain parts, this can be hard to observe and establish.
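As a very rough sketch of how the first two focus points could be turned into numbers (my own illustration, under the assumption that an f0 track in Hz is available, for example from a pitch tracker), the function below reports how close a take sits to the note and how regular its vibrato is; the third point, sound, still has to be judged by ear.

```python
import numpy as np

def pitch_report(f0_hz, frame_rate=100):
    """Rough indicators for the pitch and vibrato focus points of the analysis."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    semitones = 12 * np.log2(f0_hz / 440.0)
    cents_off = 100 * (semitones - np.round(semitones))   # focus point 1: closeness to the note

    wobble = cents_off - np.mean(cents_off)                # focus point 2: vibrato behaviour
    crossings = np.sum(np.diff(np.sign(wobble)) != 0)      # count swings around the centre
    duration = f0_hz.size / frame_rate

    return {
        "mean_abs_cents": float(np.mean(np.abs(cents_off))),    # near zero: suspiciously perfect
        "cents_spread": float(np.std(cents_off)),               # near zero: fixed, no variation
        "approx_vibrato_rate_hz": crossings / (2 * duration),   # an unnaturally steady rate suggests a re-drawn vibrato
    }
```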

 

Sources:

Everest, F. Alton & Pohlmann, Ken C. (2009). Master handbook of acoustics. 5. ed. New York: McGraw-Hill

Takesue, Sumy (2014). Music fundamentals: a balanced approach. Second edition. London: Taylor & Francis Ltd

Ternhag, Gunnar (2009). Vad är det jag hör: analys av musikinspelningar. Göteborg: Ejeby