Thoughts on Sight Reading

https://www.youtube.com/watch?v=aASBNbeREEY

I just watched this video and was inspired! Margaret Fabrizio talks about how there is a difference between “decoding” and “reading”. She acted out what it was like to be a first grade student trying to read A Tale of Two Cities. “I..I..It..w..wa..was..the..be, be, best of tims, tim-es, times”. This brought back memories of what it was like when I took organ lessons in junior high and high school. The organ came with a box of graded sheet music that started with Twinkle, Twinkle Little Star and continued through folk songs and popular music. After introducing left-hand chords in a standard position and the pedals for the bass notes, the series went on to print the chord symbols over the melody staff. I stopped reading the left hand and depended on the chord symbols. I still suffered through decoding the right hand, but after stumbling through the melody several times I would have it memorized and after that could play it by ear. I can still remember bumbling my way through some of these old songs and suddenly my parents would figure out what I was trying to play and would start singing. One time it was “Casey would waltz with a strawberry blonde”…

…except that my dad changed the lyrics to “Casey was hit with a bucket of sh–” (Mom interrupts “Larry! Larry!!”)…”and the band played on”.

When I reached high school my mom found a new organ teacher, a young woman just out of college. She was very theoretical and taught me to play from fake books — how to do blocked chords and a walking bassline. She also finally figured out that I couldn’t really read! We agreed that I would also work on some very simple music along with arranging tunes from the fake books. I made some progress and have many good memories of working with her. She invited me to Wanamaker’s department store to see the famous organ there (the young man who was her boyfriend at the time was the organist, and he chatted with her while the music roared away, both hands and feet flying). She also invited me to her house. One whole room of the house was built around a pipe organ. She let me try it out. While I was playing, a key got stuck, so we went into the back room where the pipes and gadgetry were. It was a small Vox Humana pipe going “neeeeeeee”.

I stopped taking lessons my senior year. I wish I had kept going! But I was wrapped up in playing Cat Stevens on the guitar, and also taking calculus and physics. I was also in the choir. Choir was amazing (we learned the Fauré Requiem and Bernstein’s Chichester Psalms) but again that was by memorization.

I did not take part in any form of music during the intervening years. I didn’t get back to music until I had turned 30 and was an undergraduate (in entomology!). My husband (a graduate student at the time) had the urge to have something piano-like in the house and we got our first synthesizer. I was frustrated with my inability to play from sheet music, and my husband, a computer guy, suggested that there was probably software that would help. We did not find software that helped with sight reading, but we did find software that would let you play music into the computer and then edit it, the way you could edit an essay in Word. The notes took the form of little rectangles of varying lengths, arranged with high notes at the top and low notes at the bottom — like the roll from a player piano. This is called sequencing software. With the help of the sequencing software I found that I could compose by ear. This was a life-changing discovery. When I was supposed to be working on my senior year classes I also created a little album of music (recording it on cassette tapes), drew album art and gave copies to all my friends.

When we moved to Maryland I joined a church and have been a member of the choir there for about 15 years. Our choir director is the sort of person who encourages you to stretch out of your comfort zone. She makes this possible by creating a supportive environment — building a sense of trust among the choir members. It’s OK to make mistakes. I have often had the experience of being the only alto at rehearsal and having to sight read new music. And it’s totally OK.

Now I am in my 60s and I have an opportunity to devote a lot of time to music. As someone who’s been musically…illiterate? for many years, I’m excited to see what progress I can make!

I would be thrilled to someday be able to play something like this!

Understanding the Fundamentals of Music lecture 1

Dr. Greenberg defines music as “sound in time” or “time ordered by sound” — basing this on an earlier definition that included the word “purposeful”. Interesting that Greenberg took “purposeful” out.

The first unit of the course will be about the timbre of different instruments.

He begins by talking about the major classifications of instruments. The first instrument, he said, is the human voice; he won’t go on to discuss it, except to say that other instruments aspire to have its flexibility and expressiveness.

As a potential composer, that made me think about how the timbre of different instruments might remind the listener of different kinds of human voices. Childlike, wheezy-old, raging, crooning, howling at the moon. What kind of person is speaking in this composition? Do they have “friends” with them? Or an argumentative crowd?

“Anthropomorphizing” the instrumentation.

Also in this lecture he talks about the bassoon and the contrabassoon; he asks “was there ever an instrument simply called the ‘oon’ ?” Unfortunately no, although at one time there was a tenoroon.

 

Robert Greenberg — Fundamentals

Back in 2010, my son started attending a private high school which was 45 minutes away. This meant that frequently I was in the car about 3 hrs a day, 5 days a week. I found a great way of taking advantage of the driving time: listening to lectures from The Teaching Company. My favorite instructor was Dr. Robert Greenberg and over the span of 4 years I acquired and listened to many of his courses.

Since I’ll be taking classes at Howard Community College this fall, I thought it might be good to return to my Robert Greenberg lectures; I can listen to them while doing the dishes, cooking, etc.

Today I started with Understanding the Fundamentals of Music. This is a relatively short course, with just 16 lectures (unlike his very long music history course, How to Listen to and Understand Great Music). The main things I remember from this course were: him telling a funny story about his wife (a piccolo player); making fun of oboists (the high pressure affects their brains?); examples of solos that each instrument might dream of (ex. the bassoon at the beginning of Stravinsky’s Rite of Spring?); songs with strange meters like “Take Five”; and a very thorough discussion of tonic and dominant chords. I think he actually goes into the Greek experiment with the string and the proportions (Pythagoras?). Good stuff. Lecture 1 today!

It did happen!

On July 24th I was waiting to see what happened when the No Man’s Sky universe was re-written. I had heard that the change would come at 9 am, so at 10 am I checked in. My beautiful home planet was still there! The graphics seemed more detailed, and there was more lag than usual, but that stormy golden world of Tempus Fugit continued to exist. I have to admit I was a tiny bit disappointed. How can you write an epic song cycle about THE LAST OF THE 300 WORLDS when they didn’t actually end?

On the other hand, I wasn’t quite sure “this was it”. After all, the universe rewrite was a new patch to the game, and those usually take longer to upload. I logged out of the game and started looking on the forums. There’s an active subreddit dedicated to No Man’s Sky. I found out that, no, the patch had not yet been released. I was on pins and needles and kept checking back. Finally, at about 2 pm, someone posted on Reddit that it was here. At 3 I logged on to Steam and clicked on “No Man’s Sky”. There was in fact an update, and it took about 45 minutes to download. I had promised myself that I would not get caught up playing the new version of the game (called “Next”), but I did want to see what the new world looked like.

I clicked “play” and watched the loading screen, which I have seen so many times before — stars coming into view in the distance, drawing near and streaming past. The star-stream ended with white fog, which cleared to reveal… I’m no longer on Tempus Fugit. I’m on a space station. My ship has changed — it’s full of obsolete technology — and it runs on different fuel now. The star system has the same name as before, but I didn’t dare fly down to explore the planets for fear of running out of “gas”. The space station was much more extensive, with all kinds of aliens walking around instead of the usual half a dozen guys sitting around a table playing cards. In fact the new, improved space station reminded me a little of the space station in Mass Effect called The Citadel.

Meanwhile, the game went on sale and I got copies for several of my young friends. I’m hoping they will get started exploring this new universe and that later I can tag along with them. “What do we use for fuel now? How do I build a base? How do we do multiplayer?” etc. They seem to pick these things up much more quickly than I do.

In conclusion

It did happen! The universe changed. I can write The Last of the 300 Worlds, rather than My Home Planet Has Better Graphics Now.

The Universe Will Be Destroyed July 24th

I have spent many hours in the procedurally generated universe of No Man’s Sky. I’ve spent so much time there it’s almost like having spent a month traveling around the US. Like seriously, I’ve logged more than 300 hrs. Kinda scary when you think about it. I’ve filled a terabyte of space on my disk drive with screen capture videos.

About 6 months ago there was a major update. When this happened there were many improvements. However, the update re-wrote the entire universe. My home planet was burned to a cinder; Dawnseas no longer has an ocean; Etienne Rouge is no longer red. Only a few of the planets that I had discovered, named and loved have remained as before. I’m embarrassed to admit I cried when I saw what had happened to my home world. It’s only pixels. But…I will never be able to go there again!

As of July 24, 2018 there will be a huge update. Rumor has it that there will be a form of multiplayer and even PvP. But I’m pretty sure that the universe will be rewritten from scratch. That means I have only a short while to record footage of my favorite planets.

It also means that when that universe is gone, it is gone. That chapter of the story will close. I will have a finite amount of video footage to draw from. This puts limitations on the project (a good thing) and also gives it an overarching emotional theme. Goodbye Dawnseas, Rosperigosa, Neochadwickia. Goodbye Naguxoisanorca.

https://www.nomanssky.com/2018/05/no-mans-sky-next-multiplayer-and-release-date/

 

Change direction

OK, so if I’m going to do an independent project totally on my own, I can take it in the exact direction I want rather than trying to meet someone else’s standards. Which is kind of a shame, because trying to meet standards & goals outside of my comfort zone could be a really good thing.

The main thing I’m interested in now is music for video games and for other virtual environments (ex. background music for a virtual tour of an architectural building still in the planning phase). One of the different things about this kind of composition is the use of “stems”. You submit not only the finished piece but also its building blocks (the bass line, the melody line, the weird sci-fi sound effects line) which can be used by the game designer at will — like toppings on a pizza. Here layer parts one, two and three; here just one, here pile them all on.

In game design this is important because the software can be programmed to react to what the player is doing and increase the intensity of the music by using more of the stems layered together. (“Player approaches monster, is attacked and retreats; cue ‘Run Like Hell’ theme”.)
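The layering idea above can be sketched in a few lines of Python. Everything here is invented for illustration (the stem names, the 0-to-1 “intensity” value driven by gameplay); it is not taken from any real audio engine, just a toy model of stacking more stems as the action heats up.

```python
# Toy sketch of "vertical layering" with stems: the game tracks an
# intensity value between 0.0 (calm) and 1.0 (full combat) and mixes
# in more stems as intensity rises. All names are made up.

STEMS = ["bass line", "melody line", "sci-fi effects"]

def active_stems(intensity, stems=STEMS):
    """Return the list of stems to layer for an intensity in [0, 1]."""
    count = max(1, round(intensity * len(stems)))  # always play at least one
    return stems[:count]

# Quiet exploration: just the bass line.
print(active_stems(0.2))   # ['bass line']
# Player attacked, "Run Like Hell" moment: pile them all on.
print(active_stems(1.0))   # ['bass line', 'melody line', 'sci-fi effects']
```

A real engine (Wwise, FMOD, or a custom one) would crossfade these layers smoothly rather than switching them on and off, but the decision logic is essentially this.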

I have been working on my own, posting videos on Youtube. Eight of the videos take place in the planet exploration game of “No Man’s Sky”. It would be a great project to write some additional music to capture the mood of these planetary environments, and also take the compositions I’ve already written and work with them; break them down into stems, put them in official music notation. This will be a lot of work because when I mixed the songs last year I blended all the parts — I don’t have the individual stems anymore! I’ll have to re-create them. Also, I dread working with Finale. I have done it but believe me I would much rather eat kale. Raw kale. Tough raw kale. Kale doesn’t make me beat my head on the desk.

Another project I’ve started (but haven’t posted any examples of yet) is a set of videos that take place in the medieval fantasy world of “Dark Souls”. For these I would like to use music from the repertoire but arranged for synth (by synth I mean any electronic reproduction of instrument sounds, including the realistic samples used in Garritan Instant Orchestra). One song is based on a MIDI file of Machaut’s “Rose, Liz, Printemps, Verdure”. I used a different instrument for each voice part and used dynamics to bring one instrument to the foreground, then take it back and bring another forward. This comes under “arrangement” rather than “composition” but it’s also something I want to learn about.

So here we go.

  1. Take notes on Emily Reese’s “Top Score” series. React to them.
  2. COMPOSE more music — for the sci-fi world of No Man’s Sky
  3. Take already composed music and make stems and scores
  4. ARRANGE music for the fantasy world of Dark Souls
  5. Read Winifred Phillips’s book A Composer’s Guide to Game Music

My “No Man’s Sky” playlist

Mixed feelings

I’m beginning to have mixed feelings about taking “composition” for credit. If I understand right, Dr. Composition hasn’t been in charge of an independent study student before, and doesn’t know if he has time for me in his schedule. I don’t want to put him on the spot. Better to work on my own, and meanwhile scope out the environment this fall semester, see if there’s any faculty who are willing / able to help me.

The areas where I will need help are:

1) someone who will note that I did, indeed, submit X amount of work this week;

2) someone who can help me write things out in standard notation. I get so far on my own and then get stuck.

3) someone who can critique the compositions (“Hey, you might want to think twice about putting parallel 5ths in the bass line”) — that would be great, too.

 

By Donovan Govan. – Image taken by me using a Canon PowerShot G3 (reference 7877)., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=118178


Thinking about Lisbeth Scott’s interview

One of my goals for this fall is to go through Emily Reese’s series of interviews with composers, and take notes on each episode. The other night I listened again to one of my favorites, her interview with Lisbeth Scott. Lisbeth started out as a classical pianist but changed direction, branching out into singing and composing. Here she is as vocalist in the soundtrack for the game “Journey”.

https://www.youtube.com/watch?v=aBXokDs6WyE

In the interview she talks about how this music came about. The composer, Austin Wintory, had specific ideas about what he wanted the score to be like, but he had known Lisbeth from previous work and wanted to use her particular voice as an instrument. Part of the composing process included Wintory bringing instrumental tracks to the studio and having Lisbeth improvise vocalizations over them. [Check — was she improvising or trying out what he suggested?] As her career continued, Lisbeth began to focus more on her own compositions. In the interview Emily Reese asked her “How did the transition happen?” Lisbeth replied “Did you ever get the feeling that you’re going to throw up and there’s nothing you can do to stop it?” They both laughed uproariously for a few minutes (one of my favorite things about this interview is the great chemistry between Emily and Lisbeth) and Lisbeth went on to say that there is something there that just has to come out.

When I think of my own composing, it’s not because there’s something inside that needs to come out. It’s more of an urge to build. One of my favorite parts of keeping an aviary is designing different kinds of environments and enclosures for the birds. Species A and Species B both want a lot of room to fly, but you can’t put them together because B would bite A’s toes. Species C gets along with A, but A would steal C’s food. How to arrange the compartments? One section I invented is a miniature greenhouse jerry-rigged from 2 x 2s and shed kit brackets. It’s like building with Legos, but with added constraints. You don’t usually have to worry about Legos biting each other’s toes.

I also enjoy creating worlds in Minecraft. I built this cube-planet in honor of a friend’s birthday. The overall design was made using software called “WorldPainter”, and repetitive structures were built with “MCEdit”, but the rest of the work was built block by block.

Unlike Lisbeth Scott, with her overwhelming urge to express something from deep within, most of my composing is about …building.  I want to build a sound environment that feels…edgy and dangerous? austere but beautiful? cheerful? awe-inspiring?

I want to build worlds. Places, moods you can travel to.

I’m picturing my upcoming music classes as a visit to the hardware store. Look! A mitre saw! A tile cutter! Stacks and stacks of 2 x 4s!

 

By Per Erik Strandberg sv:User:PER9000 – Own work, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=830530


Sound in time

Dr. Robert Greenberg says (quoting another author) that the definition of music is “sound in time”. According to that definition, music would include what we normally think of as music, both instrumental and vocal, but also speech, bird song, aleatoric sound environments, traffic noises, heartbeat and breathing, and sounds created by software (artificial intelligence).

Does John Cage’s 4’33” fit the definition? It has time but not sound — unless you count the expectation of sound, or the environmental sounds that happen during the piano’s silence.

I think there is a gradation of purposeful creation, of intent, in these examples. A possible name for this might be “intentional music”. Artificial Intelligence algorithms generate patterned sounds. Are they intentional? Partly — the software was originally created by a human. In a sense the software is a tool or even an electronic instrument. I have a Korg Karma keyboard which has a “Karma Engine”, basically a Band-in-a-Box with some intelligent randomness built in. Someone designed this as an instrument, which (with practice and exploration) I can learn to use. As I choose the arpeggiations and sound patches more insightfully, my music becomes more intentional and meaningful.

Deadmau5 in his Masterclass talks about creating sounds with his analogue synthesizers. He changes the timbre, attack and decay of the notes. Some of his music seems random or repetitive, but the sound quality itself was created purposefully.

Aleatoric sound environments can be created intentionally and may be used to create a mood.

Traffic noises are statistical variations of sound in time. If you took the variations in the amount of sound and then sped them up, so that one day’s rhythms took up only a few seconds, you would wind up with something that sounded more music-like. In this way, selecting the information and speeding it up / slowing it down / amplifying it adds intention.
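Out of curiosity, that thought experiment can be sketched in Python: take a day’s worth of hourly loudness measurements and map each one to a fraction of a second of tone whose volume tracks the measurement, so a whole day plays back in a few seconds. All the numbers here (the loudness values, the sample rate, the 220 Hz tone) are invented for illustration.

```python
# Toy sketch: compress a day of traffic loudness into a few seconds of sound.
# The 24 hourly loudness values (0 = silent, 1 = loudest) are invented:
# quiet overnight, morning and evening rush-hour peaks.
import math

hourly_loudness = [0.1, 0.1, 0.1, 0.2, 0.4, 0.7, 0.9, 1.0, 0.8, 0.6,
                   0.6, 0.7, 0.7, 0.6, 0.6, 0.7, 0.9, 1.0, 0.8, 0.5,
                   0.4, 0.3, 0.2, 0.1]

def compress_to_samples(envelope, sample_rate=8000,
                        seconds_per_step=0.25, freq=220.0):
    """Map each measurement to a short tone whose amplitude tracks it."""
    samples = []
    n = int(sample_rate * seconds_per_step)  # samples per measurement
    for level in envelope:
        for i in range(n):
            t = i / sample_rate
            samples.append(level * math.sin(2 * math.pi * freq * t))
    return samples

audio = compress_to_samples(hourly_loudness)
# 24 steps x 0.25 s each = 6 seconds of audio at 8 kHz.
print(len(audio))  # 48000
```

The rush-hour peaks become swells in the tone; the quiet hours become near-silence. Writing the samples to a WAV file (e.g. with the standard `wave` module) would let you actually hear the day go by.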

Breathing, heartbeat, and footsteps seem like random sounds but they are influenced by the state of the body making them. Because they are so connected to emotions they can be used purposefully to create moods.

And, my favorite example of all, bird song. My family has been keeping an aviary of birds and raising chicks by hand for many years. Is bird song purposeful? Some bird vocalizations are automatic and instinctive. If someone jumped out at you in a dark room you would probably make a sound automatically. In the same way, if you startle a bird it will alarm call, and if you grab it suddenly it will “scream”. These sounds seem automatic. Bird song, however, seems intentional. Most young birds go through a stage of “babbling” during which their song is unformed and random. Then gradually over several weeks they will “decide” on what their song will be. Some birds (ex. song sparrows) develop a repertoire of several songs. Other species (such as mockingbirds, catbirds and the brown thrasher) mimic other birds and arrange the “quoted” songs in their own way. And finally, some birds (ex. starlings, budgies) take mimicked bits and warp them, transform them. I need to research this more, but I remember that there is evolutionary pressure toward more complex songs, because female birds give preference to males with varied songs.

Sound in time…

So — according to Dr. Robert Greenberg, composing music is simply making sounds in time. However, the way I’m thinking of it, there is a gradation of intentionality, of purpose. Am I just making sounds with my voice or my instrument to see what sounds it can make? Noodling, improvising? Improvising in the context of a group performance? Arranging bits very carefully in hope of expressing emotion?

What I would like to do is learn more about music theory and about composition in general so that my “sounds in time” become more intentional and expressive.

But for now, as a composer, it’s good to just make lots of sounds and pay attention to them.

——————–

Here is the Korg Karma featuring its intelligent arpeggiator, the “Karma Engine”

 

The mockingbird, a mimic and arranger

 

A starling, mimic and sound-warper

Whitacre, Earworm, AI, Rugnetta

An interview with Eric Whitacre. I found this interview very moving.

 

The “Earworm” series of videos by Vox is interesting and entertaining and thoroughly geeky. Here’s the most recent episode.

 

An article about music composed using “Artificial Intelligence” (AI)

https://www.wired.com/story/music-written-by-artificial-intelligence/

 

And — I had lost touch with Mike Rugnetta’s work several years ago. I’m excited to discover that he has continued his podcast series, called “Reasonably Sound”. One of the old episodes, called “The Drop”, turned me on to EDM and Paul van Dyk. Now I’m listening to Deadmau5, so that podcast made a big impression!

http://reasonablysound.com/