Archive for the ‘studio’ Category

Voidstar Festival + Future Endeavors

So, the Voidstar 25th Anniversary Festival has come and gone. I’m mostly happy with how both my ukuphambana set and the Zero Times Infinity performance turned out (though the latter was cut a little short). It was a fantastic time and I’m very happy to have met some new friends and enjoyed the work of my fellow festival artists. Wish I could get it together to do more of this kind of thing.

As far as what’s coming next: I’d like to create studio versions / mixdowns of the tracks I created for the festival set, and I’ve also been doing some rapid development of older ukuphambana sketches into more fleshed-out (but still not quite finished) works in progress. I expect one or both of these efforts to be well underway, if not done, by the end of the year. I might try to shop some more albums around to other distributors / labels. And I’ve just started work on a remix project that I’m pretty excited about.

The Rhythm of Work

I wrote back in January that this was shaping up to be a very productive year. I had finished and posted four new ukuphambana tracks by early March — six if you include the noise bits I created in the process of practice jamming for the Zero Times Infinity show. Now here it is, nearly June, and … nothing. What’s going on?

One factor was the ZTI show itself: it takes time to work out what kind of setup to use for a show, quite a bit of time to develop and practice techniques for live performance, and yet a bit more time to set the studio back up again afterwards. Making it worse, I allowed myself to fall prey to the urge to reorganize the studio before putting everything back together again. This is a terribly seductive form of procrastination, because although it generally results in some long-term workflow improvements, it’s the death of productivity while it’s happening. There are rewards to performing music live, but there are also good reasons I don’t do it very often, and they all boil down to the opportunity cost in lost production time being too heavy.

Next, there’s my annual technology investment. In early spring, when my tax refund comes in, I generally buy some bit of new gear, and it takes some time for me to get my head around how to use it and how it’s going to fit into the rest of my studio arsenal. This is an embarrassingly self-indulgent thing to complain about, but it does have an impact (which I conveniently forget every year when the prospect of acquiring new toys beckons).

Then there’s the annual shift in my daily schedule. Over the cold winter months, I’m a night owl, working on music after the children are in bed, then sleeping as late as I can get away with the following morning, but in summer it’s easier for me to get up early before the rest of the family to hit the studio and turn in before sunset every night. Either one works pretty well, actually. It depends on when I find it easier to muster the energy and when I’m least likely to be disrupted by family concerns, and these things vary with the seasons. But during the transitional times between one schedule and the other, it’s hard to make time for music.

Though I find it frustrating to admit that I have slow periods, I guess it’s good that I’ve been at this long enough that I can see them coming and have at least some idea when they are likely to occur and why. It gives me at least a toehold into improving the situation, and helps me keep the whole thing in perspective.

Oh, one more reason for the delay, and this is the best bit: I’m working on one hell of a new track. It’s weird and sprawling, and it’s taking me some time to tame it. It’ll be well worth the wait once it’s finally done. At the current rate of progress, I give it another week or two. Stay tuned!

Recording: Yellow Jacket

Summertime insects are out, and with their advent I have my first field recording of the season. This weekend, a yellow jacket got trapped on the inside of my basement studio window. I grabbed my Zoom H2, and had a chat with my little friend before opening up the window and shooing it out. I’ve now posted the recording up on Freesound.

Buzz buzz buzz…

Enjoy!

Sculpting Versus Bricklaying

Thinking back on the last few tracks I’ve written, I noticed something that I thought I’d share.

Some tracks clearly come together from the bottom up: first the sounds, then beats, then sections, then overall structure. It’s like building a wall, one brick at a time. You start with nothing, then add elements one piece at a time until it’s done. Free As a Neutrino… and Leisured Forfeit were both done this way, for example. It may be a hasty generalization, but I think most of my more listener-friendly tracks use this technique.

Other tracks start out with broad washes of sound or noise, and the structure comes from carving away layers where they’re not needed, creating space and motion. In The Future We Shall Know Less was done this way. My noisier tracks, the ones that are a bit heavier on texture and lighter on structure, tend to come about like this.

Then there are some that combine the two approaches. The way these work is a bit like hand-sculpting chunks of masonry, mortaring them together, hacking away at the results, and repeating the process over and over until a shape gradually emerges. The low-level details and the wide-scale structure emerge together, each out of the other. These are without a doubt the hardest tracks to finish, because I spend a lot of time while I’m working on them lost as to exactly what I’m doing and where it’s going to wind up.

All the same, I find them the most satisfying to complete, and the most mysterious: these are the tracks I find myself coming back to years later, scratching my head and saying, “I did this, really?” Bleem is one of these. I had no idea what I was doing through most of it, and it still puzzles me how it ever got done.

I might make a conscious effort to combine these approaches more in future work, if it doesn’t slow me down too much. In any case, I think it’ll be helpful to have identified a new axis along which I can place things, another degree of freedom in deciding how I can work and what directions I can take.

Integrationist Musings

I’m an integrationist by nature. One of my creative strengths is in putting elements together in ways that other people might not have considered. This is true of tools and techniques as well as sounds: I get a kick out of making different bits of technology work together, combining their strengths. My latest track, Within Epsilon of Bleem, is eclectic even by my standards, so I thought it might interest people to know how it was assembled.

During the making of Bleem, my studio changed quite a bit as I subtracted several pieces of gear that I no longer use enough to justify keeping, and found new ways to integrate several other bits and pieces that I have found useful or fun to use. Most of what I’ve removed has been hardware, and most of what I’ve integrated has been software, though this hasn’t been an explicit part of the plan. Software just takes up less space, so it’s easier to justify keeping it. That being the case, most of what I’m about to write concerns getting several of my favorite synthesis software packages to talk to each other.

At the center of my studio is Ableton Live. I’ve used it for the better part of ten years, and have yet to find any other sequencer that more easily allows me to sketch out simple ideas — either in audio or MIDI — and then pull the best of those ideas out into a fully-realized composition. I’m not an expert at DAWs, but I have a hard time understanding why people use the more complex, “serious” workstation software that’s out there. Before I started using Live, I had a brief go at learning Digital Performer. Although I got some good results, using it made me feel like kicking puppies more than it made me feel like writing music. Live always feels like it’s at least trying to help, so even when I stumble into some dark corner of the software I don’t recognize, it seems like it might be an exciting place to explore rather than a place I might get knifed and have my wallet stolen. Live is easy to get started with, surprisingly deep, and always inviting.

Another favorite tool of mine is Renoise. It’s an interesting beast. Renoise is a tracker, a style of music sequencer with a long and rich history. Tracker-style music editors co-evolved with the early ’90s demo scene, which means that the earliest ones were designed for efficiency, working as they did with very limited resources. Their earliest users also tended to lean more in the direction of hot-dogging hackers than professional musicians.

Given those constraints, the design sensibility is eccentric, to say the least. The main composition screen is something like a sideways piano roll crossed with a spreadsheet. Notes are traditionally entered through the computer keyboard, and appear in text form as entries in a vertically-arranged grid. There’s a conceptual learning curve when you first get started, but the upside is that, once you get your head around the basics, actually using the thing is very, very fast. In that regard, I think it’s more like learning a traditional musical instrument than a lot of more standard computer music environments. The people who really work at it achieve levels of virtuosity that I find hard to imagine with a point-and-click interface. Renoise comes from that tracker tradition, but its feature set rivals that of a commercial-grade DAW: MIDI implementation, VST hosting, ReWire integration, you name it. Rock-solid timing and very robust, too. I can’t remember ever having crashed my copy.

Renoise excels at quickly laying down complex, multi-layered percussion tracks, and this is usually where I start when I use it. Most of the faster, fiddlier bits of percussion programming in Bleem were written in Renoise. You can hear an excerpt here. I’ve written many tracks entirely in Renoise, and enjoyed doing it; but for all my praise, it does have its weak spots. I find it hard to see the large-scale structure of a composition written with a tracker, in a way that I don’t with something like Ableton Live. It’s also too easy to get ensnared endlessly fiddling with minor details instead of moving along in the composition: when I write with Renoise I tend to think in terms of creating a single pattern with layer upon layer of detail, stacking ideas vertically on top of each other instead of moving along horizontally through the composition.

So for Bleem, I tried something new: I used Ableton Live as my starting point, but tied Renoise in as a ReWire slave. By doing this, I gave myself the best of both worlds: Live as my wide-angle overview, with the ability to easily bring in parts sourced from other places, and Renoise for the lowest-level drum programming details, where its pattern editor really shines. I liked this setup so much that I might make it the default starting point for future efforts.

Another favorite music tool of mine, and one that figures heavily in Within Epsilon of Bleem, is Metasynth. Metasynth is another strange creature, basically a consequence of asking what it would be like if you took a spectrogram and made it editable, and then pushing that idea in many different directions. It was famously used to create the “hidden face” in an Aphex Twin track, as described here. Many of the sounds in this piece started out life as recordings mutated into something new via Metasynth filtering. For example, the wobbly metallic loop that starts the track was created by processing a recording in Metasynth; it recurs, completely transformed into a series of high-pitched textures, later in the track.

I find it interesting to contrast Metasynth with Renoise, because they are so similar in some ways: both are utterly unique in their approach, doing things in a way that nothing else quite does; and both can be used to write entire compositions on their own, but each has strengths and weaknesses that I feel make them work best as players contributing within a larger context. Metasynth is very different from Renoise in other ways, though. Where Renoise is visually minimal and keyboard-intensive, Metasynth is by its nature highly visual and graphically oriented; and where Renoise’s developers are very responsive to user feedback, Metasynth’s creators seem to have an idiosyncratic vision which they follow nearly to a fault. Nowhere is this more evident than in Metasynth’s stubbornly isolationist insistence on being a stand-alone tool: no MIDI integration, no Rewire, no VST hosting. It just doesn’t fit with their way of doing things, and if you’re using Metasynth, the attitude goes, you’ve got everything you need.

As you might expect, this rubs my integrationist tendencies the wrong way. So I took the unusual step of configuring Metasynth as the default external audio editor in my copy of Ableton Live. This means that whenever I want to do something particularly weird and Metasynth-like to a sound in Live, I can easily edit it in Metasynth, pull the results back into Live, and keep going where I left off. It’s not a perfect approach, but it works for me. You can hear a lot of Metasynth-driven sounds on Bleem. A brief excerpt showing why I love Metasynth can be heard here.

The melody lines that come in around 2:48 were written in Buzz Tracker. It’s hard to say what keeps me coming back to Buzz. The version I have is old, and crashy, and I have to keep a separate Windows laptop running almost exclusively so I can run it, but it was an amazing synth program when I first discovered it, and nearly ten years later I still find new things to like whenever I start it up. Usability and workflow issues aside, there are synths in Buzz whose sound I love, and so far that’s continued to justify the hassle of maintaining a separate environment.

In the case of Buzz, integration came in the form of DJ’ing, essentially. Luckily, Ableton Live and Buzz both have very solid timing, so I was able simply to program out the Buzz sections while leaving a looped section of Ableton Live running in the background, record the output of my Windows laptop into a new Ableton Live audio channel, and everything stayed more or less synchronized.

The newest contributors in my studio are Numerology and Native Instruments’ Komplete plugin bundle. I bought both of them this past year, during the making of Bleem. I’m still getting comfortable using them, but together they make an appearance in the form of the bleepy loop that sneaks its way into the track early on, but only appears prominently toward the end. I integrated Numerology similarly to Renoise: by tying it in through ReWire while composing, and then mixing its output down to a separate audio track in Ableton Live. I briefly flirted with running Renoise, Numerology, and Live all at the same time, plugins included. It worked, but with enough timing glitches and dropouts to convince me that I need either better hardware or some time spent optimizing my MacBook before trying to use all three together. So it’s back to layering, which is fine. I think switching between three radically different environments is too complicated for my brain anyway.

Oh, and hardware? Yeah, I use hardware sometimes too. In particular, Bleem includes some rhythm loops I created on a Korg ER-1, and some background textures created with a Nord Micro Modular. This has gone on a bit too long, though, so I think those can be the subject of another post.
