Music of Dalton Bentley

Evolution of Music

10/6/2012

I believe that music probably emerged accidentally from our language ability. I have been thinking for a long time about the way our brains seem to generate patterns and compare them to incoming perceptions from the external world in order to identify what we are experiencing. There is significant survival advantage in being able to process spoken language rapidly and accurately. Those early hominids who were better able to understand and act on information exchanged during a hunt for large and dangerous prey would tend to live longer and leave more offspring with this ability to populate the future. This survival advantage would extend to other auditory signal processing, like identifying the sounds of predators. So we soon ended up with a great capacity for identifying patterns of tones and rhythm, a capacity that would also be the basis for music. Only about 4% of humans have significant impairment of their ability to appreciate music, and some forms of this impairment (amusia, if you will) appear to be heritable, and so are clearly genetically based. There is good evidence of implicit musical capability in the brain: studies show that "the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory."

I recently discovered that others have thought about the evolution of music in this way also, and therefore refer to music as "auditory cheesecake": we evolved a huge appetite for sugar and fat when those nutrients were scarce in our diet but valuable to obtain, so we needed high motivation (an affinity for the taste of these nutrients) to make sure we identified sources and consumed them. That evolved taste for sugar and fat now drives us to consume cheesecake, something that never existed in nature---except perhaps in the Garden of Eden (grin). So too might we consider our taste for music to have emerged from our capacity to process the sounds of language.

Some argue that we have to be taught to appreciate music in a way that is not required to appreciate cheesecake. However, in my experience this is not true---I have always enjoyed hearing music, and I can't really recall a period of my life when there was not music of some sort around. Creating music, on the other hand, does require learning, and that ability varies.

There is plenty of reinforcement of our capacity to appreciate music. It is estimated that approximately 40% of the lyrics of popular music relate to sex in one way or another. Players of music seem to attract more mating attention (if not simply because of the pleasure of music in and of itself, then because playing is a gauge of the reproductive desirability of the player, requiring confidence before a group, coordination, dexterity, memory skills, and physical fitness), something many of us admit is at least a fringe benefit, if not an outright motivation for playing. Shakespeare said in one of his plays (Twelfth Night), "If music be the food of love, play on, give me excess of it."

I've noted elsewhere that there seems to be a tribal reinforcement of group membership at work in music at times, e.g., in rap, military cadences, martial and ceremonial occasions. This would include the idea of the preservation of history by oral transmission in lyric poetry, taught from elder to young and passed on in each generation. The use of long-distance instruments like trumpets and drums for communication in battle might have paved the way for Louis Armstrong and Buddy Rich. I do believe, though, that the earliest instruments were percussion (clapping hands, objects clacked together, drums of hide stretched over wood, etc.) and voice, followed by simple wind instruments, carved wood or ivory flutes (which have been dated to 35,000 to 43,000 years old).

I think we of the rock genre may be the modern troubadours, creating and performing songs of love and life, idealized and raw, but always with intensity.

See The Economist for an article sourcing some of the quotes and statistical references I used:

http://www.economist.com/node/12795510

See also Wikipedia for a good article on prehistoric instruments:

http://en.wikipedia.org/wiki/Prehistoric_music

And a great Wikipedia article on the cognitive neuroscience of music:

http://en.wikipedia.org/wiki/Cognitive_neuroscience_of_music


Won't see Little Jimmy Dickens again

10/4/2012

It seems to me that today's "country" sounds more like rock, though less like 70's rock, which featured a lot of very good players back in the day when you became popular at least partly because of your performance talent rather than the degree to which your physical appearance attracted men and women. In other words, I don't think most of today's players are very good. Exceptions would include Brad Paisley, who is one hell of a guitar player as well as a real country music writer and performer (Whiskey Lullaby has got to put tears in anyone's eyes). As my friend and colleague Bill Welsh (we played together in the Dog Canyon band in 1971) told me, "Little Jimmy Dickens [he was about 4 feet tall in boots] would not make it in country today."

True it is that humans, being primates (as well as souls, but that is another matter), interact socially with one another using myriad visual cues---the regal alpha baboon, looking serenely at the horizon as he is groomed by his adoring mate and respected by his troop (for the moment), is no doubt confident that everything is as it should be (and that is possibly why baboons have not developed television or other characteristics of an advanced civilization, grin). My observations here are not entirely sour grapes, as it were; there have always been those who admired me on substance as well as appearance (perhaps a select group, grin). I am complaining more about what seems to me to have been a transformation of culture after about 1974, such that the appearance of things began to triumph over substance to a degree not previously seen. This may be partly the natural outcome of the rejection of values that occurred in the 60's, leaving in its wake only the herd and its collective narcissism.

I watched a 2010 documentary in which some new jack groups played at Abbey Road studios. One of the band members (a guitarist and vocalist) explicitly stated the new philosophy: "I don't play all that well, nor do I wish to, since that would constrain my creativity" (paraphrasing). This may be less true in current country than in popular music (I can't even call it rock at this point), since you do have the country session guys making it into the bands, providing the technical talent to buttress the "talent" (the studio term for the vocalist who is selling the song to the purchasing public).

I don't believe that "good" is a matter of popular vote, though. If you define good in terms of sales, then you are really talking about successful rather than good---good being a gauge, by competent practitioners, of the range and depth of capability and creativity an individual brings to the table with his instrument, voice, and composition.

As for the hypothetical battle between "old rockers" and current professional country players, this would be difficult to quantify. In my opinion there aren't any country players who could keep up with Jimmy Page or Steve Howe (and no one can keep up with Al Di Meola, but that is a matter of superhuman capability), but Billy Gibbons, Eric Clapton, or Mark Knopfler would be easy enough for any lead guitarist to follow note for note, whatever the genre. It is important to note that reproducing licks does not necessarily mean they could have been independently created. It is foolish to sneer at a guitarist simply because you find it easy to play what they play---the genius comes from what notes you play, not how they are played.

The popularity of rap does not affect my understanding of how you properly rate the technical virtuosity of a musician, or my own personal reaction to this genre or any other (if I walk the road less travelled, still it is my own road). The popularity of rap is merely an indication that tribal reinforcement is still a purpose of music, loosely defined (normally I would consider music to require both rhythm and melody, but I am biased by my culture, I suppose): rap tells the story of the boys in the hood and thereby imparts a significance that would not otherwise be there (there have always been rougher parts of town, but few sang of them---Elvis was a bit sentimental in his take on the ghetto, grin).

The perception of art is highly subjective. Anything humans think about or experience is conditioned by their own life's experience and personality. This is true even in areas like science, where we purport to produce explanations that stand up to test by others, test by reason and possibly experiment. I say purport because, when it comes down to fundamental beliefs, even professional scientists can be myopic. For example, the scientific community was outraged that a study (Feeling the Future...) reporting evidence that humans may at times sense future events before they happen was published in the Journal of Personality and Social Psychology. The current scientific establishment believes (with religious fervor, ironically) that "the brain is just a computer made of meat" and that "we are all zombies; nobody is conscious." Any scientist who challenges those assumptions is attacked out of hand---because of the irrational bias of the scientific community on this subject. I do believe, however, that if we work at it, we can at least be aware of our biases---and the influence of the ego, which, for obviously good reasons in terms of survival, "may need to impress, dominate or control and sees others as either threats or tools."


A Practical Example of Frequency Plot Analysis and EQ

9/29/2012

After writing about frequency domain analysis recently, I thought a brief example of the utility of this technique in a practical audio application would be useful. I digitized an approximately 10 second piece of acoustic guitar performance from a recording that was dubbed to a Type I cassette tape about 40 years ago from a quarter inch reel to reel tape. The source reel to reel recording was made on two tracks of a TEAC 2340 using high quality quarter inch tape and normal bias at 7.5 ips (in the quarter track semi-pro format, i.e., either two or four tracks, as desired, running in the same direction along the length of the quarter inch wide magnetic tape). You can hear the tape hiss on the old recording here (use headphones or turn your volume up loud or you may not hear the noise):

Audio: littlemarthafreqdomaudarticlewnoise.mp3 (158 kb)

I used Audacity (the fantastic open source audio editor) to generate a frequency analysis of that recording:

[Figure: frequency analysis of the original recording]
That spike on the right side of the plot is at about 15 kHz (15000 Hz) and is at -60 dB. I suspect that this is a residual harmonic (subharmonic) of the reel to reel 150 kHz tape bias signal. In order to record in the linear area of magnetic tape it is common to combine a bias signal with the signal being recorded in order to assure there is sufficient magnetic flux to the tape. You can't hear much over 20 kHz, so this is not a problem in playback; however, since we have a lot of tape hiss, which is higher frequencies, we want to reduce that frequency along with other higher frequencies. You could use a notch filter to target specifically the 15 kHz frequency, but because the tape hiss occupies a much larger range of frequencies, I decided to use a multi-band equalizer to roll off (decrease the level of) all the frequencies in the recording higher than approximately 3 kHz. I used the Equalization Effect in Audacity to decrease the levels of frequencies in the recording, decreasing them by a larger and larger factor as the frequency increased (in the screen below you can see that I moved the attenuation sliders farther and farther down as the frequency they affected increased). I did this because most of the legitimate sound content, that is, the recorded guitar sound, is lower frequency than the tape hiss, but I wanted to keep some of the brightness of the guitar while reducing the perception of the hiss noise. Here's how I set up the equalizer:

[Figure: Audacity Equalization settings used to roll off the high frequencies]
After reducing those high frequencies by applying the above equalization settings, this is how the recording sounds:

Audio: littlemarthanoiseoutfreqdomaudarticle.mp3 (163 kb)

You can see the effect of the equalization on the frequency analysis of the edited recording:

[Figure: frequency analysis of the recording after equalization]
See how the 15 kHz noise spike is now about -79 dB? That is 19 dB lower than originally, and you can see that all those frequencies higher than 3 kHz or so are now ramping down at a pretty good slope to a much lower level. That is why the edited recording has almost no tape hiss (yet the guitar still sounds almost as bright as it did originally). I should note that another reason the modest high-frequency cut took the hissing noise down so well is that the cassette tape was apparently recorded (copied from the reel to reel) in Dolby (B) noise reduction mode on the cassette deck used (the Dolby box on the cassette was checked by the person making the recording, so I believe they did indeed use Dolby).
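If you want to experiment with this outside of Audacity, here is a minimal sketch of the same idea in Python, assuming you have NumPy and SciPy installed; the file names are hypothetical placeholders. It applies a gentle Butterworth low-pass roll-off above about 3 kHz and compares the level near 15 kHz before and after. Audacity's Equalization effect uses its own filter design, so this only approximates the curve I drew by hand.

```python
# A rough stand-in for the Audacity roll-off: a Butterworth low-pass with a
# 3 kHz corner applied to a mono WAV file. File names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

rate, audio = wavfile.read("littlemartha_with_noise.wav")   # hypothetical file
audio = audio.astype(np.float64)
if audio.ndim > 1:                        # fold stereo to mono for simplicity
    audio = audio.mean(axis=1)

# 4th-order Butterworth low-pass, -3 dB at 3 kHz; filtfilt avoids phase shift
b, a = butter(4, 3000, btype="low", fs=rate)
filtered = filtfilt(b, a, audio)

def spectrum_db(x, rate):
    """Return (frequencies, relative level in dB), roughly like Plot Spectrum."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    return freqs, 20 * np.log10(spec / spec.max() + 1e-12)

f, before = spectrum_db(audio, rate)
_, after = spectrum_db(filtered, rate)
idx = np.argmin(np.abs(f - 15000))        # look near the 15 kHz spike
print(f"Level near 15 kHz: before {before[idx]:.1f} dB, after {after[idx]:.1f} dB")

wavfile.write("littlemartha_filtered.wav", rate,
              np.clip(filtered, -32768, 32767).astype(np.int16))
```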

Magnetic tape has an inherent noise floor (a tape hiss) deriving from the magnetization of the ferrite particles on the tape emulsion even in the absence of a recorded signal, i.e., if you take a blank cassette tape and play it back at full volume, you will hear some hiss. Dolby noise reduction was developed in the 60's to try to reduce that inherent magnetic tape noise. Basically, when you record a tape with Dolby noise reduction the high frequencies in the audio you are recording are boosted as they are transferred to the magnetic tape, the higher level (loudest) signals less, the lower level (quietest) signals more. When you play the tape back, you want to use the same Dolby noise reduction circuitry to reduce those boosted high frequencies by the same amounts to restore the original dynamic range, i.e., reduce the higher level high frequency signals a little, the lower level high frequency signals a lot. Since the tape hiss comes from the playback tape at a constant low level (as a property of the magnetic tape itself rather than the recorded audio), it gets lowered a lot as it enters the Dolby circuit prior to the playback amplifier (since it is a low level high frequency signal) and drops by as much as 10 dB below the actual audio you recorded (since the low level high frequency audio you recorded was boosted when you recorded it). This significantly reduces the level of tape hiss and the listener's perception of it.
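To make the arithmetic concrete, here is a toy sketch in plain Python of why the hiss drops while the recorded audio comes back to its original level. The numbers are illustrative only, not actual Dolby B specifications: a quiet high-frequency passage is boosted roughly 10 dB on the way to tape, the playback cut removes that same 10 dB, but the hiss, which was never boosted, takes the full 10 dB cut.

```python
# Toy numbers only: Dolby B acts on signal levels, not on these exact figures.
quiet_hf_signal = -40.0   # dB, a quiet high-frequency passage before recording
tape_hiss       = -55.0   # dB, inherent hiss contributed by the tape itself
dolby_boost     = 10.0    # dB boost applied to low-level HF content on record

on_tape_signal = quiet_hf_signal + dolby_boost   # -30 dB on the tape
on_tape_hiss   = tape_hiss                       # hiss is a tape property, never boosted

played_signal = on_tape_signal - dolby_boost     # back to -40 dB, as recorded
played_hiss   = on_tape_hiss - dolby_boost       # -65 dB, i.e., 10 dB quieter

print(f"signal: {quiet_hf_signal} dB in, {played_signal} dB out")
print(f"hiss:   {tape_hiss} dB on tape, {played_hiss} dB after Dolby playback")
```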

In our test case here, the cassette recording was apparently recorded with Dolby noise reduction, so the high frequencies were boosted. However, when I played them back, I did not have access to Dolby circuitry. When I cut the high frequencies (as I described earlier), I probably reduced the tape hiss a lot, but only reduced the guitar recording high frequencies back to their original levels (since they would have been previously boosted during the Dolby recording process years ago when the tape was dubbed from the reel to reel master). In effect, I manually created a Dolby noise reduction filter.

So, I think you can see some immediate applications for these kinds of techniques in audio recording and mixing. For example, if your acoustic guitar track sounds muddy, look at the frequency plot of the recording and see where the lower frequency levels are, then do some equalization in that area. As a matter of fact, before I began the noise reduction process on this test recording, I did reduce the frequency content below 400 Hz (using the Equalization Effect) in order to clarify the sound of the guitar (and compressed and amplified the track, which was recorded at too low a volume). There are some general rules of thumb for eq of common rock instruments, for example, bass guitar tracks muddy up a multi-track mix unless you eq out the general neighborhood of 250 Hz in the bass guitar track. You can, of course, skip the frequency analysis and go entirely by ear. In that case, just increase the slider levels on particular frequencies of your equalizer until the objectionable sound quality becomes even worse, and then lower those same frequency sliders on the equalizer to eq out the undesired frequencies for that track. Or, conversely, adjust the sliders on the equalizer for a particular track at different frequencies until the track punches through the surrounding mix or acquires the type of sound you are looking for.  It is helpful to loop a few seconds of the track (set it up to keep repeating automatically) while you are making this kind of analysis by ear, so you don’t have to keep restarting the track.  
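As a sketch of that 250 Hz rule of thumb, here is a simple parametric-style cut implemented as a standard peaking-EQ biquad (the widely used Audio EQ Cookbook formulas), again assuming NumPy/SciPy and a hypothetical bass-track file; the center frequency, gain, and Q are only starting points to be tuned by ear.

```python
# Peaking EQ biquad (Audio EQ Cookbook): cut ~6 dB around 250 Hz on a bass track.
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Return normalized (b, a) biquad coefficients for a peaking boost/cut."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

rate, bass = wavfile.read("bass_track.wav")                # hypothetical file
bass = bass.astype(np.float64)
b, a = peaking_eq(rate, f0=250.0, gain_db=-6.0, q=1.0)     # negative gain = cut
cleaned = lfilter(b, a, bass, axis=0)                      # works for mono or stereo
wavfile.write("bass_track_eq.wav", rate,
              np.clip(cleaned, -32768, 32767).astype(np.int16))
```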


Frequency domain vs time domain in music

9/27/2012

[Figure: spectrum (frequency) plot of a D chord]
I was talking to some colleagues about EQ, using equalization to make particular tracks on a multitrack recording stand out. In talking about EQ I mentioned the idea of viewing a sound wave as a frequency plot, so I thought it might be interesting (possibly more interesting to me than to others, grin) to post an image of a D chord played on my acoustic guitar (click on the image above to see a larger version of that frequency analysis). The figure includes two views: a wave plot of the actual variation in time of the strings as they vibrate, and the transformation of that time series into a spectrum plot, which is a way of looking at the event in a place without time, if you have time, grin.

This is a D chord played with the 5th and 4th strings open, the 3rd string at fret 2, the 2nd string at fret 3, and the 1st string at fret 2---actually the first chord I played on my recording of Window. Every make of guitar has a more or less unique spectral signature. You can see that the waveform of the actual sound wave propagating through the air from the guitar (below the spectrum/Frequency Analysis) is approximately a sine wave with about 4.56 ms (0.00456 second) between each peak. This is about 220 Hz, or approximately the note A3. That was surprising to me, since I was playing a D chord (however, if you consider harmonic content, there is additional A3 available as the second harmonic of the open 5th string, so perhaps this accounts for the strength of the A3 energy). You can see from the frequency analysis that this sound wave actually contains many different frequencies, and this is normally the case in nature unless you have a pure sine wave, in which case all of the energy would be at exactly the frequency of that wave.
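As a quick back-of-the-envelope check (plain Python, assuming nothing beyond the 4.56 ms reading off the wave plot): the reciprocal of the period is about 219 Hz, and the nearest equal-tempered note (relative to A4 = 440 Hz) is indeed A3 at 220 Hz.

```python
# Reciprocal of the waveform period, and the nearest equal-tempered note name.
import math

period_s = 0.00456                     # ~4.56 ms between peaks, read off the wave plot
freq = 1.0 / period_s                  # ~219 Hz

names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
semitones_from_a4 = round(12 * math.log2(freq / 440.0))
midi = 69 + semitones_from_a4          # MIDI note 69 is A4 = 440 Hz
note = names[midi % 12] + str(midi // 12 - 1)
nearest = 440.0 * 2 ** (semitones_from_a4 / 12)

print(f"{freq:.1f} Hz -> nearest note {note} ({nearest:.1f} Hz)")   # ~219.3 Hz -> A3 (220.0 Hz)
```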

The vertical height of the purple peaks in the Frequency Analysis is the relative energy at each frequency: higher on the graph means stronger/louder. Note, however, that the labeling runs from 0 dB downwards, and the larger a negative number, the smaller the quantity, so, for example, -20 dB is larger than -37 dB. I'm not sure which type of dB they used here, probably voltage amplitude, in which case a frequency component at -20 dB is approximately 7 times greater in amplitude than one at -37 dB. I labeled the more important peaks.
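If you want to verify the "approx. 7 times" figure, it follows directly from the amplitude (voltage) decibel convention: a 17 dB difference corresponds to a ratio of 10^(17/20).

```python
# Amplitude ratio implied by the difference between two dB readings.
ratio = 10 ** ((-20 - (-37)) / 20)     # 17 dB difference, amplitude (voltage) convention
print(f"-20 dB is about {ratio:.1f}x the amplitude of -37 dB")   # ~7.1x
```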

You can see the open 5th string A2 note at about 110 Hz at about -37 dB, the open 4th string D3 at approx. 146.83 Hz at about -33 dB, the 3rd string 2nd fret A3 note at approx. 220 Hz very strong at around -20 dB (as we might expect, since the time varying wave is very close to a 220 Hz pure sine wave), and the 2nd string 3rd fret D4 note at approx. 293 Hz at -38 dB. The A4 440 Hz energy at around -37 dB could be primarily the second harmonic of the 3rd string 2nd fret A3 (the first harmonic is defined as the fundamental frequency of a vibrating string; the second harmonic is 2X that frequency, so 2 X A3 220 Hz = 440 Hz, or an A4 note). The A4 note and all of the remaining energy on the frequency analysis are higher vibration multiples of the actual notes played (harmonics). The A4 energy at 440 Hz could also include components of the fourth harmonic of the open 5th string 110 Hz, i.e., 4 X 110 Hz = 440 Hz, but there should be less energy in higher order harmonics than in lower ones, so I would assume less of a contribution from the open 5th string fourth harmonic than from the 3rd string 2nd fret A3 220 Hz second harmonic.
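Here is a small sketch in plain Python of the bookkeeping in that paragraph: list the first few harmonics of the fundamentals sounded in this D chord voicing (the frequencies are standard equal-tempered values) and you can see which strings feed the 440 Hz region and where the remaining peaks in the plot come from.

```python
# First few harmonics of each fundamental in this open-position D chord voicing.
fundamentals = {
    "5th string open (A2)":    110.00,
    "4th string open (D3)":    146.83,
    "3rd string fret 2 (A3)":  220.00,
    "2nd string fret 3 (D4)":  293.66,
    "1st string fret 2 (F#4)": 369.99,
}

for name, f0 in fundamentals.items():
    harmonics = [round(n * f0, 1) for n in range(1, 5)]   # 1st harmonic = fundamental
    print(f"{name}: {harmonics}")

# A3's 2nd harmonic (440 Hz) and A2's 4th harmonic (440 Hz) both land on A4,
# but the higher-order A2 harmonic should carry less energy.
```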

To actually characterize the timbre of a particular instrument, you can make successive frequency analyses at a number of times after a chord or note is played and then make a 3-d plot or wire frame of that, but I don’t have that capability at present. I note that the software I used to make the analysis here is open source, i.e., free (Audacity). I also use Audacity for producing mp3 files of wav file output from my multitrack recording software.
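That 3-D view is essentially a spectrogram, and you can get a workable version of it with more free software. Here is a minimal sketch assuming Python with NumPy, SciPy, and Matplotlib installed, and a hypothetical WAV file of the chord; it plots frequency content against time as the chord decays.

```python
# Spectrogram: successive frequency analyses over time, plotted as a heat map.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("d_chord.wav")            # hypothetical file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                       # mono for simplicity

f, t, Sxx = spectrogram(audio.astype(np.float64), fs=rate, nperseg=4096)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.ylim(0, 2000)                                    # guitar fundamentals and low harmonics
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("How the spectrum of the chord evolves as it decays")
plt.show()
```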



1, 4, 5 Soliloquies, Moose

9/2/2012

I-IV-V sincere-strum-soliloquy style songs don't touch me at all, except to the extent I appreciate the message in the lyrics. Perhaps musicians are often simply selecting what is personally comfortable and natural to perform. In this case, the chords and rhythm are uncomplicated and the vocal is really relying on the tonal quality and phrasing of the singer (and the personal appeal of the singer, perhaps) rather than a striking melody. Set, the set of expectations we bring to bear on a particular experience, is important also, e.g., I enjoy a Bentley-Thomas song, "You're the Only One" (a I-IV-V of this same genre basically), though I probably would not seek it out were it not for the personal connection (to the composer, Chas Thomas, with whom I recorded the song for the 2010 Out of Time album), as well as the interesting lyric device (reversing the person of the lyrical narrator) and the great studio production and instrumentation (grin). In psychology, set has been famously demonstrated in the case of subjects rating the telephone conversation of a person associated with an attractive picture as more intelligent and pleasant than the identical conversation with a remote party associated with a less attractive image. I don't intend to say this particular form of set is at work here; it is merely an accessible example of how expectations color perception, and music is a complex perception.

But, returning to the chord pattern of I-IV-V, the simple harmonic motion between I and IV (for example, G and C) is a common element in American gospel music, which is one of the fundamental bases from which American musical styles evolved. Insert the V chord and you seek the tonic (another way of referring to I) from a fourth below, and have the I-IV-V which is the primary progression of the 12 bar blues, most 50’s and 60’s rock and roll, and American folk and country. You can modify the way in which you use these three chords and obtain new expressions, e.g., I to V, then down to IV, then back to V before resolution. Chord progressions are simply harmonic routes leading the music towards or away from a particular tonic chord (the key center) that we can choose to traverse to suit our own artistic direction, with a particular style of music offering its template of chords within which to operate---or to depart from when a new style is created. "A song has a few rights, the same as other ordinary citizens. If it feels like walking along the left hand side of the street, passing the door of physiology or sitting on the curb, why not let it? If it feels like kicking over an ashcan, a poet's castle, or the prosodic law, will you stop it?" [A quote from American composer Charles Ives]
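For readers who want the Roman numerals spelled out, here is a toy sketch in Python that builds the major scale for a chosen key and reads off the roots of the I, IV, and V chords; in G major, for example, that gives G, C, and D.

```python
# Roman-numeral I, IV, V chord roots for any major key, built from the major scale.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole/half step pattern of the major scale

def major_scale(key):
    idx = NOTES.index(key)
    scale = [key]
    for step in MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

def one_four_five(key):
    scale = major_scale(key)
    return scale[0], scale[3], scale[4]       # scale degrees 1, 4, 5

for key in ["C", "G", "D", "A", "E"]:
    print(key, "major:", " - ".join(one_four_five(key)))
# G major: G - C - D   (the I-IV-V behind the 12-bar blues and much of early rock and roll)
```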

So, as The Preacher claimed so long ago, is there nothing new under the sun, and will all that will be done have been done before? I don't believe so. For even while travelling a well-known set of paths, the chord progressions, we can create new sequences of notes dancing in, out, and among those paths---the melody. Now there's the crux of the biscuit (using Zappa's inscrutable but oddly appropriate phrase) for me! A song using the I-IV-V chords can completely transcend the common routes and cause you to forget you have ever heard it before, for example, the great gospel song How Great Thou Art (there is a lesson here also: why do the gospel songs offered in recent years lack melodic soul---when the host culture lacks soul, is it possible to express such a melody?). If you place the song in the key of C, it begins with the I chord, C, but the melody immediately catches your attention by repeating a G note three times ("O Lord my") and dropping to an E note ("God"), possibly because this represents a kind of reverse arpeggio on the triad of which the C chord is composed (C note + E note + G note is the normal ordering of a C chord). ...and so on; the melody of this song just moves me, making me actually feel the awesome wonder spoken of in the lyrics, as it has from the first time I heard it in a Baptist church a half century ago.

I don't think about music theory when I'm listening to a song (I'm referring to the musical composition rather than the lyrical composition here, though the two are ideally related, the music of the composition helping others to get to the same place we are in the story we are telling, the poem we are expressing, with the lyrics), and I normally rely on intuition when I'm writing a song. A portion of a chord progression or melody might come to me on its own (from my muse, or Moose as we jokingly called it, grin), and only afterwards might I consider music theory in order to pursue elaboration of this core (if the Moose is stingy at that point). I've found you can kick the Moose in the ass by running through music theory scales or chord progressions and letting Him (His Mooseness) tell you what has potential (by what you feel intuitively on hearing those, often in the context of the mood of the lyrics). Chas and I used that technique in composing many of the songs on the Out of Time and Just One More Time albums (after Chas complained about our use of I-IV-VII on a couple of songs, e.g., Shattered Apple Pie, Blind Man's Vision---he feared falling into that formulaic chording pattern...though John Cougar, among others, made a career out of it).

A good melody is what draws me in…and to some degree the judgment of what is a good melody is a matter of personal experience, i.e., what makes the particular listener feel something notable.



Paul McCartney and related

8/14/2012

Having disparaged Paul McCartney's new song, My Valentine (2012), in my August 11, 2012 post, I feel I should clarify my opinion of him. To briefly discuss that new song before proceeding with my primary mission here: My Valentine (hear it and see the video at http://youtu.be/f4dzzv81X9w) is not McCartney creating music so much as McCartney falling back on music theory (in the absence of inspiration) in order to insert a stealth composition in among his covers of old traditional and pop songs by other artists on his Kisses on the Bottom album. Rolling Stone commented on his work on that album that "like his former song writing partner [John Lennon], McCartney is better transforming influences than mirroring them."

As a musician, I admit it is entertaining to demonstrate one’s ability to mimic a style, as Weird Al Yankovic and Frank Zappa did so humorously. As an aside, I find it incredible that no one but me seems to realize that Frank Zappa was ingeniously lampooning Ian Gillan and Ritchie Blackmore (of Deep Purple) in his 1973 recording, Fifty-Fifty, which appeared on his Over-Nite Sensation album. That song is worth hearing, apart from the humor, for Jean-Luc Ponty’s violin simulation of Ritchie Blackmore’s guitar work, as well as Zappa’s insane vocal emulating Gillan’s annoying alternating falsetto scream/normal voice (fifty-fifty of each, grin)---you can hear it at http://youtu.be/25ThICK0Fbw.

However, Paul McCartney is probably the best songwriter and player (I'm not even going to qualify that by genre) who has ever lived, judging by the number and quality of compositions he has written that inspire, evoke emotion, and just plain entertain (e.g., his 1965 song Yesterday, the most frequently covered song in history). I personally feel that McCartney's 1966 Paperback Writer may be the best single rock and roll song I have ever heard---his bass on that song evokes images of a jukebox rumbling in a crowded juke joint. 32 Billboard number one songs, 60 gold discs, over 100 million albums, and a career that has spanned over 50 years? I don't believe anyone has ever done that (or will ever do it again, for various reasons).

So, if I don’t like My Valentine, it is not because I don’t respect Paul McCartney. It is rather that I expect nothing but excellence from him.


Kenny Loggins and popularity

8/11/2012

I just read a recent blog post by Kenny Loggins. He told of playing for a crowd of 10,000 in Kansas City on August 3, 2012, and then only 120 showed up at another Kansas City venue (the Folly Theatre, which seats 1200) the next day to hear him with his new band, Blue Sky Riders. He played all his hits August 3, the new stuff August 4. So, what's the difference? Maybe part of it is that Kenny's fame was attached more to specific popular hits, so when he diverges from those archetypes that still resonate with The Crowd, they don't hear him. I mean, I think people would still turn out for Jimi Hendrix, were he around, to simply jam something new (guess we'll never know). Maybe the new material is just not all that good (it sounds pleasant to me, but doesn't grab me enough to make it part of my musical landscape). I mean, McCartney's new song "My Valentine," which he performed at the Grammy show earlier this year, stunk in my opinion (as did Joe Walsh's guitar work live on that number---I've never seen Joe so uninspired). It's curious though (about Kenny), since half the battle is getting someone to listen to your song. I've been at CD Baby now (with the two albums Chas and I did) for a couple of years and I've never listened to any of the 300,000 other artists up there (maybe one for a few seconds, just to compare sound level and quality). The Black Keys got popular by building their fan base slowly through constant live performances, touring from 2001 on until they broke through in 2010 or so. They sound good, but hardly brilliant or innovative (I think the Fabulous Thunderbirds did that same sound 20 years earlier). At least they play rock music! My own tastes have changed over the years. At one time (1971) I am ashamed to say I enjoyed Grand Funk Railroad (briefly, thankfully). For a while, if it wasn't Yes, my answer was No. Now I'm more song oriented, looking for a strong emotive reaction, good lyrics, good melody, possibly an interesting video---for example, things like Shiny Happy People by REM. But the critics hated that song and made Michael Stipe say he hated it too---although he clearly did not, and the song was number 4 on the charts for a while (the critics seemed to prefer the darker Losing My Religion, released on the same album). I still like to hear good work on an instrument, but guitar shredders have no soul---if you're just playing a lot of notes without emotional grab, you may as well generate them on a computer. I personally like my new song, Window, grin---I guess that is my good fortune at this late date in my life, to be able to write and record exactly (within the limits of the time and instruments I have available) what I want to hear.


Hello, world.

7/14/2012

Hello world---that is the typical first print statement a C language programmer writes, whether in testing or in teaching. It is not a bad way to begin a conversation potentially open to everyone on this little chunk of rock with peculiarly favorable conditions for the formation of pretentious collections of molecules like ourselves. I say pretentious in that we like to believe we are somehow different from the natural world in which we live, and that belief is often contradicted by the painful realities of biology. That being said, I know, as do most people with a mind not barricaded by defense mechanisms, that human existence does include experience that cannot be explained in terms applicable to the so-called laws of nature (unless you take a quantum mechanical Alice-Through-The-Looking-Glass view of the macro world---something not yet justified by any unification of the current laws of physics, the elusive Theory of Everything that would bring Einstein's Relativity and Gravity to the negotiating table with Quantum Mechanics). And one of these things we experience that seems unique in the Universe of mere things (I say mere things with tongue in cheek somewhat, being not entirely hostile to monism) is music. Emotion is not unique to humans (and I'm confining my discussion to our own cosmic backyard at the moment), but is probably shared by mammals generally, I guess, the limbic system having proliferated in most of us since it was offered as one of the tools of evolution some 150 million years back. Something about rhythm and tone seems to evoke emotion, and that, for me, is music. Emotion is rather like the tone controls on your stereo, coloring verbal thought (you might feel the blues, you might be cherry hot, grin) and flooding the mind with sometimes undefinable feeling perhaps not even amenable to words. So, I always play it as I feel it.


    Author

    I've been playing guitar for 47 years and have a background in electronics and software design that began with the inception of the microcomputer and has followed the evolution of the computer and the Internet. I am eclectic, interested in many areas, including psychology, anthropology, philosophy, and mysticism. So, I enjoy rational and civilized discourse in almost any area and find a connection between them all.
