The Sounds of Music
#91
Humans Evolved to Play Music

If music is sensitivity and responsiveness to the vibratory energies of the world, then it dates back nearly 4 billion years to the first cells. When sound moves us, we are also united to bacteria and protists. Indeed, the cellular basis of hearing in humans is rooted in the same structures, cilia, possessed by many single‑celled creatures, a fundamental property of much cellular life.

If music is sonic communication from one being to another, using elements that are ordered and repetitive, then music started with the insects, 300 million years ago, then flourished and diversified in other animal groups, especially other arthropods and the vertebrates. From the katydids animating the night air in a city park, to the songbirds that greet the dawn, to the thumping fish and caroling whales of the oceans, to the musical works of humans, animal sound combines themes and variations, reiteration and hierarchical structure. To argue that music is sound organized only by “persons” and not “unthinking Nature,” as philosopher Jerrold Levinson has done, is akin to claiming that tools are material objects modified for particular use only by humans, thereby excluding the artisanal achievements of nonhumans like chimpanzees and crows. If personhood and the ability to think are the criteria by which to judge whether a sound is music, then music is a multiplicity encompassing the many forms of personhood and cognition in the living world. Erecting a human barrier around music in this way is artificial, not a reflection of the diversity of sound making and animal intelligences in the world.

If music is organized sound whose intent is wholly or partly to evoke aesthetic or emotional responses in listeners, as Godt and others claim, then the sounds of nonhuman animals must surely be included. This criterion aims, in part, to separate music from speech or emotional cries, a challenging line to draw even in humans where lyrical prose and poetry erode the division from one side and highly intellectualized forms of music chip away at the other. All animals live within their own subjective experiences of the world. Nervous systems are diverse, and so the aesthetics and emotions that are part of these experiences no doubt take on multifarious textures across the animal kingdom. To deny that other animals have such subjective experiences is to ignore both our intuitions from lived experience (we understand that our pet dog is not a Cartesian machine) and the last 50 years of research into neurobiology, which now can map within the brains of nonhuman animals the sites from which emerge intention, motivation, thought, emotion, and even sensory consciousness. Laboratory and field studies show that nonhuman animals, from insects to birds, integrate sensory information with memory, hormonal states, inherited predispositions, and, in some, cultural preferences, producing changes in their physiology and behavior. We experience this rich confluence as aesthetics, emotion, and thought. All the biological evidence to date suggests that nonhuman animals do the same, each in their own way. For the cat, then, “yowling” is music if it stimulates aesthetic reactions in feline listeners. The subjective responses of other cats are the relevant criteria by which to judge the sound’s musicality.

If music is sound produced through modification of materials to make instruments and performance spaces in which to listen, then humans are nearly unique. Other animals use materials external to their bodies such as nibbled leaves or shaped burrows to make or amplify sounds, but none make specially modified sound‑producing tools, even the skilled toolmaking primates and birds. Music, then, separates us from other beings in the sophistication of our tools and architecture, but not in other regards. We are, as other musical animals are, sensing, feeling, thinking, and innovating beings, but we make our music with tools in a built environment of unique complexity and specialization.

https://www.wired.com/story/music-sound-...y-history/
#92
"Convergence Theory" is the culmination of a mathematical/philosophical work by Robert Edward Grant with the collaboration of Talal Ghannam, Alan Green, Jonathan Leaf, Adam Apollo, Michael Edwards, John Wsol and Jamie Janover — collectively known as the "Mathemagicians Group" who have been meeting every Thursday for a couple of years to flesh out their collective and individual hypotheses until the spaghetti sticks to the kitchen wall.

"Monadic Geometricity" takes its name from John Dee's inspirational hermetic works behind the Shakespeare Mystery.

Robert Grant based his intuitions for "Convergence Theory" (explained in greater detail at his website: RobertEdwardGrant.com) on a drawing of the Flower of Life by Leonardo da Vinci.

Alan Green set it to music and geometrical waveform images.

#93
“Software With Infinite Patience.”

His software sang the words of God. Then it went silent.

Who was Thomas Buchler, the late creator of beloved Torah program TropeTrainer? And can anything be done to revive his life’s work?

Every written word or short phrase in the Torah is assigned one of a body of musical motifs, known collectively as cantillation or trope in English, or ta’amim in Hebrew. The words of this text had been written down, in the consonant-only Hebrew alphabet, by sometime around 400 to 600 BCE; but as the musical component continued to develop, it remained an entirely oral tradition. As more written texts were added to the Jewish sacred canon, they, too, were set to trope and sung aloud.

Then, in 70 CE, Rome conquered the area that is now Israel and the West Bank, and much of the Jewish population dispersed across the ancient world. As communities became more isolated from each other, the oral tradition of trope began to mutate.

This was a huge problem. Trope are not just melodies — they also function as punctuation, musically joining linguistic clauses or separating them, indicating different kinds of pauses and where verses end. Getting the trope wrong can radically distort the meaning of the text.

Several groups of Jewish scholars, alarmed by this, began working on a solution. What emerged, over several centuries, was a data-storage innovation: a set of marks above and below the letters that indicate both vowels and, crucially, the correct trope for each segment of the text.

The scholars who developed these marks came to be known as Masoretes, from a Hebrew root meaning to pass down — though some argue the term derives from another root, meaning to tie down. The system was refined and standardized in the first half of the 10th century CE by a scribe named Aaron ben Asher, the scion of a long line of Masoretes; ben Asher also composed a detailed system analysis and user’s manual.

“They were recording as carefully as they could what they thought to be the best, most accurate reading tradition,” trope scholar Hayyim Obadyah explains to me.

This solution was not without controversy: Just like cassettes, trope marks automated what had been a human interaction. Encoding information in written form stops it from changing — ties it down, fixes its form — but it also frees the information to be transmitted without the approval of experts. Nonetheless, what was seen as a perversion by some was the only way for the knowledge to survive at all.
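An aside, not from the article: the same layered design survives in modern text encodings. Unicode stores the consonantal letters as ordinary characters and the Masoretic additions as combining marks stacked on top of them, so the vowel points and the trope can be peeled apart programmatically. A minimal Python sketch, assuming Unicode-encoded Hebrew text:

```python
# Separate a pointed, cantillated Hebrew string into its three layers by
# Unicode character name: letters/punctuation, vowel points (niqqud), and
# cantillation accents (the trope marks). Illustrative only.
import unicodedata

def split_layers(text):
    consonants, vowels, accents = [], [], []
    for ch in text:
        name = unicodedata.name(ch, "")
        if name.startswith("HEBREW ACCENT"):
            accents.append(ch)      # cantillation / trope mark
        elif name.startswith("HEBREW POINT"):
            vowels.append(ch)       # vowel point or other niqqud
        else:
            consonants.append(ch)   # letters, spaces, punctuation
    return "".join(consonants), vowels, accents

# Usage with a hypothetical pointed string `verse`:
# bare_text, niqqud, trope_marks = split_layers(verse)
```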

Source code

Once Buchler understood the system of trope marks, he had to find a way for his software to do what the cassette tape could do: produce, on command, the sound of a voice chanting Torah.

At first, he thought he could just record each verse or phrase as a sound file and have the program play it back. This is how most modern computer voices, like Siri’s, work.

But this would require a huge sound library, too big for a single hard drive to hold — and since this was still the days of Web 1.0, storing files in the cloud to be retrieved by an always-online computer, as Siri does, wasn’t an option.
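A rough back-of-envelope of my own (the numbers are assumptions, not from the article) shows why: even one melodic tradition, recorded verse by verse as uncompressed audio, runs to roughly ten gigabytes, before counting other regional traditions or the texts beyond the Torah.

```python
# Assumed figures for illustration: ~6,000 verses, ~10 seconds of chant each,
# stored as uncompressed 16-bit stereo audio at 44.1 kHz.
verses = 6000
seconds_per_verse = 10
bytes_per_second = 44100 * 2 * 2   # sample rate * bytes per sample * channels

total_bytes = verses * seconds_per_verse * bytes_per_second
print(f"{total_bytes / 1e9:.1f} GB for a single melodic tradition")  # ~10.6 GB
```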

What Buchler needed was a true speech synthesizer, a program that could generate its own sound files from scratch. There was only one option: the DECtalk text-to-speech voice engine.

“He wanted the engine because it was the only one that had half a chance of being able to sing the way it needed to sing,” Stacey Schnee tells me.

“Torah trope can go up and down, twist and turn and flutter and rise and fall.”
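For contrast, here is a minimal, synthesizer-agnostic sketch of what that second approach involves. This is my own illustration, not TropeTrainer's code, and the melodic figures are placeholders, since real trope melodies vary widely by regional tradition. The idea is to map each trope mark to a short melodic figure and emit pitch/duration events for a singing synthesizer to render:

```python
# Map trope names to melodic figures: (semitones relative to a reference pitch,
# duration in milliseconds). The values below are invented placeholders.
TROPE_MELODIES = {
    "etnachta":  [(0, 250), (-2, 250), (-4, 500)],
    "sof_pasuk": [(0, 250), (-5, 250), (-7, 600)],
}

def render_word(syllables, trope, base_midi=60):
    """Spread a trope's melodic figure across a word's syllables."""
    figure = TROPE_MELODIES[trope]
    events = []
    for i, syllable in enumerate(syllables):
        interval, ms = figure[min(i, len(figure) - 1)]
        events.append((syllable, base_midi + interval, ms))
    return events

# e.g. [('be', 60, 250), ('re', 58, 250), ('sheet', 56, 500)]
print(render_word(["be", "re", "sheet"], "etnachta"))
```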

https://www.inputmag.com/features/tropet...h-software
#94
Quote: trope, in medieval church music, melody, explicatory text, or both added to a plainchant melody. ... Two important medieval musical-literary forms developed from the trope: the liturgical drama and the sequence (qq. v.). A troped chant is sometimes called a farced (i.e., interpolated) chant.

Better known as ear worms! lol




Is Old Music Killing New Music?
https://communication-breakdown.com/mybb/Thread-is-old-music-killing-new-music
#95
(03-18-2022, 02:41 AM) Wrote: "Convergence Theory" is the culmination of a mathematical/philosophical work by Robert Edward Grant ... Alan Green set it to music and geometrical waveform images.


[Image: m8BNqmW.jpeg]

[Image: virus_cymaticsSM.png]

[Image: dannybecher-cymatic-folder-01-web_pic.jpg]
#96
Jingles and Jangles certainly do identify characters...

[Image: 1599159625?v=1]
#97
Chuckle Chuckle Chuckle 

[Image: 6waoku8635w81.png]
#98
“Frisson” derives from French and means “a sudden feeling or sensation of excitement, emotion or thrill,” and the experience is not confined to music. Historically, frisson has been used interchangeably with the term “aesthetic chills.”

According to a 2019 study, one can experience frisson when staring at a brilliant sunset or a beautiful painting; when realizing a deep insight or truth; when reading a particularly resonant line of poetry; or when watching the climax of a film.

Thinking back to my experience of listening to Johnny Cash, it was at the precise moment the song “violated my expectations” that I felt frisson. When I anticipated that the song would decrescendo, it crescendoed even more. And, as Huron’s book discusses, the most reliable indicator of musical frisson is an increase in loudness.

Other reliable indicators include the entry of one or more instruments or voices; an abrupt change of tempo or rhythm; a new or unexpected harmony; and abrupt modulation. Music psychologist John Sloboda found that the most common types of musical phrases to elicit frisson were “chord progressions descending the circle of fifths to the tonic.” This is a deeply affecting chord progression common in many of Mozart’s compositions.
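To make that last progression concrete, here is a small sketch of my own (not from the article): descending the circle of fifths means each chord root lies a perfect fifth, seven semitones, below the previous one, so in C major a phrase such as iii-vi-ii-V-I walks through E, A, D, G and lands on the tonic C.

```python
# Roots of a progression that descends the circle of fifths to the tonic.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def descend_fifths_to_tonic(start_pc, tonic_pc=0):
    """Step down by a perfect fifth (7 semitones) until the tonic is reached."""
    roots = [start_pc]
    while roots[-1] != tonic_pc:
        roots.append((roots[-1] - 7) % 12)
    return [NOTE_NAMES[pc] for pc in roots]

print(descend_fifths_to_tonic(start_pc=4))   # start on E: ['E', 'A', 'D', 'G', 'C']
```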

https://bigthink.com/neuropsych/frisson-song-playlist/
#99
Chuckle Balloons Popcorn 

[Image: 61664a140ff41.jpeg]