
May 27, 2011

identity in duality: craniopagus twins

The story first caught my attention in November. Now, in a humane and insightful piece, the NY Times Magazine gives the incredible, philosophically and neurologically challenging tale of craniopagus twins the long-form treatment.
The explanation Cochrane proposes is surprisingly straightforward for so unusual an outcome: that visual input comes in through the retinas of one girl, reaches her thalamus, then takes two different courses, like electricity traveling along a wire that splits in two. In the girl who is looking at the strobe or a stuffed animal in her crib, the visual input continues on its usual pathways, one of which ends up in the visual cortex. In the case of the other girl, the visual stimulus would reach her thalamus via the thalamic bridge, and then travel up her own visual neural circuitry, ending up in the sophisticated processing centers of her own visual cortex. Now she has seen it, probably milliseconds after her sister has.

The results of the test did not surprise the family, who had long suspected that even when one girl’s vision was angled away from the television, she was laughing at the images flashing in front of her sister’s eyes. The sensory exchange, they believe, extends to the girls’ taste buds: Krista likes ketchup, and Tatiana does not, something the family discovered when Tatiana tried to scrape the condiment off her own tongue, even when she was not eating it.

Even knowing about the tests and what Cochrane believed, I listened to the family’s stories with some amount of skepticism. Perhaps they were imagining it or exaggerating for the sake of a good story. Then in one of the many idle moments of the five days I spent with the family, the girls were watching television, and I absent-mindedly gave Tatiana’s foot, which Krista could not see, a little tickle. She turned to me and smiled, and then Krista spoke: “Now do me,” she said. Had she felt the sensation but wanted the emotional experience of knowing that she, too, was receiving that kind of playful attention?
If you TL;DR this one, you're going to miss out.

May 22, 2011

attention

I.
Earlier this year, I recommended Lawrence Rosenblum's See What I'm Saying, which explores the lesser-known aspects of sensation and cognition. What I didn't mention was that I had two of my English classes read an excerpt, then head out into the halls to test our echolocating skills. Since we had so little practice, we were terrible at it--but we could hear the possibilities. Navigational failure was a pedagogical success.

I was reminded of that experience when Maggie Koerth-Baker pointed me to this blog entry by neuroscientist Bradley Voytek.
We're used to thinking of our senses as being pretty shite: we can't see as well as eagles, we can't hear as well as bats, and we can't smell as well as dogs.

Or so we're used to thinking.

It turns out that humans can, in fact, detect as few as 2 photons entering the retina. Two. As in, one-plus-one.

It is often said that, under ideal conditions, a young, healthy person can see a candle flame from 30 miles away. That's like being able to see a candle in Times Square from Stamford, Connecticut. Or seeing a candle in Candlestick Park from Napa Valley.

Similarly, it appears that the limits to our threshold of hearing may actually be Brownian motion. That means that we can almost hear the random movements of atoms.
Voytek calls humans "inattentive superheroes," our skills fundamentally underdeveloped in a world full of noise. We underestimate the value of silence, of darkness, of time spent alone. We'd like to be more focused, but we don't know how--and we keep filling our lives with more things that siphon attention away.


II.
Much of the siphoning is well-intentioned, an attempt to remind us--to alert us--to pay attention. You're rolling through a residential neighborhood, at the wheel of a two-ton death machine. In the corner of your eye, a yellow warning: "Children at Play." It's a safety measure that can be--and will be--easily ignored. And probably should be torn down.
The National Cooperative Highway Research Program, in its "Synthesis of Highway Practice No. 139," sternly advises that "non-uniform signs such as 'CAUTION--CHILDREN AT PLAY,' 'SLOW--CHILDREN,' or similar legends should not be permitted on any roadway at any time." Moreover, it warns that "the removal of any nonstandard signs should carry a high priority."

One of the things that is known, thanks to peer-reviewed science, is that increased traffic speeds (and volumes) increase the risk of children's injuries. But "Children at Play" signs are a symptom, rather than a cure--a sign of something larger that is out of whack, whether the lack of a pervasive safety culture in driving, a system that puts vehicular mobility ahead of neighborhood livability, or non-contextual street design. After all, it's roads, not signs, that tell people how to drive. People clamoring for "Children at Play" signs are often living on residential streets that are inordinately wide, lacking any kind of calming obstacles (from trees to "bulb-outs"), perhaps having unnecessary center-line markings--three factors that will boost vehicle speeds more than any sign will lower them.
If, at our best, we're "inattentive superheroes," at our worst, we're overly confident, cognitively deficient supervillains.
As is often the case in driving, when we meet the enemy, it is us. You want difficulty in judging spatial relations? Consider the research, by Dennis Shaffer, that showed people reporting 10-foot-long highway stripes to be two feet long. You want difficulty estimating speed? Consider this study, which found drivers underestimating their speed in the presence of children by upwards of 50 percent. You want exceeded sensory abilities? Consider the widespread phenomenon of "overdriving" one's headlights. You want trouble estimating distance? Ask any driver how many feet they'll need to stop, driving at 65 mph. You want impulsive? Who's reaching across the seat for that buzzing BlackBerry?

If "Children at Play" signs are ineffective at capturing our attention--or doubly ineffective when they do--what about other supposedly helpful road signs: speed limits, "Road Narrows," "Koala Crossing?" (We'll leave aside "One Way" for now.) What if they were gone--all gone? John Staddon points toward a possible future:
So what am I suggesting—abolishing signs and rules? A traffic free-for-all? Actually, I wouldn’t be the first to suggest that. A few European towns and neighborhoods--Drachten in Holland, fashionable Kensington High Street in London, Prince Charles’s village of Poundbury, and a few others--have even gone ahead and tried it. They’ve taken the apparently drastic step of eliminating traffic control more or less completely in a few high-traffic and pedestrian-dense areas. The intention is to create environments in which everyone is more focused, more cautious, and more considerate. Stop signs, stoplights, even sidewalks are mostly gone. The results, by all accounts, have been excellent: pedestrian accidents have been reduced by 40 percent or more in some places, and traffic flows no more slowly than before.
Of course, all of this could be moot once automobiles become truly auto. And then we can turn our attention toward more important things.


III.
For some, it's even harder than usual to block out the tumult of the everyday. In the fourth part of a fascinating series, Marie Myung-Ok Lee describes how her autistic son was finally able to learn how to ride a bike.
After my husband and I bought him a bike with training wheels, he would sometimes sit on it for a minute or two, try to pedal, and then have a tantrum, hurling the bike in frustration. His classroom bike-riding lessons weren't going any better. At a school meeting, the consensus among his teachers and other professionals was that independent bike riding was something he'd probably never learn.
They probably would have been right, were it not for Lee's persistence in seeking out a remedy: high-grade marijuana.
[C]annabis not only mitigates J's pain, it also seems to help him to focus... [M]arijuana's effect on short-term memory allows a user to focus intently on a single sensation (that "Whooooaaaa, man... look at that flower" feeling). One feature of autism is a heightened, disordered, nondiscriminating sensitivity, so that autistics seem to see and feel and hear and smell everything at the same time.... But with cannabis (which also regulates anxiety and stress), I noticed that J had a much higher tolerance for activities that involve multiple steps, like unloading the dishwasher.

Bicycling, when you think about it, involves myriad functions: coordination of gross motor movement with the vestibular, visual, and proprioceptive systems that regulate balance. On a nice weekend I brought J, his bike, his helmet, and a wrench to a nearby private school that has a bunch of wide, paved paths. I removed the training wheels from his bike, put him on it, and gave him a push, figuring that once he realized how good it felt to bike--to move along on his own power--he was going to love it. He pedaled and immediately tipped over, laughing, as he was expecting the training wheels to be there holding him up. But after a few tries, he started to get it. And before the afternoon was over, he was biking independently.
Lee's story is inspiring and infuriating; our federal government's increasingly bizarre insistence on persecuting medical marijuana users made her take unnecessary personal and medical risks. In a saner world, her doctor would have been able to prescribe a standard, fully-tested treatment, and her son's triumph would have been heartwarmingly ordinary.

It may tax your 21st-century attention span, but start with the first part and keep going until you're done.



Asides
When I was young, I could get so wrapped up in a book (or so focused on my Legos) that I'd shut out the world. Maybe that's why I've never been interested in trying pot: that "Whooooaaaa, man..." sensation may not sit well with a brain perfectly comfortable managing its own focal point.


When we let someone else sort the signal from the noise, we risk missing the whole signal. Call it a "filter bubble," algorithmically facilitated attention-narrowing.


Just because you can see a photon from space doesn't mean you should drive without your glasses.


Driverless cars? Soon. But not quite yet.

Mar 16, 2011

the squishy self

If V.S. Ramachandran has a new book out, you can bet your sweet occipital lobe I'm going to link to Colin McGinn's review.
Why is neurology so fascinating? It is more fascinating than the physiology of the body--what organs perform what functions and how. I think it is because we feel the brain to be fundamentally alien in relation to the operations of mind--as we do not feel the organs of the body to be alien in relation to the actions of the body. It is precisely because we do not experience ourselves as reducible to our brain that it is so startling to discover that our mind depends so intimately on our brain. It is like finding that cheese depends on chalk--that soul depends on matter. This de facto dependence gives us a vertiginous shiver, a kind of existential spasm: How can the human mind--consciousness, the self, free will, emotion, and all the rest--completely depend on a bulbous and ugly assemblage of squishy wet parts? What has the spiking of neurons got to do with me?
I disagree with McGinn: neurology isn't any more fascinating than physiology, because as Lawrence Rosenblum's See What I'm Saying compellingly argues, neurology and physiology are blissfully codependent.  You are the mind your body builds, and the body your mind conceives.

Mind-body problem solved.  Wasn't that simple?

May 17, 2009

scaring yourself to death

Franklin Delano Roosevelt, riffing, once said that the only thing we have to fear is fear itself. On a related note, NewScientist's Helen Pilcher tackles a fascinating topic: the deadly nocebo.
The placebo effect has an evil twin: the nocebo effect, in which dummy pills and negative expectations can produce harmful effects. The term "nocebo", which means "I will harm", was not coined until the 1960s, and the phenomenon has been far less studied than the placebo effect. It's not easy, after all, to get ethical approval for studies designed to make people feel worse.

What we do know suggests the impact of nocebo is far-reaching. "Voodoo death, if it exists, may represent an extreme form of the nocebo phenomenon," says anthropologist Robert Hahn of the US Centers for Disease Control and Prevention in Atlanta, Georgia, who has studied the nocebo effect.

In clinical trials, around a quarter of patients in control groups - those given supposedly inert therapies - experience negative side effects. The severity of these side effects sometimes matches those associated with real drugs. A retrospective study of 15 trials involving thousands of patients prescribed either beta blockers or a control showed that both groups reported comparable levels of side effects, including fatigue, depressive symptoms and sexual dysfunction. A similar number had to withdraw from the studies because of them.
The effect is played out in "anticipatory nausea," "mass psychogenic illness," and who knows what other maladies. By now, someone you know has probably already contracted sympathetic swine flu.

May 10, 2009

today's spring cleaning links

The mess in the apartment is contained and certified non-toxic. The mess on my hard drive, in my Google Docs, and in my Firefox bookmarks, though, is overwhelming. I have piles of lessons, handouts, images, links, and random observations, stashed in a "Dump" folder on the desktop (backed up, promise) or clogging up the bookmarks toolbar.

While it was still morning, while the laundry took a bath in the washing machine, while the sun faded into cloud, I sorted through the clutter, taking some links from the junk drawer and tossing them here on the blog.

Enjoy.

1. Clear out the cobwebs in your lecturing style.

2. Reorganize your way to a better timed essay.

3. You never know what you'll dig out of the bottom of the history trunk--or how it might get you in trouble.

4. Toy cars, dolls, stuffed animals, and superhero action figures that symbolize America's most salient political debates of the early 21st century.

5. You could always just hide junk by painting it into invisibility.

6. Why did you buy all this stuff anyway?

7. Need the will to get started? Find it nestled in the recesses of your brain.

Apr 19, 2009

"choice blindness" and post-hoc rationalization

NewScientist has a recent article about a phenomenon related to the halo effect called "choice blindness."
Rather than playing tricks with alternatives presented to participants, we surreptitiously altered the outcomes of their choices, and recorded how they react. For example, in an early study we showed our volunteers pairs of pictures of faces and asked them to choose the most attractive. In some trials, immediately after they made their choice, we asked people to explain the reasons behind their choices.

Unknown to them, we sometimes used a double-card magic trick to covertly exchange one face for the other so they ended up with the face they did not choose. Common sense dictates that all of us would notice such a big change in the outcome of a choice. But the result showed that in 75 per cent of the trials our participants were blind to the mismatch, even offering "reasons" for their "choice"....

Importantly, the effects of choice blindness go beyond snap judgements. Depending on what our volunteers say in response to the mismatched outcomes of choices (whether they give short or long explanations, give numerical rating or labelling, and so on) we found this interaction could change their future preferences to the extent that they come to prefer the previously rejected alternative. This gives us a rare glimpse into the complicated dynamics of self-feedback ("I chose this, I publicly said so, therefore I must like it"), which we suspect lies behind the formation of many everyday preferences.
Read the whole thing to learn the scope of "choice blindness," and how it points to an everyday sort of epiphenomenalism.

Mar 25, 2009

stop me if you've read this before

Two and a half years ago I linked to an article describing how scientists had learned how to create "déjà vu on demand." That kind of research is providing a window into consciousness, it turns out.
It is possible that both Moulin and Cleary are correct. The perirhinal cortex may store information about spatial relationships, rather than time, place and sequence of events, and so normal familiarity feelings could come largely from layout and configuration, backing Cleary's findings. Indeed, there may be many ways to produce false familiarity, according to psychologist Alan Brown of Southern Methodist University in Dallas, Texas, author of The déjà vu experience (Psychology press, 2004). His own experiments indicate some other possibilities. For example, he has induced the feeling by distracting volunteers while they saw a glimpse of a scene and then moments later giving them a good look. "If you take a brief glance when distracted, and look at the same scene again afterwards, it can feel like you've seen it before but much earlier," says Brown. He has also induced it by showing people images of things they had forgotten. "Just as a stomach ache can hurt the same way but be caused by lots of different processes, it could be the same way with déjà vu," he says.

The real problem with explaining déjà vu, however, is not how we can get familiarity without recognition, but why it feels so disturbing. "We'd get it all the time if it were just familiarity with real experiences," says Ed Wild from the Institute of Neurology in London. He suggests that mood and emotion are also important contributors to the sensation of déjà vu. We need the right combination of signals, not just the layout of a scene but how we feel at the time, to believe something is familiar when really it is not.
The entire article's well worth a read.

Mar 4, 2009

doodle your way to a better memory

Doodling isn't a distraction, it's a memory aid. Really:
40 members of the research panel of the Medical Research Council's Cognition and Brain Sciences Unit in Cambridge were asked to listen to a two and a half minute tape giving several names of people and places, and were told to write down only the names of people going to a party. 20 of the participants were asked to shade in shapes on a piece of paper at the same time, but paying no attention to neatness. Participants were not asked to doodle naturally so that they would not become self-conscious. None of the participants were told it was a memory test.

After the tape had finished, all participants in the study were asked to recall the eight names of the party-goers which they were asked to write down, as well as eight additional place names which were included as incidental information. The doodlers recalled on average 7.5 names of people and places compared to only 5.8 by the non-doodlers.

"If someone is doing a boring task, like listening to a dull telephone conversation, they may start to daydream," said study researcher Professor Jackie Andrade, Ph.D., of the School of Psychology, University of Plymouth. "Daydreaming distracts them from the task, resulting in poorer performance. A simple task, like doodling, may be sufficient to stop daydreaming without affecting performance on the main task."
I'm not surprised; in fact, I've asked students to doodle while I've read excerpts from essays or stories, since I found it allowed them to better concentrate on the task at hand. It's nice to have science backing me up.

[via Futurepundit and Instapundit]

Feb 10, 2009

an idle mind is the devil's picture-show

Your brain hates nothing so much as nothing, to the point that it will fill the nothing with anything just to have something there. Or something:
The late historian Lord Dacre of Glanton, formerly Hugh Trevor-Roper, was unusual among [Charles Bonnet Syndrome] patients in that he talked openly about what he jokingly referred to as his 'phantasmagoria'.

He would see horses and bicycles racing, and whole landscapes whizzing by as if he were on a train. On one occasion, he found himself trapped in an apparently endless tunnel.

Hallucinations tend to have common themes: simple geometric patterns, disembodied faces with jumbled features, landscapes, groups of people, musical notes, vehicles and miniature figures in Victorian or Edwardian costume. They can be in black and white or colour, moving or still, but they are always silent.

The condition was named after Charles Bonnet, an 18th-century Swiss natural philosopher whose grandfather had seen people, patterns and vehicles that were not really there. Bonnet was the first person to identify that you could have visual hallucinations and still be mentally sound.

The condition can affect anybody at any age with diminishing eyesight. Even people with normal vision can develop it if they blindfold themselves for long enough.
On a related note, this is why artificial intelligence, insofar as it means replicating human cognition, will work only if flaws are designed in. The brain is naturally buggy.

[via BoingBoing's David Pescovitz, who also links to an interview with the perpetually fascinating Dr. Sacks.]

Dec 3, 2008

Obama gets you right in your vagus nerve

Emily Yoffe reports that neuroscience is catching on to something students of oratory have known for, oh, millennia: rhetoric sends people.
Elevation has always existed but has just moved out of the realm of philosophy and religion and been recognized as a distinct emotional state and a subject for psychological study. Psychology has long focused on what goes wrong, but in the past decade there has been an explosion of interest in "positive psychology"—what makes us feel good and why. University of Virginia moral psychologist Jonathan Haidt, who coined the term elevation, writes, "Powerful moments of elevation sometimes seem to push a mental 'reset button,' wiping out feelings of cynicism and replacing them with feelings of hope, love, and optimism, and a sense of moral inspiration."...

We come to elevation, Haidt writes, through observing others—their strength of character, virtue, or "moral beauty." Elevation evokes in us "a desire to become a better person, or to lead a better life." The 58 million McCain voters might say that the virtue and moral beauty displayed by Obama at his rallies was an airy promise of future virtue and moral beauty. And that the soaring feeling his voters had of having made the world a better place consisted of the act of placing their index fingers on a touch screen next to the words Barack Obama. They might be on to something. Haidt's research shows that elevation is good at provoking a desire to make a difference but not so good at motivating real action. But he says the elevation effect is powerful nonetheless. "It does appear to change people cognitively; it opens hearts and minds to new possibilities. This will be crucial for Obama."
And how does it work, neurologically speaking?
Keltner believes certain people are "vagal superstars"—in the lab he has measured people who have high vagus nerve activity. "They respond to stress with calmness and resilience, they build networks, break up conflicts, they're more cooperative, they handle bereavement better." He says being around these people makes other people feel good. "I would guarantee Barack Obama is off the charts. Just bring him to my lab."
"Vagal superstars": brilliant, or completely batty. Or both.

Nov 9, 2008

the active idle brain

Every now and then, NewScientist publishes concurrent or even consecutive articles that, taken together, pose a dilemma unnoticed by the editors. The latest issue has a great example of a hidden paradox concerning the value of idling.

In the first, Douglas Fox reports that scientists have discovered a neural network that may form and strengthen memory when we're not actively thinking. [sub. req.]
"There is a huge amount of activity in the [resting] brain that has been largely unaccounted for," says Marcus Raichle, a neuroscientist at Washington University in St Louis. "The brain is a very expensive organ, but nobody had asked deeply what this cost is all about."

Raichle and a handful of others are finally tackling this fundamental question - what exactly is the idling brain up to, anyway? Their work has led to the discovery of a major system within the brain, an organ within an organ, that hid for decades right before our eyes. Some call it the neural dynamo of daydreaming. Others assign it a more mysterious role, possibly selecting memories and knitting them seamlessly into a personal narrative. Whatever it does, it fires up whenever the brain is otherwise unoccupied and burns white hot, guzzling more oxygen, gram for gram, than your beating heart.

"It's a very important thing," says Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison. "It's not very frequent that a new functional system is identified in the brain, in fact it hasn't happened for I don't know how many years. It's like finding a new continent...."

The brain areas in the network were known and previously studied by researchers. What they hadn't known before was that they chattered non-stop to one another when the person was unoccupied but quietened down as soon as a task requiring focused attention came along. Measurements of metabolic activity showed that some parts of this network devoured 30 per cent more calories, gram for gram, than nearly any other area of the brain.
In the pages immediately following, Lewis Dartnell describes how researchers are turning to a modified form of "distributed computing" to harness strangers' idle minds.
But there are limits to what even a million computers can do. "Despite computers being very quick and accurate at certain problems, for many tasks they are still far surpassed by the human brain, such as in visual processing, spatial reasoning or problem solving," says Aaron Sloman, who studies artificial intelligence at the University of Birmingham, UK. So now the idea of distributed computing is being turned on its head. Instead of harnessing idle machines, researchers are inventing ways of using the processing power inside the brains of "idle" computer owners.

There seems to be no shortage of this intellectual power going begging. Clay Shirky at New York University has calculated that every weekend in the US alone, 100 million person hours are spent watching TV adverts - the same amount of time it took to create and edit the 2.5 million encyclopedia entries on Wikipedia. If only a fraction of this spare brainpower could instead be channelled into simple online tasks that help science, the contribution would be enormous.
Now the dilemma arises. The brain, at idle, is doing absolutely critical work, consuming 20% of the body's energy. Yet scientists want to essentially de-idle the minds of millions to help solve bafflingly complex problems, at unknown cost. Compound that with the problems of multitasking, and we have no idea of the potential net neurological losses caused by a lack of laziness.
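As a rough aside, Shirky's person-hours figure survives an order-of-magnitude sanity check. The numbers below are assumptions of mine, not his or Dartnell's; the only point is that the estimate isn't crazy.

```python
# Back-of-the-envelope check on Shirky's "100 million person-hours of weekend
# TV ads" figure. All inputs are rough assumptions, for illustration only.
weekend_viewers = 150e6      # assumed: Americans watching some TV on a given weekend
tv_hours_per_weekend = 4     # assumed: hours of TV per viewer, Saturday plus Sunday
ad_fraction = 0.25           # assumed: roughly 15 minutes of ads per broadcast hour

ad_person_hours = weekend_viewers * tv_hours_per_weekend * ad_fraction
print(f"{ad_person_hours / 1e6:.0f} million person-hours of ads per weekend")
# -> 150 million: the same order of magnitude as Shirky's estimate
```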

Jul 1, 2008

Carl Zimmer's new pad

Carl Zimmer, in the running for America's best science popularizer, has moved his blog to Discover's website, since he's got a monthly gig writing about brain science. Excellent news. As they say, update your links and feeds accordingly.

Apr 23, 2008

retraining the brain

In a brief NewScientist interview, Jill Bolte Taylor, neuroanatomist, describes the insight into her own thinking that a stroke in her left hemisphere provided.
Yes, renewing or rerunning neurocircuits was a cognitive choice. The non-functional circuits started to come back online one at a time and I could choose to either hook into that circuitry or not feed it. For example, when the anger circuit wanted to run again, I did not like the way it felt inside my body so I said "no" to its running. Every time it tried to get triggered and run again, I brought my attention back to it - I did not like the way anger felt so I shut it down. Now that circuit rarely runs at all, mostly because I feel it getting triggered and nip it in the bud....

So, I look at us as a collection of neurocircuitry of thoughts and emotions and physiological responses. When you see the brain as the kind of computer network that it is, it becomes easier to manipulate. But you have to be willing. People say "Oh I'm so much more than my thoughts, I'm so much more than neurocircuitry," and I'm like, yeah, I had that fantasy once, too. I don't any more. As human beings we all have the ability to focus our minds on what we want to think about.
The more I learn about neuroscience, the more my thoughts on morality drift toward Aristotelian virtue ethics--that we, as a collection of neurally-inscribed habits, can reshape and retrain our behaviors betterward.

Taylor's twenty-minute talk in the video below is by turns tragic, mystical, and hilarious. "But I'm a very busy woman! I don't have time for a stroke!"

Apr 13, 2008

does it take the brain 7 seconds to make decisions conscious?

Surprising research suggests something to that effect:
When Haynes's team later analysed the fMRI scans, they found that the prefrontal cortex – a part of the brain that is involved in thought and consciousness – lit up seven seconds before the subjects pressed the button.

By deciphering the brain signals with a computer program, the researchers could predict which button a subject had pressed about 60% of the time – slightly better than a random guess.

"It seems that the brain is making the decision before the person themselves," he says.

Although we make some choices in a heartbeat, Haynes thinks his experiment captures the dawdling tempo of daily life.

"In most cases, we decide internally in a self-paced way: 'Now I want to get some orange juice' or 'I'm going to get some apple juice instead','" he says.

Our brains might pick beverages long before we realise, but Haynes thinks such decisions are still a matter of choice. "My conscious will is consistent with my unconscious will – it's the same process," he says.
Note the careful choice of language in the title of this post: "make decisions conscious," not "make conscious decisions." If, with further replication and refinement, the results bear out, the adjective's position matters a great deal. (With a small sample size and crude technology, the "if" is rather large at this point. We'll have to see.)
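What does "deciphering the brain signals with a computer program" amount to in practice? Roughly this: train a classifier on the activity patterns recorded before each button press and see how far above the 50 percent chance level it climbs. The sketch below is illustration only--simulated data, an off-the-shelf classifier, invented feature counts and noise levels--not a reconstruction of the study's actual pipeline.

```python
# Minimal decoding sketch: predict which button (left/right) will be pressed
# from a noisy "voxel" activity pattern recorded earlier. Data are simulated;
# only the train-and-cross-validate logic mirrors the kind of analysis described.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

labels = rng.integers(0, 2, n_trials)            # 0 = left button, 1 = right button
signal = rng.normal(0, 1, n_voxels)              # the "informative" spatial pattern
X = rng.normal(0, 1, (n_trials, n_voxels))       # background noise
X += 0.05 * np.outer(2 * labels - 1, signal)     # weak, choice-dependent signal

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=10)  # 10-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.0%} (chance = 50%)")
```

With a signal this weak, the cross-validated accuracy typically lands only modestly above the 50 percent floor--the "slightly better than a random guess" regime the article describes.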

Dec 6, 2007

finding a subjective correlative

Neuroscience and literary criticism meet, wonderfully and strangely, in this piece by Philip Davis. By developing and testing hypotheses on literature's effect on cognition, literary neuroscientists are looking for what I'd call a "neural correlative."

Imagine a critic scanning a text for what TS Eliot called the "objective correlative," famously described as
...a set of objects, a situation, a chain of events which shall be the formula of that particular emotion; such that when the external facts, which must terminate in sensory experience, are given, the emotion is immediately evoked....
Similarly, literary neuroscientists would look for the effect, not the cause--examining the brain, not the text. Davis explains:
With the help of my colleague in English language Victorina Gonzalez-Diaz, as well as the scientists, I designed a set of stimuli—40 examples of Shakespeare's functional shift. At this very early and rather primitive stage, we could not give our student-subjects undiluted lines of Shakespeare because too much in the brain would light up in too many places: that is one of the definitions of what Shakespeare-language does. So, the stimuli we created were simply to do with the noun-to-verb or verb-to-noun shift-words themselves, with more ordinary language around them. It is not Shakespeare taken neat; it is just based on Shakespeare, with water....

So far we have just carried out the EEG stage of experimentation under Dr Thierry at Bangor. EEG works as follows in its graph-like measurements. When the brain senses a semantic violation, it automatically registers what is called an N400 effect, a negative wave modulation 400 milliseconds after the onset of the critical word that disrupts the meaning of a sentence. The N400 amplitude is small when little semantic integration effort is needed (e.g., to integrate the word "eat" in the sentence, "The pizza was too hot to eat"), and large when the critical word is unexpected and therefore difficult to integrate (e.g., "The pizza was too hot to sing").

But when the brain senses a syntactic violation there is a P600 effect, a parietal modulation peaking approximately 600 milliseconds after the onset of the word that upsets syntactic integrity. Thus, when a word violates the grammatical structure of a sentence (e.g., "The pizza was too hot to mouth"), a positive going wave is systematically observed.
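The measurement logic here is compact enough to mock up: epoch the EEG around the critical word, average across trials, and compare the mean voltage in the N400 window across conditions. Everything in the sketch below is simulated--made-up sampling rate, trial counts, and amplitudes--so it illustrates only the averaging arithmetic, not the Bangor data.

```python
# Toy ERP analysis: average simulated EEG epochs time-locked to a critical word,
# then compare mean voltage in the N400 window (~300-500 ms) for an easy-to-integrate
# word ("eat") versus an anomalous one ("sing"). All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch: 200 ms before to 800 ms after onset
n_trials = 60

def simulate_epochs(n400_amplitude):
    """Noisy single-trial EEG with a negative-going bump peaking near 400 ms."""
    bump = -n400_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return bump + rng.normal(0, 5, (n_trials, t.size))    # microvolts

easy = simulate_epochs(n400_amplitude=1.0)       # "The pizza was too hot to eat"
anomalous = simulate_epochs(n400_amplitude=4.0)  # "The pizza was too hot to sing"

window = (t >= 0.3) & (t <= 0.5)                 # N400 measurement window
for name, epochs in [("easy", easy), ("anomalous", anomalous)]:
    erp = epochs.mean(axis=0)                    # average over trials -> the ERP
    print(f"{name:9s} mean amplitude, 300-500 ms: {erp[window].mean():+.2f} µV")
```

The anomalous condition shows the larger negative deflection, which is all an "N400 effect" means at this level of description.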
Davis's excitement at the results is as measurable as the N400 effect:
This, then, is a chance to map something of what Shakespeare does to mind at the level of brain, to catch the flash of lightning that makes for thinking. For my guess, more broadly, remains this: that Shakespeare's syntax, its shifts and movements, can lock into the existing pathways of the brain and actually move and change them—away from old and aging mental habits and easy long-established sequences.
To switch poet/critics, this is something akin to Wordsworth's "flash upon that inward eye," electrically measured.

Oct 28, 2007

lose sleep, lose your mind

I'd say my experience as a teacher and coach confirms this.
Feeling cranky after a bad night's sleep? Now there could be an explanation. Brain activity associated with psychiatric illness has been observed in healthy people who missed a single night's sleep. As well as shedding light on why sleep deprivation makes us feel so bad, the study could change our thinking about mental illness.
By 8:00 Friday morning, after a week of sleep deprivation, students are at their most lethargic. By 5:00 on a Saturday evening, after a short night and a 6:30 bus ride, they're at their most hyper--just before the crash, a phenomenon I call "retrorockets."

But it's not just about adolescents. When I'm sitting on roughly five or six hours of sleep per night, little things change. I try to put the toothpaste cap on the toothbrush, or the oatmeal in the refrigerator. I forget where I placed something, or appointments I've set. I can also feel my emotions amping up--not just in the face of, say, distressing news, but even in my dreams.

Some of this week's themes, brought to you by chronic undersleep:
I'm trying to sneak across the border into Mexico, where a gigantic Y-shaped electrified tower-bridge-fence-thing stands in the way. I discover a subway-esque tunnel underneath--but here, parts of the floor are electrified, leading to one tense trip.

I'm sitting in the front passenger seat of my car. Out of the darkness, rabid raccoons begin assaulting the vehicle. I fight them off by slamming the door on their heads.

In yet another, I'm dealing with a troublesome student. He's pestering his buddy, and just as I'm heading over to quell the disturbance, he punches his buddy in the face. I physically have to drag him to the office, where we meet with the administrators. Since the Dave Matthews Band is coming to CHS for a concert, and the kid's a huge DMB fan, they decide he can go if he promises not to do this again. My anger is immense.

In the middle of a different teaching day, word comes out that the United States is under nuclear attack and has retaliated in kind.

I'm riding around on a boat in the middle of a flooded city, with all my possessions aboard. A fellow teacher, driving, goofs around and ends up capsizing the boat. Most of my tacky ties are seemingly lost. Later, when the water has receded, I find some of them under a bed, damp and somewhat discolored.
Real life is better, as my first batch of debaters, half novices, took home several trophies at the Gig Harbor intro tournament, and now I get to see my wife and family this afternoon before heading back to another crazy week.

Sep 14, 2007

the phenomenology of Larry-sight

"I can't see! I can't see!" Larry shouts.

"Why's that?" Moe asks, alarmed.

"I got my eyes closed!"
Of course he can't see. Or can he?


[via Online Papers in Philosophy]

Sep 9, 2007

David Copperfield meets David Hume

George Johnson visits the Magic of Consciousness conference in--where else?--Las Vegas.
Sounding more like a professor than a comedian and magician, Teller described how a good conjuror exploits the human compulsion to find patterns, and to impose them when they aren’t really there.

“In real life if you see something done again and again, you study it and you gradually pick up a pattern,” he said as he walked onstage holding a brass bucket in his left hand. “If you do that with a magician, it’s sometimes a big mistake.”

Pulling one coin after another from the air, he dropped them, thunk, thunk, thunk, into the bucket. Just as the audience was beginning to catch on — somehow he was concealing the coins between his fingers — he flashed his empty palm and, thunk, dropped another coin, and then grabbed another from a gentleman’s white hair. For the climax of the act, Teller deftly removed a spectator’s glasses, tipped them over the bucket and, thunk, thunk, two more coins fell.

As he ran through the trick a second time, annotating each step, we saw how we had been led to mismatch cause and effect, to form one false hypothesis after another. Sometimes the coins were coming from his right hand, and sometimes from his left, hidden beneath the fingers holding the bucket.

He left us with his definition of magic: “The theatrical linking of a cause with an effect that has no basis in physical reality, but that — in our hearts — ought to.”
[via OPP]

Sep 4, 2007

no brain is an island

Stuart Derbyshire, reviewing Chris Frith's Making up the Mind: How the Brain Creates our Mental World, wants to rescue free will from neuroscience. He focuses on schizophrenia because of its challenge to stable conceptions of the self, and because of its mysterious etiology and causality.

In doing so, however, Derbyshire makes two minor but bothersome errors. First, he ignores the vast number of pathologies where causality is better understood. Second, following Frith, he ignores the vast body of recent neuroscientific research on schizophrenia. Considering the plight of schizophrenic patients, Derbyshire quotes Frith, who writes,
‘There are no objective physical signs of schizophrenia. The diagnosis is based on what the patient tells the doctor. Patients say that they hear voices when no one is there (false perceptions – hallucinations). Patients describe how they are persecuted by their colleagues at work when there is no evidence that this is the case (false beliefs – delusions). Patients with hallucinations and delusions are sometimes described as being out of touch with reality. But it is the mental world, rather than the physical world, that they have lost touch with.’
Their fragmented consciousness is hidden from immediate view, but that doesn't make it impervious to objective measurement.

Schizophrenia, according to recent research, is associated with reduced cerebral laterality, reduced amygdala volume, frontal-subcortical circuit dysfunction, gray matter excesses in the caudate nucleus--the list goes on and on. Some of these differences might even be heritable.

In summary:
Until recently, the dominant view was that schizophrenia patients have limited, if any, neuropsychological impairments, and those that are observed are only secondary to the florid symptoms of the disorder. This view has dramatically changed.
Errors aside, what about Derbyshire's broader point, the underappreciated role of the will? Current neuroscience looks for cause and effect in patterns of brain activity. This, according to Derbyshire, takes too narrow a view.
Frith’s dual contentions that reality is illusory and free will is just a manufactured state of mind are both far too strong. Our limited direct access to the world ‘as it truly is’ is certainly a real problem. It is a problem because the world does not divide itself into fact-sized chunks that can be consumed by our senses. Whether a forest is perceived as a unit or an aggregation of many trees is arbitrary, just as it is arbitrary whether we observe leaves as independent or continuous with twigs and whether the twigs are independent or continuous and so on ad infinitum. Nature does not inherently divide itself into salient pieces, and what is salient or important is only revealed in the relationships within nature....

[I]t is only through our relationship with the world that we can come to divide the world and begin to describe it. The facts that we can lay claim to about the world are arbitrary in so far as they are selected from an almost infinite number of potential facts, but we can, nevertheless, have great confidence that the facts we are gathering are real. We can have this confidence because our actions based upon those facts generally lead to expected outcomes – trains move forward through space, telephones transmit recognisable voice signals, medical intervention saves lives, and so on. These happy outcomes indicate that our division of the world is grounded in reality and that although our facts are arbitrarily selected we are not making them up as we go along.

In short, Frith misses, or understates, the role of inquiry in constructing a real representation of the world. Inquiry brings human beings into an understanding of the world that continues to more closely approximate the way the world truly is. The constraints that our brain places upon inquiry do not dictate reality but rather allow us the freedom to interrogate reality....

The fundamental mistake that Frith makes – and this is a common error – is to believe that agency or free will are products only of the human brain. The brain is necessary but it is not sufficient, and chasing agency into the brain will only yield disappointment or, in this case, a sense that agency is illusory. If agency is not merely a product of ordinary brains, then it follows that abnormal brains might not be the whole or only answer when there are psychiatric problems and delusions of agency such as in schizophrenia.
Derbyshire hints at, but never fleshes out, the processes involved in "inquiry." For this, we have to turn to other writings, for example, an attempt to answer the politically charged question, "Can fetuses feel pain?"
[C]onscious function can only emerge if the proper psychological content and environment has been provided. Before infants can think about objects or events, or experience sensations and emotion, the contents of thought must have an independent existence in their mind. This is something that is achieved through continued brain development in conjunction with discoveries made in action and in patterns of mutual adjustment and interactions with a caregiver. The development of representational memory, which allows infants to respond and to learn from stored information rather than respond to material directly available, may be considered a building block of conscious development. Representational memory begins to emerge as the frontal cortex develops between two and four months of age, supported by developments in the hippocampus that facilitate the formation, storage, and retrieval of memories. From this point tagging in memory is possible, or labelling as "something," all the objects, emotions, and sensations that appear or are felt. When a primary caregiver points to a spot on the body and asks "does that hurt?" he or she is providing content and enabling an internal discrimination and with it experience. This type of interaction provides content and symbols that allow infants to locate and anchor emotions and sensations. It is in this way that infants can arrive at a particular state of being within their own mind. Although pain experience is individual, it is created by a process that extends beyond the individual.
I sense an affinity between Derbyshire's position and what the Russian developmental psychologist Lev Vygotsky described as cultural mediation. We form and re-form the experiences of those around us, shaping and re-shaping the brain--ours and others'. In grand metaphysical style, you might say that every "I" is a "we."