SHAFAQNA (Shia International News Association) – The ability to control a tablet with our brains could be here in the near future. Researchers in Samsung’s Emerging Technology Lab say the goal of the project is to expand the ways people interact with their devices. The company’s researchers are working with Roozbeh Jafari, an assistant professor of electrical engineering at the University of Texas at Dallas.
There is no prototype ready for mass manufacture yet. The current lab tests focus on launching an application, selecting a song or a contact, and powering a Samsung Galaxy Note 10.1 on and off. Early-stage research with a cap fitted with EEG monitoring electrodes suggests that such an interface could help people with mobility impairments accomplish tasks they couldn’t manage before. The researchers demonstrated opening apps without touching the tablet at all, simply by concentrating on icons that blinked at distinctive frequencies. The ultimate goal is to make ‘dry’ EEG sensors that can be embedded in a cap, so that all people would have to do is wear the cap to use their tablet without repeatedly poking the screen.
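The blinking-icon demonstration relies on a standard brain-computer-interface technique sometimes called frequency tagging: each icon flickers at its own rate, and the flicker frequency of the icon the user attends to shows up as a peak in the EEG power spectrum (the steady-state visually evoked potential, or SSVEP). Below is a minimal sketch of that detection step; it is our illustration, not Samsung's code, and the sampling rate and flicker frequencies are assumed values.

```python
# Minimal sketch of SSVEP-style "frequency tagging" (our illustration,
# not Samsung's implementation). Each on-screen icon flickers at its own
# rate; the icon the user attends to dominates the EEG power spectrum.
import numpy as np

FS = 256                               # EEG sampling rate in Hz (assumed)
ICON_FREQS = [8.0, 10.0, 12.0, 15.0]   # hypothetical flicker rates, one per icon

def detect_attended_icon(eeg):
    """Return the index of the icon whose flicker frequency carries the
    most power in the EEG segment."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    # Power at the spectral bin nearest each candidate flicker frequency
    scores = [power[np.argmin(np.abs(freqs - f))] for f in ICON_FREQS]
    return int(np.argmax(scores))

# Demo: two seconds of synthetic EEG, noise plus a 12 Hz attention response
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(t.size)
print("attended icon:", detect_attended_icon(eeg))   # -> 2 (the 12 Hz icon)
```

-www.shafaqna.com/English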
SHAFAQNA (Shia International News Association) -- A team of Japanese neuroscientists has used brain scanning technology to read the content of people's dreams.
Yukiyasu Kamitani of the ATR Computational Neuroscience Laboratories in Kyoto and his colleagues used functional magnetic resonance imaging (fMRI) to scan the brains of three people as they slept, while simultaneously recording their brain waves using electroencephalography (EEG).
The researchers woke the participants whenever they detected the brain wave patterns associated with the earliest stages of sleep, asked them what they had just dreamed about, and then let them go back to sleep. This was done in three-hour blocks, and repeated between 7 and 10 times, on different days, for each participant.
During each block, participants were woken up 10 times per hour. Each volunteer reported having visual dreams 6 or 7 times every hour, giving the researchers a total of around 200 dream reports from each of them.
Most of the dreams reflected everyday experiences. "I had a dream [that I was at] a bakery. I took a roll … then went out on the street, and saw a person taking a photograph," reported one participant. "I saw a big bronze statue … on a small hill [and] below the hill there were houses, streets, and trees," said another. Some contained slightly more unusual content, such as meeting a film star or being in a recording studio.
Kamitani and his colleagues used a lexical database called WordNet to extract key words from the participants' verbal reports, and picked 20 categories — such as "car", "male", "female", and "computer" — that appeared most frequently in their dream reports. They then selected photos representing each category, scanned the participants' brains again while they viewed the images, and compared brain activity patterns with those recorded just before the participants were woken up.
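To make the WordNet step concrete, here is a rough sketch of grouping report words into broader categories via WordNet's hypernym hierarchy, using the NLTK interface. The base categories, the helper function and the example sentence are our own simplifications; the study's actual extraction procedure was more involved.

```python
# Rough sketch of mapping dream-report words onto broader categories via
# WordNet hypernyms (our simplification, not the study's actual pipeline).
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

# A few base categories in the spirit of the paper's 20 (hypothetical picks)
BASE = {
    "car": wn.synset("car.n.01"),
    "male": wn.synset("male.n.02"),
    "female": wn.synset("female.n.02"),
    "computer": wn.synset("computer.n.01"),
}

def categories_for(word):
    """Return every base category that `word` falls under as a noun."""
    hits = set()
    for syn in wn.synsets(word, pos=wn.NOUN):
        ancestors = {h for path in syn.hypernym_paths() for h in path}
        for name, base_syn in BASE.items():
            if syn == base_syn or base_syn in ancestors:
                hits.add(name)
    return hits

report = "I saw a man standing by a sedan holding a laptop"
for token in report.lower().split():
    cats = categories_for(token)
    if cats:
        print(token, "->", sorted(cats))
# e.g. man -> ['male'], sedan -> ['car'], laptop -> ['computer']
```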
The researchers analysed activity in brain areas V1, V2 and V3, which are involved in the earliest stages of visual processing and encode basic features of visual scenes, such as contrast and the orientation of edges. They also looked at several other regions that are involved in higher order visual functions, such as object recognition.
In 2008, Kamitani and his colleagues reported that they could decode and reconstruct visual images from the activity in these brain areas. Now, they have found that activity in the higher order brain regions could accurately predict the content of the participants' dreams.
"We built a model to predict whether each category of content was present in the dreams," says Kamitani. "By analysing the brain activity during the nine seconds before we woke the subjects, we could predict whether a man is in the dream or not, for instance, with an accuracy of 75–80%."
He adds that the experiments did not examine the visual structure of the participants' dreams. "It's about their meaning, but I still think it's possible to extract structural characteristics like shape and contrast, as we did in 2008."
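In outline, then, the decoder is a pattern classifier: train it on brain activity evoked by pictures from a category, then apply it to the activity recorded just before each awakening. The sketch below illustrates that logic with synthetic data and a generic linear model; the authors' actual features, model and validation scheme differ in detail.

```python
# Sketch of the decoding logic: train a linear classifier on brain activity
# evoked by pictures containing a category, then test on activity recorded
# just before waking. Synthetic data stands in for real fMRI voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 500
signal = rng.normal(size=n_voxels)     # pattern evoked by "male present"

def simulate(n_trials, present):
    X = rng.normal(size=(n_trials, n_voxels))   # background activity
    return X + 0.1 * signal if present else X

# Training: responses while viewing photos with / without a male figure
X_train = np.vstack([simulate(100, True), simulate(100, False)])
y_train = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test: activity from the seconds before each awakening
X_dream = np.vstack([simulate(50, True), simulate(50, False)])
y_dream = np.array([1] * 50 + [0] * 50)
print("decoding accuracy:", clf.score(X_dream, y_dream))  # well above 0.5
```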
The work was presented at the annual meeting of the Society for Neuroscience in New Orleans last October, and has now been published in the journal Science. It suggests that dreaming and visual perception share similar neural representations in the higher order visual areas of the brain.
"This is an interesting and exciting piece of work," said Jack Gallant of the University of California, Berkeley. "Because dreams can be decoded more accurately from higher-level visual cortex than from primary visual cortex, it suggests that dreaming involves some of the same brain areas that are involved with visual imagery."
"And because dream decoding is most accurate for a few tens of seconds before waking, it also seems to suggest that our waking recall of dreams is based on short-term memory."
Kamitani and his colleagues are now trying to collect the same kind of data from rapid eye movement (REM) sleep, a later stage of sleep also associated with dreaming. "This is more challenging because we have to wait at least one hour before sleeping subjects reach that stage," he says. "I don't have a pet theory about the function of dreams, but knowing more about their content and how it relates to brain activity may help us to understand them."
SHAFAQNA (Shia International News Association) – Ed Boyden tilts his head downward, remaining still except for his eyes, which dart back and forth between blinks for a full 10 seconds. Then, as if coming up for air from the sea of knowledge, he takes a breath, lifts his head back up and begins to speak again.
During these contemplative moments, you have to wonder what's going on inside the head of this young scientist who, at age 33, has already helped invent influential technologies in the study of the human brain.
It made sense when he told me, on a cold February day in his office at the Massachusetts Institute of Technology, "I guess I was always a philosopher at heart as a kid."
The morning of our meeting, The New York Times had just reported the Obama administration is considering funding an initiative called the Brain Activity Map project.
This is a collaboration of researchers who are seeking tools to map the human brain in unprecedented detail. A better understanding of how thoughts lead to actions, and how neural circuits lead to disease, could influence treatments for such conditions as epilepsy, autism, dementia, schizophrenia and even paralysis. Boyden is already working on such tools.
Clearly excited, Boyden bounces from his computer, where he's getting "zillions" of e-mails about the Brain Activity Map news, to the small table where we're speaking. In December, he participated in a brainstorming session in Washington about the endeavor. After our meeting, he tells me, he'll lead a conference call with other researchers about next steps.
That was before the government's forced budget cuts, so it's unclear whether new federal money for brain research will come through anytime soon. But the Brain Activity Map proposal is mainly about innovative collaborations in neuroscience, which are taking place anyway. Boyden is a co-author of a new paper in the journal ACS Nano, describing the value of nanotechnology tools in mapping the brain.
His efforts are being rewarded, as evidenced by the hodgepodge of awards in his office (as well as a certificate for "Mr. Most Likely To Be Late Because He is Teaching Students How to Build a Microscope"). In mid-March, Boyden won the €1 million Grete Lundbeck European Brain Research Prize, shared with five other scientists, for his pioneering work in using light to manipulate the brain.
"If you had to make a list of all the people in the world who are innovating in neuroscience, I think he'd be at the top of it," said Garrett Stanley, associate professor of biomedical engineering at Georgia Institute of Technology.
Optogenetics: Lighting up the brain
When you electrically stimulate one part of the brain, a lot of nerve cells called neurons get hit at once. In order to understand what particular kinds of neurons do, there needs to be a way to target them separately. For this, Boyden and colleagues turned to nature.
"All over the tree of life, you can find organisms that use molecules to convert light into electricity for photosynthesis or photosensation," Boyden said.
One example is a single-celled alga, which has a small eyespot -- a brown sphere -- that senses light, prompting hairlike structures called flagella to beat and effectively propel the cell through the water.
What if you could take a small piece of DNA from the algae and transplant it into a neuron so the neuron now produces a light-sensitive protein (and installs it on the cell's surface)? Then, Boyden and colleagues reasoned, you would have a neuron that could be turned on or off with light.
As a graduate student at Stanford, Boyden would often get into late-night conversations with Dr. Karl Deisseroth, who was an M.D.-Ph.D. student at the time working in the same lab. Deisseroth and Boyden began exploring their common interest in how to control specific types of neurons in the brain. Together, they came up with the idea of inserting light-sensitive proteins in particular kinds of neurons.
It was August 4, 2004, around 1 a.m. when Boyden put a dish of cultured neurons under a microscope. The neurons were genetically altered with a light-sensitive protein called a channelrhodopsin. He shined a blue light at them. Amazingly, on the first try, the technique worked.
Boyden recalled that night of scientific discovery in an essay published online by Faculty of 1000. He wrote: "I e-mailed Karl, 'Tired, but excited.' He e-mailed back, 'This is great!!!!!'"
The data that Boyden collected that night demonstrated the ideas he and Deisseroth would later formalize in a 2005 Nature Neuroscience paper. Around that time, the term "optogenetics" was born to describe what they were doing. Deisseroth is also a co-recipient of the recent European brain research prize.
Today, at least 1,000 neuroscience groups worldwide are using optogenetics to study the brain. The technique has so far been used in monkeys and mice to control their behavior, which has its own importance because it can yield new insights about the brain in general, Boyden said. Developing treatments for humans is another goal, but for reasons of ethics and complexity, that takes a lot longer.
What you can do with light
Being able to turn individual cells on and off could be powerful in finding therapies for brain disorders. For example, researchers could explore whether particular kinds of neurons are involved in the symptoms of schizophrenia, and selectively turn those off while leaving neurons essential for thinking intact.
This also has implications for treating addiction. In one experiment, scientists altered a mouse's brain so dopamine neurons -- involved in the sensation of pleasure, as well as addiction -- could be turned on with light. They delivered brief light pulses through optical fibers, prompting the mouse to stick its nose in a small portal over and over.
"An activation in these neurons lasting one-fifth of a second is enough to make the animal do more of whatever it was just doing," Boyden said.
Neurons can also be selectively silenced with light when they express other kinds of proteins. This is an avenue of exploration for treatments for epilepsy, a condition in which excess neuronal activity produces seizures.
Optogenetics may also prove crucial to creating a treatment for blindness; the idea is that you could make cells light-sensitive, converting the eye into a camera and restoring vision, Boyden said.
"The tools he's developed are so broadly applicable," said Craig Forest, assistant professor at Georgia Institute of Technology. "Is it a stretch to say it's like the microscope for the brain?" He adds, "Optogenetics is the, kind of, microscope for neuroscience right now."
An accelerated career
The seeds of Boyden's career were planted in childhood. Growing up north of Dallas, he wanted to understand something about humanity and why we are here. He liked math better than science at first: "Math was the way of getting to the inner truth of things," he said. But then he wanted to know how our minds are able to understand math.
His thoughts gave way to an idea he now calls the "loop of understanding": Math is how we understand things at a deep level, our minds do math, the brain gives rise to our minds, biology governs our brains, chemistry implements biology, the principles of physics rule over chemistry, and physics runs on math. It's a loop from math to math, with all the knowledge in between.
"I don't think I came up with that eloquent way to describe it until I actually came to MIT, but yeah, I was very interested in these kinds of things as a young teenager," he said.
His interest in science wasn't entirely a surprise; his mother has a master's in biochemistry, and conducted research on nicotine, but stayed home to take care of Boyden and his sister. His father was a management consultant (both parents are now retired).
"I was really a workaholic from age 10 onwards, I guess," he said.
Boyden remembers a statewide science fair in Texas when he was 12, where science-minded kids went to present their research.
It's hard to get him to talk about it now, but when I pressed him on the topic, he admitted, matter-of-factly, that he won the fair.
"Oh, you did! Do you remember what it was?" I asked of his project.
Boyden looked slightly uncomfortable, as if it were an embarrassing secret. "Uh, it was an area of geometry that has to do with the number of points on a plane and how they're connected and so on. It's not very easy to explain, nor frankly anything valuable."
However trivial his school-aged projects seem in retrospect, Boyden got to skip two grades. He enrolled at MIT when he was 16. On campus, he chose to live in a dormitory with a partying reputation. "I was very quiet, and so I went there to try and learn how to interact more with people," he said.
Today, he seems to have no trouble in that regard. Said Forest, who works on technologies with him to study individual neurons, "Not only is he just an incredible innovator, but he is just a really nice guy."
When Boyden arrived at Georgia Tech in February to give a lecture in the "Young Innovators" series, his puffy North Face jacket, slightly tousled curly locks and frizzy beard contrasted with Forest's pressed suit, lilac collared shirt and short salt-and-pepper hair -- almost like an Apple vs. PC commercial. The two communicate almost every day.
It's obvious how much respect Boyden has for his collaborators. Speaking to a standing-room-only lecture hall lined with dozens of students and faculty members, he spent the last minutes of his presentation reading off the names of students, post-doctoral fellows, collaborating groups and funding sources from a projected slide that listed even more of them. Then, he looked to the crowd and, almost in a whisper, said, "Thanks."
The idea that you could silence or activate neurons with light is a powerful one in combination with the tools that Boyden and Forest are working on together.
The two met at a social event at the MIT Museum while Forest was working on his Ph.D. at MIT. At that time, Forest was developing an instrument for genetic analysis involving thousands of glass tubes. Boyden told him about his own work in neuroscience, and about a problem that involved glass tubes for measuring neurons.
Forest didn't know anything about neuroscience, but he and colleagues "had a lot of expertise in how to build great devices," Boyden said.
The two researchers realized they could combine forces by creating a bundle of glass tubes, similar to what was being used for DNA analysis, for recording the electrical activity of individual neurons.
Neuroscientists often use mice, which have brains that structurally resemble those of humans, to test new technologies. Mice were the subjects of their first tests.
"We had no idea what we were doing at first," Forest said. "We went from a bundle back to one tube. We got that to work after three years of trial and error."
Those years paid off. Boyden and Forest, working with Forest's graduate student Suhasa Kodandaramaiah, developed a robotic system to record activity of a single neuron in the living brain using a tiny glass tube. The general technique, which has been around for about 30 years, is called "patch clamping," but this is the first time it has been fully automated through the assistance of a robot. The researchers have demonstrated the effectiveness of this technique in living mice, both awake and asleep.
Recording the neurons
Scientists know that the tube -- the "micropipette" -- has hit a neuron by measuring the electrical resistance between the tube and the brain. Resistance increases dramatically when the small needle bumps up against a neuron, as the tube forms a tight seal with the nerve cell.
"At that point, neuroscientists get very excited because you have the ability to hear the electrical activity of that single cell," Forest said. "Despite all the firing of millions of neurons in the neighborhood, once I form that seal, I can hear that one with exquisite sensitivity and precision."
There are only about a dozen of the automated patch-clamping robots in the world, but it's not Boyden and Forest's intention to keep them a secret. On the contrary, they've posted the complete plans for the robot on the Internet so anyone can download them and create their own.
"I would argue, if you really understand how the mind computes your thoughts and emotions -- things like sharing credit, teaching and doing good for humanity become natural," Boyden said.
The researchers are also working on a scaled-down version that would cost less and be more portable. They have spun off an Atlanta-based start-up company called Neuromatic Devices to sell the finished products, but neither holds equity in it.
Optogenetics can be combined with automated patch clamping to identify neurons of interest and then measure their activity. The robot could also perform single-cell surgery.
Forest's group has already used the robot concept to simultaneously record intracellular activity from more neurons than anyone else -- only three, surprisingly -- in a live mouse. The Brain Activity Map's ambitions involve recording from thousands to millions of neurons at once.
By that comparison, current neural recording technology is like a steam engine, when what's needed is a rocket ship, Forest said.
"We've got a lot of work to do, and our hope is that this just plants some initial seeds towards that endeavor," he said.
One of the people e-mailing Boyden after The New York Times story was George Church, who has published research with him in the past. Church was one of the leading figures of the Human Genome Project, and a current backer of the Brain Activity Map.
Optogenetics, combined with high-density optical fiber arrays, could be a promising tool in the quest to map the brain, Church said. Researchers could use the fibers to manipulate many light-responsive neurons at once, and high-density arrays would offer more, and thinner, probes for neuronal exploration than bulky electrodes.
Boyden and colleagues are also working on three-dimensional brain interfaces, manufactured in a way similar to computer chips. Rather than carrying computer circuitry, they carry dense arrays of electrodes that would let researchers pick up the activity of many neurons at once.
The data from recording so many neurons will require a huge amount of computer storage; for recording the entire brain at once, it would be mind-boggling.
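Some rough arithmetic shows why. Assuming a typical extracellular sampling rate of 30 kHz and 16-bit samples (our assumptions, not figures from the article), the raw data rate scales linearly with the number of neurons recorded:

```python
# Back-of-the-envelope storage math for large-scale neural recording.
# Sampling rate and sample size are our assumptions, not the article's.
SAMPLE_RATE_HZ = 30_000   # typical extracellular sampling rate
BYTES_PER_SAMPLE = 2      # 16-bit samples

def human(bps):
    """Format a bytes-per-second figure with a sensible unit."""
    for unit in ("B/s", "KB/s", "MB/s", "GB/s", "TB/s", "PB/s"):
        if bps < 1000:
            return f"{bps:,.1f} {unit}"
        bps /= 1000
    return f"{bps:,.1f} EB/s"

for n_neurons in (3, 1_000, 1_000_000, 86_000_000_000):
    rate = n_neurons * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
    print(f"{n_neurons:>14,} neurons: {human(rate)}")
# Three neurons is a trickle (~180 KB/s); a million is ~60 GB/s; all ~86
# billion neurons of a human brain would be roughly 5 PB every second.
```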
"Most of neuroscience has gone on with people studying one part of the brain at a time," said Georgia Tech's Stanley. "Here's somebody's lab over here that studies this part of the brain, there's somebody's over there that studies that part of the brain. Putting it all together is really a tough problem."
Controlling his own brain
People don't like to talk about enhancing the brain, Boyden said; it makes people uneasy to think about designing or engineering a way to sharpen our minds. Yet plenty of people take pharmaceuticals -- sometimes without a prescription -- to help themselves focus or be less anxious, and caffeine and alcohol have been around for centuries.
"I think the most important thing is for humanity to openly discuss this topic," he said. "If we can discuss it, and we also can talk about side effects, should we maybe try to design more optimized versions of things?"
Enhancing his own brain was something Boyden thought about long before he became a neuroscientist.
At 18, he was taking six courses per term, all graduate-level. Feeling overloaded, and also curious about what it would be like to take a drug that focuses the mind, he went to the campus psychiatrist and got a prescription for Ritalin. He doesn't remember anyone telling him to just take fewer classes.
The drug did make him focus, a lot. He took one pill when he woke up and another just before plasma physics class, "so I don't remember much from that term except for plasma physics," he said.
But after a couple of months he stopped taking the Ritalin. He's not sure whether the drug altered him permanently, or whether he simply learned to focus better, picking up skills for "how to simulate some of those processes in my mind."
Academically, Boyden never slowed down. On his 25-page resume, he lists perfect, or more-than-perfect, GPAs throughout high school, college and graduate school. At Stanford, he met his now-wife, neuroscientist Xue Han. They have published research together and are raising two young children. His "spare time" goes to his family.
In the last six years, he's found something new to calm himself and help him concentrate: meditation.
He meditates every day when he wakes up, incorporating a structure called "internal family systems," which involves looking at all the drives, thoughts and desires in your mind.
"By showing compassion for them, you can get them to become less polarized and work with them and negotiate between them and it's very powerful," Boyden said.
It takes him only a couple of minutes to get into a meditative state, which calms him and helps him analyze and address anxious thoughts.
I was surprised when he said he didn't know anything about the neuroscience behind meditation, which is a hot topic among other researchers, nor did he care about it as far as his own practice.
"I think that people should be careful about the need to see something in the brain to justify its worth," he said. "If you don't see anything in the brain, that might just mean the resolution of our brain (scans) just isn't good enough yet."
You can tell Boyden thinks a lot, for his own curiosity, and his vibe is more "grad student"-like than professorial. His frizzy beard resembles the thin branch-like connections between neurons that he's talking about. Every time he runs his hand through his curly brown hair, it falls down in a different unkempt way.
He's thinking these days about what is needed to "solve" the brain, which he believes could be done in 40 years. "Solving" it would probably involve understanding what gives rise to a thought or an emotion, as well as maps of molecules and circuits that allow the design of therapeutic interventions. Building technology platforms that enable this kind of research -- so "we can repair the brain" -- is one of his core hopes.
Stanley isn't sure all that can happen in the next few decades, but he does think it's possible that, just as cancer can be managed as a chronic disease, disorders of the nervous system may move in the same direction.
Forest chimes in: "There are prosthetics today, and those will continue to improve; these things exist in a rudimentary form today," he said. "Over our lifetimes, we will see increasing roles for technology in managing disease states."
Boyden admitted to enjoying the prominence of his work insofar as it makes more people invested in turning these ideas at the frontiers of brain science into reality.
"I think you really need to understand how your mind computes your thoughts," Boyden said. "I think that's incredibly important, so I like the fact that the prominence means the field and people entering it (and), you know, the world, want to make this happen."
Attention doesn't seem to be what's driving him. He's a man of many ideas, and wants to understand the biology behind where ideas come from.
"I guess I'm still drawn by the philosophy," he said.-www.shafaqna.com/English
SHAFAQNA (Shia International News Association) -- As anyone who’s seen a yogurt commercial knows, our guts are teeming with bacteria. So, too, are our hands, feet, ears, and mouths.
But our brains?
Until recently, scientists would have said no way. The brain was long thought to be a kind of fortress, separated from the body by a virtually impenetrable barrier of specialized cells. Now, that view is beginning to shift, with increasing evidence that aliens can, and do, sneak in.
The latest evidence comes from a team of researchers in Canada, who found that a type of bacteria usually found in soil may make its way into some of our brains.
That possibility is “a mind-bending concept,” said Kathy Spindler, a professor of microbiology and immunology at the University of Michigan who was not involved in the new work. If confirmed, the study would “upset the dogma that the brain is normally a sterile site,” said Vincent Young, an infectious diseases physician and microbiologist also at the University of Michigan. If living bacteria help to maintain brain health in some way, disruptions to them, for example from antibiotics, could contribute to disease. (In other parts of the body, disruptions to native bacteria may play a role in some asthma, food allergies, inflammatory bowel disease, and even obesity, he added.)
The Canadian researchers weren’t looking for bacteria. But in the course of analyzing human brain tissue they came across genetic material typically linked to them. “That’s what tipped us off that there was something going on,” said Christopher Power, a professor of neurology at the University of Alberta who led the research group.
The surprise material turned out to be associated with alpha-proteobacteria, a kind of bug that normally hangs out in soil. The researchers found these bacterial molecules in brain samples from people with HIV, as well as people with no known infectious disease but who had undergone brain surgery. When they ground up human brain tissue and injected it into mice, bacterial molecules were detectable several weeks later in the mice’s brains, suggesting that something of the bugs had stuck around.
Does this mean that living, growing bacteria are crawling around in our white matter? Or that bacterial genetic fragments somehow persisted in the brain? Could be—which is kind of spooky.
“This is the kind of paper that raises fascinating questions,” said Young. A lot of people will now want to prove or disprove the presence of viable bacteria—and figure out what, if anything, the bacteria might be doing.
Power said there was no evidence that the alpha-proteobacteria he found caused disease. Nor was there evidence so far that it provided a benefit, as “good bacteria” do in the gut, for instance, by aiding digestion.
What’s remarkable, though, is that bacteria, or genetic material from bacteria, could be present in the brain at all, because to get there they would have had to cross a boundary often viewed as nearly inviolable. This dogma dates back to the 1800s, when researchers first noted that dyes injected into the bodies of animals tended not to show up in their brains, Spindler said. Dyes injected into the brain, meanwhile, tended not to appear in the body. So evolved the concept of a blood-brain barrier: an interface that allows nutrients and certain key molecules to cross into the brain while keeping most other compounds out. The brain is a castle and this is its moat, as experts have described it.
Yet as recent work suggests, invaders have found all sorts of ways into the inner sanctum. Scientists have discovered, for instance, that HIV hides inside white blood cells that enter the brain in order to look for pathogens; they call this the Trojan horse strategy. (Power speculates that alpha-proteobacteria might enter the brain the same way.) Researchers have also shown that the herpes virus sneaks into the brain by moving along the axons of nerve cells. Other viruses may simply break through blood vessel walls, slipping between or even through the cells, Spindler said. Some foreigners in the brain typically cause disease. Others seem mainly to be quiet, long-term visitors.
And others still are mystery guests whose potential effects are unknown. Last fall, for instance, researchers found male genetic material in the brains of women (who almost certainly were not born with it). Perhaps during pregnancy, the scientists suggested, cells from male fetuses had crossed the placenta and entered the women’s bodies. But how exactly did those fetal cells (or some of their DNA) cross the blood-brain barrier and enter the brain? How did they persist for so long, and what, if anything, were they doing?
The work on bacteria raises similar questions. It also nudges us toward a different view of the brain—in which a bit more otherness is present, intertwined with self. Maybe deep in our brains, a few bacteria are nestled near some quiescent virus and a touch of fetal DNA? Maybe we really are hybrid creatures through and through.
SHAFAQNA (Shia International News Association) – Brain cells can live at least twice as long as the organisms in which they reside, according to new research.
The study, published Monday, Feb. 25, in the journal Proceedings of the National Academy of Sciences, found that mouse neurons, or brain cells, implanted into rats can survive with the rats into old age, twice as long as the life span of the original mice. The findings are good news for life extension enthusiasts.
"We are slowly but continuously prolonging the life of humans," said study co-author Dr. Lorenzo Magrassi, a neurosurgeon at the University of Pavia in Italy.
So if the human life span could be stretched to 160 years, "then you are not going to lose your neurons, because your neurons do not have a fixed lifetime."
While most of the cells in the human body are being constantly replaced, humans are born with almost all the neurons they will ever have. [10 Odd Facts About the Brain]
Magrassi and his colleagues wanted to know whether neurons could outlive the organisms in which they live (barring degenerative diseases like Alzheimer's).
To do so, the researchers took neurons from mice and implanted them into the brains of about 60 rat fetuses.
The team then let the rats live their entire lives, euthanizing them when they were moribund and unlikely to survive for more than two days, and then inspected their brains. The life span of the mice was only about 18 months, while the rats typically lived twice as long.
The rats were found to be completely normal (though not any smarter), without any signs of neurological problems at the end of their lives.
And the neurons that had been transplanted from mice were still alive when the rats died. That means it's possible the cells could have survived even longer if they were transplanted into a longer-lived species.
The findings suggest that our brain cells won't fail before our bodies do.
"Think what a terrible thing it could be if you survive your own brain," Magrassi told LiveScience.
While the experiments were done in rodents, not humans, the findings could also have implications for neuronal transplants used to treat degenerative diseases like Alzheimer's disease or Parkinson's disease, Magrassi said.
But just because brain cells may be able to live indefinitely doesn't mean humans could live forever.
Aging is dependent on more than the life span of all the individual parts in the body, and scientists still don't understand exactly what causes people to age, Magrassi said.-www.shafaqna.com/English
Source: Fox News
SHAFAQNA (Shia International News Association) – Scientists are set to release the first batch of data from a project designed to create the first map of the human brain.
The project could help shed light on why some people are naturally scientific, musical or artistic.
Some of the first images were shown at the American Association for the Advancement of Science meeting in Boston.
I found out how researchers are developing new brain imaging techniques for the project by having my own brain scanned.
Scientists at Massachusetts General Hospital are pushing brain imaging to its limit using a purpose-built scanner, one of the most powerful in the world.
The scanner's magnets need 22 MW of electricity - enough to power a nuclear submarine.
The researchers invited me to have my brain scanned. I was asked if I wanted "the 10-minute job or the 45-minute 'full monty'" which would give one of the most detailed scans of the brain ever carried out. Only 50 such scans have ever been done.
I went for the full monty.
It was a pleasant experience, enclosed in the scanner's vast twin magnets, as powerful and rapidly changing magnetic fields probed the tiny amounts of water travelling along the larger nerve fibres.
By following this water, the scientists in the adjoining cubicle were able to trace the major connections within my brain.
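This tracing step is known as tractography: starting from a seed point, software repeatedly steps along the locally dominant diffusion direction to draw out a fibre pathway. The toy sketch below shows the core loop of that simple deterministic scheme; it is our illustration, not the Connectome pipeline.

```python
# Toy sketch of deterministic streamline tractography: from a seed point,
# repeatedly step along the locally dominant diffusion direction. Real
# pipelines handle crossing fibres, interpolation and stopping criteria
# far more carefully than this.
import numpy as np

def track_streamline(seed, principal_direction, step=0.5, n_steps=200):
    """Follow the dominant water-diffusion direction from `seed`.
    `principal_direction(p)` must return a unit vector at position p."""
    points = [np.asarray(seed, dtype=float)]
    prev = None
    for _ in range(n_steps):
        d = principal_direction(points[-1])
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                        # keep heading the same way
        points.append(points[-1] + step * d)
        prev = d
    return np.array(points)

# Demo with a synthetic direction field whose fibres curve gently
field = lambda p: np.array([np.cos(0.01 * p[0]), np.sin(0.01 * p[0]), 0.0])
print(track_streamline([0.0, 0.0, 0.0], field)[:3])   # first few points
```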
Arcs of understanding
The result was a 3D computer image that revealed the important pathways of my brain in vivid colour. One of the lead researchers, Professor Van Wedeen, gave me a guided tour of the inside of my head.
He showed me the connection that helped me to see and another one that helped me understand speech. There were twin arcs that processed my emotions and a bundle that connected the left and right sides of my brain.
Prof Wedeen used visualisation software that enabled him to fly around and through these pathways - even to zoom in to see intricate details.
He and his team hope to learn how the human mind works and what happens when it goes wrong.
"We have all these mental health problems and our method for understanding them has really not changed for over a hundred years," he said.
"We don't have imaging methods as we do for the heart to tell what's really going on. Wouldn't it be fantastic if we could get in there and see these things and give people advice concerning what their risks are and how we could help them overcome those problems?"
The brain imaging technology is being developed for a US-led effort to map the human brain called the Human Connectome Project.
And just as with the Human Genome Project before it, the data will be publicly released to scientists as the scans are processed, with the first tranche of data, from between 80 and 100 people, to be released in a few weeks' time.
The HCP is a five-year project funded by the National Institutes of Health. The aim of the $40m programme is to map the entire human neural wiring system by scanning the brains of 1,200 Americans.
Researchers will also collect genetic and behavioural data from the subjects in order to build up a complete picture of the factors that influence the human psyche.
Unlike that of an electronic device, the brain's wiring diagram is not fixed. Changes are thought to occur after each experience, and so each person's brain map is different - an ever-changing record of who we are and what we have done.
The HCP will be able to test the hypothesis that minds differ as connectomes differ, according to Dr Tim Behrens of Oxford University, UK.
"We're likely to learn a lot about human behaviour," he told BBC News.
"Some of the connections between different parts of the brain might be different for people with different characters and abilities, so for example there's one connection we already know about in people who like taking risks and (a different one) for people who like playing it safe.
"So we'll be able to tell the type of people who like skydiving and who would rather watch TV from their brain scans.
"It will be an amazing resource for the neuroscience community to help them in their work to understand how the brain works," he said.
Prof Steve Petersen, who works with the HCP at Washington University in St Louis, wants to identify the different parts of the brain involved with our ability to think about scientific problems, to concentrate and to hold information in our memory.
"The romance to me is that we are getting to our humanity," he said.-www.shfaqna.com/English
SHAFAQNA (Shia International News Association) – A new study is the first to link chronic pain to epigenetic changes in the brain.
And those changes can persist months after the injury, according to researchers at McGill University.
Epigenetic changes may be caused by environmental factors, including diet, exposure to contaminants, and social conditions such as poverty—and may have a long-term effect on the activity of our genes.
The crucial difference between “genetic” and “epigenetic” causes of disease is that genetic changes are inherited and fixed, while epigenetic changes are potentially reversible.
The team at McGill has discovered a mechanism that embeds the memory of an injury in the way DNA is marked in the brain by chemical tags called methyl groups, a process known as DNA methylation. They report in the journal PLOS ONE that if the symptoms of chronic pain are attenuated, the abnormal changes in DNA methylation can be reversed.
Previous research at McGill has shown that experiences and not solely chemicals alter the way genes are marked epigenetically, affecting our behavior and well-being. DNA methylation, an epigenetic mark on the gene itself, can therefore serve as a “memory” of an experience that will alter the way the gene functions long after the original experience is gone.
The new study is the first to link chronic pain to genome-wide epigenetic changes in the brain.
“Injury results in long-term changes to the DNA markings in the brain; our work shows it might be possible to reverse the effects of chronic pain by interventions using either behavioral or pharmacological means that interfere with DNA methylation,” says Moshe Szyf, a professor in the Department of Pharmacology and Therapeutics, who co-led the study with Laura Stone, a professor at the Faculty of Dentistry.
“Our findings have the potential to completely alter the way we treat chronic pain.”
In this study, the researchers show that behavioral interventions that reverse chronic pain also remove differences in DNA methylation in the brain.
The team reports that alterations in global DNA methylation are observed in the prefrontal cortex (PFC) and amygdala of mice many months after injury to a nerve, and that environmental enrichment reduces both the pain and the pathological changes in global PFC methylation. They also found that the total amount of global methylation in the PFC significantly correlates with pain severity.
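Statistically, that last result is a straightforward correlation across animals. A generic sketch of such an analysis, with synthetic numbers standing in for the study's measurements:

```python
# Generic sketch of the reported correlational analysis, using synthetic
# values in place of the study's measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_mice = 24
# Hypothetical per-animal measurements (arbitrary units)
pain_severity = rng.normal(5.0, 1.5, n_mice)
pfc_methylation = 0.8 * pain_severity + rng.normal(0.0, 1.0, n_mice)

r, p = pearsonr(pfc_methylation, pain_severity)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A significant positive r would mirror the reported methylation-pain link.
```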
“These results suggest that epigenetic modulation mediates chronic pain-related alterations in the central nervous system (CNS), forming a ‘memory trace’ for pain in the brain that can be targeted therapeutically,” says Stone.
Since epigenetic marks respond to environmental changes, these mechanisms represent a mind-body link between chronic pain and the brain at the genomic level.
“The implications of this work are wide reaching and may alter the way we think about chronic pain diagnosis, research, and treatment,” adds Stone.-www.shafaqna.com/English
SHAFAQNA (Shia International News Association) – For women about to embark on nine months of pregnancy, or for cancer patients undergoing their first round of chemotherapy, finding the right doctor is crucial.
Having the right physician who makes a patient feel comfortable and safe can influence everything from patient satisfaction to even clinical outcomes.
While many studies have focused on how the patient benefits from the doctor-patient relationship, very few have focused on how the physician feels when he or she treats someone. As it turns out, the doctor is more in-tune with the patient experience than one might think.
Researchers from Massachusetts General Hospital (MGH) and Beth Israel Deaconess Medical Center/Harvard Medical School conducted a novel investigation into the minds of doctors, performing brain scans on physicians while they believed they were treating patients.
Walking a mile in patients’ shoes
Analysis of the scans revealed that doctors put themselves in their patients’ shoes – as their brains actually feel the pain their patients feel.
“Everyone studies the biology of the placebo effect in the patient,” lead author Ted Kaptchuk, director of the Program in Placebo Studies and Therapeutic Encounter (PiPS) at Beth Israel and associate professor of medicine at Harvard Medical School, told FoxNews.com. “We said, ‘Why don’t we change the camera, and turn it around and look at what happens inside the brains of physicians while they’re treating patients?’ … We know that physicians influence patients, so we wanted to know something about the biology that underlies that influence.”
Past research has found that certain areas of the brain are activated when a person experiences the placebo effect – when patients show improved health after being given medicine without any active ingredients. During this response, when a patient experiences pain relief, his or her right ventrolateral prefrontal cortex (VLPFC) is activated, and when he or she experiences reward, the rostral anterior cingulate cortex (rACC) activates.
Through an extensive social and medical experiment, Kaptchuk and his colleagues tested this effect on a group of 18 physicians from different medical specialties, all of whom had received their degrees within the past decade. They developed a unique scenario in which the physicians would undergo functional magnetic resonance imaging (fMRI) of their brains as they interacted with and observed patients.
The first phase of the experiment involved neuroimaging the physicians’ brains as they experienced pain first-hand. The researchers introduced them to a pain-relieving electronic device that was actually a “sham” machine: it simply administered heat pain. To make the physicians believe the device truly worked, the researchers used the machine to administer the heat pain and then “treated” them with it to simulate pain relief; in reality, they simply reduced the heat stimulation.
Once the physicians’ fMRI scans were finished, each physician was introduced to a patient and asked to conduct a standardized clinical examination. This essentially mimicked a standard doctor’s appointment and helped the pair establish a relationship – albeit a brief one.
Finally, each doctor-patient pair was asked to enter a scanner room, where the patient was hooked up to the sham pain-relieving device from the earlier stage of the experiment. While lying in the fMRI scanner, the doctors were given remote controls that could either administer pain to their patient or relieve it. In reality, the patients – who were actually confederates of the research team – would grimace to feign discomfort and then relax to simulate relief.
Through the use of mirrors, the physicians were able to see their patients’ faces as they believed the device administered pain or relieved it.
“During this, we looked at what was going on in the physician’s brain,” Kaptchuk said. “We saw that when the patient was in pain, the physician’s brain activated the regions that we had earlier imaged in the physician of their pain networks. Meaning when the patient was in pain, the physician felt pain. They shared the pain.”
As the researchers noted in the first phase of the experiment, the physicians activated the right VLPFC region when they believed they were treating their patients.
According to Kaptchuk, this is the first experiment of its kind to show a physical underpinning to the doctor-patient relationship. He hopes this will eventually provide the medical community with clues as to how they can better match doctors and patients.
“It highlights that what was once intangible in discussing health care – well, we can actually see a biology to it,” Kaptchuk said. “We think that’s a way of putting increased value on it. We like the molecules, we like the actual physiology of things, besides just how patients feel and doctors feel. And this is a way of saying there is a biology, and we need to pay more attention… to the doctor-patient relationship.”-www.shafaqna.com/English
SHAFAQNA (Shia International News Association) -- In a world of multitasking and constant distractions — from the ping of texts and emails to everyone having to wear more hats at work than they used to — time management is one of the biggest challenges. We might feel like we’re doing more — and, in a way, we are — but we’re actually getting less done in the process. So, is it possible in this day and age to streamline your work style, be more productive and get back some time in your day to focus on big picture stuff, strategy and brainstorming, all of which will make you more effective at your job?
Yes, says Julie Morgenstern, a productivity expert and bestselling author of five books including Time Management from the Inside Out. Dubbed the “queen of putting people’s lives in order” by USA Today, Morgenstern has made it her life’s mission to help people get more out of every day and find focus in their lives, both at work and at home. This month marks the launch of her new Circa Balanced Life Planner, a paper-based system for the digital age, designed to help people make good decisions about where to spend their time. Sign me up!
Morgenstern spent some of her valuable time talking to me about the email addiction epidemic and why being pulled in a million different directions and always being connected is bad for the brain, and she shared some great advice on how to manage your time more effectively this year.
JK: Why is multitasking ineffectual?
It has been scientifically demonstrated that the brain cannot effectively or efficiently switch between tasks, so you lose time. It takes four times longer to recognize new things so you’re not saving time; multitasking actually costs time. You also lose time because you often make mistakes. If you’re multitasking and you send an email and accidentally “reply all” and the person you were talking about is on the email, it’s a big mistake. In addition, studies have shown that we have a much lower retention rate of what we learn when multitasking, which means you could have to redo the work or you may not do the next task well because you forgot the information you learned. Everyone’s complaining of memory issues these days – they’re symptoms of this multitasking epidemic. Then, of course, there’s the rudeness factor, which doesn’t help develop strong relationships with others.
JK: Have distractions multiplied in recent years and, if so, how?
One is obviously the smartphone, which has made it so that you cannot get away. There are no safe zones where you can actually unplug. You feel like you’re busy and doing something – it’s a chemical addiction. There are so many things we can do through our screens now – stay in touch with friends, do business, find entertainment, watch Netflix, do research, create a Pinterest board. The volume of tasks in our lives that we can now do through a screen rather than tactilely has increased exponentially. It’s more than just email. It’s all the things we can do on screens.
JK: Why is it so important to minimize interruptions and distractions in today’s world?
It’s important to use all parts of your brain instead of only one. That will help reduce mistakes and increase the satisfaction of engagement. Human beings desire a sense of control and fulfillment, and I’m seeing a swing: people of all ages are reaching a tipping point and need a “screen break.” There’s comfort in the fact that the human spirit is saying “this is simply too much.”
JK: What are three ways in which people can work smarter?
1) Build “screen breaks” into your schedule, both at work and at home. Each should last a minimum of 1-3 hours at a time so you can engage in a deeper and different way with problems, studying, writing, thinking, talking, etc.
2) Avoid email and all screens for the first and last hour of the day, so that you wake up and engage in a deeper, more focused activity of some sort. It’s easier to start deep and come up to the shallow. And at night, sleep studies show that a computer screen acts as an energy source, stimulating rather than relaxing you.
3) Schedule your day around meetings and action to-do’s, and plan every day plus tomorrow and the day after; that makes it easier not to get distracted. It’s best to keep track of everything in a single system – from meetings to to-do’s, both personal and professional – which will help you focus and prioritize. If you plan what to do and review it the night before, you’re less likely to get sucked into mindless distractions. The more specific you are, the more likely you are to combat distractions. Knock out the big things and the toughest stuff early in the day so you have the rest of the day to catch up with the buzz, the urgency, the distractions and the little stuff. Take advantage of the morning to complete the tasks that require more energy and discipline. Dividing your day in half this way works for most people.
JK: If people are daunted by adding several new rules to their lives (like trying to accomplish too many New Year’s resolutions and then giving up on all of them), what’s the one thing that is either most important or easiest to do?
The number one most powerful thing you can do to rediscover the power of focus is to control your email use – scheduling when and how often you check your email. If you promise yourself that you’re going to check email only four times a day, between 9 a.m. and 6 p.m., that will really help.
JK: How do you advise that people “addicted” to email and social media break the habit of always checking their mobile devices?
All of these distractions are mindless, so you might want to give yourself a little mantra or phrase that gets you to refocus or resist them. One idea is “Leave it!” (a dog-training term), or ask yourself, “Is this the best time to do this?” whenever you feel the pull of a distraction. Also, when you’re taking a screen break, don’t keep the device nearby. When you’re supposed to be working on a report, turn off the dinger on your email or put the device away altogether. Track this for a month and see how well you’re doing at taking screen breaks and accomplishing bigger tasks; assess each day how you did. You could even set an alarm to remind you to check your email every two hours.
Often, people try to change their habits, and they can’t get through a day without constantly checking email, so they give up. They didn’t realize how addicted they were. People who succeed give it a few days of discomfort, like a drug withdrawal, and then they can get through it. Sometimes people stay on track until a crisis and then they forget to go back. Overall, if you can make sure to give yourself time away from your “screens,” you will be more productive and fulfilled.
SHAFAQNA (Shia International News Association) -- Every minute, we blink our eyes 15 to 20 times. But we only need to blink two to four times a minute for adequate lubrication. So what's happening when we blink those other times? (Blink blink blink.)
“Many people have extensively investigated eye movement, but most of them did not care about the eye blink,” writes Tamami Nakano, an associate professor at Osaka University in Japan, in an email. “The reason why we generate blinks so frequently has been unknown.” Even scientists assumed we blinked only to lubricate our eyes.
To understand why humans blink so much, Nakano and her colleagues asked 20 undergraduate students to watch “Mr. Bean” videos for 30 minutes while in an fMRI. (The students watched the popular British comedy because it's easy to follow without sound.) The researchers counted the blinks by measuring pupil size with near-infrared light; when someone blinks, pupil size is zero.
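Counting blinks from such a trace is simple in principle: tally each contiguous run of samples in which the measured pupil size falls to zero. The sketch below is our own illustration with synthetic data, not the study's pipeline:

```python
# Sketch of counting blinks from a pupil-size time series: a blink is a
# contiguous run of samples where the measured pupil size drops to zero.
# Synthetic data here; the study's eye-tracking pipeline differs in detail.
import numpy as np

def count_blinks(pupil):
    closed = pupil == 0
    # A blink starts wherever `closed` flips from False to True
    starts = np.flatnonzero(closed[1:] & ~closed[:-1])
    return int(starts.size + (1 if closed[0] else 0))

trace = np.full(600, 4.0)           # 10 s at 60 Hz, pupil ~4 mm when open
for start in (100, 250, 480):
    trace[start:start + 10] = 0.0   # three ~170 ms closures
print(count_blinks(trace))          # -> 3
```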
Nakano and her colleagues found that when we blink while paying attention to a task, we’re resetting our brain. Think of it like rebooting your computer.
When we engage in a task, such as watching a movie, our brain's attention networks are triggered. Researchers once believed that as we performed an activity, the brain's default network (which works during downtime and is responsible for those self-reflective thoughts about what we had for breakfast or when we might go to the grocery store) lessened its activity. Researchers including Dr. Marcus Raichle, professor of radiology and neurology at Washington University School of Medicine in St. Louis and the editor of this paper, found that when doing tasks our brains switched from the default mode network to the areas of the brain responsible for the activity, in a see-saw-like manner.
Nakano’s study finds that a blink switches the brain from the dorsal attention network, which helps someone attentively watch a “Mr. Bean” episode, to the default mode network, suggesting that the default mode network might play a more active role in various tasks than previously understood. The switch only occurs when we blink spontaneously; we can’t force our brain to change networks by blinking deliberately.
“This blinking might occur at predictable points in a story, so does this say something about the way the brain is engaging a story or movie?” Raichle wonders.
He adds: “I think [the paper] provokes you to think a little bit.”
And it increases what experts know about blinking and the default mode network.
“The present study indicates that even while we pay attention to the external world, the shift from the external attentional brain network to the internal processing brain network (the default mode network) dramatically occurs every time we blink,” Nakano says. “I think that blinking is closely related to resetting of the brain network and chunking the flow of visual information for memory.”