Monday, December 12, 2016

Researchers uncover how hippocampus influences future thinking


Researchers from Boston University School of Medicine have determined that the role of the hippocampus in future imagining lies in the process of constructing a scene in one's mind.
Credit: © memo / Fotolia


Source:
Boston University Medical Center


Over the past decade, researchers have learned that the hippocampus -- historically known for its role in forming memories -- is involved in much more than just remembering the past; it plays an important role in imagining events in the future.

Yet, scientists still do not know precisely how the hippocampus contributes to episodic imagining -- until now. Researchers from Boston University School of Medicine (BUSM) have determined that the role of the hippocampus in future imagining lies in the process of constructing a scene in one's mind.

The findings, which appear in the journal Cerebral Cortex, shed important light on how the brain supports the capacity to imagine the future and pinpoint the brain regions that provide the critical ingredients for performing this feat.

The hippocampus is affected by many neurological conditions and diseases, and it can also be compromised during normal aging. Future thinking is a cognitive ability relevant to all humans: it is needed to plan for what lies ahead, whether navigating daily life or making decisions about major milestones further in the future.

Using functional magnetic resonance imaging (fMRI), BUSM researchers scanned the brains of healthy adults while they imagined events.

They then compared brain activity in the hippocampus when participants answered questions pertaining to the present or the future.

After that, they compared brain activity when participants answered questions about the future that did or did not require imagining a scene.

"We observed no differences in hippocampal activity when we compared present versus future imaging, but we did observe stronger activity in the hippocampus when participants imagined a scene compared to when they did not, suggesting a role for the hippocampus in scene construction but not mental time travel," explained corresponding author Daniela Palombo, PhD, postdoctoral fellow in the memory Disorders Research Center at BUSM and at the VA Boston Healthcare System.

According to the researchers, the importance of studying how the hippocampus contributes to cognitive abilities is bolstered by the ubiquity of hippocampal involvement in many conditions.

"These findings help provide better understanding of the role of the hippocampus in future thinking in the normal brain, and may eventually help us better understand the nature of cognitive loss in individuals with compromised hippocampal function," she added.

Palombo believes that once it is known which aspects of future imagining are and are not dependent on the hippocampus, targeted rehabilitation strategies can be designed that exploit the functions that survive hippocampal dysfunction and may provide alternate routes to engaging in future thinking.

sciencedaily.com/


Tuesday, June 7, 2016

Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm

Stuart Armstrong of the Future of Humanity Institute, University of Oxford. Armstrong is a philosopher at Oxford and one of the paper's authors.

Machines are becoming more intelligent every year thanks to advances being made by companies like Google, Facebook, Microsoft, and many others.

AI agents, as they're sometimes known, can already beat us at complex board games like Go, and they're becoming more competent in a range of other areas.

Now a London artificial-intelligence research lab owned by Google has carried out a study to make sure that we can pull the plug on self-learning machines when we want to.

DeepMind, bought by Google for a reported 400 million pounds — about $580 million — in 2014, teamed up with scientists at the University of Oxford to find a way to make sure that AI agents don't learn to prevent, or seek to prevent, humans from taking control.

The paper — "Safely Interruptible Agents," published on the website of the Machine Intelligence Research Institute (MIRI) — was written by Laurent Orseau, a research scientist at Google DeepMind, Stuart Armstrong at Oxford University's Future of Humanity Institute, and several others.

The researchers explain in the paper's abstract that AI agents are "unlikely to behave optimally all the time." They add:

If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation.

The researchers, who weren't immediately available for interview, claim to have created a "framework" that allows a "human operator" to repeatedly and safely interrupt an AI, while making sure that the AI doesn't learn how to prevent or induce the interruptions.

The authors write:

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this.

The researchers found that some algorithms, such as "Q-learning" ones, are already safely interruptible, while others, like "Sarsa," aren't off the shelf but can be modified relatively easily so that they are.

"It is unclear if all algorithms can be easily made safely interruptible," the authors admit.

University of Oxford philosopher Nick Bostrom. Credit: SRF

DeepMind's work with the Future of Humanity Institute is interesting: DeepMind wants to "solve intelligence" and create general purpose AIs, while the Future of Humanity Institute is researching potential threats to our existence.

The institute is led by Nick Bostrom, who believes that machines will outsmart humans within the next 100 years and thinks that they have the potential to turn against us.

Speaking at Oxford University in May 2015 at the annual Silicon Valley Comes to Oxford event, Bostrom said:

I personally believe that once human equivalence is reached, it will not be long before machines become superintelligent after that. It might take a long time to get to human level but I think the step from there to superintelligence might be very quick.

I think these machines with superintelligence might be extremely powerful, for the same basic reasons that we humans are very powerful relative to other animals on this planet.

It's not because our muscles are stronger or our teeth are sharper, it's because our brains are better.

DeepMind knows the technology that it's creating has the potential to cause harm.

The founders — Demis Hassabis, Mustafa Suleyman, and Shane Legg — allowed their company to be bought by Google on the condition that the search giant created an AI ethics board to monitor advances that Google makes in the field.

Who sits on this board and what they do, exactly, remains a mystery.

The founders have also attended and spoken at several conferences about ethics in AI, highlighting that they want to ensure the technology they and others are developing is used for good, not evil.

It's likely that they will look to incorporate some of the findings from the "Safely Interruptible Agents" paper into their work going forward.

Sam Shead