08/20/2018 / By Zoey Sky
While technological advances in artificial intelligence aim to augment or even restore human capabilities, experts worry that they could one day cause more harm than good, making a “call for ethics” necessary to protect our privacy.
Now that artificial intelligence and brain-computer interfaces are merging, we might not have to wait too long until it can “restore sight to the blind, allow the paralyzed to move robotic limbs, and cure any number of brain and nervous system disorders.”
However, a team of researchers spearheaded by Columbia University neuroscientist Rafael Yuste and University of Washington bioethicist Sara Goering, which includes University of Michigan biomedical engineering and rehabilitation scientist Jane Huggins, Ph.D., cautions that without regulation, innovation can still have negative implications for mankind.
Via an essay in Nature, more than two dozen physicians, ethicists, neuroscientists, and computer scientists stressed the need for “ethical guidelines” to regulate the evolving use of computer hardware and software that will be used to enhance or restore human capabilities.
Yuste, director of Columbia’s Neurotechnology Center and a member of the Data Science Institute, explains that the group simply wants to ensure that this “exciting” technology, which could “revolutionize our lives,” is used only for the betterment of mankind.
Huggins, director of the U-M Direct Brain Interface Laboratory, agrees. She said, “This technology has great potential to help people with disabilities express themselves and participate in society, but it also has potential for misuse and unintended consequences. We want to maximize the benefit and promote responsible use.”
Science fiction has often speculated about the possibilities, but computers fusing with the human mind “to augment or restore brain function” are quickly becoming a reality. The group of experts estimates that the for-profit brain-implant industry, led by Bryan Johnson’s startup Kernel and Elon Musk’s Neuralink, is currently worth $100 million. On top of that, since 2013 the U.S. government has spent another $500 million on the BRAIN Initiative, launched during President Obama’s term.
These investments could soon yield positive results, but the authors are wary of four main threats: “The loss of individual privacy, identity and autonomy, and the potential for social inequalities to widen, as corporations, governments, and hackers gain added power to exploit and manipulate people.” (Related: Artificial Intelligence ‘more dangerous than nukes,’ warns technology pioneer Elon Musk.)
The authors offer several suggestions for protecting against these threats, and some companies are already moving in that direction.
Earlier this year, Google’s London-based AI research counterpart DeepMind launched a new unit that will deal with ethical and societal questions concerning artificial intelligence. The company shared that the new research unit was formed “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.”
Google is collaborating with external advisers from academia and the charitable sector, such as Columbia development professor Jeffrey Sachs, University of Oxford’s AI professor Nick Bostrom, and climate change campaigner Christiana Figueres, who will advise the group.
You can read more articles about how to use technology wisely at FutureScienceNews.com.
COPYRIGHT © 2017 COMPUTING NEWS