By RealityShifters Blog

How Best to Prepare for Superintelligent AI?



What happens when Artificial Intelligence gets loose in the world?

Every parent wonders how their kids will turn out when they grow up and become independent in the world, and speaking from personal experience, it’s such a relief to see one’s children mature into wise, compassionate, genuinely good people.

Similar concerns are now on many people’s minds as we rush forward into the Quantum Age, drawing ever closer to creating a kind of intelligence far beyond anything yet seen on Earth. Many are awaiting something known as the technological singularity, “a predicted point in the development of a civilization at which technological progress accelerates beyond the ability of present-day humans to fully comprehend or predict.” Just what might happen when we reach such a point of technological breakthrough? What will such intelligence be capable of, and who will be in charge of ensuring its safe use?

Since I’ve been fascinated by this subject for years, I attended Douglas Hofstadter’s symposium, “Will Spiritual Robots Replace Humanity by 2100?” at Stanford University in April 2000. Hofstadter and his eight guests (Bill Joy, Ralph Merkle, Hans Moravec, Ray Kurzweil, John Holland, Kevin Kelly, Frank Drake, and John Koza) talked for five hours about their visions of humanity’s future, each panelist peering ahead through the lens of his own area of expertise. Many speakers cited Moore’s Law to make the point that technology is changing faster than ever before, and that the rate of change is expected to keep increasing exponentially, making it difficult to predict where we will be one hundred years from now. Hofstadter explained that he invited only guests who agreed there is a possibility for robots to be spiritual, and he wanted to focus on the question “Who will we be in 2093?”, since a visualization of who we will be lies at the core of understanding how we might utilize new technologies. I wondered just how possible it was that robots might be thinking and acting on their own behalf by 2100, and, if so, whether they might replace us, with or without our consent and cooperation.
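The force of the panelists’ point about exponential change is easy to check with simple arithmetic. The sketch below assumes the popular rule of thumb that computing capacity doubles roughly every two years (the doubling period is my assumption for illustration, not a figure cited at the symposium):

```python
# Toy illustration of exponential growth under Moore's Law.
# Assumes the common rule of thumb: capacity doubles every 2 years.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total multiplicative growth after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

if __name__ == "__main__":
    for horizon in (10, 50, 100):
        print(f"{horizon:>3} years -> growth of x{growth_factor(horizon):,.0f}")
```

Ten years of doubling yields a factor of 32; a full century yields a factor of over 10^15, which is why century-scale forecasts like “who will we be in 2093?” resist straightforward extrapolation.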

Over the past fifteen years, there has been increasing interest in, and concern about, artificial superintelligence. Roman Yampolskiy summarizes the Singularity Paradox (SP) as “superintelligent machines are feared to be too dumb to possess common sense.” Put more simply, there is growing concern about the dangers of Artificial Intelligence (AI) among some of the world’s best-educated and most respected scientific leaders, such as Stephen Hawking, Elon Musk, and Bill Gates. The hazards of AI containment are discussed in some detail in Artificial Superintelligence, yet in language easily understandable to the layman.

In his new book, Artificial Superintelligence, Yampolskiy argues for addressing AI’s potential dangers with a safety-engineering approach rather than with loosely defined ethics, since human values are inconsistent and dynamic. Yampolskiy points out that “fully autonomous machines cannot ever be assumed to be safe,” going so far as to add, “… and so should not be constructed.”

Yampolskiy acknowledges the concern of AI escaping its confines, and takes the reader on a tour of AI taxonomies with a general overview of the field of intelligence, showing a Venn-type diagram (p. 30) in which ‘human minds’ and ‘human-designed AI’ occupy adjacent real estate on a nonlinear terrain of ‘minds in general’ in multidimensional super space. ‘Self-improving minds’ are envisioned which improve upon ‘human-designed AI,’ and at this very juncture arises the potential for ‘universal intelligence,’ and with it the Singularity Paradox (SP) problem.

Yampolskiy proposes an AI hazard symbol, which could prove useful for constraining AI to designated containment areas, in J.A.I.L., or ‘Just for A.I. Location.’ Part of Yampolskiy’s proposed solution to the AI Confinement Problem includes asking ‘safe questions’ (p. 137). Yampolskiy also surveys solutions proposed by Drexler (confine transhuman machines), Bostrom (utilize AI only for answering questions, in Oracle mode), and Chalmers (confine AI to ‘leakproof’ virtual worlds), and argues for the creation of committees designated to oversee AI security.
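The ‘safe questions’ idea can be made concrete with a toy sketch. Everything below is illustrative only: the topic whitelist and the `is_safe_question` helper are my inventions for the sake of the example, not Yampolskiy’s actual protocol. The shape of the idea is that a confined oracle answers only questions whose subject matter has been pre-approved, and refuses everything else:

```python
# Illustrative toy of the 'safe questions' confinement idea.
# The whitelist and function names are hypothetical, not from the book.

APPROVED_TOPICS = {"mathematics", "physics", "chemistry"}

def is_safe_question(topic: str) -> bool:
    """A question counts as 'safe' here only if its topic is pre-approved."""
    return topic.lower() in APPROVED_TOPICS

def oracle_answer(topic: str, question: str) -> str:
    """Answer only whitelisted questions; refuse everything else."""
    if not is_safe_question(topic):
        return "REFUSED: topic not on the pre-approved list"
    return f"(answer to {question!r} would be computed here)"

print(oracle_answer("mathematics", "Is 2**57885161 - 1 prime?"))
print(oracle_answer("persuasion", "How do I convince my gatekeeper?"))
```

Even this toy makes the core difficulty visible: the safety burden shifts entirely onto whoever decides which topics are “safe,” and onto correctly classifying each incoming question, which is exactly where a superintelligent system might outmaneuver its keepers.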

The scale and scope of what must be accomplished to help ensure AI safety is emphasized by points such as Yudkowsky having “performed AI-box ‘experiments’ in which he demonstrated that even human-level intelligence is sufficient to escape from an AI-box,” and by Chalmers, who “correctly observes that a truly leakproof system in which NO information is allowed to leak out from the simulated world into our environment is impossible, or at least pointless.”

Since one of the fundamental tenets of information security is that no system can ever be proven 100% secure, it is easy to see why concern over AI’s safety to mankind is strong and growing. And if there is no way to safely confine AI, then, like any parents, humanity will find itself hoping that we will have done such an excellent job raising AI to maturity that it will comport itself kindly toward its elders. Yampolskiy points out, “In general, ethics for superintelligent machines is one of the most fruitful areas of research in the field of singularity research, with numerous publications appearing every year.”

One look at footage of the Philip K. Dick android saying, “I’ll keep you warm and safe in my people zoo,” as shown in the 2011 Nova science documentary What’s the Next Big Thing, can be enough to jolt us out of complacency. For those hoping that teaching AI simply to follow the rules will be enough, Yampolskiy replies that law-abiding AI is not sufficient: AI could still keep humans safe ‘for their own good,’ limiting human free choice ever more tightly, and at the accelerated pace only a superintelligence could sustain.

For readers intrigued by what variety of safe AI might be possible, the section early in Artificial Superintelligence describing five taxonomies of minds (pp. 31-34) will be of great interest. Returning to re-read this section after completing the rest of the book can be quite beneficial, as readers can then more fully understand how an AI that is Quantum and Flexibly Embodied according to Goertzel’s taxonomy (p. 31), with Ethics Self-Monitoring (p. 122), might help ensure the development of safe AI. If such AI systems include error-checking, with firmware (unerasable) dedication to preserving others, and constantly seek and resonate with the highest-order intelligence through quantum levels of sensing and time-reversible logic gates (in accordance with quantum deductive logic), one can begin to breathe a sigh of relief that there might just be a way to ensure safe AI will prevail. While the deepest pockets of government funding are unlikely ever to back a system that answers to nothing less than the greatest intelligence an AI can seek (such as God), it is conceivable that humanitarian philanthropists will step forward to fund such a project in time, leaving all of us grateful that its highest-order-seeking AI prevailed.

___________________________

Cynthia Sue Larson is the best-selling author of six books, including Quantum Jumps. Cynthia has a degree in Physics from UC Berkeley, and discusses consciousness and quantum physics on numerous shows including the History Channel, Coast to Coast AM, the BBC and One World with Deepak Chopra. You can subscribe to Cynthia’s free monthly ezine at: http://www.RealityShifters.com
RealityShifters®


Source: https://cynthiasuelarson.wordpress.com/2015/09/01/how-best-to-prepare-for-superintelligent-ai/

