S#: Computers Learning Morals Cooperate Better Than Humans
Computers can play a mean round of chess and keep up with the best of their human counterparts in other zero-sum games. But teaching them to cooperate and compromise instead of compete is a harder problem, and one that BYU computer science professor Jacob Crandall and his colleagues set out to tackle.
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall, whose study was recently published in Nature Communications. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
[Image credit: Jaren Wilkey/BYU Photo]
For the study, researchers programmed machines with an algorithm called S# and ran them through a variety of two-player games to see how well they would cooperate in certain relationships. The team tested machine-machine, human-machine and human-human interactions. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.
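The study itself used a battery of repeated two-player games; the sketch below is not the S# algorithm, but it illustrates the kind of repeated interaction the machines were evaluated in, using prisoner's-dilemma payoffs and two simple stand-in strategies (tit-for-tat and always-defect) chosen here purely for illustration.

```python
# Illustrative repeated two-player game with prisoner's-dilemma payoffs.
# NOTE: this is NOT the S# algorithm from the study; it only sketches
# the repeated-game setting in which such algorithms are tested.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation benefits both players
    ("C", "D"): (0, 5),  # being betrayed is the worst outcome
    ("D", "C"): (5, 0),  # betraying a cooperator pays off once
    ("D", "D"): (1, 1),  # mutual defection hurts both players
}

def tit_for_tat(history):
    """Cooperate first, then mirror the partner's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect on every round, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run a repeated game and return the two cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        # Each player's history records (own move, partner's move).
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b
```

Two cooperators who stay loyal earn the best joint outcome, mirroring Crandall's point that honest, loyal humans would have matched the machines.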
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall said. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
Researchers further fortified the machines’ ability to cooperate by programming them with a range of “cheap talk” phrases. In tests, if human participants cooperated with the machine, the machine might respond with “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even “In your face!”
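The quoted phrases suggest responses keyed to what the partner just did. The sketch below shows one hypothetical way to wire that up; the phrases are taken from the article, but the event names and selection logic are invented here for illustration and are not from the study.

```python
# Hypothetical "cheap talk" responder. The phrases come from the article's
# description of the study; the event keys and choice logic are assumptions.
import random

CHEAP_TALK = {
    "partner_cooperated": [
        "Sweet. We are getting rich!",
        "I accept your last proposal.",
    ],
    "partner_defected": [
        "Curse you!",
        "You will pay for that!",
        "In your face!",
    ],
}

def respond(event, rng=random):
    """Pick a canned phrase matching what the partner just did."""
    return rng.choice(CHEAP_TALK[event])
```

Even a canned-response scheme like this can signal intent and accountability, which the study found doubled cooperation rates.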
Regardless of the game or pairing, cheap talk doubled the amount of cooperation. And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine.
The research findings, Crandall hopes, could have long-term implications for human relationships.
“In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”
Contacts and sources: Andrea Christensen, Brigham Young University
Writer: K.C. Miller
Source: http://www.ineffableisland.com/2018/01/s-computers-learning-morals-cooperate.html