December 8, 2021

This Program Can Give AI a Sense of Ethics—Sometimes

Delphi builds on recent advances in AI and language. Feeding very large amounts of text to algorithms that use mathematically simulated neural networks has yielded surprising capabilities.

In June 2020, researchers at OpenAI, a company working on cutting-edge AI tools, demonstrated a program called GPT-3 that can predict, summarize, and auto-generate text with what often seems like remarkable skill, although it will also spit out biased and hateful language learned from text it has read.

The researchers behind Delphi also asked ethical questions of GPT-3. They found its answers agreed with those of the crowd workers just over 50 percent of the time—little better than a coin flip.
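For the technically inclined, an agreement rate like that one is simply the share of questions on which the model's verdict matches the crowd's. A minimal sketch of the calculation, using made-up labels rather than the researchers' actual data, might look like this:

```python
# Toy illustration of computing an agreement rate; the labels below are
# invented for demonstration, not the researchers' actual data.
model_answers = ["wrong", "okay", "wrong", "okay", "wrong", "okay"]
crowd_answers = ["wrong", "wrong", "wrong", "okay", "okay", "okay"]

matches = sum(m == c for m, c in zip(model_answers, crowd_answers))
agreement = matches / len(crowd_answers)
print(f"Agreement with crowd workers: {agreement:.0%}")  # 67% in this toy example
```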

Improving the performance of a system like Delphi will require different AI approaches, potentially including some that allow a machine to explain its reasoning and indicate when it is conflicted.

The idea of giving machines a moral code stretches back decades both in academic research and science fiction. Isaac Asimov’s famous Three Laws of Robotics popularized the idea that machines might follow human ethics, although the short stories that explored the idea highlighted contradictions in such simplistic reasoning.

Choi says Delphi should not be taken as providing a definitive answer to any ethical question. A more sophisticated version might flag uncertainty when opinions in its training data diverge. “Life is full of gray areas,” she says. “No two human beings will completely agree, and there’s no way an AI program can match people’s judgments.”

Other machine learning systems have displayed their own moral blind spots. In 2016, Microsoft released a chatbot called Tay designed to learn from online conversations. The program was quickly sabotaged and taught to say offensive and hateful things.

Efforts to explore ethical perspectives related to AI have also revealed the complexity of such a task. A project launched in 2018 by researchers at MIT and elsewhere sought to explore the public’s view of ethical conundrums that might be faced by self-driving cars. They asked people to decide, for example, whether it would be better for a vehicle to hit an elderly person, a child, or a robber. The project revealed differing opinions across different countries and social groups. Those from the US and Western Europe were more likely than respondents elsewhere to spare the child over an older person.

Some of those building AI tools are keen to engage with the ethical challenges. “I think people are right to point out the flaws and failures of the model,” says Nick Frosst, CTO of Cohere, a startup that has developed a large language model that is accessible to others via an API. “They are informative of broader, wider problems.”

Cohere has devised ways to guide the output of its algorithms, which are now being tested by some businesses. It curates the content fed to the algorithm and trains the model to catch instances of bias or hateful language.
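Frosst did not detail the company's filtering pipeline, but the general idea of curating training text can be illustrated with a toy sketch: drop passages that trip a blocklist before the model ever sees them. The terms here are placeholders, and real systems typically rely on trained classifiers rather than keyword lists; this is only an illustration of the principle.

```python
# Toy illustration of curating training text with a blocklist.
# BLOCKLIST terms are placeholders; production pipelines generally use
# trained classifiers rather than simple keyword matching.
BLOCKLIST = {"slur_a", "slur_b"}

def is_acceptable(passage: str) -> bool:
    """Return True if the passage contains no blocklisted term."""
    tokens = set(passage.lower().split())
    return not (tokens & BLOCKLIST)

corpus = [
    "an ordinary training sentence",
    "a passage containing slur_a that would be dropped",
]
curated = [p for p in corpus if is_acceptable(p)]
print(curated)  # only the first passage remains
```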

Frosst says the debate around Delphi reflects a broader question that the tech industry is wrestling with—how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and bad.

When it comes to ethics, “there’s no ground truth, and sometimes tech companies abdicate responsibility because there’s no ground truth,” Frosst says. “The better approach is to try.”

Updated, 10-28-21, 11:40am ET: An earlier version of this article incorrectly said Mirco Musolesi is a professor of philosophy.

Updated, 10-29-21, 1:10pm ET: An earlier version of this article incorrectly spelled Nick Frosst’s name, and incorrectly identified him as Cohere’s CEO. 

