
Robots can become 'racist and sexist' all on their own, study finds



Robots can develop prejudices like "racism and sexism" all on their own, a shocking new study has found.

 

Artificial intelligence experts performed thousands of simulations on robot brains, revealing how they split off into groups and treat "outsiders" differently.

 

Computer scientists and psychologists from Cardiff University and the USA's MIT teamed up to test how robots identify each other.

 

They also tested how the robots copy and learn behaviors from one another.

 

The study, published in Scientific Reports, showed that simulated robots would shun others, forming their own groups.

 

The experiment involved a "give and take" system, where robots could choose which of their peers to donate to.

 

As the virtual game unfolded, individuals would learn new donation strategies by copying other robots in order to benefit themselves.

 

It found that robots would donate to each other within small groups, denying outsiders to improve their own takings.

"By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it," explained co-author Professor Roger Whitaker, of Cardiff University.

 

"Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivized in virtual populations, to the detriment of wider connectivity with others."

 

The professor explained that the study even showed how "prejudicial groups" accidentally led to outsiders forming their own rival groups, "resulting in a fractured population".

 

He added: "Such widespread prejudice is hard to reverse."

 

The study explained how learning these prejudicial behaviors didn't require a lot of mental power.

 

Instead, it was simply a matter of copying others based on their "give and take" game success, which inevitably led to prejudice.
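To give a flavour of the kind of "give and take" mechanic the researchers describe, here is a minimal sketch in Python of a tag-based donation game where agents copy the strategies of more successful peers. This is an illustrative simplification, not the model published in Scientific Reports; the group tags, payoff values, copying probability and all names below are assumptions made for the example.

import random

# Minimal sketch (not the authors' model): agents belong to groups,
# donate to peers, and occasionally copy the strategy of a more
# successful agent. All parameters below are illustrative assumptions.

NUM_AGENTS = 100
NUM_GROUPS = 4
ROUNDS = 500
DONATION_COST = 1      # cost paid by the donor
DONATION_BENEFIT = 3   # benefit received by the recipient
COPY_PROB = 0.1        # chance an agent imitates a peer each round

class Agent:
    def __init__(self, group):
        self.group = group  # the "tag" group this agent belongs to
        # initial strategy: does this agent donate to outsiders?
        self.donate_to_outsiders = random.random() < 0.5
        self.payoff = 0.0

def play_round(agents):
    """Each agent meets a random peer and donates if its strategy allows it."""
    for donor in agents:
        recipient = random.choice(agents)
        if recipient is donor:
            continue
        same_group = recipient.group == donor.group
        if same_group or donor.donate_to_outsiders:
            donor.payoff -= DONATION_COST
            recipient.payoff += DONATION_BENEFIT

def copy_strategies(agents):
    """Agents occasionally copy the strategy of a more successful peer."""
    for agent in agents:
        if random.random() < COPY_PROB:
            model = random.choice(agents)
            if model.payoff > agent.payoff:
                agent.donate_to_outsiders = model.donate_to_outsiders

def run():
    agents = [Agent(group=i % NUM_GROUPS) for i in range(NUM_AGENTS)]
    for _ in range(ROUNDS):
        play_round(agents)
        copy_strategies(agents)
    generous = sum(a.donate_to_outsiders for a in agents)
    print(f"Agents still donating to outsiders: {generous}/{NUM_AGENTS}")

if __name__ == "__main__":
    run()

In a toy setup like this, agents that stop giving to outsiders keep more of their payoff, so the copying rule tends to spread in-group-only donation over time, which is the qualitative effect the researchers report: prejudice emerging purely from imitation of success, without any sophisticated reasoning.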

 

According to the scientists behind the project, it's possible that once robots are widespread, they could pick up common human prejudices.

 

Cardiff University noted that robots risked exhibiting prejudice like "racism and sexism".

 

Professor Whitaker said: "It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.

 

"Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behavior of devices is also influenced by others around them."

 

This isn't the first warning we've seen about robots going rogue.

 

Back in February, leading futurologist Dr. Ian Pearson told The Sun that robots would eventually treat us like "guinea pigs".

 

And in April, Dr. Pearson warned that Earth's robot population would grow to 9.4 billion over the next 30 years, overtaking humanity by 2048.

 

"Today the global robot population is probably around 57million.

 

"That will grow quickly in the foreseeable future, and by 2048 robots will overtake humans.

 

"If we allow for likely market acceleration, that could happen as early as 2033.

 

"By 2028, some of those robots will already be starting to feel genuine emotions and to respond to us emotionally," he added.

 

"We'll have trained [artificial intelligence] to be like us, trained it to feel emotions like us, but it won't be like us. It will be a bit like aliens off Star Trek – smarter and more calculated in its actions," he explained.

 

"It will be insensitive to humans, viewing us as barbaric. So when it decides to carry out its own experiments, with viruses that it's created, it will treat us like guinea pigs."

 

The late Professor Stephen Hawking once said: "I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans."

 

And Tesla, PayPal and SpaceX founder Elon Musk warned that AI poses a "fundamental risk to the existence of civilization."

 

Source
