
Opinion: Computers have learned to make us jump through hoops



Machines are supposed to be tools that serve human ends, but the relationship is slowly shifting – and not in our favour

 


The boiling frog analogy: are humans smart enough to realise what’s happening to them? Photograph: Getty Images/Design Pics RF

 

The other day I had to log in to a service I hadn’t used before. Since I was a new user, the website decided that it needed to check that I wasn’t a robot and so set me a Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart). This is a challenge-response test to enable a computer to determine whether the user is a person rather than a machine.
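For the technically curious, the mechanism is simple enough to sketch. The short Python fragment below is a minimal illustration of a challenge-response check of this kind; the names (CHALLENGES, issue_challenge, verify_response) and the data layout are invented for the example, and real services such as Google's reCAPTCHA are considerably more elaborate.

    import secrets

    # Hypothetical challenge store: each image is split into a grid, and
    # the server already knows which cells contain the target object.
    CHALLENGES = {
        "roadside-001": {
            "grid": (4, 4),
            "target": "traffic sign",
            "answer": {(0, 1), (0, 2), (1, 2)},
        },
    }

    def issue_challenge():
        """Send the user an image id, grid size and prompt - never the answer."""
        cid = secrets.choice(list(CHALLENGES))
        challenge = CHALLENGES[cid]
        return {
            "id": cid,
            "grid": challenge["grid"],
            "prompt": "Click every cell containing a " + challenge["target"],
        }

    def verify_response(cid, clicked_cells):
        """Treat the user as human only if their clicks match the known
        answer (real services tolerate some disagreement)."""
        return set(clicked_cells) == CHALLENGES[cid]["answer"]

The essential shape, a challenge going out, a response coming in, a verdict coming back, is the same however elaborate the production version gets.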

 

I was presented with an image of a roadside scene over which was overlaid a grid. My “challenge” was to click on each cell in the grid that contained a traffic sign, or part thereof. I did so, fuming a bit. Then I was presented with another image and another grid – also with a request to identify road signs. Like a lamb, I complied, after which the website deigned to accept my input.

 

And then the penny dropped (I am slow on the uptake). I realised that what I had been doing was adding to a dataset for training the machine-learning software that guides self-driving cars – probably those designed and operated by Waymo, the autonomous vehicle project owned by Alphabet Inc (which also happens to own Google). So, to gain access to an automated service that will benefit financially from my input, I first have to do some unpaid labour to help improve the performance of Waymo’s vehicles (which, incidentally, will be publicly available for hire in Phoenix, Arizona, by the end of this year).
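The labelling side of that bargain can be sketched just as briefly. What follows is an illustrative guess at the shape of such a pipeline, not Waymo's or Google's actual system: each user's clicks are treated as votes on whether a grid cell contains a traffic sign, and the majority across many users becomes a training label. The function name aggregate_labels and the data format are hypothetical.

    from collections import Counter

    def aggregate_labels(responses, min_votes=3):
        """Turn many users' Captcha clicks into training labels.

        responses: (cell_id, clicked) pairs, one per user per grid cell,
        where clicked is True if that user marked the cell as containing
        a traffic sign. Returns {cell_id: majority label}, keeping only
        cells that enough users have voted on.
        """
        votes = {}
        for cell_id, clicked in responses:
            votes.setdefault(cell_id, Counter())[clicked] += 1
        return {
            cell: counter.most_common(1)[0][0]
            for cell, counter in votes.items()
            if sum(counter.values()) >= min_votes
        }

    # Each labelled cell can then become one (image patch, has_sign)
    # training example for an image classifier.

On this reading, every solved Captcha yields a handful of labelled examples at no cost to the service.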

 

Neat, eh? But note also the delicious additional irony that the Captcha is described as an “automated Turing test”. The Turing test was conceived, you may recall, as a way for a human to determine whether the entity responding to their questions was a person or a machine. So we have wandered into a topsy-turvy world in which machines make us jump through hoops to prove that we are humans!

 

The strangest aspect of this epochal shift is how under-discussed it has been. The metaphor of the boiling frog comes to mind, according to which if the creature is put suddenly into boiling water, it will jump out; but if it is put in tepid water that is then slowly brought to the boil, it will not perceive the danger and will be cooked to death. As it happens, zoologists think that real frogs are smarter than the metaphor supposes. The question is whether humans are equally smart: have we become so subtly conditioned by digital technology that we don’t see what has been happening to us? Have we been conditioned to accept a world governed by “smart” tech, trading away autonomy for convenience and cheap bliss, to the point where we become a bit like machines ourselves?

 

In a startling and thoughtful recent book, two scholars – Brett Frischmann, a law professor, and Evan Selinger, a philosopher – argue that the answer to that question is “yes”. The book’s title, Re-Engineering Humanity, succinctly summarises their case. It is an exploration of how everyday practices – such as clicking to accept an app’s legal terms – are made so simple that we are effectively “trained” not to read the contents. Unless things change, they argue, the dominance of digital technology means that, over time, humans will lose their capacity for judgment, discrimination and self-sufficiency.

 

The carefully designed opacity of online end-user licence agreements (EULAs) provides an illuminating case study. These are, Frischmann says in an interview, “optimised to minimise transaction costs, maximise efficiency, minimise deliberation, and engineer complacency”, designed to “nudge people to click a button and behave like simple stimulus-response machines”. However, the “efficiency” thus obtained is not for humans, but for the machine behind the “accept” button.

 

“Seamless and friction-free are great optimisation criteria for machines, not for humans,” says Frischmann. “After all, machines are tools that serve human ends. Machines don’t set their objectives; humans do – or so we hope. To author our lives and not just perform scripts written by others, we need to sustain our freedom to be free from powerful techno-social engineering scripts.”

 

He’s right. And there’s nothing technophobic about that. In a way, Re-Engineering Humanity gives a book-length endorsement of the media scholar John Culkin’s oft-repeated insight that “we shape our tools and then our tools shape us”. Technology, as Frischmann says, is supposed to provide tools that serve human ends. But, as the machine-learning Captcha (not to mention the business models of Google and Facebook) demonstrates, a significant proportion of digital tech now sees (and uses) humans as means to ends that are not ours. In the process, it reduces us to the status of cheery rats running on treadmills designed by people who do not have our interests at heart.

 

So back to the frog metaphor. Are we smart enough to jump out before it’s too late? You don’t even have to Google it to know the answer.

 

Source
