
At the mercy of AI: Your job, your health, your money


steven36


The focus on loss of privacy from Watson, Cortana, Google, Facebook, DeepMind, and Siri risks us missing an even greater threat

 

 

 


 

At the Gartner Symposium/ITxpo this week, Microsoft CEO Satya Nadella faced tough questions on how Cortana and LinkedIn together could spy deeply on our work lives. (Microsoft is purchasing LinkedIn.) At the SoTech conference I attended this past weekend, an IBM Watson engineer faced similar questions about the data Watson would gather to feed IBM's vision of Watson as an adviser to people in all sorts of work.

 

But privacy is not the only issue, and not necessarily the most important one. All of these artificially intelligent systems -- Watson, Cortana, Google's DeepMind and intelligent assistants, Facebook, and Apple's Siri -- are being proposed as all-knowing, objective advisers to people, companies, and governments. The AI will tell you who's a good job candidate, what's the best medical treatment, what car you should buy, where you should live, what gas station you should frequent, and what you should eat.

 

That's supposed to be a good development because it's based on analysis of information that individuals don't have access to and couldn't process if they did -- plus, the AI has no inherent bias in the calculations it bases its recommendation on. Thus, AI systems using algorithms and data from who knows where, with who knows what degree of accuracy and who knows what degree of encoded biases, will make these decisions on the fundamental aspects of our lives.

Scary!

At the SoTech conference, I asked the IBM engineer about this coming future: How would people escape being blacklisted algorithmically, or at least understand why their résumés never reach an HR pro, their insurance rates skyrocket, their housing applications get rejected in certain areas, and so on?

His answers:

  1. You don't know today why a company doesn't call you in for an interview, or why it doesn't hire you after one.
  2. "We comply with all government regulations."

 

That's even scarier.

Here's why: AI can make these judgments at scale, and because the systems are designed as centralized services, those judgments will be delivered over and over again, so many employers will get the same judgments about you. Or more likely, they won't get those judgments at all: HR departments use such services to screen out applicants, so only the best (however defined) get through. Employers will likely never even know you exist.

Think about how credit scores work: Three companies gather credit balances, payment histories, and income data on you, then calculate your likelihood of making payments (that's what your score means). Bad data can ruin your score, and because everyone uses it, you're cut off from credit everywhere.
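
To make that mechanism concrete, here's a minimal sketch of how a centrally computed score spreads one bad data point to every party who asks. Everything here -- the formula, the weights, the lender names -- is invented for illustration; it's not how any real bureau computes scores.

```python
# Toy sketch of a centralized score. The formula and weights are
# invented; real bureaus' models are proprietary and more complex.

def credit_score(payment_history, balance, income):
    """Score rises with on-time payments and falls with debt load."""
    on_time = sum(payment_history) / max(len(payment_history), 1)
    debt_ratio = balance / max(income, 1.0)
    return int(300 + 550 * max(0.0, on_time - 0.5 * debt_ratio))

clean = [True] * 24                 # 24 months, all paid on time
dirty = [True] * 24
dirty[3] = False                    # one payment wrongly reported as missed

print(credit_score(clean, balance=12_000, income=45_000))  # 776
print(credit_score(dirty, balance=12_000, income=45_000))  # 753

# Because the score is computed once and served centrally, every
# lender that queries it sees the same damaged number:
score = credit_score(dirty, balance=12_000, income=45_000)
for lender in ["bank_a", "bank_b", "card_issuer"]:
    print(lender, "sees", score)
```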

 

It took years for the feds to require that companies disclose denial of credit based on such reports and to let you see the data about you. But even today, there's no regulated, assured system for correcting errors.

That's for factual data. But how do you "correct" a judgment about your cultural fit, job qualifications, and all the other subjective factors that go into hiring?

 

You can bet that one result will be exclusion based on illegal factors such as race, age, or gender -- not from direct discrimination, of course, but through the goals of such systems, like the "cultural fit" the Watson engineer described. That too often means "people like us," which will easily creep into the "objective" criteria the AI uses. Economists and sociologists know well that those personal factors often correlate with putatively objective attributes such as educational background, economic status, residence location, and social connections.
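
That correlation point is easy to demonstrate. Below is a small, self-contained simulation -- synthetic data, invented feature names and probabilities -- in which group membership never appears as an input, yet a "cultural fit" screen built on two proxy features passes one group at many times the rate of the other:

```python
# Synthetic demo of proxy discrimination. No real data; the feature
# names and probabilities are invented for illustration.
import random

random.seed(0)

def make_candidate(group):
    # Group membership is never a feature, but it shifts the odds of
    # the "objective" proxies the screen relies on.
    p = 0.6 if group == "A" else 0.2
    return {"group": group,
            "elite_school": random.random() < p,
            "favored_zip": random.random() < p}

def cultural_fit(candidate):
    # The screen sees only the proxies -- never the group.
    return candidate["elite_school"] and candidate["favored_zip"]

pool = [make_candidate(g) for g in ("A", "B") for _ in range(10_000)]
for g in ("A", "B"):
    members = [c for c in pool if c["group"] == g]
    rate = sum(map(cultural_fit, members)) / len(members)
    print(f"group {g} pass rate: {rate:.1%}")
# Expect roughly 36% for A and 4% for B -- a 9x disparity with no
# protected attribute anywhere in the model.
```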

 

The Watson engineer said the final decision is up to the HR department, so any inadvertent results can be corrected. Except they can't: How would HR know to look for such examples when it gets only the 17 "best" résumés? And if someone broke through, how would HR know what the AI's judgment was really based on? And why would HR take the risk of overruling the AI, which has access to all that data and is objective in its judgments?

 

That's the evil lurking in AI: It's presented as more objective and more knowledgeable than people, so any opposition to it becomes a quixotic exercise. It takes a lot for people to question the system today. It'll be an order of magnitude harder when AIs rule the system.

 

Employment is one area where AI can redline you, with no real recourse -- assuming you even know an AI judgment was the cause. In medicine, doctors will play second fiddle to AI diagnoses and treatment recommendations -- an objective AI is as likely to let you die quickly to save you suffering and the hospital money as it is to let you live longer to have closure with your friends and family.

 

Less dire: AIs will correlate your medical history with that of your relatives to improve your treatment, but insurers can also use that information to price you -- and your relatives -- out of the market. That's illegal, of course, but insurers are already testing a way around it: Car insurers now promote tracking devices for your vehicles so they can give you discounts for good driving. That's redlining inverted, but still redlining. I won't be surprised to see "good" family histories lead to discounts on medical insurance, though likely not stated that way.
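
Here's that pricing logic in miniature -- a hypothetical sketch with invented rates. Nothing is ever labeled a surcharge, but once the "good" profiles take the discount, the full base rate effectively becomes the penalty price for everyone else:

```python
# Hypothetical behavior-based pricing. All rates are invented.
from typing import Optional

BASE_PREMIUM = 1200.0  # assumed annual rate before any telemetry

def premium(behavior_score: Optional[float]) -> float:
    """behavior_score in [0, 1] from a tracking device; None if untracked."""
    if behavior_score is not None:
        return BASE_PREMIUM * (1.0 - 0.25 * behavior_score)  # up to 25% off
    return BASE_PREMIUM  # untracked (or "bad" profile): full rate

print(premium(0.9))   # 930.0 -- the tracked "good" driver
print(premium(None))  # 1200.0 -- everyone else pays the de facto surcharge
```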

 

What school your kids may go to will also be subject to AI judgments.

 

The financial industry commits all sorts of shady acts to extract money from you, lurching from one scandal to another without changing the underlying behaviors. Its players are exactly the kind of people who will use AI judgments to rig the system increasingly against you. The stock market will be even more a game for suckers, and your 401(k) will be even less valuable when you retire.

 

We can only imagine how governments will use AI to predict criminal behaviors, monitor and even define suspects, regulate behaviors for individuals and businesses, and more. We saw with Edward Snowden's revelations how far beyond the bounds of civil rights progressive governments like the United States are willing to go with today's tools. Wait till they have their own AIs tapping into all the private and public systems they can.

 

It's all based on objective data, of course, and the judgments derived from the algorithms' view of that data. Never mind that much of the data is subjective, as are the algorithms and filters that users apply to them. AI engineers hate to admit that the world is not objective. 

 

Explicit discrimination is bad enough; systemic discrimination is worse. Hidden, unacknowledged discrimination is the worst of all. AI will favor that worst kind -- at scale.

I don't know that we can do anything about it. After all, laws are regularly flouted when inconvenient -- and violations will be hard to detect. (Hmm, maybe an AI to find those?) But we can try. Some ideas:

 

  • Require that all AI-assisted decisions on significant issues (employment, health care, education, housing, travel, professional licenses) be revealed to those they affect, with the reasons behind the AI's judgment described. (This is like the current laws on credit scores.)
  • Forbid use of nonpublic data without explicit permission in any AI analyses not conducted directly by the individual or company.
  • Ensure that laws like HIPAA and HITECH disallow family-history-based health profiling for the purposes of denying care, substituting lower-cost care, or pricing care.
  • Ban the use of tracking devices that deliver behavioral patterns to determine insurance and similar rates. Actual-accident history is fine, but potential-accident history is not.
  • Disallow central storage and dissemination of AI judgments; each judgment should be a fresh one, so mistaken judgments aren't made into perpetual redlining. There should be no equivalent to a standard credit score for subjective evaluations.
  • Require that all private data be marked as such and be rejected by AI correlation systems when used by third parties. Also make it illegal for a person to waive the privacy of such information (similar to how states made it illegal for employers to force applicants to share their social-media passwords so they could see what kind of people they were).

 

Source:

http://www.infoworld.com/article/3131098/artificial-intelligence/at-the-mercy-of-ai-your-job-your-health-your-money.html

 
