Wednesday, December 23, 2015

Day 129: Book Excerpt: Future Crimes


Black-Box Algorithms and the Fallacy of Math Neutrality

One and one is two. Two plus two equals four. Basic, eternal, immutable math. The type of stuff we all learned in kindergarten. But there is another type of math—the math encoded in algorithms—formulas written by human beings and weighted to carry out their instructions, their decision analyses, and their biases. When your GPS device provides you with directions using narrow AI to process the request, it is making decisions for you about your route based on an instruction set somebody else has programmed. While there may be a hundred ways to get from your home to your office, your navigation system has selected one. What happened to the other ninety-nine? In a world run increasingly by algorithms, it is not an inconsequential question or a trifling point.
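The route-selection idea above can be made concrete. The sketch below is purely illustrative and assumes nothing about how any real navigation system works: it runs Dijkstra's shortest-path algorithm over an invented toy road network, with the key point that the programmer's choice of `weight` function, not the math itself, decides which of the possible routes you are shown.

```python
import heapq

def best_route(graph, start, goal, weight):
    """Dijkstra's shortest path. 'weight' maps an edge's attributes to a cost,
    and that mapping is where the programmer's priorities are encoded."""
    # graph: {node: [(neighbor, attrs_dict), ...]}
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, attrs in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight(attrs), neighbor, path + [neighbor]))
    return None

# A toy network: home to office via a fast toll highway or a slow free local road.
roads = {
    "home":    [("highway", {"minutes": 10, "toll": 5}),
                ("local",   {"minutes": 25, "toll": 0})],
    "highway": [("office",  {"minutes": 5,  "toll": 0})],
    "local":   [("office",  {"minutes": 5,  "toll": 0})],
}

# Same algorithm, same roads -- different hidden priorities, different answer.
fastest  = best_route(roads, "home", "office", weight=lambda a: a["minutes"])
cheapest = best_route(roads, "home", "office", weight=lambda a: a["minutes"] + 10 * a["toll"])
```

Here `fastest` takes the highway while `cheapest` takes the local road: the "neutral" algorithm never changed, only the weighting someone chose to bake into it.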

Today we have the following:
•  algorithmic trading on Wall Street (bots carry out stock buys and sells)
•  algorithmic criminal justice (red-light and speeding cameras determine infractions of the law)
•  algorithmic border control (an AI can flag you and your luggage for screening)
•  algorithmic credit scoring (your FICO score determines your creditworthiness)
•  algorithmic surveillance (CCTV cameras can identify unusual activity by computer vision analysis, and voice recognition can scan your phone calls for troublesome keywords)
•  algorithmic health care (whether or not your request to see a specialist or your insurance claim is approved)
•  algorithmic warfare (drones and other robots have the technical capacity to find, target, and kill without human intervention)
•  algorithmic dating (eHarmony and others promise to use math to find your soul mate and the perfect match)

Though the inventors of these algorithmic formulas might wish to suggest they are perfectly neutral, nothing could be further from the truth. Each algorithm is saturated with the profound human bias of the person or people who wrote the formula. But who governs these algorithms and how they behave in grooming us? We have no idea. They are black-box algorithms, shrouded in secrecy and often declared trade secrets, protected by intellectual property law. Just one algorithm alone—the FICO score—plays a major role in each American’s access to credit, whether or not you get a mortgage, and what your car loan rate will be. But nowhere is the formula published; indeed, it is a closely guarded secret, one that earns FICO hundreds of millions of dollars a year. But what if there is a mistake in the underlying data or the assumptions inherent in the algorithm? Too bad. You’re out of luck. The near-total lack of transparency in the algorithms that run the world means that we the people have no insight and no say into profoundly important decisions being made about us and for us. The increasingly concentrated power of algorithms in our society has gone unnoticed by most, but without insight and transparency into the algorithms running our world, there can be no accountability or true democracy. As a result, the twenty-first-century society we are building is becoming increasingly susceptible to manipulation by those who author and control the algorithms that pervade our lives.

We saw a blatant example of this abuse in mid-2014 when a study published by researchers at Facebook and Cornell University revealed that social networks can manipulate the emotions of their users simply by algorithmically altering what they see in the news feed. In a study published by the National Academy of Sciences, Facebook changed the update feeds of 700,000 of its users to show them either sadder or happier news. The result? Users seeing more negative news felt worse and posted more negative things, the converse being true for those seeing the happier news. The study’s conclusion: “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” Facebook never explicitly notified the users affected (including children aged thirteen to eighteen) that they had been unwittingly selected for psychological experimentation. Nor did it take into account what existing mental health issues, such as depression or suicidality, users might already be facing before callously deciding to manipulate them toward greater sadness. Though Facebook updated its ToS to grant itself permission to “conduct research” after it had completed the study, many have argued that the social media giant’s activities amounted to human subjects research, a threshold that would have required prior ethical approval by an institutional review board under federal regulations. Sadly, Facebook is not the only company to algorithmically treat its users like lab rats.

The lack of algorithmic transparency, combined with an “in screen we trust” mentality, is dangerous. When big data, cloud computing, artificial intelligence, and the Internet of Things merge, as they are already doing, we will increasingly have physical objects acting on our behalf in 3-D space. Having an AI drive a robot that brews your morning coffee and makes breakfast sounds great. But if we recall the homicide in 1981 of Kenji Urada, the thirty-seven-year-old employee of Kawasaki who was crushed to death by a robot, things don’t always turn out so well. In Urada’s case, further investigation revealed it was the robot’s artificial intelligence algorithm that erroneously identified the man as a system blockage, a threat to the machine’s mission to be immediately dealt with. The robot calculated that the most efficient way to eliminate the threat was to push “it” with its massive hydraulic arm into the nearby grinding machine, a decision that killed Urada instantly before the robot unceremoniously returned to its normal duties. Despite the obvious challenges, the exponential productivity boosts, dramatic cost savings, and rising profits attainable through artificial intelligence systems are so great there will be no turning back. AI is here to stay, and never one to miss an opportunity, Crime, Inc. is all over it.

~~Future Crimes, by Marc Goodman
