Dastin, «Amazon Scraps Secret AI Recruiting Tool.»
Dastin.
This is part of a larger trend toward automating aspects of hiring. For a detailed account, see Ajunwa and Greene, «Platforms at Work.»
There are several superb accounts of the history of inequality and discrimination in computation. These are a few that have informed my thinking on these issues: Hicks, Programmed Inequality; McIlwain, Black Software; Light, «When Computers Were Women»; and Ensmenger, Computer Boys Take Over.
Knorr Cetina, Epistemic Cultures, 3.
Merler et al., «Diversity in Faces.»
Buolamwini and Gebru, «Gender Shades»; Raji et al., «Saving Face.»
Merler et al., «Diversity in Faces.»
«YFCC100M Core Dataset.»
Merler et al., «Diversity in Faces,» 1.
There are many excellent books on these issues, but in particular, see Roberts, Fatal Invention, 18–41; and Nelson, Social Life of DNA. See also Tishkoff and Kidd, «Implications of Biogeography.»
Browne, «Digital Epidermalization,» 135.
Benthall and Haynes, «Racial Categories in Machine Learning.»
Mitchell, «Need for Biases in Learning Generalizations.»
Dietterich and Kong, «Machine Learning Bias, Statistical Bias.»
Domingos, «Useful Things to Know about Machine Learning.»
Maddox v. State, 32 Ga. 587, 79 Am. Dec. 307; Pierson v. State, 18 Tex. App. 558; Hinkle v. State, 94 Ga. 595, 21 S.E. 601.
Tversky and Kahneman, «Judgment under Uncertainty.»
Greenwald and Krieger, «Implicit Bias,» 951.
Fellbaum, WordNet, xviii. Below I am drawing on research into ImageNet conducted with Trevor Paglen. See Crawford and Paglen, «Excavating AI.»
Fellbaum, xix.
Francis and Kučera, Brown Corpus Manual.
Borges, «The Analytical Language of John Wilkins.»
These are some of the categories that have now been deleted entirely from ImageNet as of October 1, 2020.
See Keyes, «Misgendering Machines.»
Drescher, «Out of DSM.»
See Bayer, Homosexuality and American Psychiatry.
Keyes, «Misgendering Machines.»
Hacking, «Making Up People,» 23.
Bowker and Star, Sorting Things Out, 196.
This is drawn from Lakoff, Women, Fire, and Dangerous Things.
ImageNet Roulette was one of the outputs of a multiyear research collaboration between the artist Trevor Paglen and me, in which we studied the underlying logic of multiple benchmark training sets in AI. ImageNet Roulette, led by Paglen and produced by Leif Ryge, was an app that allowed people to interact with a neural net trained on the «person» category of ImageNet. People could upload images of themselves – or news images or historical photographs – to see how ImageNet would label them. People could also see how many of the labels were bizarre, racist, misogynist, and otherwise problematic. The app was designed to show people these concerning labels while warning them in advance of the potential results. All uploaded image data were immediately deleted on processing. See Crawford and Paglen, «Excavating AI.»
Yang et al., «Towards Fairer Datasets,» paragraph 4.2.
Yang et al., paragraph 4.3.
Markoff, «Seeking a Better Way to Find Web Images.»
Browne, Dark Matters, 114.
Scheuerman et al., «How We’ve Taught Algorithms to See Identity.»
UTKFace Large Scale Face Dataset, https://susanqq.github.io/UTKFace.
Bowker and Star, Sorting Things Out, 197.
Bowker and Star, 198.
Edwards and Hecht, «History and the Technopolitics of Identity,» 627.
Haraway, Modest_Witness@Second_Millennium, 234.
Stark, «Facial Recognition Is the Plutonium of AI,» 53.
In order of the examples, see Wang and Kosinski, «Deep Neural Networks Are More Accurate Than Humans»; Wu and Zhang, «Automated Inference on Criminality Using Face Images»; and Angwin et al., «Machine Bias.»
Agüera y Arcas, Mitchell, and Todorov, «Physiognomy’s New Clothes.»
Nielsen, Disability History of the United States; Kafer, Feminist, Queer, Crip; Siebers, Disability Theory.
Whittaker et al., «Disability, Bias, and AI.»
Hacking, «Kinds of People,» 289.
Bowker and Star, Sorting Things Out, 31.
Bowker and Star, 6.
Eco, Infinity of Lists.
Douglass, «West India Emancipation.»
Particular thanks to Alex Campolo, who was my research assistant and interlocutor for this chapter, and for his research into Ekman and the history of emotions.
«Emotion Detection and Recognition»; Schwartz, «Don’t Look Now.»
Ohtake, «Psychologist Paul Ekman Delights at Exploratorium.»
Ekman, Emotions Revealed, 7.
For an overview of researchers who have found flaws in the claim that emotional expressions are universal and can be predicted by AI, see Heaven, «Why Faces Don’t Always Tell the Truth.»
Barrett et al., «Emotional Expressions Reconsidered.»
Nilsson, «How AI Helps Recruiters.»
Sánchez-Monedero and Dencik, «Datafication of the Workplace,» 48; Harwell, «Face-Scanning Algorithm.»
Byford, «Apple Buys Emotient.»
Molnar, Robbins, and Pierson, «Cutting Edge.»
Picard, «Affective Computing Group.»
«Affectiva Human Perception AI Analyzes Complex Human States.»
Schwartz, «Don’t Look Now.»
See, e. g., Nilsson, «How AI Helps Recruiters.»
«Face: An AI Service That Analyzes Faces in Images.»
«Amazon Rekognition Improves Face Analysis»; «Amazon Rekognition – Video and Image.»
Barrett et al., «Emotional Expressions Reconsidered,» 1.
Sedgwick, Frank, and Alexander, Shame and Its Sisters, 258.
Tomkins, Affect Imagery Consciousness.
Tomkins.
Leys, Ascent of Affect, 18.
Tomkins, Affect Imagery Consciousness, 23.
Tomkins, 23.
Tomkins, 23.
For Ruth Leys, this «radical dissociation between feeling and cognition» is the major reason for its attractiveness to theorists in the humanities, most notably Eve Kosofsky Sedgwick, who wants to revalorize our experiences of error or confusion into new forms of freedom. Leys, Ascent of Affect, 35; Sedgwick, Touching Feeling.
Tomkins, Affect Imagery Consciousness, 204.
Tomkins, 206; Darwin, Expression of the Emotions; Duchenne (de Boulogne), Mécanisme de la physionomie humaine.
Tomkins, 243, quoted in Leys, Ascent of Affect, 32.
Tomkins, Affect Imagery Consciousness, 216.
Ekman, Nonverbal Messages, 45.
Tuschling, «Age of Affective Computing,» 186.
Ekman, Nonverbal Messages, 45.
Ekman, 46.
Ekman, 46.
Ekman, 46.
Ekman, 46.
Ekman, 46.
Ekman and Rosenberg, What the Face Reveals, 375.
Tomkins and McCarter, «What and Where Are the Primary Affects?»
Russell, «Is There Universal Recognition of Emotion from Facial Expression?» 116.
Leys, Ascent of Affect, 93.
Ekman and Rosenberg, What the Face Reveals, 377.
Ekman, Sorenson, and Friesen, «Pan-Cultural Elements in Facial Displays of Emotion,» 86, 87.
Ekman and Friesen, «Constants across Cultures in the Face and Emotion,» 128.
Aristotle, Categories, 70b8–13, 527.
Aristotle, 805a, 27–30, 87.
It would be difficult to overstate the influence of this work, which has since fallen into disrepute: by 1810 it went through sixteen German and twenty English editions. Graham, «Lavater’s Physiognomy in England,» 561.
Gray, About Face, 342.
Courtine and Haroche, Histoire du visage, 132.
Ekman, «Duchenne and Facial Expression of Emotion.»
Duchenne (de Boulogne), Mécanisme de la physionomie humaine.
Clarac, Massion, and Smith, «Duchenne, Charcot and Babinski,» 362–63.
Delaporte, Anatomy of the Passions, 33.
Delaporte, 48–51.
Daston and Galison, Objectivity.
Darwin, Expression of the Emotions in Man and Animals, 12, 307.
Leys, Ascent of Affect, 85; Russell, «Universal Recognition of Emotion,» 114.
Ekman and Friesen, «Nonverbal Leakage and Clues to Deception,» 93.
Pontin, «Lie Detection.»
Ekman and Friesen, «Nonverbal Leakage and Clues to Deception.» In a footnote, Ekman and Friesen explained: «Our own research and the evidence from the neurophysiology of visual perception strongly suggest that micro-expressions that are as short as one motion-picture frame (1/50 of a second) can be perceived. That these micro-expressions are not usually seen must depend upon their being embedded in other expressions which distract attention, their infrequency, or some learned perceptual habit of ignoring fast facial expressions.»
Ekman, Sorenson, and Friesen, «Pan-Cultural Elements in Facial Displays of Emotion,» 87.
Ekman, Friesen, and Tomkins, «Facial Affect Scoring Technique,» 40.
Ekman, Nonverbal Messages, 97.
Ekman, 102.
Ekman and Rosenberg, What the Face Reveals.
Ekman, Nonverbal Messages, 105.
Ekman, 169.
Ekman, 106; Aleksander, Artificial Vision for Robots.
«Magic from Invention.»
Bledsoe, «Model Method in Facial Recognition.»
Molnar, Robbins, and Pierson, «Cutting Edge.»
Kanade, Computer Recognition of Human Faces.
Kanade, 16.
Kanade, Cohn, and Tian, «Comprehensive Database for Facial Expression Analysis,» 6.
See Kanade, Cohn, and Tian; Lyons et al., «Coding Facial Expressions with Gabor Wavelets»; and Goeleven et al., «Karolinska Directed Emotional Faces.»
Lucey et al., «Extended Cohn-Kanade Dataset (CK+).»
McDuff et al., «Affectiva-MIT Facial Expression Dataset (AM-FED).»
McDuff et al.
Ekman and Friesen, Facial Action Coding System (FACS).
Foreman, «Conversation with: Paul Ekman»; Taylor, «2009 Time 100»; Paul Ekman Group.
Weinberger, «Airport Security,» 413.
Halsey, «House Member Questions $900 Million TSA ‘SPOT’ Screening Program.»
Ekman, «Life’s Pursuit»; Ekman, Nonverbal Messages, 79–81.
Mead, «Review of Darwin and Facial Expression,» 209.
Tomkins, Affect Imagery Consciousness, 216.
Mead, «Review of Darwin and Facial Expression.» See also Fridlund, «Behavioral Ecology View of Facial Displays.» Ekman later conceded many of Mead's points. See Ekman, «Argument for Basic Emotions»; Ekman, Emotions Revealed; and Ekman, «What Scientists Who Study Emotion Agree About.» Ekman also had his defenders. See Cowen et al., «Mapping the Passions»; and Elfenbein and Ambady, «Universality and Cultural Specificity of Emotion Recognition.»
Fernández-Dols and Russell, Science of Facial Expression, 4.
Gendron and Barrett, Facing the Past, 30.
Vincent, «AI ‘Emotion Recognition’ Can’t Be Trusted.» Disability studies scholars have also noted that assumptions about how biology and bodies function can raise concerns around bias, especially when automated through technology. See Whittaker et al., «Disability, Bias, and AI.»
Izard, «Many Meanings/Aspects of Emotion.»
Leys, Ascent of Affect, 22.
Leys, 92.
Leys, 94.
Leys, 94.
Barrett, «Are Emotions Natural Kinds?» 28.
Barrett, 30.
See, e. g., Barrett et al., «Emotional Expressions Reconsidered.»
Barrett et al., 40.
Kappas, «Smile When You Read This,» 39, emphasis added.
Kappas, 40.
Barrett et al., 46.
Barrett et al., 47–48.
Barrett et al., 47, emphasis added.
Apelbaum, «One Thousand and One Nights.»
See, e. g., Hoft, «Facial, Speech and Virtual Polygraph Analysis.»
Rhue, «Racial Influence on Automated Perceptions of Emotions.»
Barrett et al., «Emotional Expressions Reconsidered,» 48.
See, e. g., Connor, «Chinese School Uses Facial Recognition»; and Du and Maki, «AI Cameras That Can Spot Shoplifters.»
NOFORN stands for Not Releasable to Foreign Nationals. «Use of the ‘Not Releasable to Foreign Nationals’ (NOFORN) Caveat.»
The Five Eyes is a global intelligence alliance comprising Australia, Canada, New Zealand, the United Kingdom, and the United States. «Five Eyes Intelligence Oversight and Review Council.»
Galison, «Removing Knowledge,» 229.
Risen and Poitras, «N.S.A. Report Outlined Goals for More Power»; Müller-Maguhn et al., «The NSA Breach of Telekom and Other German Firms.»
FOXACID is software developed by the Office of Tailored Access Operations, now Computer Network Operations, a cyberwarfare intelligence-gathering unit of the NSA.
Schneier, «Attacking Tor.» Document available at «NSA Phishing Tactics and Man in the Middle Attacks.»
Swinhoe, «What Is Spear Phishing?»
«Strategy for Surveillance Powers.»
Edwards, Closed World.
Edwards.
Edwards, 198.
Mbembé, Necropolitics, 82.
Bratton, Stack, 151.
For an excellent account of the history of the internet in the United States, see Abbate, Inventing the Internet.
SHARE Foundation, «Serbian Government Is Implementing Unlawful Video Surveillance.»
Department of International Cooperation Ministry of Science and Technology, «Next Generation Artificial Intelligence Development Plan.»
Chun, Control and Freedom; Hu, Prehistory of the Cloud, 87–88.
Cave and ÓhÉigeartaigh, «AI Race for Strategic Advantage.»
Markoff, «Pentagon Turns to Silicon Valley for Edge.»
Brown, Department of Defense Annual Report.
Martinage, «Toward a New Offset Strategy,» 5–16.
Carter, «Remarks on ‘the Path to an Innovative Future for Defense’»; Pellerin, «Deputy Secretary.»
The origins of U.S. military offsets can be traced back to December 1952, when the Soviet Union had almost ten times more conventional military divisions than the United States. President Dwight Eisenhower turned to nuclear deterrence as a way to «offset» these odds. The strategy involved not only the threat of the retaliatory power of the U.S. nuclear forces but also accelerating the growth of the U.S. weapons stockpile, as well as developing long-range jet bombers, the hydrogen bomb, and eventually intercontinental ballistic missiles. It also included increased reliance on espionage, sabotage, and covert operations. In the 1970s and 1980s, U.S. military strategy turned to computational advances in analytics and logistics, building on the influence of such military architects as Robert McNamara in search of military supremacy. This Second Offset could be seen in military engagements like Operation Desert Storm during the Gulf War in 1991, where reconnaissance, suppression of enemy defenses, and precision-guided munitions dominated how the United States not only fought the war but thought and spoke about it. Yet as Russia and China began to adopt these capacities and deploy digital networks for warfare, anxiety grew about how to reestablish a new kind of strategic advantage. See McNamara and Blight, Wilson’s Ghost.
Pellerin, «Deputy Secretary.»
Gellman and Poitras, «U.S., British Intelligence Mining Data.»
Deputy Secretary of Defense to Secretaries of the Military Departments et al.
Deputy Secretary of Defense to Secretaries of the Military Departments et al.
Michel, Eyes in the Sky, 134.
Michel, 135.
Cameron and Conger, «Google Is Helping the Pentagon Build AI for Drones.»
For example, Gebru et al., «Fine-Grained Car Detection for Visual Census Estimation.»
Fang, «Leaked Emails Show Google Expected Lucrative Military Drone AI Work.»
Bergen, «Pentagon Drone Program Is Using Google AI.»
Shane and Wakabayashi, «‘Business of War.’»
Smith, «Technology and the US Military.»
When the JEDI contract was ultimately awarded to Microsoft, Brad Smith, the president of Microsoft, explained that the reason Microsoft won the contract was that it was seen «not just as a sales opportunity, but really, a very large-scale engineering project.» Stewart and Carlson, «President of Microsoft Says It Took Its Bid.»
Pichai, «AI at Google.»
Pichai. Project Maven was subsequently picked up by Anduril Industries, a secretive tech startup founded by Oculus Rift’s Palmer Luckey. Fang, «Defense Tech Startup.»
Whittaker et al., AI Now Report 2018.
Schmidt quoted in Scharre et al., «Eric Schmidt Keynote Address.»
As Suchman notes, «‘Killing people correctly’ under the laws of war requires adherence to the Principle of Distinction and the identification of an imminent threat.» Suchman, «Algorithmic Warfare and the Reinvention of Accuracy,» n. 18.
Suchman.
Suchman.
Hagendorff, «Ethics of AI Ethics.»
Brustein and Bergen, «Google Wants to Do Business with the Military.»
For more on why municipalities should more carefully assess the risks of algorithmic platforms, see Green, Smart Enough City.
Thiel, «Good for Google, Bad for America.»
Steinberger, «Does Palantir See Too Much?»
Weigel, «Palantir Goes to the Frankfurt School.»
Dilanian, «US Special Operations Forces Are Clamoring to Use Software.»
«War against Immigrants.»
Alden, «Inside Palantir, Silicon Valley’s Most Secretive Company.»
Alden.
Waldman, Chapman, and Robertson, «Palantir Knows Everything about You.»
Joseph, «Data Company Directly Powers Immigration Raids in Workplace»; Anzilotti, «Emails Show That ICE Uses Palantir Technology to Detain Undocumented Immigrants.»
Andrew Ferguson, conversation with author, June 21, 2019.
Brayne, «Big Data Surveillance.» Brayne also notes that the migration of law enforcement to intelligence was occurring even before the shift to predictive analytics, given such court decisions as Terry v. Ohio and Whren v. United States that made it easier for law enforcement to circumvent probable cause and produced a proliferation of pretext stops.
Richardson, Schultz, and Crawford, «Dirty Data, Bad Predictions.»
Brayne, «Big Data Surveillance,» 997.
Brayne, 997.
See, e. g., French and Browne, «Surveillance as Social Regulation.»
Crawford and Schultz, «AI Systems as State Actors.»
Cohen, Between Truth and Power; Calo and Citron, «Automated Administrative State.»
«Vigilant Solutions»; Maass and Lipton, «What We Learned.»
Newman, «Internal Docs Show How ICE Gets Surveillance Help.»
England, «UK Police’s Facial Recognition System.»
Scott, Seeing Like a State.
Haskins, «How Ring Transmits Fear to American Suburbs.»
Haskins, «Amazon’s Home Security Company.»
Haskins.
Haskins, «Amazon Requires Police to Shill Surveillance Cameras.»
Haskins, «Amazon Is Coaching Cops.»
Haskins.
Haskins.
Hu, Prehistory of the Cloud, 115.
Hu, 115.
Benson, «‘Kill ’Em and Sort It Out Later,’» 17.
Hajjar, «Lawfare and Armed Conflicts,» 70.
Scahill and Greenwald, «NSA’s Secret Role in the U.S. Assassination Program.»
Cole, «‘We Kill People Based on Metadata.’»
Priest, «NSA Growth Fueled by Need to Target Terrorists.»
Gibson quoted in Ackerman, «41 Men Targeted but 1,147 People Killed.»
Tucker, «Refugee or Terrorist?»
Tucker.
O’Neil, Weapons of Math Destruction, 288–326.
Fourcade and Healy, «Seeing Like a Market.»
Eubanks, Automating Inequality.
Richardson, Schultz, and Southerland, «Litigating Algorithms,» 19.
Richardson, Schultz, and Southerland, 23.
Agre, Computation and Human Experience, 240.
Bratton, Stack, 140.
Hu, Prehistory of the Cloud, 89.
Nakashima and Warrick, «For NSA Chief, Terrorist Threat Drives Passion.»
Document available at Maass, «Summit Fever.»
The future of the Snowden archive itself is uncertain. In March 2019, it was announced that the Intercept – the publication that Glenn Greenwald established with Laura Poitras and Jeremy Scahill after they shared the Pulitzer Prize for their reporting on the Snowden materials – was no longer going to fund the Snowden archive. Tani, «Intercept Shuts Down Access to Snowden Trove.»
Silver et al., «Mastering the Game of Go without Human Knowledge.»
Silver et al., 357.
Full talk at the Artificial Intelligence Channel: Demis Hassabis, DeepMind – Learning from First Principles. See also Knight, «Alpha Zero’s ‘Alien’ Chess Shows the Power.»
Demis Hassabis, DeepMind – Learning from First Principles.
For more on the myths of «magic» in AI, see Elish and boyd, «Situating Methods in the Magic of Big Data and AI.»
Meredith Broussard notes that playing games has been dangerously conflated with intelligence. She cites the programmer George V. Neville-Neil, who argues: «We have had nearly 50 years of human/computer competition in the game of chess, but does this mean that any of those computers are intelligent? No, it does not – for two reasons. The first is that chess is not a test of intelligence; it is the test of a particular skill – the skill of playing chess. If I could beat a Grandmaster at chess and yet not be able to hand you the salt at the table when asked, would I be intelligent? The second reason is that thinking chess was a test of intelligence was based on a false cultural premise that brilliant chess players were brilliant minds, more gifted than those around them. Yes, many intelligent people excel at chess, but chess, or any other single skill, does not denote intelligence.» Broussard, Artificial Unintelligence, 206.
Galison, «Ontology of the Enemy.»
Campolo and Crawford, «Enchanted Determinism.»
Bailey, «Dimensions of Rhetoric in Conditions of Uncertainty,» 30.
Bostrom, Superintelligence.
Bostrom.
Strand, «Keyword: Evil,» 64–65.
Strand, 65.
Hardt and Negri, Assembly, 116, emphasis added.
Wakabayashi, «Google’s Shadow Work Force.»
Quoted in McNeil, «Two Eyes See More Than Nine,» 23.
On the idea of data as capital, see Sadowski, «When Data Is Capital.»
Harun Farocki discussed in Paglen, «Operational Images.»
For a summary, see Heaven, «Why Faces Don’t Always Tell the Truth.»
Nietzsche, Sämtliche Werke, 11:506.
Wang and Kosinski, «Deep Neural Networks Are More Accurate Than Humans»; Kleinberg et al., «Human Decisions and Machine Predictions»; Crosman, «Is AI a Threat to Fair Lending?»; Seo et al., «Partially Generative Neural Networks.»
Pugliese, «Death by Metadata.»
Suchman, «Algorithmic Warfare and the Reinvention of Accuracy.»
Simmons, «Rekor Software Adds License Plate Reader Technology.»
Lorde, Master’s Tools.
Schaake, «What Principles Not to Disrupt.»
Jobin, Ienca, and Vayena, «Global Landscape of AI Ethics Guidelines.»
Mattern, «Calculative Composition,» 572.
For more on why AI ethics frameworks are limited in effectiveness, see Crawford et al., AI Now 2019 Report.
Mittelstadt, «Principles Alone Cannot Guarantee Ethical AI.» See also Metcalf, Moss, and boyd, «Owning Ethics.»