
Selected Papers of Internet Research 16:
The 16th Annual Meeting of the Association of Internet Researchers
Phoenix, AZ, USA / 21-24 October 2015
Suggested Citation (APA): Cheney-Lippold, J. (2015, October 21-24). Algorithm = ?: The Cultural Politics of Algorithmic Difference. Paper presented at Internet Research 16: The 16th Annual Meeting of the Association of Internet Researchers. Phoenix, AZ, USA: AoIR. Retrieved from http://spir.aoir.org.

ALGORITHM = ?: THE CULTURAL POLITICS OF ALGORITHMIC DIFFERENCE

John Cheney-Lippold
University of Michigan

Introduction

In 2009 Desi Cryer was working in the computer department of a Texas retail store. Along with his co-worker Wanda Zamen, Cryer noticed something iffy in a newly released Hewlett-Packard laptop. The in-store HP Mediasmart computer with facial recognition technology was able to recognize Zamen, who happened to be white, but failed to recognize Cryer, who happened to be black. After several tests, they recorded a video and uploaded it to YouTube on Zamen's account, provocatively titling it "HP is racist" [Figure 1].

In "HP is racist", "White Wanda" and "Black Desi" describe the racial disparity at work in HP's new facial recognition algorithm. The video begins with a shot of Cryer staring into the camera. He introduces himself, his race, and the technology: "I'm black… I think my blackness is interfering with the computer's ability to follow me" (Zamen, 2009). He moves his head to the left. He moves his head to the right. Nothing happens — the camera's focus stands still. But when Zamen enters the frame, the camera jolts to life. The facial recognition software immediately recognizes her and zooms the camera in, following her face as she snakes to and fro. The video's narrative hits its arc when Cryer reintroduces his own face and instantly breaks the recognition process. The camera returns to the default state.

[Figure 1: A still from "HP is Racist"]

Zamen's face was intelligible while Cryer's face was not. This is because HP's facial recognition algorithm was able to recognize a "face", but was entirely unable to recognize a face. An algorithm doesn't see a face like we do: eyes, nose, mouth, jaw, ears, brow, forehead, nostrils, wrinkles, hair, and blemishes. Instead, algorithms see "faces" as represented by algorithmic interpretations of data. Pixelized polygons and hue contrast values are used to translate an image of a face into a datafied "face" through machine learning pattern analysis. If you show a computer hundreds of photos of human faces, the computer will learn the commonalities that span across each face to create a "face".

Yet this video wasn't really about faces or "faces". Indeed, at the video's end, Cryer smiles, stares directly into the camera that had previously ignored him, and states: "HP is racist. There. I said it" as he throws his hands up in dual playful/fed-up defeat. From nearly all vantage points, a technology that recognizes whiteness and doesn't recognize blackness is racist. It's surely racist in how it lets the legacies of antiblackness seep into the construction of new technologies, as we'll see below. It's surely racist in terms of how its immediate effects seemingly privilege one racial group over another. And it's surely racist in how subsequent use of the computer further emphasizes racial distinction, a fact made evident by Cryer's admission at the end of the video: "I bought one of these for Christmas … I hope my wife doesn't see this video" (Zamen, 2009).

But HP's algorithm was also "racist" in how it translated race into its datafied version of "race", a phenomenon described by Simone Browne as "digital epidermalization": "the exercise of power cast by the disembodied gaze of certain surveillance technologies … that can be employed to do the work of alienating the subject by producing a 'truth' about the body and one's identity (or identities) despite the subject's claims" (Browne, 2010: 135). This typographical offsetting of "race" isn't to sarcastically air-quote our way out of real racism. It's instead to talk about a distinct ontological formation of "race" and "racism" that differs from how we have historically understood race and racism.

Race doesn't exist for a computer. Human beings don't exist. And faces don't exist, either. These ideas have to be taught to the machine, made into something an algorithm can either identify or ignore. This is because algorithms, as part of a new grammar of knowledge construction, cannot magically translate race into a technological world. They instead make it from within. HP's "face" isn't a face, but a machine-learned pattern of what is a "face" — and, by its exclusionary corollary, what isn't a "face". Indeed, in all algorithmic productions, the world is never truly represented. It is only presented, created entirely with data, which then serves as the new, computational index for meaning in the world. Despite the publicized hype surrounding HP's "facial recognition" technology, the computer's algorithm is not really able to recognize a face. It instead makes a "face", and then compares new visual data to that "face".
And when Cryer's black face doesn't fit within HP's "face", we witness a new construction of "race", a discursive causality of non-recognition articulated exclusively through data. It's a form of raceless "racism" that has racist effects without immediate, racist practices of governance.1

So what is the difference between face and "face", or race and "race"? This is what I will call algorithmic difference, or the creation of a new cultural form based on algorithmic logic. And this algorithmic difference is precisely what I urge us to focus on when we conceive of what algorithms are and do in the field of cultural studies. Algorithms are often thought of as procedural, technical routines: "an effective method expressed as a finite list of well-defined instructions for calculating a function" (Wikipedia, 2015). And contemporary critiques have led us to understand algorithms as automated decision-making systems that translate existing values (like antiblackness) into our computational worlds (McPherson, 2011).

But algorithms also do something more. Algorithms create new structures of knowledge by building the world anew. They construct "things" from things, transcoding the world into 1s and 0s, a phenomenon widely described by Lev Manovich (2001) but practically corralled into different material forms such as digitization, or biomedia, or datafication (Negroponte, 1995; Thacker, 2004; Mayer-Schönberger and Cukier, 2013). But something like race can't exist in 1s and 0s. It's quite impossible to define police brutality, systemic oppression, and the historical lineages of white supremacy within a neat, lossless digital format. Yet in this essay, I aim to delve deeper, to go beyond what Wendy Hui Kyong Chun calls the "obfuscation" of computer processing and look at how exactly a 1 and a 0 can make a "race", a "face", or "meaning" in general (Chun, 2005). With this perspective we can begin to think about "race" as both separate from and connected to race, existing in the data/algorithm ontology of the computer in addition to the socio-historical valuation of bodies, performance, and identity.

Definitions

To understand the idea of an algorithm in its most base form is to go directly to its technical roots: "an effective method expressed as a finite list of well-defined instructions for calculating a function" (Wikipedia, 2015). This finite list of well-defined instructions might look like this:

Price[apple] = $0.50
Price[orange] = $0.60
While price[apple] > price[orange]
Buy orange
Else
Buy apple
End
In this most simple of algorithmic examples, the calculated function is the decision to buy either an apple or an orange. The instructions compare the price of an apple to that of an orange in order to buy the cheapest piece of fruit: the apple. If the price of an apple rose to $0.70, the algorithm would buy the orange. This is the heart of algorithmic processing, but such a superficial reading is void of many of the politics that accompany algorithms as they order the formative elements of our lives: who are our friends on Facebook, who are our potential sexual partners on OkCupid, and what kinds of bodies can or cannot use HP's Mediasmart computer.

Algorithms are not neutral, dispassionate machines that calculate functions. They are political, overdetermined machines imbued with intentional and unintentional assumptions about things, users, and the world. Even our above algorithm of apples and oranges rephrases the paragon of impossible relativity (it's an apples and oranges sort of thing!) with its own, new dialect. The algorithm speaks its truths according to a scale that allows for a quantitative comparison between apples and oranges (price). This scale becomes the constructed milieu by which an algorithm makes sense of the world. And this making sense relies entirely on relative difference. Imagine an algorithm where the scale was changed from the quantitative value of price to the qualitative value of taste. While taste[apple] > taste[orange] might be made algorithmically possible by creating a numerical rating for fruit (1-10), this change in the algorithm's well-defined instructions requires the development of a new ordinal scale.

This quantitative thinking might seem straightforward to any person who has programmed or thought through these kinds of logical/mathematical problems. But it is the formative way that we can understand how a computer creates a "face" or "race". Relative difference and the creation of scale is not a haphazard byproduct of algorithmic processing, nor is it a smooth, pre-existing and naturalized continuum. It is instead a socially-constructed, value-laden set of positionalities that correspond to a numerical hierarchy. Scales and their implied relative difference – and values – are the structuring logics for everything and anything that might be labeled "algorithmic". Ted Striphas refers to algorithms as cultural decision-making systems, a rephrasing of the concept's technical definition into something more telling: the automated regulation of culture by a programmed, logical sequence (Striphas, 2012). For this sequence to be effective, the author of any algorithm must determine what is weighed, what isn't, what is privileged, and what is ignored.
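To see how much work that authorial decision does, the fruit routine above can be restated as a short, runnable JavaScript sketch. The prices are the essay's; the taste ratings (1-10) are hypothetical values added only for illustration, to show that the chosen scale, rather than anything about the fruit itself, decides the outcome.

// A runnable restatement of the fruit routine, plus a second, hypothetical
// scale (taste) to show how the choice of scale decides the outcome.
var price = { apple: 0.50, orange: 0.60 };
var taste = { apple: 6, orange: 8 }; // hypothetical ordinal ratings (1-10)

// The essay's rule: compare prices and buy the cheaper piece of fruit.
function buyByPrice(price) {
  return price.apple > price.orange ? "orange" : "apple";
}

// The same comparative structure on a different scale: buy the tastier fruit.
function buyByTaste(taste) {
  return taste.apple > taste.orange ? "apple" : "orange";
}

console.log(buyByPrice(price)); // "apple": it costs less
console.log(buyByTaste(taste)); // "orange": it rates higher on the new scale

Same fruit, different scale, different decision: everything hangs on which scale the author writes in.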
This is seemingly apparent in the recognition and non-recognition of Cryer and Zamen's HP experience, though it remains puzzling why a company like HP would intentionally decide to deny its computer to a considerable segment of its potential market. We might then approach the idea of decision-making through another example. The microblogging service Twitter famously generates its "Trending Topics" list according to which words are used at the highest frequency relative to time. A bunch of people writing about the "#Oscars" will skyrocket the word "Oscars" into social media significance, but only the fact that relatively few people talked about the "Oscars" in the days before crowns it as a Trending Topic. These decisions make sense in one light: the words "a", "the", and "I" in English tweets would always be trending if this variable of time weren't taken into account. In another light, keywords and hashtags that grow in popularity slowly over time, like "OWS" or "Occupy Wall Street", may never register as statistically significant, and thus never trend (Gillespie, 2011).

"What makes something algorithmic", Tarleton Gillespie argues, "is that it is produced by or related to an information system that is committed (functionally and ideologically) to the computational generation of knowledge or decisions" (Gillespie, 2014). More directly, algorithms construct value by privileging certain information or processes over others (Ananny, 2012). Sometimes these decisions are intentional, as Twitter's case reinforces a knowledge production focused on immediacy and popularity. No topic will stay stale enough to grow old and out of fashion, as users' constant use of the site will change the topics of the day in accordance with new trending truths. But other times, these decisions are either unintentional or made in ignorance. For Desi Cryer, his non-recognition may not have been an explicit algorithmic exclusion, but the most recent incident in a long trail of technological privileging, a "notion of white prototypicality" that maintains whiteness as the norm for any politics of recognition (Gordon, 2006). Simone Browne usefully extends this conception to biometrics, where the dominance of white identity gets maintained through specific technologies as well as the more general practices of research and development (Browne, 2010).

"Black Desi" Cryer certainly wasn't the first person to find his skin color to be non-ideal for camera recognition. Indeed, the history of photography is littered with examples that illuminate its inherited and perpetuating bias against "nonideal" bodies. Kodak's "Shirley cards", from the 1950s to the 1970s, calibrated still photography color balances to the skin tone of a white Kodak employee named Shirley, a decision made according to the 1950s' conception of conventional white supremacist beauty. Unsurprisingly, a photograph of a black face taken with a camera calibrated to Shirley's whiteness would be washed out, undifferentiated, and imprecise. Nonetheless, the industry standard of whiteness set the terms for visual production, making blackness unable to participate in visual culture on the same, level playing field. This standardization set the subsequent terms of identification (McFadden, 2014).
Technology operators went to great lengths to "fix" black bodies to fit within this pre-existing, white-centric prototypicality rather than fixing the technology itself (Coleman, 2009). Throughout the 20th century, white-calibrated motion-picture cameras had difficulty reading the contours of black actors' faces. Rather than tweak the camera to perceive more than just whiteness in high fidelity, film crews dutifully applied Vaseline to black actors' faces in order to better reflect light back into the camera's lens. Even in 2015, practitioners who have left the Vaseline behind still manage the inherited bias of photographic technologies, such as Instagram filters, by manually opening their cameras' apertures an extra stop or two to let more light in (Jenkins, 2015). Users of technology adjust to racialized standards, an interactive asymmetry that didn't forbid, but disfigured, non-ideal bodies.

And as this asymmetry gets transcoded into the ontology of the computer, we encounter the same white prototypicality in HP's telling response to Cryer and Zamen's video: "We are working with our partners to learn more. The technology we use is built on standard algorithms that measure the difference in intensity of contrast between the eyes and the upper cheek and nose. We believe that the camera might have difficulty 'seeing' contrast in conditions where there is insufficient foreground lighting" (Hing, 2009). In this overtly technical reply, HP suggests a strategy that fixes black bodies according to the decisions/values of what a "face" is as programmed into HP's facial recognition camera. This "intensity of contrast between the eyes and the upper cheek and nose" points to the scales of relative difference that HP uses to locate "faces" in the camera's sight.2

So we can ask: why did the camera have difficulty "seeing" Cryer? In line with the embedded racial normativity of the "Shirley cards", we might presume that HP used only light-skinned people to train its facial recognition algorithm. And/or we might presume that HP used only light-skinned people in its beta testing. And/or we might presume that HP engineers were largely light-skinned, and questions of race and racial asymmetry were not taken into consideration.3 These presumptions are unprovable unless we have access to the data inputs and the machine-learned pattern that would identify for us what HP thought was a "face". But the same presumptions also let us focus on what we do know – the output. In the output we see what is recognized, and how normative ideas of what makes a "face" operate as the silent corollary that also makes "race". Zamen's "face" was "normal" (or "white"). Cryer's was "not normal" (or "black"/"not white"). The clear-cut line of recognition/non-recognition isn't just a coincidence that parallels the historical black/white schisms of chattel slavery and post-Reconstruction Jim Crow laws. Indeed, the fact that Zamen was recognized is largely the result of engineers unintentionally centering whiteness in quantitative terms and thus recasting that racialized white position as the default "face".

In another example, Asian individuals found themselves being asked, "Did someone blink?" by their Nikon S630 digital cameras after taking a photograph of themselves or their families (Wang, 2009). Here, the racialized terms of whiteness weren't about color hue and "face", but about the shape of one's "eyes". Someone at Nikon had written an algorithm that identified users who "blinked". In the technical implementation of this "feature", those "eyes" that were not open to the standard of presumably "white" "eyes" were seen as problematic.
While Asian users were not disfigured or denied presence in the resulting photograph, the overlaying insistence on Asian bodies' difference from white standards reminds us of the racialized legacies that haunt all technological developments. Asian became "Asian", much like HP is "racist" against "blacks". In both HP's and Nikon's cases, Asian and black bodies did not fit within white prototypicality, and the camera's lens subsequently considered their eyes and faces to be an error. But unlike their non-algorithmic predecessors, there was no ruleset in the Shirley cards that alerted non-white subjects to their disfigurement. Their faces were washed out after the fact, a form of disciplinary non-normativity that relies on gradations of illegibility — not non-recognition. Shirley's whiteness acted as the technological operationalization of a standard face, but a black person in front of the camera would still be recognized. HP's "white" and Nikon's "not-Asian" act as the algorithmic operationalization of a face, and a "black" or "Asian" person in front of the camera will not be recognized. In algorithmic difference, a new logic of inclusion/exclusion is created at the level of data and algorithm. Nikon's "Did someone blink?" is not a polite message alerting the user to redo the photograph. It's a euphemistic announcement of a new form of "race" that we have no control over, a form of difference that we as users are often unprepared to think through.

Algorithms are decision-making machines that create new cultural forms. And these new cultural forms are embedded with valuations at their very foundations. In the algorithm that begins this section, the price of an [apple] or [orange] determines everything that comes after. But imagine that price[apple] is now HP's [face]. How does this "face" get understood? What does the corollary "race" of the "face" mean in terms not of lived experience and racial legacies but of the 1s and 0s that make up this new form of "race"? All of these offset categories are based on the implicit valuations of the technical lineage of white supremacy. But all of these offset categories also indicate a new way of understanding difference. Algorithms are more than decision-making machines. They are also, in a nod to Charles Babbage, difference machines – because they produce difference.

Difference

Difference is not naturally occurring in the wild. Our earlier example of apples and oranges only becomes different when we are forced to state what is different about those two classes of objects. They're both pieces of fruit, they're both sweet, they both have skin and they both have seeds. Their difference comes in how it gets assigned — by price, taste, or the availability of each. The value of deciding whether to eat an apple or an orange is based on which difference is produced, and on how that relative difference makes subjective sense to us. I might be hungry and choose an apple because I like apples more than oranges. This qualitative difference makes sense. If I am hungry but only have $0.50 in my pocket, I will choose the apple because the orange costs more than I have. This quantitative difference also makes sense. But for most of our lives, the distinction between quantitative and qualitative difference is effectively ignored. We produce difference and value things and people efficiently and without much thought. We connect these qualitative and quantitative distinctions every time we go to the supermarket hungry.
Yet what would happen if the world was only quantitatively different? And what would happen if these quantitative differences weren't made in an ad hoc fashion every time we consider what to eat, but were part of a formative, programmed logical flow? And what would happen if this logical flow was the exclusive thing that determined use, recognition, or access? This is how algorithms see the world, and how algorithmic processing makes something like "race" materially distinct from race. While Kodak's racism might have centered Shirley as the standard of whiteness, a black subject in a Kodak photograph was not seen as materially distinct from white, even though it was materially non-ideal. But "racial" difference for HP's algorithm is much more than the "racism" of recognizing Zamen and refusing Cryer. HP's "racial" difference goes all the way down, going so far as to materially define what "white" and "not-white" mean in its algorithmic soul.

Algorithmic difference is the construction of the world in terms of algorithmic logic. From "white" to "black" to 1s and 0s, this section will reread algorithms as not just automatic decision-makers that privilege certain things over others, but creators of the scales from which algorithms understand, and thus assign value to, the world. The scales of our algorithmic worlds are programmed from within the data/algorithm ontology of the computer. Algorithmic machines come with only one way to make valuations: is X greater than, equal to, or less than Y? This is the exclusive grammar of the algorithmic world, a feature that accompanies anything that we label with the adjective "algorithmic". And this feature's lineage arrives to us from the theoretical origins of the digital computer.

This theoretical digital computer is called a Turing machine, and it was hypothetically developed by the young English mathematician Alan Turing. It was initially conceived as three distinct parts: a long, celled ticker-tape strip that stores the input, results, and output; a tape head that reads and writes the value of either 0 or 1 contained in each individual cell; and a control unit that specifies the rules by which the inputs will be read, computed, and then outputted onto the tape. This machine, in all its abstract simplicity, is the conceptual lynchpin for how we understand what algorithms are and what they do (Herken, 1995).

The invention of this Turing machine led to legitimately ground-breaking possibilities. The Church-Turing thesis, published in a 1943 paper by Stephen Kleene, described the intersection between the work of Turing and that of the mathematician Alonzo Church, Turing's doctoral adviser. Every algorithm, Kleene proposed, can be processed by a Turing machine (Kleene, 1943). That is, every step of an algorithm can, hypothetically, be computed by a machine that reads, writes, and overwrites onto a long piece of tape. From addition, to subtraction, to the computation required to send a man to the moon, all can be — again hypothetically — calculated on a simple, tape-fed computer (Hennie, 1965). Here, the question of scale is null. Every value is either a 1 or a 0.

Kleene's proof of concept became a proof of practice when the logic of the Turing machine evolved into the technical foundations for modern computation. The tape head, paper, and control component turned into the digital computer's data, memory, and CPU. And for this new computer to actually work, it needed to be fed a set of instructions.
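To make the tape, head, and control description concrete, here is a minimal Turing machine sketch in JavaScript. The rule table is a toy of my own devising (it walks along the tape inverting 1s and 0s); it is meant only to show the three parts working together, not to reproduce Turing's formalism.

// A toy Turing machine: a tape of cells, a head position, and a control
// table of rules. This rule table (a made-up example) walks right along
// the tape, inverting 1s and 0s, and halts at the first blank cell.
function runTuringMachine(tape, rules, state) {
  var head = 0;
  while (state !== "halt") {
    var symbol = tape[head] === undefined ? "_" : tape[head];
    var rule = rules[state + "," + symbol]; // the control unit looks up a rule
    tape[head] = rule.write;                // the head writes a value to the cell
    head += rule.move === "R" ? 1 : -1;     // the head moves along the tape
    state = rule.next;                      // the control unit changes state
  }
  return tape;
}

// Rules: in state "s", invert the cell and move right; halt on a blank.
var rules = {
  "s,0": { write: "1", move: "R", next: "s" },
  "s,1": { write: "0", move: "R", next: "s" },
  "s,_": { write: "_", move: "R", next: "halt" }
};

console.log(runTuringMachine(["1", "0", "1", "1"], rules, "s").join(""));
// -> "0100_" : the four cells inverted, plus the blank where the head halted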
These instructions are the input of an algorithm, or what is read by the machinic tape head. And for most computers, these input instructions come in the form of machine language. Machine language is merely a list of binary states, represented as either 1s or 0s, that are read in sequence by our computational Turing machine. For example, 0001 0010 0011 0100 would be understood as 1, 2, 3, 4. Most modern CPUs can perform millions of binary instructions a second, a dazzling feat that conceals the baseline simplicity of the machine and its language. Yet to be intelligible to a programmer, these binary sequences must be translated into another, more human-friendly, language. This is what we call assembly, a "low-level" language (it "talks" to the machine on behalf of another, higher-level programming language like C++, Ruby, BASIC, etc.) that is straightforward and unassailable in meaning. There is no subjective wiggle room, nor poetic interpretation: 1 is 1 is 1 is 1. Assembly code is also linear — it reads down the "page" until its routine is completed. It is unambiguous, direct, and unsurprisingly not that pretty to read.

A way to better understand this kind of code is to explore one of the operable commands that makes algorithmic processing possible. The suite of assembly language "jump" instructions momentarily removes the continuous linear proceeding of computation and relocates the computer's focus to another part of the program's memory. Non-computer people can picture a "Choose Your Own Adventure" book, where you turn to page 42 if you want to push the big red "Don't Push" button or turn to page 85 if you don't. An example of a "jump" in assembly language would be the conditional JG instruction, which stands for "Jump if Greater". JG 2 would "jump" if the given variable/flag was greater than 2. JLE stands for "Jump if Less than or Equal", meaning that JLE 8 would "jump" if the given variable/flag was less than or equal to 8. Here, the employed scales for relative difference move from a binary distinction (1 or 0) to an integer distinction (-1 < 1, 2 > 0, and 9 = 9).

It is this level of instruction that generates the origin plane of difference where algorithmic logic is constituted. In any form, in any context, and in any relationship, the measuring of greater, less, and equal becomes the cornerstone of navigating any computer's memory, and thus of processing any computer code. Assembly language is beholden to these basic instructions that, given the aid of higher-level programming syntax, can make a CPU an extraordinarily powerful tool. But the primary language that translates these higher-level codes to the processor is arbitrated by this relational "jumping" and its instantiated scale.

The importance of relative difference should be apparent to anyone who might want to crisscross two different scales. If our above algorithm compared price[apple] with "HELLO", the algorithm would result in an error. There is no way to relationally understand $0.50 against "HELLO". Instead, we as programmers would have to decide ourselves how "HELLO" should be valued. Should it be valued according to how many characters are in "HELLO"? Or should we value it according to how many times "HELLO" occurs in a data set (like a web page)? These questions require much more than a normative, "um, do the number of characters one" response. They force us to answer questions of scale, to transcode qualitative distinctions into quantitative, valued difference (Porter, 1995).
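The point about crisscrossing scales can be shown directly. In the hypothetical snippet below (the valuation rules are mine, not drawn from any actual system), comparing a price to the string "HELLO" only becomes possible once a programmer decides how the string should be turned into a number.

// Comparing a price to a string requires an explicit, author-made decision
// about how the string becomes a number. Both rules below are hypothetical.
var price = { apple: 0.50 };

// Decision 1: value "HELLO" by its number of characters.
function valueByLength(word) {
  return word.length; // "HELLO" -> 5
}

// Decision 2: value "HELLO" by how often it occurs in some body of text.
function valueByOccurrence(word, text) {
  return text.split(word).length - 1; // count non-overlapping occurrences
}

var text = "HELLO world, HELLO again";
console.log(price.apple > valueByLength("HELLO"));           // false (0.5 is not greater than 5)
console.log(price.apple > valueByOccurrence("HELLO", text)); // false (0.5 is not greater than 2)

Either way, the comparison only goes through because a scale has been imposed on "HELLO".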
Algorithmic difference is the way this transcoded difference gets constructed at the level of the algorithm. It's the way a "face" gets identified according to the hue contrast between the "eyes" and the "upper cheek and nose" of a potential "face". HP's "face" is a decision, just like when we transcode "HELLO" = 5, but one that creates a new cultural form. If your "face" is like Cryer's and doesn't reach the hue threshold difference between "eyes" and "upper cheek and nose", you are not seen. You do not have a "face", and you are subsequently — yet unintentionally — also "not white". But this isn't about being black or white now. This is about having the algorithmic logic of relative hue difference be the new, transcoded index for "blackness" and "whiteness". HP's facial recognition algorithm could be very rudimentarily written as such:

// This section sets the range of RGB values (how we understand computers to
// see color) that defines a "face"'s "eyes".4
Face[eyes]
// this is the RGB value for very, very white
While RGB[eyes] <= 255, 255, 255
And
// this is the RGB value for a faded white
While RGB[eyes] >= 206, 206, 206
Eyes = "yes"
Else
Eyes = "no"
End

// This section sets the RGB range that defines a "face"'s "upper cheek and nose".
Face[uppercheekandnose]
// this is the RGB value for very pale skin
While RGB[uppercheekandnose] <= 255, 255, 230
And
// this is the RGB value for olive skin
While RGB[uppercheekandnose] >= 126, 86, 63
Uppercheekandnose = "yes"
Else
Uppercheekandnose = "no"
End

// This section compares the RGB ranges for Face[eyes] and Face[uppercheekandnose],
// using the arbitrary value of 50 to determine the size of acceptable difference
// between the "eyes" and "upper cheek and nose"
While Face[eyes] + (50, 50, 50) > Face[uppercheekandnose]
Face[target] = YES
Else
Face[target] = NO
End5
This algorithm is written in pseudocode and is not actually executable. Instead, it demonstrates three formative distinctions that are necessary for HP's algorithm to run. First, "eyes" are defined as occupying a certain RGB region (between 255, 255, 255 and 206, 206, 206) and, second, "upper cheek and nose" is defined as occupying a similar RGB region (between 255, 255, 230 and 126, 86, 63). This lets the camera know what is an "eye" and an "upper cheek and nose". Third, it compares the RGB values of "eyes" to those of "upper cheek and nose", noting that so long as there is sufficient contrast between the two regions (here, the arbitrary value of 50 in red, green, and blue), a face will be seen as a "face". If the contrast between "eyes" and "upper cheek and nose" falls below that threshold — as HP suggests happens with insufficient foreground lighting — it will not be a "face". HP's suggestion of more foreground lighting would give the "upper cheek and nose" higher values, and thus a better chance of fitting within the ranges and contrasts the algorithm defines.

When difference is algorithmic, questions of asymmetrical racial access get answered from within the hidden, mathematical logic of the implemented scale. The programmed ruleset that creates the parameters for HP's facial recognition algorithm determines what is and what is not a "face". Ergo, HP's ruleset is the exclusive factor that determines whether or not we exist. To be recognized requires that your face be made up of polygons with certain RGB values. And this 206, 206, 206 to 255, 255, 255 RGB spectrum becomes the material terrain by which existence is apportioned – it is the quantitative scale through which all subsequent meaning gets made.

But when that scale encounters the real world, when the idea of a "face" is supposed to connect, 100% of the time, to a face, we see a slippage. This slippage is an ever-present constant of datafied life. Life cannot be datafied perfectly, in perfect fidelity, and with perfect coverage (Introna, 2011). This is because algorithms must make their own world in order to understand ours. They must define how difference is made truly different in order to produce algorithmic value. The underlying need to compare two values against each other, to be able to ask "while X > Y", requires a fundamental shift in the cultural form of what is a "face" and a "race". This shift is not just about the asymmetrical biases of whiteness over blackness. It's also the rewriting of "face" in terms that negate the possibility of blackness.
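For readers who want to see the illustrative pseudocode above in executable form, here is one possible JavaScript reading of it. The RGB ranges are the essay's invented values, not HP's actual parameters, and the final contrast rule (at least 50 points of difference per channel between the two regions) is my own interpretation of the comparison step; as the notes explain, real facial recognition works on trained models and HSV values.

// A runnable reading of the illustrative pseudocode above. The ranges are
// the essay's invented values; the contrast rule is one interpretation of
// its comparison step. None of this is HP's actual algorithm.
function inRange(rgb, low, high) {
  return rgb.every(function (v, i) { return v >= low[i] && v <= high[i]; });
}

function isFace(eyesRGB, cheekRGB) {
  // Step 1: do the "eyes" fall within the defined "eyes" range?
  var eyesOk = inRange(eyesRGB, [206, 206, 206], [255, 255, 255]);
  // Step 2: does the "upper cheek and nose" fall within its defined range?
  var cheekOk = inRange(cheekRGB, [126, 86, 63], [255, 255, 230]);
  // Step 3: is there at least 50 points of contrast on every channel?
  var contrastOk = eyesRGB.every(function (v, i) {
    return Math.abs(v - cheekRGB[i]) >= 50;
  });
  return eyesOk && cheekOk && contrastOk;
}

console.log(isFace([250, 250, 250], [180, 140, 110])); // true: both in range, enough contrast
console.log(isFace([250, 250, 250], [90, 60, 40]));    // false: the "cheek" falls below its range

Whatever the exact thresholds, a face that falls outside the declared ranges never becomes a "face" at all.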
Difference, Power, and Mohanty

"There is no difference without power, and neither power nor difference has an essential moral value", writes Ruth Wilson Gilmore in a paraphrasing of Foucauldian power (Gilmore, 2002). This quotation represents what I am trying to argue algorithmic processing does, albeit in less technologically opaque terms. The production of difference is itself a power move. At their most basic level, then, all algorithms operate through power due to the very fact that relational difference has to be produced for a judgement to take place. Foucault's definition of power, we should remind ourselves, is neither a good nor a bad thing. It is instead just a thing, a relation (Foucault, 1982). And algorithms do nothing more than produce relations of data, exclusively. At face value, it's like the liberal trope of multiculturalism represented by United Colors of Benetton advertisements, where racial difference is not about good or bad, but mere difference in skin tones (Bannerji, 2000; Giroux, 1993). There's no presumed valuation between models, just conventionally hot people who are phenotypically different while dressed the same. But when difference encounters the world outside the hypothetical linings of the control component, when a scale (how to connect "apples" with "oranges", or how whiteness is valued vis-a-vis blackness) is defined, it necessarily creates degrees of relative belief, value, and meaning. Or, it's when the attractive models in the United Colors of Benetton advertisements leave the photo shoot and walk down the street to grab a cup of coffee. Some will be profiled by police and some won't, based on how their race is valued by society. This is when the valuelessness of difference gets made valuable.

In his analysis of cultures of racism, Stuart Hall describes what he calls the "fear of living with difference" (Hall, 1992: 17-18). Hall isn't speaking of difference as mere quantitative difference (whether a bit is valued as 1 or 2); he's talking about a difference from the norm, a difference outside hegemonic grasp. But algorithmic processing collapses the non-moral difference of 1 and 2 with the Hallian difference of white versus black and rich versus poor. It produces, without us knowing, a baseline technical difference that then serves as the terrain by which social difference is made.

Certain strains of feminist theory can help us explore how difference is levied in terms of power; terms that inform our political movements as well as our daily interactions with others. For example, Chandra Mohanty powerfully damns Western, white feminists for their insistence that un-nuanced notions of "sexual difference" be the heart of political struggle, where a "monolithic notion of patriarchy" helps produce the "reductive and homogeneous notion… [of] 'third-world difference' — that stable, ahistorical something that apparently oppresses most if not all the women in these countries" (Mohanty, 1988: 63). That is, with the same brush by which patriarchy is classified as a singular, intelligible idea that reaches across national, cultural, racial, and class boundaries, Western, white feminists constructed similar differences between the Global South and the West. Mohanty continues, "the assumption of women as an already constituted and coherent group with identical interests and desires, regardless of class, ethnic or racial location, implies a notion of gender or sexual difference or even patriarchy which can be applied universally and cross-culturally" (Mohanty, 1988: 64).

In a passage that I will quote at length, Mohanty goes on to spell out the method by which this "third-world difference" gets made: "Since discussions of the various themes I identified earlier (e.g., kinship, education, religion, etc.) are conducted in the context of the relative 'underdevelopment' of the third world (which is nothing less than unjustifiably confusing development with the separate path taken by the west in its development, as well as ignoring the unidirectionality of the first/third-world power relationship), third-world women as a group or category are automatically and necessarily defined as: religious (read 'not progressive'), family oriented (read 'traditional'), legal minors (read 'they-are-still-not-conscious-of-their-rights'), illiterate (read 'ignorant'), domestic (read 'backward') and sometimes revolutionary (read 'their-country-is-in-a-state-of-war; they-must-fight!'). This is how the 'third-world difference' is produced" (Mohanty, 1988: 80).

Mohanty's critique of how Western, white feminists erased the complexities within the Third World shows a difference produced according to a scale of what we can think of as "First World" (1) and "Third World" (0). In the context of my essay, what Mohanty describes is a quintessentially algorithmic move. So, and I swear, without relying on an underlying philosophical structuralism, let's transcode the production of third-world difference, detailed above, into algorithmic form:
// Here I operationalize the idea of the First World as "developed"
// by assigning it modern, educated, and progressive (1)
var firstWorld = {
  developed: true,
  kinship: 1,
  religion: 1,
  education: 1
};

// And here I am operationalizing the idea of Third World
// "underdevelopment" as traditional, ignorant, and not progressive (0)
var thirdWorld = {
  developed: false,
  kinship: 0,
  religion: 0,
  education: 0
};

// This is me explaining what 0 and 1 mean for the algorithm
var attributes = {
  kinship: {
    0: "traditional",
    1: "modern"
  },
  education: {
    0: "ignorant",
    1: "educated"
  },
  religion: {
    0: "nonprogressive",
    1: "progressive"
  },
  politics: {
    0: "their-country-is-in-a-state-of-war; they-must-fight!",
    1: "democracy"
  }
};

// This section assigns "patriarchy" to the Third World if the value for
// any element in the Third World is less than the First World. Else,
// "patriarchy" is not assigned to the Third World.
//
// NOTE: It would be miraculous if these "else" sections ever ran,
// given how Western, white feminists operationalized the variables of
// firstWorld and thirdWorld, making such an assessment impossible.
if (thirdWorld.kinship < firstWorld.kinship) {
  thirdWorld.patriarchy = true;
} else {
  thirdWorld.patriarchy = false;
}
if (thirdWorld.religion < firstWorld.religion) {
  thirdWorld.patriarchy = true;
} else {
  thirdWorld.patriarchy = false;
}
if (thirdWorld.education < firstWorld.education) {
  thirdWorld.patriarchy = true;
} else {
  thirdWorld.patriarchy = false;
}
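A short, hypothetical continuation (not part of the original transcoding) shows how the initial scale cascades into further assignments, anticipating the move described just below: once patriarchy has been assigned, other attributes follow from it.

// Hypothetical continuation: the value assigned by the initial scale
// cascades into further assignments, as discussed in the next paragraph.
if (thirdWorld.patriarchy === true) {
  thirdWorld.politics = 0; // read: "revolutionary", per the attributes table
}
console.log(thirdWorld.patriarchy);                    // true
console.log(attributes.politics[thirdWorld.politics]); // "their-country-is-in-a-state-of-war; they-must-fight!"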
The scale used for this generation of meaning hinges on the socially constructed boundary made between the Third and First Worlds. But we can see how this initial structure rigidly casts the foundation for what comes after. When Western, white feminists create a monolithic, singular notion of "underdevelopment" as the origin arbiter of the politics of difference, that scale can then be wielded and applied to other relations: if (thirdWorld.patriarchy === true) { thirdWorld.politics = 0; } (if the Third World has patriarchy, then the Third World is also revolutionary). From this hegemonic perspective, the complexity that makes up the Global South is collapsed into a single, unified signifier, which then is used as the index for all succeeding generations of meaning.

The construction of a "World" scale between "First" and "Third" functionally erases the complex, historical discourses of colonialism, race, sexuality, and capital — e.g. the context that makes the Global South what it is. In this example, it becomes impossible to understand the particularities of power that repress these multivariate populations, politically as well as algorithmically. Declared at the top of our code is a Third World that lives on the terrain of the First, producing difference that, in every well-defined step, both emphasizes and reinserts these colonizing valuations. The algorithmic scale implicitly contains the traces of colonization, global capitalism, and white supremacy, but those traces are read as a single output of thirdWorld.patriarchy = true.

I am well aware that my own interpretation of Mohanty as algorithmic can be seen to be doing the same collapsing work. I plead with the reader to read this argument generously. I am not at all interested in making the claim that Mohanty is algorithmic outside the context of this essay. But I am interested in explicating how the heavily politicized differences that Mohanty describes as fundamental to Western, white feminist discourse can be seen as logically commensurate to what I call algorithmic difference. The production of Third World difference creates rarified meanings about the ideas of woman, Global South, and patriarchy, which then regulate what could be said, what couldn't be argued, and who did and who didn't count as a legitimate, intelligible subject. With this scale operationalized from the outset, what meaning comes afterwards is logically indebted to how "First World" and "Third World" define the future.

I will now slow down in my theoretical stair-stepping in order to reproduce the above argument in a much finer line of thought. Because there is nothing in our digital world (data) that has universal, pre-existing meaning, authors of algorithms must make value choices about how to address, perceive, and process data (an algorithm's input). These choices are more than the mere privileging of one data element over another – they also create the scale upon which all subsequent meaning will be generated. If we go back to "HP is Racist", the way HP constructs a "face" determines all that comes after. The scale of RGB values establishes the plane of relative difference from which users will be either recognized or not. For Mohanty, the way Western, white feminists construct the First and Third Worlds determines the allocation of political possibility. Both HP and Western, white feminists create the scale, and by corollary the type and dimensions of difference, that then either identifies certain kinds of ("white") people or privileges a certain kind of worldview.
No matter how much you scream, plead, or retreat — there will be no functional change until you speak on the terms of HP's scale (like the suggestion that you use better lighting). And there is no functional change in political possibility until the Third World is dislocated from its relative value vis-a-vis the First. By operating according to an established scale, you exist within the confines of what HP decides and what HP thinks is important. And by operating according to an established scale of First and Third World perspective, you exist within the confines of what Western, white feminists decide and think is important. But unlike in radical politics, there is no way to avoid speaking on the algorithm's terms. HP's scale is HP's scale. This is a much more structural, and I argue more important, consequence of what we mean when we use the word "algorithmic".

Conclusion

HP's algorithmic error comes from the impossibility of transcoding the complexities of the world, in perfect fidelity, into a computer, algorithm, or data structure. There is no clear reason why HP would want to keep dark-skinned people from using its technology. But nonetheless, the historical divisions of white supremacy rear their head and bar Cryer from the machine's full use. Importantly, the same disallowance could happen to kids, women, the elderly, the disabled, or the poor (Magnet, 2011). Yet what I want to propose isn't just that marginalized groups see their marginalization happen, even when it's unintentional, through the logics of hierarchy. Algorithmic knowledge is producing new forms of difference, new scales by which valuation happens and which, when that value interfaces with our world, will always be askew.

Algorithmic difference makes the world in order to allow for computation: the creation of scales to make quantitative differentiation. Race then becomes a quantitative phenomenon, a clear effect of white supremacy but a wholly separate understanding of what it means to be racialized. How this racialization happens, though, is up to the engineers who program it – it's a new brand of cybertype that transcodes race into the digital world (Nakamura, 2008). This is part of why "an effective method expressed as a finite list of well-defined instructions for calculating a function" is insufficient to describe how algorithms reform culture. And it's also why algorithmic culture is truly an unfamiliar phenomenon that requires new methods and practices to understand how it conditions our lives.

To focus on the generation of meaning through algorithmic difference orients us to how algorithms determine the allocation, valuation, and distribution of culture. The first, origin etch of produced difference creates the quantitative, discursive axioms by which everything follows. And when these axioms get operationalized in computer code, installed as Twitter's "Trending Topics", or your Facebook News Feed, or the price of an Amazon purchase, the cultural politics of algorithmic processing must deal with them accordingly. HP's "race" isn't race, but it's all about race. Nikon's "blinking" isn't about whether you actually blink, but it does become the index for who is seen as "normal" and who isn't. And even my transcoding of Mohanty's "third-world difference", which isn't explicitly about a white supremacist notion of sexual difference, most definitely is, coded into the algorithm at its origin. Algorithmic authorship creates the quantitative scales upon which we produce new forms of culture. But HP's "race" isn't a substitute for race.
It's just another layer, a new way to think about racialization that may not be immediately about racial logics, but becomes part of them as the inescapability of racism bleeds into the seemingly race-­neutral worlds of digital technology. Algorithmic difference is more than just saying “oh, that's racist” to HP being racist. It's to understand that racial – and all other measures of – differentiation is birthed in ways that are unintentional and often invisible. And these origin differences are the foundry from which all subsequent algorithmic culture is produced. Notes 1 This concept, taken originally from the “raceless racism” of Goldberg, 2011, gets extended to what I call raceless “racism”. This is how the “racism” of HP's facial recognition software still has “racist” effects, even when race is not explicit. 2 Facial recognition works by training algorithms to understand what is common in a “face”: polygons that we understand as eyes, a nose, a mouth, and the facial indentations that give our features dimensions. Once this “face” template is created, new images (like Black Desi's face) are compared. Black Desi's face wasn't understood as a “face” because it had “insufficient foreground lighting”, which disallowed these polygons to be understood by the computer. For more methodological explanation, check out (Wiskott et al, 1997;; Lyon et al, 1999;; Pantic and Rothkrantz, 2000). 3 A great analysis of the HP algorithm is available at (Sandvig et al, 2013). 4 While we are using RGB values to show color for ease of understanding, actual facial recognition uses HSV values (hue-­saturation-­value/brightness). 5 This is an (admittedly bad) example of pseudocode. Its aim is not to be an operable version of facial recognition, and rather helps us how to think about how bodies, when transcoded into RGB values, become understood as “faces”. References Ananny, M (2012) How associational algorithms do public work. Annual Meeting for the Society for Social Studies of Science (4S), Copenhagen, Denmark. Bannerji, H (2000) The paradox of diversity: The construction of a multicultural Canada and 'women of color'. Women's Studies International Forum 23(5): 537-­560;; Giroux, H (1993) Consuming social change: the 'United Colors of Benetton'. Cultural Critique, 26: 5-­32. Browne, S (2010) Digital epidermalization: race, identity and biometrics. Critical Sociology 36(1): 135. Chun, WHK (2005) On software, or the persistence of visual knowledge. Grey Room 18: 26-­51. Coleman, B (2009) Race as technology. Camera Obscura, 24(1): 177–207. Foucault. M (1982) The subject and power. Critical Inquiry 8(4): 777-­795. Gillespie, T (2011) Can an algorithm be wrong? Twitter trends, the specter of censorship, and our faith in the algorithms around us. Available at: culturedigitally.org/2011/10/can-­an-­algorithm-­be-­wrong. Gillespie T (2014) Algorithm [draft] [#digitalkeywords]. Available at: http://culturedigitally.org/2014/06/algorithm-­draft-­digitalkeyword. Gilmore, RW (2002) Fatal couplings of power and difference: notes on racism and geography. The Professional Geographer 54(1): 15-­24. Goldberg, DT (2011) The threat of race: reflections on racial neoliberalism. Hoboken: Wiley-­Blackwell Gordon, LR (2006) Is the human a teleological suspension of man?: A phenomological exploration of Sylvia Wynter's Fanoian and biodicean reflections. In Bogues, A (ed.) After man, toward the human: critical essays on Sylvia Wynter. Kingston, JA: Ian Randle. 
Hall, S (1992) Race, cultures and communications: looking backward and forward at cultural studies. Rethinking Marxism: A Journal of Economics, Culture & Society, 5(1): 10-­18, 17-­18. Hennie, FC (1965) One-­tape, off-­line Turing machine computations. Information and Control 8: 553-­578. Herken, R (1995) The universal Turing machine: a half-­century survey. Vienna: Springer Vienna. Hing, J (2009) HP face-­tracker software can't see black people. Available at: http://www.colorlines.com/archives/2009/12/hp_face-­
tracker_software_cant_see_black_people.html. Introna, L (2011) The enframing of code: agency, originality and the plagiarist. Theory, Culture & Society 28(6): 113-­141. Jenkins, M (2015) Revealing the hidden racism of Instagram filters. Available at: http://www.forharriet.com/2015/07/revealing-­hidden-­racism-­of-­instagram.html. Kleene, SC (1943) Recursive predicates and quantifiers. Transactions of the American Mathematical Society 53: 41-­73. Lyons, M, Budynek, J, and Akamatsu, S (1999) Automatic classification of single facial images. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(21): 1357-­1362. Magnet, S (2011) Our biometric future: gender, race, and the technology of identity. Durham, Duke University Press. Manovich, L (2001) The language of new media. Cambridge: MIT Press. Mayer-­Schönberger, V and Cukier, K (2013) Big data: a revolution that will transform how we live, work, and think. New York City: Eamon Dolan/Houghton Mifflin Harcourt. McFadden, S (2014) Teaching the camera to see my skin. Available at: http://www.buzzfeed.com/syreetamcfadden/teaching-­the-­camera-­to-­see-­my-­skin. McPherson, T (2011) US operating systems at mid-­century. In: Nakamura, L and Chow-­
White, P (eds) Race after the internet. New York: Routledge. Mohanty, C (1988) Under Western eyes: feminist scholarship and colonial discourses. Feminist Review 30: 61-­88, 63. Negroponte, N (1995) Being digital. New York City: Vintage. Pantic, M and Rothkrantz, LJM (2000) Automatic analysis of facial expressions: the state of the art, IEEE Transactions on Pattern Analysis and Machine Intelligence 22(12): 1424-­1445. Porter, T (1995) Trust in numbers: the pursuit of objectivity in science and public life. Princeton: Princeton University Press. Sandvig, C, Karahaolis, K, and Langbort, C (2013) Re-­centering the algorithm. Governing Algorithms Conference. Striphas, T (2014) What is an Algorithm? Available at: http://culturedigitally.org/2012/02/what-­is-­an-­algorithm. Thacker, E (2004) Biomedia. Minneapolis: University of Minnesota Press. Wang, J (2009) Racist camera! No, I did not blink... I'm just Asian. Available at: https://www.flickr.com/photos/jozjozjoz/3529106844. Wikipedia. (2015) Algorithm. Available at: http://en.wikipedia.org/w/index.php?title=Algorithm&oldid=535423932. Wiskott, L, Fellous, JM, Krüger, N, and von der Malsburg, C (1997) Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7) :775-­779.