enculturation

A Journal of Rhetoric, Writing, and Culture

A Theory of Persuasive Computer Algorithms for Rhetorical Code Studies

Estee Beck, University of Texas - Arlington1

(Published November 22, 2016)

Imagine: A job candidate, Joe Smith, has spent a good deal of time, energy, and labor on the academic job market. Smith has applied to targeted universities, found favor with multiple first-round interviews, and embarked on a handful of campus visits. At Dream University, the human resources department requests a background check that includes criminal, financial, and social media reporting. The background check company requests the places he has lived in the past seven years, his social security number, and his social media names/accounts. One week later, Smith learns the background check produced a correct accounting for the criminal and financial areas, but the social media section shows predictors of anti-social and/or hostile behavior. Dream University declines to hire Smith based, in part, on the social media report created by the background check company’s computer algorithms. 

Imagine: Jane Johnson has an excellent credit score, an on-time payment history, and an open credit line of $10,000 US dollars with a major American credit card company. As Johnson runs last-minute errands around town before a vacation out of the country, she purchases an item at a store she does not routinely shop at. Johnson goes on vacation and returns one week later to a letter from the credit card company. The company has reduced her $10,000 credit line to $3,000, citing that other customers who shopped at the store where Johnson recently made a purchase had a poor repayment history with the company. Computer algorithms automatically reduced the credit limit, and an hourly employee printed the mailing in a batch process with 3,000 similar letters to customers. 

While the former scenario is a fictionalization, as of this writing, the news media reports on algorithms used in hiring decisions, including the use of social media analysis for behavioral predictors (All Things Considered). The latter description, however, happened to an American Express customer in 2009 (Cuomo, et al.). Based on the algorithms American Express employed, the complex mathematical equations/procedures developed an invisible digital identity (Beck) for this customer, i.e., an identity built from data trails, analytics, and other digital tracking technologies invisible to the individual. The algorithms ranked geographical and database information about other customers higher than this customer’s FICO score and payment history.

In all fairness, after the American television news network ABC News reported the story in national and regional media outlets, American Express rectified the situation and reported refinement of its computer algorithms. However, this example and the fictionalized hiring account illustrate the impact computer algorithms have on everyday lives. 

Routinely, industry leaders, news media journalists, [h]activists, and academics hail and vilify algorithms as methods that make life better or worse for the public. Take, for example, a whitepaper executive summary report by the international consulting firm McKinsey & Company valuing sensor technologies2 in the Internet of Things movement (Manyika). The report, “The Internet of Things: Mapping the Value Beyond the Hype,” projects $3.9 to $11.1 trillion in economic impact by 2025 from the Internet of Things movement. For business and industry leaders, this projection is a win/win. Elsewhere, news and scholarly reports paint another landscape. As reported by senior reporting fellow Lauren Kirchner at ProPublica, and reprinted in The Atlantic, algorithmic bias creeps into data mining models, leading to financial, social, and cultural discrimination. Additionally, Zeynep Tufekci, Assistant Professor in the School of Information and Library Science at the University of North Carolina, addresses how algorithmic manipulation raises questions about computation and its role in legal, financial, and social laws, norms, and customs.  

Regardless of stakeholder values about algorithms’ role in culture and society, the following statement is undeniable: algorithms play a significant role in how people experience online and networked culture. Algorithms, according to communications and media scholar Tarleton Gillespie, are “...now a key logic governing the flow of information on which we depend....” For example, Facebook uses proprietary algorithms to curate some friends’ posts over others (cf. Facebook; Pariser). Amazon uses algorithms to provide tailored shopping recommendations (cf. “Improve Your Recommendations”; Mangalindan). And Google uses algorithms for personalized search results (cf. “Personalized Search For Everyone”; Colborn). Whatever views a person or organization holds about algorithms, make no mistake: algorithms are conductors orchestrating interface happenings. They make things happen and effect change within machine processes and human behaviors. 

For all the data algorithms calculate and the resulting impact upon people’s social, legal, and financial lives, I tend to think of these language objects as quasi-rhetorical agents with persuasive abilities. Although traditional ways of thinking about computer algorithms do not usually include discussions of persuasion, in this article I theorize that computer algorithms are persuasive because of their performative nature and the cultural values and beliefs embedded and encoded in their lingual structures. I use persuasion because of its pervasive historical and modern association with shaping thoughts and actions. Granted, rhetoric is not all about persuasion and is more expansive than just changing an audience’s beliefs. However, in building a case for rhetoricians to include the processes of coded and written-only language, i.e., computer code, in rhetorical scholarship, there has to be a starting point. 

Thus, after discussing computers & writing and digital rhetoric, two historical considerations that inform this work, I provide brief definitions of algorithms for readers. Next, I chart a historical tracing of conversations in media studies, communications, philosophy, and rhetorical theory that inform this argument. In the following section, I define persuasive computer algorithms and introduce three features of them: algorithmic processing, algorithmic inclusion/exclusion, and algorithmic ideology. The resulting theory positions what rhetorician Kevin Brock calls a “rhetorical code studies”3 for future development in digital rhetoric, but it also speaks back to critical code studies by introducing another lens (rhetoric) for code studies, writ large. 

A Background Discussion of Computers & Writing and Digital Rhetoric

The theory presented herein draws upon historical work from computers & writing specialists and, more recently, digital rhetoricians in rhetoric and composition. Under consideration first is the emergence of computers and writing in the late 1970s and early 1980s. This approach materialized in response to the development of early electronic and networked computing machines and the rise of coding software for writing classroom instruction (see Hawisher, et al.). With the development of the newsletter, later turned journal, Computers & Composition, writing scholars and teachers like Hugh Burns, Lisa Gerrard, Kathleen Kiefer, Helen Schwartz, Mimi Schwartz, Cynthia Selfe, James Strickland, Billie Wahlstrom, and William Wresch turned their attention to the functional and critical issues of software development by sharing their results in the early editions of the journal. These efforts formed into a sub-discipline, known as computers & writing, focused on the histories, methods, and pedagogies associated with computing technology and writing instruction nationally and internationally. These early developments seeded advancements in theoretical and pedagogical scholarship, especially at an increased pace over the last two decades as advances have arisen in computing sectors. While the progress of those allied with computers & writing shed light upon the value of technology, advancements in computation have opened scholarship into the underlying infrastructure of computing technologies. More recently, Karl Stolley has called for writing scholars to dig into source code. Rhetorician James Brown, Jr. has theorized the “robot rhetor” to understand computation and rhetorical education. Annette Vee has defined a computational literacy. And Bradley Dilger and Jeff Rice have edited a collection on the development of web technologies. Certainly, advancements in computers & writing scholarship force teachers and scholars to reflect upon the role of technology in writing classrooms, but they also push forward how we think about the rhetorical implications of technology and writing at philosophical levels. 

The second background framing this work is the development of digital rhetoric, first discussed by Richard Lanham in his treatment of visual arts and electronic texts in the 1990s. In his chapter “Digital Rhetoric and the Digital Arts,” Lanham largely describes the development of visual arts in electronic spaces and provides examples from print and electronic arts and animations as foregrounding material for a digital rhetoric concerned with aesthetics and technology. Indeed, as Douglas Eyman has noted, Lanham’s work largely focuses on artistic and literary expression, not necessarily with the intention of defining a digital rhetoric. Thus, central to defining a digital rhetoric, Eyman provides an in-depth overview of scholarly discussions of digital rhetoric since Lanham’s work. In Eyman’s treatment, disciplinary and interdisciplinary narratives describe digital rhetoric through what I consider contextual designs, with each scholar providing definitions connected to specific research orientations. To take just three brief examples: Elizabeth Losh’s definitions of digital rhetoric account for government and mathematical matrices, stemming from her focus on ideologies in government electronic communications. James Zappen’s article expresses a desire to locate digital rhetoric’s definition amongst new digital media. And Ian Bogost argues for digital rhetoric to take on computational processes as procedural to the analytics and methods of computation. In surveying how scholars have defined digital rhetoric, Eyman draws on these historical formations to imagine an encapsulating theory of digital rhetoric as “. . . the application of rhetorical theory (as analytic method or heuristic for production) to digital texts and performances” (web). 

Although computers & writing and digital rhetoric employ different methodologies for working within electronic, computer-mediated spaces, both fields form around a shared inquiry into how people and machines interact with each other. While the benefits apply to classroom-based writing practices along with research and scholarship, the ultimate quest provides insight into a knowledge and information exchange economy through and with digital technologies. As people make advancements with digital technologies, especially with movement in the multi-million-dollar Internet of Things industry, the relationship of not just human-to-machine interaction but also machine-to-machine interaction will become important for rhetoricians to address. Again, understanding the function of rhetoric in algorithmic processes is just one step toward positioning a rhetorical code studies as central to rhetorical scholarship. 

Computer Algorithms: Brief Definitions 

While this article aims to address the persuasive nature of computer algorithms, its two focal areas speak back to intellectual thought in rhetorical theory and invite digital rhetoricians to form a connective “rhetorical code studies.” It is my inkling that persuasion and agency find their way into non-human language objects, i.e., computer code. Without necessarily delving into the historical literature covering the art of persuasion and human agency, I hope this contribution will offer commentary relevant to conversations in digital rhetoric, rhetorical theory, and writing studies. 

To that end, a definitional understanding of computer algorithms is in order before moving on to the theoretical development and goals of this article. Since those in computer science circles regard Donald Knuth as the father of computer science, it is only appropriate to include his understanding of algorithmic features. In his multi-volume The Art of Computer Programming, Knuth describes algorithms with five features (illustrated in the brief sketch following the list): 

1) finiteness or how an algorithm terminates after a certain number of steps; 
2) definiteness or the precise definition of each step an algorithm carries out in its function; 
3) input or the “quantities” given to an algorithm before the operation begins; 
4) output or the quantities that bear a specified relation to the inputs; and 
5) effectiveness or the sufficiently basic nature of each operation so that it can be carried out exactly and efficiently. 
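
To ground these five features, consider a minimal sketch in Python (my own illustration, not code drawn from Knuth’s text) of Euclid’s greatest-common-divisor procedure, the example Knuth himself uses when introducing these features:

    # A minimal sketch (not Knuth's own code) of Euclid's greatest-common-divisor
    # procedure, annotated with the five features described above.
    def gcd(m: int, n: int) -> int:
        """Return the greatest common divisor of two positive integers."""
        # Input: the quantities m and n are supplied before the procedure begins.
        while n != 0:
            # Definiteness: each step is precisely specified.
            # Effectiveness: comparison and remainder are basic, exact operations.
            m, n = n, m % n
        # Finiteness: n strictly decreases toward zero, so the loop terminates.
        # Output: the returned quantity bears a specified relation to the inputs.
        return m

    print(gcd(544, 119))  # prints 17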

It is Knuth’s goal to provide the most basic of definitions of what algorithms do. What matters is the functionality of the algorithm to carry out commands. In comparison, computer scientist Robert Sedgewick, who surveyed computer algorithms in his aptly titled book Algorithms, defines them as elements “...used in computer science to describe a problem-solving method suitable for implementation as computer programs.” Sedgewick, who completed his doctoral work under Donald Knuth, is a faculty member at Princeton University, and serves on the board at Adobe Systems, further describes algorithms in terms of their performance and running times. For Sedgewick, understanding an algorithm in terms of the size of the data it must process, along with the average time a computer scientist expects it to run, is a necessary component of developing these mathematical procedures. 
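
As a rough illustration of this emphasis on input size and running time (my own sketch, not drawn from Sedgewick’s text; the sorting routine and sizes are arbitrary choices), the following Python snippet measures how long a routine takes as its input grows:

    # A rough, illustrative sketch (not from Sedgewick's text) of reasoning about
    # running time empirically: measure how long sorting takes as input size grows.
    import random
    import time

    def measure(size: int) -> float:
        """Return the seconds needed to sort a list of `size` random numbers."""
        data = [random.random() for _ in range(size)]
        start = time.perf_counter()
        sorted(data)
        return time.perf_counter() - start

    for size in (1_000, 10_000, 100_000):
        print(f"n = {size:>7}: {measure(size):.4f} seconds")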

If one were to form an impression from these two definitions, one might conclude that algorithms are agnostic language objects. In computer science and mathematics, knowing run times and hardware properties provides a structure for analyzing and estimating how programs will crunch data. Far from a mindless process, this structure requires computational and procedural schemata for recognizing the benefits and constraints of implementing programs. One might also conclude that conceptions about algorithms run deep in Western values of logic and organization. Within these definitional parameters is evidence of how people in Western societies think and communicate. There is evidence of a basic relationship of problem and solution. There is also evidence of the limiting factors of what algorithms can and cannot do, and of what algorithms perform and “know” about the real and artificial dimensions of knowledge exchange. 

The Theory   

Algorithms in a rhetorical sense, seen from the view of the author or team of creators, are invented, arranged, styled, memorized (in multiple senses), and delivered. Does it necessarily follow, then, that these non-human language objects, which influence real change in stock markets, financial data, dating sites, and social media spaces, are persuasive in and of themselves? Do computer algorithms persuade?

In order to suggest a persuasiveness of computer algorithms, I first turn to a discussion in critical media studies that has theorized the performativity of algorithms. As a discipline, scholars and researchers in critical media studies explore and examine the critical stances (power and its effects) associated with the infrastructure and content of media. Drawing upon social science and humanities-based methods, the discipline aligns with communications, but some theorists also draw from allied fields like philosophy, rhetoric, sociology, and anthropology, just to name a few. In working within critical media studies, researchers examine a constellation of theoretical, methodological, and practical positions along an axis of what Brian L. Ott and Robert L. Mack define as “doing critical media studies” by employing a “skeptical attitude, humanistic approach, political assessment, and commitment to social justice” (15). The outcomes of such studies illustrate the depth and dimensionality of hermeneutical interpretation of media literature, network analyses of Internet configurations, and cultural studies approaches to lived experiences in media spaces. 

In early critical media scholarship, N. Katherine Hayles explores how computer code impacts the everyday lives of a global citizenry through “intermediation,” or the blurring of boundaries between human and non-human technologies in language acts. In her investigation of the modes and properties of old and new media, along with human speech and written acts, Hayles argues for an imaginary of a computational human consciousness, furthering the “erasure of embodiment” (xi) of posthumans in an age of computing technology. In her theorization of computer code, Hayles claims a performativity, asking readers to accept such a premise because code makes things happen, and, I will add, it makes things happen in the way its creators/programmers designed the code to perform. Part of Hayles’ argument consists of acknowledging the linguistic features of computer code in relation to natural languages, but the other part figures the transaction and mediation of language use among machines and humans as different modalities (in this sense, experience). In Hayles’ words:  

Code that runs on a machine is performative in a much stronger sense than that attributed to language. When language is said to be performative, the kinds of actions it “performs” happen in the minds of humans, as when someone says, “I declare this legislative session open” or “I pronounce you husband and wife.” Granted, these changes in the mind can and do result in behavioral effects, but the performative force of language is nonetheless tied to the external changes through complex chains of mediation. By contrast, code running in a digital computer causes changes in machine behavior and, through networked ports and other interfaces, may initiate other changes, all implemented in the transmission of code. (49–50)

The performative nature of code, in Hayles’ sense, derives from the command logics of the written-only language, i.e., code is finite, exacting, and procedural. Elsewhere, critical media theorist Alexander Galloway lends support to Hayles’ assertions about code’s performative nature: “...Code is machinic [sic] first and linguistic second; and intersubjective infrastructure is not the same as a material one...” (71). 

Hayles seems to suggest a rupture of the variances in symbolic mediation. At philosophical and theoretical levels, the framework for performativity, as discussed by Judith Butler and by Derrida in “Signature, Event, Context,” provides background on the differences between speech and written acts. However, as Hayles argues in My Mother Was a Computer, code “exceeds both writing and speech, having characteristics that appear in neither of these legacy systems” (40). In the philosophical positions advanced by Butler and Derrida, language connects intimately with force, with force being the intentionality, the pre-mediation or awareness, that gives rise to an act and the follow-through of such energy in an exchange. The main difference between Butler and Derrida is how they differentiate force and intentionality within speech acts. For Butler, the notion of performativity draws from John Searle’s work on illocutionary speech acts, or the business of doing rather than representing or describing within context. Derrida, by contrast, privileges writing over speech, questions context, and claims the written word ruptures context because, in part, of its divorce from the author at the moment of the word’s production. While Derrida contends that meaning in speech acts and in writing is ambiguous, thus escaping context, the absence of the author in writing provides endless variables of meaning for a reader interacting with the written word. As Hayles discusses in connection with Derrida’s différance, code does not perform in such theoretical ways because, at the machinic level of computation, the work of computers rests upon precision, not ambiguity of language. Through all of the layers of the interface, from the binary code of 1s and 0s at the machine level, through compilers and backend and frontend languages, code functions within specific registers and contexts to make things happen. Code cannot perform its functions without context. 

For example, if I were to say, use a “block quote” for a lengthy direct quotation, a person may conjure up any of the following: notions of MLA versus APA formatting; left-aligned versus justified type in a document; or even, for the more creatively inclined, a physical block with a quote etched or glued on its surface. However, in HTML5 the element for a block quote, <blockquote></blockquote>, has specific features endemic to the element in code. The machine does not summon creative variances of <blockquote> based on histories and contexts but instead performs the programming of the element in the mode the programmers designed <blockquote> to perform, through all of the frontend and backend languages, compilers, and machine code. Yet, in thinking through the intersubjective mediation of humans attaching multiple meanings to a single word or word-concept, while only one meaning attaches to the element <blockquote>, there is still meaning attached to the element/code when the machine performs the function, however limited that may be in material/computational spaces. 
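
A short sketch of this point, my own illustration using Python’s standard html.parser module rather than any code from the scholarship discussed here, shows the machine responding to <blockquote> the same way on every run, without the interpretive variance a human reader brings to the phrase “block quote”:

    # An illustrative sketch (my own, using Python's standard html.parser module):
    # the parser responds to <blockquote> identically on every run, without the
    # interpretive variance a human reader brings to the phrase "block quote."
    from html.parser import HTMLParser

    class BlockquoteLogger(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "blockquote":
                print("opening a blockquote element")

        def handle_endtag(self, tag):
            if tag == "blockquote":
                print("closing a blockquote element")

    BlockquoteLogger().feed(
        "<blockquote>Code functions within specific contexts.</blockquote>"
    )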

The narrow modality of computer code might lead some to consider computer code a less persuasive (if at all persuasive) form of rhetoric. Yet because of the functionality and performativity of code, for what code does, its power to effect change in machines and human behaviors, there is always some type of meaning attached at the machinic and human levels of understanding. In the application of computer code, specifically algorithms, the context of the production of code ruptures from an authorial moment of creation; this is a given. However, the routine application of the code, the regularity of the code’s application through its command processes, produces its own context in operation, its own residual performative act within precise contexts. At the same time, a reader of the code (whether at the source level or through indirect use at the visual interface of a software application) produces their own set of contexts or iterations. In sum, even at the machinic level, while Derrida may have it that the written word ruptures from its original context of production, from coder to algorithm, code performs, as Hayles would have it, and when algorithms go live, they produce their own machinic contexts. 

In service to the philosophical and theoretical considerations of persuasion, if computer algorithms produce their own machinic contexts, which are not as unique as the human contexts formed when associating concepts with words, then consideration of the performativity of code, of algorithms, takes root. Within the machinic context, the relationship of language and force binds in an ephemeral state. By ephemeral, I mean giving rise to a context for a written act to occur and, with the close of the act, the context of the written act concluding, its spatial and temporal elements dissipating. Such theoretical ground calls to mind Thomas Rickert’s discussions of khôra. In examining the works of Kristeva, Derrida, and Ulmer, Rickert notes that “chōra transforms our senses of beginning, creation, and invention by placing them concretely within material environments, informational spaces, and affective (or bodily) registers” (252). This view illustrates another way of locating code and algorithms within rhetoric’s sphere, as a khôraic being and non-being of what rhetoric encapsulates. While Rickert notes this conversation gives rise to rhetorical invention (for Derrida, rhetoric’s place in invention), it also calls attention, albeit briefly, to the spatial and temporal dimensions in which discourse (and, I would add, the performativity of computer code) resides.

These conversations taken together provide an imaginative inquiry into the locative and performative dimensions of computer algorithms. Algorithms, bound in computer code, are nestled inside the software and hardware of machines. When data inputs into the structure of the algorithm, a type of transactional invention occurs, where, for a moment, the spatial, temporal, and lingual dimensions of the data, algorithm, and circuits come, in some sense, alive (or live), performing ritualized and routine acts for the processing and exchange of information. The reciprocal and reflexive nature of algorithms in transactional inventions highlights the relationships in the network of actants, drawing upon Latour’s actor-network theory. The non-human actant, the computer algorithm, performs in the interaction of inputs and outputs of data. In some sense, Latour’s idea of the “delegation to non-humans,” wherein objects define how people or other objects move or respond, hints at a type of agency. 

Admittedly, even as a digital rhetorician, I struggle with the theoretical transfer of agency to non-human objects because of how agency has been traditionally defined on rhetoric’s scholarly stage. Even in working through the ideas herein, I remain theoretically challenged and suspicious about assigning a persuasive feature to written language. Casting such an argument almost anthropomorphizes code, embellishing and conflating the performative aspects of non-human language objects and procedures. Thus, consider this: If language becomes encoded and attached to a spatial and temporal dimension, and computer algorithms are encoded as active procedures after the input of data, then does it necessarily follow that persuasion takes place? 

Interdisciplinary theorist Lucas Introna, who routinely publishes on the ethics of technology and surveillance, argues that the performativity of computer code should be understood in terms of the encoding process from agent to transmission to receiver, and that, as such, encoded language objects are extensions of agency. The process of encoding, according to Introna, “translates agency (becoming) from one event to another, thereby extending agency/becoming of actors beyond the boundaries of the singular local event” (117). In the creation and development of computer algorithms and computer code, this encoding also carries forward the intentions and designs of the mathematicians and programmers. When the algorithms and/or code execute, the past agency and designs of the creators are carried forward in a transactional invention, giving way to a transformative iteration of old and new contexts. In the age of digital rhetoric, we may come to think of algorithms as quasi-agents carrying forward the agency of human symbolic action. But the changes algorithms produce and effect as a force go deeper than agency and cut at persuasive design. 

If digital rhetoricians use what Introna calls “encoded agency,” then the question of how persuasion operates or functions, if at all, inevitably arises. The three classical appeals, ethos, logos, and pathos, provide interesting grounds for a theoretical case of algorithmic persuasion. If the mathematician or programmer’s agency is encoded and extended into their language acts, which include algorithms and programming languages, then one might ask: What else is encoded in those acts?

In Technofeminism, sociologist Judy Wajcman describes how the encoding of gender occurs in technological products. She traces the design, development, manufacturing, and marketing of the microwave as an example of the gendered nature of material objects. First developed and marketed to single men for quick meals, the microwave was advertised as a “brown” good and placed alongside hi-tech technological and computer devices in department stores. As men rejected the microwave, marketers repackaged the product as a “white” good for homemakers and sold it beside home appliances. This gender encoding brings to mind the transmission of intent. In illocutionary acts, the intention behind a communicative act figures as strongly as the successful transmission of the message. When programmers, or marketers in Wajcman’s example, transmit conscious and unconscious ideological values in the creation of code or objects, the intended force becomes blurred. As Wajcman notes, “Marketing and retailing play a key role in framing demand: ‘there is an unclear dividing line between accurately representing the customer, constructing the customer, and controlling the customer’” (47). In service to computer code, one might question the encoded messages within algorithms from the perspective of representation, construction, and control of information and knowledge exchange. 

The process of encoding is further reflected in the work of Tara McPherson in her chapter “Why Are the Digital Humanities So White? Or Thinking the Histories of Race and Computation” in Debates in the Digital Humanities. McPherson illustrates the relationship between the rule of modularity used in UNIX (discrete code that can be chunked and interchanged in clean interfaces) and the social and political segregation of races in the United States. She calls attention to how “...the organization of information and capital in the 1960s powerfully responds—across many registers—to the struggles for racial justice and democracy that so categorized the United States at the time.” From this view, UNIX programmers were historically, socially, and politically situated in a powerfully raced and classed political landscape of their time, much of which is subsumed, albeit unconsciously, in the encoding of a program. 

These two examples help illustrate both conscious and unconscious encoding of social, cultural, and political ideologies in non-human objects. Whether it is gender or race, ableism, class, or Western values of organization and logic, suasive appeals attach during the encoding process of writing computer code. Computer algorithms and code operate by transmitting the cultural values and beliefs of programmers through the structure of the code language to the execution of the code.

Three Features of Persuasive Computer Algorithms

With a view toward defining persuasive computer algorithms, a broad definition is now in order. Persuasive computer algorithms are written-only language objects with encoded agency, transactional invention, and the embedded values, beliefs, and logics of the three rhetorical appeals, performing functions that provide the grounds for human and non-human change. However, persuasive computer algorithms have three additional definitions that I share for future theoretical treatment and discussion. 

First, algorithms can be a systematic way of processing and organizing information for persuasive means, in that their logic aids in ordering how humans and machines experience the world around them. Second, algorithms are decisive in inclusionary and exclusionary practices, using or discarding data that does not fit the structure of the algorithmic model. In this sense, if an algorithm’s structure only allows for data collection from websites with over 20 hyperlinks, websites with 20 or fewer links will be excluded from the dataset. Finally, algorithms are quasi-objective ideological structures, in that the creation of an algorithmic structure relies upon the knowledge and experience base of the creator(s), and ideological bias will always seep through in the creation of the structure, for better or for worse. 
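
A minimal sketch in Python, using the hypothetical 20-hyperlink threshold above with invented data, shows how inclusion and exclusion are built into an algorithm’s very structure:

    # A minimal sketch of the inclusion/exclusion feature described above, using
    # the hypothetical 20-hyperlink threshold; the sites and counts are invented.
    websites = {
        "site-a.example": 34,  # number of hyperlinks found on each site
        "site-b.example": 12,
        "site-c.example": 27,
        "site-d.example": 20,
    }

    LINK_THRESHOLD = 20

    # Only sites with more than 20 hyperlinks enter the dataset; the rest are
    # silently excluded by the structure of the rule itself.
    dataset = {site: links for site, links in websites.items() if links > LINK_THRESHOLD}

    print(dataset)  # {'site-a.example': 34, 'site-c.example': 27}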

Algorithms and computer code are rhetorical in that algorithms facilitate and function in complex acts of information exchange and retrieval across human life domains of thought and action. Because algorithms embody logical procedures for action, they have persuasive abilities in their systematic functionalities. The sequence of operations built into an algorithm leads a machine or human to perform and collect data that fit the parameters of the algorithm. Algorithms, like the syllogisms Kevin Brock discusses in his research, guide human and machine thought and action. Algorithmic persuasion is grounded in the procedures of algorithms, including how they include and exclude information. 

Along with algorithmic processes, algorithms have a design of inclusion and exclusion built into their command registers or, if you will, their linguistic structure. This orientation is somewhat different from Knuth’s description of input and output in his five features of algorithms. For Knuth, input and output rely upon external data to fill the algorithm (input) and the quantities of data produced because of the algorithmic process (output). At an abstracted and primitive level, an algorithm is a stand-in for a function of inputted properties.

At a linguistic level, a basic equation (which fits the definition of a basic algorithm) has predetermined rules of governance for its operation. The structure of the equation may allow an infinite number of values for its variables; however, the representation of certain signs, an addition sign, for example, excludes all other types of mathematical functions. At the most basic levels, algorithmic logic operates to include and exclude by its very structure and functions for inputted data, hinting at logos.

The creators or developers of an algorithmic structure also rely upon their knowledge and experience in developing a process for computation. When Facebook developed its newsfeed feature in 2006, the computer scientists and engineers working for the social media company also created a method for ranking which posts a user would experience in their newsfeed more often than others. Facebook’s EdgeRank algorithm, commonly expressed as a sum over the edges e of u_e × w_e × d_e (where the sum is the rank, u_e is affinity, w_e is weight, and d_e is time decay for each edge e), calculated data based on: 1) affinity: how close or how many times users came into contact (the edge) with each other through viewing pages, commenting on posts, etc.; 2) weight: the rank of the type and frequency of contact from the edge; and 3) decay: the time since last contact on the edge. This algorithm allowed users to experience a personalized newsfeed based on measured criteria (or data) such as friendship settings under the “notifications” drop-down menu, viewing other users’ timelines, hiding posts, the types of posts users “like,” and even the connection speeds of a user’s network and device type (see McGee). Facebook engineers based these criteria on beliefs about users wanting more frequent contact with certain people, along with calculating device information so users could view newsfeed content on slower connections. 
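
To make the affinity, weight, and decay logic concrete, the following Python sketch ranks a few invented posts; it illustrates the published formula only, with one simple choice of decay function, and is not Facebook’s proprietary implementation (which, as McGee notes, has since grown far more complex):

    # A rough sketch of the affinity x weight x decay logic described above.
    # This is not Facebook's implementation; the posts and values are invented.
    posts = [
        # (description, affinity u_e, weight w_e, age of the edge in hours)
        ("close friend's photo", 0.9, 1.5, 2),
        ("acquaintance's status update", 0.2, 1.0, 1),
        ("older post from a liked page", 0.6, 1.2, 48),
    ]

    def edge_score(affinity: float, weight: float, age_hours: float) -> float:
        """Score one edge as u_e * w_e * d_e, with a simple choice of time decay."""
        decay = 1.0 / (1.0 + age_hours)
        return affinity * weight * decay

    # A post's rank sums its edge scores; each invented post here has one edge.
    for description, affinity, weight, age in sorted(
        posts, key=lambda p: edge_score(p[1], p[2], p[3]), reverse=True
    ):
        print(f"{description}: {edge_score(affinity, weight, age):.3f}")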

On the surface, this type of personalization, founded on ideological beliefs about relationships, may seem beneficial for keeping connected with those one contacts on Facebook more frequently. Yet as activist Eli Pariser reported in his book The Filter Bubble, this personalization creates more dissonance between users and their perceptions of others, because Facebook filters out content from users who may have dissimilar views or perspectives, perhaps as a result of the affinity part of the algorithm working too well. The concern about this filtering rests with users experiencing an informational myopia in which the potential for dissent and public discourse becomes a lesser priority than uploading images of the family pet. Ideology works strongly within algorithms since individuals create, order, and structure their design to parse data. 

In my view, these three additional definitions (algorithmic processes, inclusion/exclusion, and ideology) seed future theoretical scholarly work on persuasive computer algorithms. Digital rhetoricians could educate and advocate a good deal more about computer algorithms if we thought about them in terms of agency and persuasion, thereby elevating the theory and practice of our subfield alongside other disciplines, including the digital humanities but also sociology, political science, communications, and media studies. We can also rely upon historical scholarship of critical engagement, whether that is critical literacy or theory in practice, to help our colleagues and students question, structure, and reshape the design structures consciously and unconsciously embedded in computer algorithms. 

Rhetorical Code Studies 

Another reason I cast this argument rests with the formation of and sustained scholarship in critical code studies. The emergence of critical code studies (“CCS”) provided critical theorists a sub-discipline for investigating computer code’s relationship with cultural, social, and political content through hermeneutical analysis. Described by Mark Marino as “a sign system with its own rhetoric...,” code is a lingual mode for humanities scholars and researchers to examine. Since Marino’s call, scholarship in critical code studies has flourished with multiple conferences, articles, and a book-length CCS manuscript. 

While critical code studies’ methodological focus on hermeneutics has provided valuable and timely critiques and theories, returning to Marino’s contribution on code and rhetoric invites work from rhetoricians. As a digital rhetorician, I simply cannot ignore the invitation to treat code rhetorically. Some of the scholarship in critical code studies analyzes code from a hermeneutical perspective, which casts code as a static object for examination. While it is not my position in closing to provide a critique of the use of hermeneutics with code, I am beginning to appreciate how such an application separates code from its action in its natural environment and places code in word processors for scholarship, poetry, and play. Certainly, there is value and merit in such scholarly models. However, if our colleagues from literature and critical theory use the training and tools developed in their home disciplines in the field of critical code studies, then why not have digital rhetoricians use their social scientific and humanities-based theories and methodologies to study, theorize, and play with algorithms and code? 

Turning to a rhetorical code studies in rhetoric and composition research, one might ask after the focus of such scholarly and teacherly attention. What makes a rhetorical code studies different (if at all) from a critical code studies? Certainly one variance would come from methodological approaches, with critical code studies relying upon hermeneutics and rhetorical code studies offering such an approach along with social science methodologies. But surely, the sub-discipline of digital rhetoric is primed to consider the social and cultural values embedded in algorithms, how they operate, and how they effect change in machine and human behaviors. Thus, how might a rhetorical code studies treat social and cultural theories alongside non-human theories of machinic contexts? Additionally, how might focusing scholarly attention toward rhetorical and theoretical treatments of computer algorithms open interdisciplinary conversations and relationships? How might such perspectives attract complementary and divergent views? Since algorithms effect changes in machine and human behaviors, as the two scenarios that frame this article illustrate, how might those allied with rhetoric and writing studies gift a path toward greater knowledge about the formation, creation, and use of computer algorithms in myriad digital and scholarly spaces?

  • 1. Estee thanks Justin Hodgson and Scot Barnett for the care, consideration, and work of organizing the Indiana Digital Rhetoric Symposium of 2015 and for putting this collection together. She is also grateful for the kind and generous reviewer feedback, along with the editorial team's copyedits and adjustments.
  • 2. The report’s authors explicitly define sensor technologies, describing the “Internet of Things as sensors and actuators connected by networks to computing systems. These systems can monitor or manage the health and actions of connected objects and machines. Connected sensors can also monitor the natural world, people, and animals” (1). While the report does not mention algorithms, computational processes drive data collection in these sensing technologies.
  • 3. In his 2013 doctoral dissertation, “Engaging the Action-Oriented Nature of Computation: Towards a Rhetorical Code Studies,” Kevin Brock gives the name “rhetorical code studies.” Brock defines this sub-discipline at the intersection of “rhetoric, software, and critical code studies” for the purpose of “articulat[ing] how rhetoric's interest in persuasive action could be located in software and code as objects of study” (abstract).
Works Cited

All Things Considered. “What Makes Algorithms Go Awry?” NPR.org, 7 June 2015, http://www.npr.org/sections/alltechconsidered/2015/06/07/412481743/what-makes-algorithms-go-awry.

Amazon.com Help. “Improve Your Recommendations.” https://www.amazon.com/gp/help/customer/display.html?nodeId=13316081. Accessed 4 July 2016.

Beck, Estee N. “The Invisible Digital Identity: Assemblages in Digital Networks.” Computers and Composition, vol. 35, Mar. 2015, pp. 125–140.

Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. MIT P, 2007.

Brock, Kevin. Engaging the Action-Oriented Nature of Computation: Towards a Rhetorical Code Studies. Dissertation, North Carolina State University, 2013, http://repository.lib.ncsu.edu/ir/handle/1840.16/8460.

Brown, James J., Jr. “The Machine That Therefore I Am.” Philosophy and Rhetoric, vol. 47, no. 4, 2014, pp. 494–514.

Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. Routledge, 2006.

Colborn, Ken. “Guide to Personalized Search Results.” Portent, 28 Aug. 2014, https://www.portent.com/blog/seo/personalized-search-results.htm.

Cuomo, Chris, et al. “‘GMA’ Gets Answers: Some Credit Card Companies Financially Profiling Customers.” ABC News, 28 Jan. 2009, http://abcnews.go.com/GMA/TheLaw/gma-answers-credit-card-companies-financially-profiling-customers/story?id=6747461.

Dilger, Bradley, and Jeff Rice, editors. From A to <A>: Keywords of Markup. U of Minnesota P, 2010, https://www.upress.umn.edu/book-division/books/from-a-to-a.

Eyman, Douglas. Digital Rhetoric: Theory, Method, Practice. U of Michigan P, 2015, https://muse.jhu.edu/book/40755.

Facebook. “How News Feed Works.” Facebook, https://www.facebook.com/help/327131014036297. Accessed 4 July 2016.

Galloway, Alexander R. The Interface Effect. Polity, 2012.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie et al., MIT Press, 2014, http://en.youscribe.com/catalogue/tous/professional-resources/it-systems/the-relevance-of-algorithms-1979313.

Hawisher, Gail E., et al. Computers and the Teaching of Writing in American Higher Education, 1979-1994: A History. Ablex Pub., 1996.

Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts. U of Chicago P, 2005.

Introna, L. D. “The Enframing of Code: Agency, Originality and the Plagiarist.” Theory, Culture & Society, vol. 28, no. 6, Nov. 2011, pp. 113–141.

Kirchner, Lauren. “When Discrimination Is Baked Into Algorithms.” The Atlantic, Sept. 2015, http://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/.

Knuth, Donald E. The Art of Computer Programming. Vol. 1, Addison-Wesley Professional, 2011.

Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press, 2005.

Losh, Elizabeth M. Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes. MIT P, 2009.

Mangalindan, JP. “Amazon’s Recommendation Secret.” Fortune, 30 July 2012, http://fortune.com/2012/07/30/amazons-recommendation-secret/.

Manyika, James, et al. “Unlocking the Potential of the Internet of Things.” McKinsey & Company, June 2015, http://www.mckinsey.com/business-functions/business-technology/our-insights/the-internet-of-things-the-value-of-digitizing-the-physical-world.

Marino, Mark C. “Critical Code Studies.” Electronic Book Review, http://www.electronicbookreview.com/thread/electropoetics/codology. Accessed 4 July 2016.

McGee, Matt. “EdgeRank Is Dead: Facebook’s News Feed Algorithm Now Has Close To 100K Weight Factors.” Marketing Land, 16 Aug. 2013, http://marketingland.com/edgerank-is-dead-facebooks-news-feed-algorithm-now-has-close-to-100k-weight-factors-55908.

McPherson, Tara. “Why Are the Digital Humanities So White? Or Thinking the Histories of Race and Computation.” Debates in the Digital Humanities, edited by Matthew K. Gold, U of Minnesota P, 2012, http://dhdebates.gc.cuny.edu/debates/text/29.

Ott, Brian L., and Robert L. Mack. Critical Media Studies: An Introduction. Wiley-Blackwell, 2010.

Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. Penguin Press, 2011.

“Personalized Search for Everyone.” Official Google Blog, https://googleblog.blogspot.com/2009/12/personalized-search-for-everyone.html. Accessed 4 July 2016.

Rickert, Thomas J. “Toward the Chōra: Kristeva, Derrida, and Ulmer on Emplaced Invention.” Philosophy and Rhetoric, vol. 40, no. 3, 2007, pp. 251–273.

Searle, John R. Speech Acts: An Essay in the Philosophy of Language. Cambridge UP, 1970.

Sedgewick, Robert, and Kevin Wayne. Algorithms. 4th ed., Addison-Wesley Professional, 2011.

Stolley, Karl. “Source Literacy: A Vision of Craft.” Enculturation, vol. 14, http://enculturation.net/node/5271. Accessed 4 July 2016.

Tufekci, Zeynep. “Algorithmic Harms beyond Facebook and Google: Emergent Challenges of Computational Agency.” Colorado Technology Law Journal, vol. 13, 2015, p. 203.

Vee, Annette. “Understanding Computer Programming as a Literacy.” Literacy in Composition Studies, vol. 1, no. 2, Oct. 2013, pp. 42–64.

Wajcman, Judy. TechnoFeminism. Polity, 2004.

Zappen, James P. “Digital Rhetoric: Toward an Integrated Theory.” Technical Communication Quarterly, vol. 14, no. 3, July 2005, pp. 319–325.