Community Guides: Disrupting Oppression in Comment Threads on Social Sites
User comments online often devolve into vituperative attacks on race, gender, sex, and other forms of identity. From name-calling to flaming and trolling, this kind of online incivility is a point of tension for participants and of interest to communication scholars. In this critical discourse analysis of technology, documentation, and users, we examine uncivil discourse regarding weight-based discrimination, often called “fat shaming.” We focus on fat shaming because incivility online, particularly toward women, often attends to the physical body. Given the real-world consequences of incivility and the prevalence of the online social sites where it can occur, in this article we look at the discursive practices of online social sites and their users. We aim to understand how, in the words of van Dijk, “social power abuse, dominance, and inequality [is] enacted, reproduced, and resisted” (352). What we learned about discursive structures by examining online comments is particularly revealing of the role of what we call community guides (detailed explicitly in this article) in bounding discourse in social media spaces (we examined Reddit.com, Jezebel.com, and YouTube.com, but an analysis of community guides can be applied to any such site). As a result, activists may find opportunities to use technology as an agent of change through the creation of sites that allow community guides to control uncivil discourse.
“Bias is communicatively constituted” (Kinser 246); that is, our discursive choices—the language we use, the words we choose, and so on—help construct ideals about the world. While Amber E. Kinser speaks specifically about gender, taken further, discursive choices establish and maintain identity generally. These constructs can be positive, such as efforts to replace outdated, sexist language with gender-neutral choices. Or, alternatively, they can be negative, serving to reinforce bias through oppressive discourse. One of the online places where we see this sort of bias presented most overtly is in comment threads. User comments online often devolve into vituperative attacks on race, gender, sex, and other forms of identity (see Jane, 2014, for an extended discussion of what she terms “e-bile”). In 2013, PopularScience.com shut off comments for most articles because of “the nasty effect,” whereby uncivil comments “not only polarized readers but often changed a participant’s interpretation of the news story itself” (Brossard and Scheufele). Wary of comments “undermining bedrock scientific doctrine…within a website devoted to championing science,” PopularScience.com aimed to block the consequences of online incivility (LaBarre). Similarly, the recent #gamergate scandal began as an online personal attack and devolved into rape and death threats via social media, resulting in noted feminist scholar Anita Sarkeesian canceling a speaking event, where she had planned to discuss misogyny in game culture, because of threats of gun violence. Even earlier, blogger Kathy Sierra received death threats after writing as a woman in the heavily male-dominated field of technology.
In Technologies of the Gendered Body: Reading Cyborg Women, Anne Balsamo argues that in its “production” the body is both a product and a process: “As a product, it is the material embodiment of ethnic, racial, and gender identities, as well as a staged performance, of beauty, of health…[as] a process, it is a way of knowing and marking the world, as well as a way of knowing and marking a 'self'” (3). Susan Bordo adds that “through routine, habitual activity, our bodies learn…which gestures are forbidden and which required, how violable or inviolable are the boundaries of our bodies, how much space around the body may be claimed, and so on” (16). Bodies, and specifically women’s bodies, have long been sites where struggles over language choices and discursive patterns have played out. Indeed, the attention paid offline to women and women’s bodies carries over and in some cases devolves into violent rhetoric online, perhaps because technological spaces often disembody us, as we are frequently represented only by avatars or screen names. From name-calling to flaming and trolling, this kind of online incivility is a point of tension for participants and of interest to communication scholars. Incivility in social sites, or the “nasty, offensive, and democratically damaging language that adds little substantive value” to discussions (“Engaging News” 4), hampers the free exchange of ideas in a community. Some comment threads “degenerate so much that the female participants might have had viable hostile environment sexual harassment claims if the same exchanges had occurred in a discussion forum maintained by their employers” (Brooks 51).
Such observations lead us to examine ways discursive practices support, maintain, and disrupt uncivil discourse regarding identity in online comment threads on three sites: Reddit.com, Jezebel.com, and YouTube.com. While our approach could be applied to comment threads in most sites and to any number of identities, in this critical discourse analysis of technology, documentation, and users, we examine uncivil discourse regarding weight-based discrimination, often called “fat shaming.” Aside from our own concerns about how people construct women via institutional, educational, and social discourses, conversations about weight (an issue connected to constructions of women) provide sites for observing uncivil discourse. We focus on fat shaming here because incivility online, particularly for women, often attends to the physical body: A study of 2.2 million social media posts by Chou, Prestin, and Kunath reveals that posts on Twitter and Facebook “were dominated by derogatory and misogynist sentiment, pointing to weight stigmatization,” while blog and forum posts “contained more nuanced comments [with themes that] included humor, education, and positive sentiment countering weight-based stereotypes” (314). Their study illustrates that fat shaming is a common, yet complicated, occurrence across social media and thus an area rich for further research.
Although we look at fat shaming in comment threads, these examples show how uncivil discourse in online spaces works and how different identities come under attack. (For example, see Bruce Drushel’s analysis in this journal of how the media frames LGBT individuals, often negatively, through discursive elements such as “myths, metaphors, narratives, catchphrases, exemplars, depictions, and aural and visual imagery.”) Influential discourse about weight comes from people of authority, including governments, medical professionals, and educators, and while messages from these sources are generally received as wisdom, a growing body of literature points to tensions among institutionalized discourses delivered with the express purpose of improving health, bodies, and weight. Such messages can be detrimental: The regulatory discourse of campaigns for “healthy eating” and “active living,” meant to instill good choices, actually fosters “disordered eating, fat hatred and generalized destructive practices and obsessive anxieties about the body” (Beausoleil 102). BMI charts and the USDA’s Choose My Plate (an update of the former food pyramid) are examples of how powerful groups influence norms, habits, and thoughts through discourse. Messages about weight come from medical professionals during office visits and government programs meant to inform the public; educational institutions repeat them in school lessons, all working to indoctrinate bodies from early childhood on. Geneviève Rail's interviews with teens demonstrate that their constructions of health are linked not to well-being but to body shape and that those teens use those views to regulate themselves and others. To them, health means not being overweight, particularly for females, and not being too skinny, particularly for males. Thus institutions entrusted with the health and best interests of citizens can actually dispense potentially harmful communications about individual bodies.
The confluence of messages from authorities and from people with social power turns concerns about health into social judgments about weight. Recent longitudinal research illustrates that weight-based discrimination results in declines in health and life satisfaction alongside increases in disease and loneliness (Sutin, Stephan, Carretta, and Terracciano). Online sites now provide places for those with social power to normalize expectations about weight through emotional, sometimes judgmental, messages. Though some of these messages are presented in order to relate to or educate people, others appear to emerge out of hate. While messages about weight should be examined more closely, in general, discursive choices related to oppressive and activist language online should also be a focus for research—searching, as Jane suggests, for “material interventions and remedies” (542). That is, feminist activists have indeed examined weight-related messages disseminated through or discussed in online technologies, but more generally, these activists have critiqued the ways that digital and online technologies can be both spaces where oppression occurs and spaces where it is disrupted (see Edell, Brown, and Tolman, 2013; Keller, 2012; Perez and Williams, 2014; Rapp, Button, Fleury-Steiner, and Fleury-Steiner, 2010).
Given the real-world consequences of incivility and the prevalence of online social sites where it can occur, in this article we look at the discursive practices of online social sites and their users. We aim to understand how, in the words of van Dijk, “social power abuse, dominance, and inequality [is] enacted, reproduced, and resisted” (352). We hope that by doing so, we might denaturalize or at least disrupt practices that contribute to oppression. We chose to examine online comments surrounding one episode of the popular television show Louie that dealt with weight and physical bodies (both men’s and women’s). That is, to understand how activism might occur in environments where governing discourses are distributed individually rather than wielded by a powerful entity (e.g., an employer or media conglomerate) and to look at the discursive power of acts against incivility, we analyze community guides and comment threads related to Season 4, Episode 3 of Louis C.K.’s show Louie, “So Did the Fat Lady” (aired May 12, 2014). This episode deals with the title character’s inability to accept an overweight woman’s romantic advances, as he believes he is better suited for a more conventionally attractive mate. In numerous comment threads related to this episode (on the websites Jezebel, Reddit, and YouTube), discussion revolved around issues of weight and obesity, health, sexual attraction, and so on. The Academy of Television Arts & Sciences recognized the importance of the messages circulating within this particular episode when it awarded it the 2014 Emmy for Outstanding Writing for a Comedy Series.
Our initial interest was in attending to the potential “fat shaming” discourse that could emerge in social media regarding this episode. Ultimately, we use this episode as a case study; any number of other media or episodes could be analyzed for the presence of uncivil discourse as well. Similarly, our chosen methodological framework, critical discourse analysis (CDA), could be applied to numerous other topics and subcultures. What we learned about discursive structures by examining online comments is particularly revealing of the role of what we call community guides (detailed more explicitly later in this article) in bounding discourse in social media spaces. As a result, activists may find opportunities to use technology as an agent of change through the creation of sites that allow community guides to control uncivil discourse.
We chose CDA to guide us because it is generally used to understand, expose, and ultimately challenge injustice. Critical discourse analysts look at how those in power use discourse and contexts to form shared cognitions that contribute to people's perception of normality. In one approach to CDA, van Dijk suggests that normalizing happens in three stages: control of access to discourse, control of discourse interactions and structures, and then control of contexts and strategies that contribute to shared thoughts and values. For example, CDA has been applied to discursive choices related to women such as news media and women’s health messages (McGannon and Spence, 2012), representations of feminism in Estonian print media (Marling, 2010), and maternity care politics in Australia (McIntyre, Francis, and Chapman, 2012). These case studies point to the versatility of CDA as a methodological approach to studying representations of normalcy (particularly with regard to women, women’s bodies, and women’s health) in textual discourse. Within rhetoric and composition especially, Huckin, Andrus, and Clary-Lemon argue that CDA is a valuable methodology because it “explicitly draws our attention to issues of power and privilege in public and private discourse” and it “provides a lens with which the researcher can coordinate the analysis of larger (macro) political/rhetorical purposes with the (micro) details of language” (111). Because of CDA’s abilities to help us focus on issues of social justice and activism, we found it a useful framework for our study of discursive choices that support or disrupt uncivil discourse in online technologies.
Critical discourse analyses frequently focus on uncovering institutionalized abuses of power. They start from and then demonstrate what they see as a problem, often a socio-political one, leading critics like Harry G. Widdowson to condemn such analyses as biased commentary based on cherry-picking details to support the analysts' points. To remedy these problems, Widdowson argues that authors should explicitly state their methodologies, that those procedures be replicable, and that they be applied consistently. Our analyses begin from awareness of a problem: institutions govern bodies, as evidenced by the studies mentioned earlier and, as one example, the pervasive use of BMI charts. That the public has been socialized to govern bodies is evidenced by the presence of fat shaming in, for example, a YouTube comment thread we analyze in this article that includes 331 references to fat or obese women (often embedded within personal insults). To determine what to analyze in the content of the comment threads we selected from Jezebel, Reddit, and YouTube, we each individually coded the texts, discussed them, and then refined our themes. We repeated this process three times. To further indicate that we did not cherry-pick our examples, our screenshots of comment threads demonstrate both hate and a range of other responses. This content analysis of user comments demonstrates ways users control discourse and contribute to the normalizing process, in this case by establishing or reinforcing mores about body weight. Individual comments, and their accumulation as a collection of community thoughts, do not exist alone: user comments are situated within their forums.
Online social sites contribute to the ways discourse is accessed, controlled, and maintained, and to the ways discourse itself becomes controlling. Thus, to gain a deeper understanding of incivility in comment threads and of ways to become activists against it, analyses should also focus on other community guides, such as the technological environments of comments and the communities’ user documentation.
We put forth “community guides” as our focal point for comprehending incivility and potential strategies for resisting and disrupting it. Community guides are those elements that direct or lead a connected group of people online. “Community” may refer to people in the same place: for instance, in our analysis, community includes the users of a particular online site (e.g., the community of Jezebel readers), people interested in the same thing (e.g., fans of Louis C.K.), or those with an affinity for each other based on shared beliefs or values (e.g., those who hate obesity and overweight people). “Guides” refers to the things that direct and shape communities and their individuals, such as behaviors, habits, and beliefs. Together, and within an understanding of identity as socially and discursively constructed, “community guides” are those elements (largely textual, but oftentimes multimodal) that influence norms in a specific online environment. Any analysis of community guides must first identify those guides. As we have discussed, community guides for this analysis include technologies—the social sites we examine, with their interfaces, features, and structures; documentation—especially terms of service and community guidelines; and users. Within these analyses, we paid special attention to those markers van Dijk suggests contribute to acquiring discursive power, including participants' goals, knowledge, opinions, and attitudes, speech acts, topic and topic change, style, and volume (such as upvoting or downvoting).
First, technology design guides behavior and, thus, in some cases, guides discourse. Technology includes but is not limited to interface and feature design as well as workflow for use. For example, in many sites users can upvote or downvote content, indicating assent or dissent. These publicly visible votes form “permanent (although editable), well-formed and hierarchical” records of the community’s responses to newsworthy topics (Weninger 1), thereby teaching the community what is important. Further, anonymity online allows for higher levels of incivility; while it can provide fewer barriers to participation, it can also “curb social inhibitions” and “result in highly offensive rhetoric” (Reader 497-8), contributing to forming an environment where it is easier to be disrespectful. Thus interfaces that invite or block anonymous commenting can impact the discourse occurring in that space. Overall, interfaces matter in setting the boundaries and goals of discourse in technological spaces. As Selfe and Selfe emphasize, technologies are not neutral but rather politically charged and situated.
Second, official and unofficial documents in online communities guide behavior. To address incivility, most online spaces have terms of service documents and community guidelines that outline acceptable behavior. However, these sometimes exist in tension with actual behaviors observed in the comment threads. For example, Jezebel once described its “guiding rule” as avoiding commentary on weight: “We think women are taught to hate themselves at a very young age…So we don't say misogynist things about women's weight” (Johnson). But while Jezebel staff writers may align their prose with these guidelines, others do not always hold themselves similarly accountable, leading to continued racist, sexist, and oppressive commentary circulating through these technologies. Complicating the issue, research shows that users do not always read terms of service for the technologies they use, thus the standards outlined in these documents may not reach intended audiences (Vie).
Finally, individual users can guide others’ behaviors and discourse. Social sites contain users whose “emotional and philosophical ownership” of the sites creates community (Lauterer 52), but at the same time, others can use online social spaces for bullying. Users who already receive institutionalized messages against them may thus be additionally demeaned by peers and the public. In aggregate, these messages institutionalize demoralizing discourse such as fat shaming and hate speech. However, while incivility in comment threads frequently occurs, the community orientation of many social sites can serve to control discourse and keep incivility in check.
In this article, we analyze the discursive practices of technology, documentation, and users of three sites (Jezebel, Reddit, and YouTube) through a critical discourse analysis lens. First, we focus on the technologies: How are the sites organized? How do individuals gain access? What can users do there? Next, we consider documents about appropriate behavior. Finally, we address the sites’ users. Who can talk in these spaces? How do users work as community guides and assist in the control of discourse? Given that we know that oppression frequently happens in comment threads, we analyze how discursive practices influence discussions about weight. We examine how interfaces, community standards, and users may disrupt oppression in comment threads. Online social spaces are of particular interest because norms are not institutionalized through hierarchical structures; rather, the social space naturalizes thinking and behaviors through collectives. As a result, while comment threads in online technologies are often used to oppress various minority groups, individuals may harness the power of the interfaces and community guides to subvert such oppression.
In this section, we address salient elements of the three sites’ technological interfaces, features, and functions that affect discourse, particularly those that allow for the promotion of uncivil discourse. Interfaces “are thoroughly rhetorical: Interfaces are about the relations we construct with each other—how we perceive and try to shape each other—through the artifacts we make” (Wysocki and Jasken 33). We think it is important to expand Anne Frances Wysocki and Julia I. Jasken’s formulation to include technology, acknowledging that features and functions work along with the interface. While each site’s genre differs, each site’s owners provide a forum for commentary guided by its mission, management, and design. In each case, the companies control who can speak but tend to exert that control only when laws, policies, or community standards have been broken.
Jezebel, part of The Gawker Media Group, is a multi-authored blog focusing on women’s interests composed by Jezebel editors and staff writers. Access to discourse is controlled in that in order to comment users must sign up through Kinja, Gawker Media’s discussion platform, or connect their Twitter, Facebook, or Google accounts to Kinja. Once users have accounts, they can participate across Gawker’s various sites (io9, Deadspin, Lifehacker, Gizmodo, Jezebel, and others). Users can comment anonymously using an additional Kinja account called Burner. Like burner cell phones that allow individuals to avoid leaving an activity trail, Burner accounts can be used multiple times but are not tracked, archived or linked, nor do they display personal details.
Thus, while the comments are open and public, in order to comment, one must become part of the Gawker Media community and use an additional interface, Kinja. Kinja serves as a kind of community gatekeeper because once a user creates an account, any comment made (as well as the content attached) using Kinja’s platform is publicly viewable by clicking on a username (Figure 4). The interface allows readers to view a user’s commenting history across all sites that use Kinja. As a result, users’ comments create publicly accessible discourse that functions as a persistent online ethos; it can also illustrate the communities within which the user is actively engaged, a visible representation of the social networks where discourse circulates. This structure of discourse emphasizes individual character because users are often seen through more than their local comments.
Using a blog interface, Jezebel invites readers to access discourse by responding to editorial content, yet the Kinja interface makes it difficult to see individual comments as part of a larger conversation. Only the first five comments are displayed in each comment group, requiring visitors to click “see more replies” to view the rest. The comments are not indented as threads typically are in other blogs, though each new comment that is not simply a reply to a previous comment receives a small header stating “[username] started this thread” (Figure 1). The structure of this technological environment emphasizes individuals over the community. For one, the central content of the site is presented by the Jezebel.com staff as a single entity. Additionally, it is easier to learn about individuals, as formed by their discourses across sites displayed in one place, than it is to thread a conversation in order to follow it. The blog format, however, makes it clear that the main voice and vision are those of Jezebel.
Users who have signed up for alerts receive emails when their posts receive responses, which may make conversation easier on a user-to-user basis, but responses may not become part of the thread due to new moderation policies. As we discuss later in the section on users, most comments model the ones that come before, and little discussion occurs after the first few days of an initial blog post. As a commenting interface, Kinja makes following the flow of a collective discussion tedious (Figure 2). Further, it flattens participation because it does not emphasize conversations or user topic changes (user-controlled discourse). Instead, it privileges the original post and treats subsequent comments not as conversations but as responses to the editorial content, which enhances the control of discourse by Jezebel staff. In this sense, control of discourse rests in the hands of Jezebel staff, who can use it to oppress or, as is typical of Jezebel and has been demonstrated by recent interface changes, to fight oppression.
Due to a recent trolling attack using pornographic images, all Gawker Media sites have re-introduced a pending comment system, a further control of discourse to shape the community in the image of Jezebel. Certain users—with a history of longtime fruitful participation and followed by Jezebel—can comment without editorial approval. Other comments appear in gray text as part of a pending queue, visible only to readers who “proactively click ‘see all’ to know what lies beyond [the approved comments]” (Jezebel Staff, “What Gawker”). Approved commenters can promote comments out of the queue if they choose to read through and filter them. Replies to comments are also evolving as part of this process. Users (presumably those outside the pending queue) will be notified if a response to their comment is part of the queue and can choose to reveal the comment or not.
In many ways, Jezebel’s trolling attack has created a tighter sense of community as the moderation system recognizes longtime users. However, the system still relies on editorial oversight, though approved users can actively decide which comments make their way out of the pending queue. Thus, on Jezebel, editorial staff controls the content and the commenting platform, including which comments are awarded stars; however, a sense of community exists among commenters as they support, challenge, and share their opinions. This isn’t to say that uncivil discourse does not occur; it is clear from the recent trolling attack that it does. What is clear, though, is that such discourse and similar attacks often come from outside of the community of users, not longstanding members who comment often, have Kinja usernames, and have a documented commenting history. It seems that the controls of discourse in this site and structures that privilege users who are most aligned with Jezebel work to mitigate incivility.
In 2013, YouTube announced that it reached more than one billion unique users a month, meaning that “one out of every two people on the Internet visits YouTube” (The YouTube Team, “YouTube Hits”). YouTube seems to give up control of access to discourse because it is open to anyone, although commenting is limited to account holders. Aside from access, controls of discourse stemming from the system and interface design of YouTube, which are sometimes read as a control of power or identity, include the structure of comment threads; the ability to directly reply to someone or give a thumbs up or down on posts and comments; the need for an account in order to post; and the ability for anyone with an account to post a video comment.
Since January 2011, when the old YouTube system was sunsetted, Google has controlled access to usage, and thereby discourse, by requiring users to obtain a Google account in order to subscribe to channels, create video playlists, create a channel, upload videos, or comment on videos. While current users can choose aliases, this was not always the case. When migrating to the new system, some users felt forced to use their real names or names they did not want publicly available. Users who had not voluntarily switched to the new system were forced to migrate in order to do more than view videos, and were thereby forced to use existing Google and, later, Google+ accounts. Consequently, some users who may have created Gmail accounts in 2004 using real names suddenly appeared stuck using a real name in YouTube—a space in which they may have wanted to remain anonymous. Users expressed frustration over Google's attempt to control their names through usernames such as “NoIDon'tWantTo UseMyRealFrickingName” and “fuckinggoolglemangwtfiswrongwithyou.”
Users could have created new accounts but would have lost years of playlists, all subscribers, and all comments and posts collected during that time. Because user identities and histories were wrapped up in existing YouTube accounts, most users felt they had no choice but to migrate. However, Google did respond to public frustration. Currently, depending upon where users link from to obtain a Google account, Google allows the use of non-Google email addresses or assigned Google addresses. This change occurred after outraged users pressed Google to offer alternatives to being assigned a Google email address. While it makes sense that users need a Google account to use a Google product, controlling the account became not just an issue of access but one of personal history and identity. This functionality change shows that mass public complaints in Internet forums can successfully intervene in design features and policies to better protect people's identities. One lesson for system designers is that the choices developers make when creating interface features influence not just users’ tasks but also their sense of self.
While Google demonstrated some control over how names were displayed, the YouTube interface gives more control to users. Threaded comment boards and direct replies place greater emphasis on the individual than do non-threaded comments and the inability to reply directly (as in Jezebel); these interface features promote a sense of individual empowerment. Unlike Jezebel (where all comments and their replies are left aligned, encouraging a sense of equality) and similar to Reddit (where all comments are threaded by default), YouTube replies are visually subordinate to the comments: They appear beneath and to the right of the comment like a hanging indent, leaving commenters' photos to stand out visibly (Figure 3).
The sense of individual power, which could influence control of discourse, is emphasized when people “direct reply” to others, which enables users to mention a person's name by creating a link to that person's account. This visually emphasizes an individual because the username becomes bold (Figure 4). Clicking on a username brings viewers to the individual's YouTube channel. In addition to the sense that they started something—a conversation, argument, or occasion to reflect—users can feel further control over discourse because their accounts can be notified when someone uses the direct reply feature (Reddit offers a similar feature). The YouTube comment thread interface instills power in users who use it to perform visual and verbal communicative acts of drawing attention to individuals. Two weeks after the Louis C.K. video clip was posted to YouTube, it had 1,748 comments and replies from 1,538 unique usernames, within which there were 845 direct replies to 415 unique users.
The YouTube interface does temper such power, though, preventing comments or comment threads from taking over by visually overwhelming the screen. After a number of replies to a single comment, the replies are collapsed under a link that reads “view all 3 replies” or “view more replies” if there are many more. The length of posts is similarly controlled as some posts are truncated and users can choose to “read more” of the post. This further identifies the interface as a community guide for discourse.
Reddit is “a type of Web-democracy…open to all comers” (Weninger 172). While anyone can browse Reddit, only users who sign up for an account and log in can post or comment, as on YouTube and Jezebel. To prevent spam, a posting rate cap applies to all users, requiring a set number of minutes between posts. As users participate more frequently in subreddits (groupings organized around areas of interest, such as gaming or politics), the wait imposed by this cap grows shorter, encouraging greater participation; however, because each subreddit may have a different rate cap, a user joining a new subreddit may start over with a new rate cap. In either case, the design of features and policies in Reddit is such that more privilege is given to those who have proven themselves.
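The rate-cap mechanism described above can be sketched in code. The class, parameter values, and halving rule below are our own assumptions for illustration; Reddit does not publish its exact limits, so this is a toy model of the general design, not the site's implementation:

```python
class PostRateCap:
    """Toy model (our illustration, not Reddit's implementation) of a
    per-community posting cap that shrinks as a user participates more."""

    def __init__(self, base_wait_minutes=10.0, min_wait_minutes=1.0):
        self.base_wait = base_wait_minutes
        self.min_wait = min_wait_minutes
        self.post_count = 0       # participation in THIS community only
        self.last_post_at = None  # minutes, on some monotonic clock

    def current_wait(self):
        # Assumed rule: each prior post halves the required wait, to a floor.
        return max(self.min_wait, self.base_wait / (2 ** self.post_count))

    def try_post(self, now_minutes):
        """Return True if a post is allowed at time `now_minutes`."""
        if self.last_post_at is not None:
            if now_minutes - self.last_post_at < self.current_wait():
                return False  # still rate-limited in this community
        self.post_count += 1
        self.last_post_at = now_minutes
        return True

# A newcomer waits the full cap; a proven participant posts more freely.
cap = PostRateCap()
print(cap.try_post(0))   # True: first post is always allowed
print(cap.try_post(2))   # False: only 2 of the 5 required minutes elapsed
```

Because `post_count` is tracked per instance (per community), a user who joins a new subreddit starts over with a fresh, longer wait, mirroring the policy described above.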
When considering access in a site like Reddit, one must distinguish between Reddit.com and its many subreddits. In April 2014, Reddit began showcasing the top five trending subreddits daily on its main site (Kumparak), allowing users to more easily find new communities. This change illustrates how the interface design emphasizes user control of discourse; indeed, many interface choices in Reddit highlight individual user and community control over discourse. Subreddits vary in look and feel, allowing “niche communities to form, instead of having one monolithic overall community” (“Reddit Frequently Asked”). Each has varying standards for behavior, described in the next section. The overall site serves to house the multiple subreddits and provides information on how Reddit works, its source code, its mobile tools and browser extensions, etc. The true community occurs in the subreddits.
Reddit offers a “karma” system rewarding users who post “high-quality content and make insightful, amusing or otherwise interesting comments” (Weninger 177). Registered users—called “Redditors”—who post links earn karma when other users vote on their usefulness; users can also earn karma when others “upvote” their comments. Redditors may up- or downvote posts based on their “helpfulness, informativity, provocativeness, creativity, or wittiness” (Richterich), and thus “organically generate sets of topical, relevant, non-redundant, high-quality content” (Weninger 175). But posts with more upvotes are more likely to be seen because they are bumped to the front page, leading to a potential “rich-get-richer” effect where upvoted posts are voted even higher by virtue of the interface making them visible (Morrison and Hayes 2267). Reddit’s interface therefore can affect both the popularity of Redditors and of the things they post; the interface provides tools by which users control context by controlling the volume. In short, voting controls context in that it determines which ongoing activities are sustained and amplified.
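The “rich-get-richer” dynamic can be illustrated with a toy simulation (our own illustrative sketch under assumed parameters, not Reddit's actual ranking algorithm): when most voters see only the highest-ranked posts, early upvotes compound into front-page dominance.

```python
import random

def simulate_visibility_feedback(n_posts=50, n_voters=5000,
                                 front_page=5, seed=1):
    """Toy model of a visibility feedback loop: most voters only see
    (and therefore only vote on) the posts already ranked highest."""
    rng = random.Random(seed)
    scores = [1] * n_posts  # every post starts with its submitter's upvote
    for _ in range(n_voters):
        ranked = sorted(range(n_posts), key=lambda i: scores[i], reverse=True)
        if rng.random() < 0.9:          # most voters stop at the "front page"
            choice = rng.choice(ranked[:front_page])
        else:                           # a few browse deeper into the list
            choice = rng.choice(ranked)
        scores[choice] += 1
    return sorted(scores, reverse=True)

scores = simulate_visibility_feedback()
top_share = sum(scores[:5]) / sum(scores)
print(f"top 5 posts hold {top_share:.0%} of all votes")
```

Under these assumptions, a handful of front-page posts ends up holding the large majority of all votes while otherwise identical posts further down the ranking languish, which is the interface effect Morrison and Hayes describe.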
As with YouTube’s up- and downvoting system and Jezebel’s star rating system, the Reddit voting design controls discourse beyond the interface because these votes become performative. Unlike on YouTube, votes become users' speech acts that control discourse because they control what people realistically see, particularly because many users do not read all subreddit posts and instead stop after the first few pages. The top posts, then, influence how people think, and if the comments tend toward the same view, they suggest that readers should adopt similar opinions. The “superstar posts” on Reddit’s front page “constitute evidence that the voting system allows the community to self-regulate at a high level” (Mills 11).
In this way, Reddit’s interface helps control discourse through communal agreement rather than from on high. This communal agreement made possible through Reddit’s interface showcases what Wasike calls “second-level gatekeeping,” in which users are empowered to determine what news is valuable to the community—rather than that value being ascribed in a hierarchical system where editors determine what people will read (58). As a result, activism is made possible through this system of second-level gatekeeping: users bring important issues to the forefront; they vote on what matters to their subreddits; and they call for collective action (discussed later in the section on users).
Conclusions on Technologies
While each site’s interface differs, one commonality is the way that interfaces afford community building and offer possibilities for collective action. System design is not neutral; while developers’ intent may be neutral, systems’ effects are not (McLuhan and Fiore). Interfaces are sites where “the ideological and material legacies of racism, sexism, and colonialism are continuously written and re-written along with more positive cultural legacies” (Selfe and Selfe 484), legacies we still see today in the vitriolic commentary that occurs in networked communities online.
As we illustrate, most interface features put the control of ongoing activities, topic changes, and speech acts into users’ hands. Sites like Reddit, which allows for individual enclaves of communities, and Jezebel, jokingly referred to as a “hive mind” by its users, have interface features that reflect the sites’ interest in offering social control to community users. However, some sites exhibit greater institutionalized control, such as YouTube forcing users’ real names to be displayed and Jezebel’s tight restrictions on posting original content. All three provide the means by which users regulate ongoing activities: voting. Whether control of discourse through interface design comes mainly from the top down, as in YouTube, or the bottom up, as in Reddit, voting systems (karma, stars, and/or up- and downvoting) offer participants the opportunity to visibly support the content they wish to affirm and push away unpopular or offensive content. Interfaces therefore allow oppressive content to be made invisible if enough members of the community vote against it. Such features also teach people the community norms. There is always a tension between features that emphasize individual control and those that emphasize institutional control. So, while the interface can be a way to put, say, the “you” in YouTube and make it about individual users, it can also shape conversation and control the volume on individuals or their content. Technologies’ designs and uses are connected to their missions; how they are implemented connects to how those missions are articulated. Documentation is a way these elements are often expressed.
Terms and conditions govern our behavior in most sites online, and for nearly every program we download, we must accept or reject terms and conditions before installing. Many individuals simply click yes without actually reading, even though such documents contain crucial information. Documentation is a primary means for online social sites to define context, which in critical discourse analysis terms enables them to control discourse. For example, some terms and conditions contain mandatory arbitration clauses while others forbid users from suing; such clauses make clear who holds power. Some companies have offered money to anyone who actually read the document and wrote in to claim the prize, while one online game seller rewrote its terms so that all purchases gave the company a “non transferable option to claim, for now and for ever more, your immortal soul,” which only twelve percent of purchasers rejected (marcperton).
In online communities, terms and conditions are joined by guidelines that outline standards for behavior and communication. Our idea of “community guides” is inspired by this genre of community guidelines that is used to explain and govern the community. While terms and conditions detail legal elements, community guidelines cover aspects like forum moderation, guideline violations, protected speech, post removal, and so on. Community guidelines decentralize control from an institutional level and share power with users. Not all online spaces become communities: Virtual communities are places where individuals feel “membership, identity, belonging, and attachment” (Blanchard 827). In virtual communities, members of the group feel they have the ability to influence what happens within the group (such as helping to compose its rules and guidelines). As with interfaces, documents about appropriate behavior can be composed top-down, given to members by the community’s creators, or from the bottom up, created by community members. In this section, we discuss the community guides that govern these three sites, including their terms of service documents and official or unofficial discussions of appropriate behavior.
Jezebel hosts a variety of community guideline documents: Kinja’s terms of service, Kinja’s content guidelines, and Jezebel’s “Rules of the Road.” Kinja’s terms of service emphasize the public nature of users’ content as well as the responsibility of the user for said content, including anything malicious or illegal. The terms of service clarify that any kind of attack, hack, or code engineering is not tolerated. Additionally, contributed content can be utilized in advertising or promotions and shared across other Gawker blogs. While the public nature of user comments is stated explicitly, it is not as obvious to users that once one has a Kinja account, all comments across all Kinja blogs are accessible by clicking on the individual’s username on any Kinja site. Thus, Gawker Media aggregates the disparate and multiple personas people present across the Internet into one place.
The screenshot below (Figure 5) illustrates what Jezebel viewers see when a username is clicked. This example also shows the usernames to whom tylerdhurst replied. Clicking on the date and time brings up the original blog where the user commented. This user most recently commented on Gawker, Deadspin, and Jezebel, though his entire Kinja comment history is accessible, creating an archive of participation across the various Gawkerverse blogs. Such archives will become useful as new moderation policies are implemented to catch spamming. The new policies are not yet outlined in the terms of service or content guideline documents but are mentioned in a follow-up post from Jezebel Staff explaining how Gawker Media is addressing their trolling problem. This case indicates that Jezebel is in charge and will wield control to prevent or eliminate questionable discourse. Still, the nuances of what is “public” on Kinja sites are not expanded upon in the terms of service or content guidelines.
A separate document linked from the terms of service specifically covers content guidelines for Kinja blogs (Figure 6), and though it includes a warning regarding malicious content, there is no further explanation of what is considered “malicious.” What’s particularly interesting about these guidelines is the first line: “Welcome to Kinja, where you own the story.” The only story users could own would be their responses; they have no say in and no ownership over the editorial content of Gawker blogs. Users can only email to speak to Kinja Legal because comments on both the terms of service and content guidelines pages are closed. The content guidelines page consists of a list of “simple rules” that range from advice such as “choose your username carefully” to liability protection, such as “make sure you own the rights to anything you post” before linking back to the terms of service (“Kinja Legal”). While the documentation guides users to believe they are in control of discourse, the interface puts Kinja in charge of day-to-day discourse practices.
In contrast, Jezebel’s “Rules of the Road” are “loose and interpretive” but convey a strong editorial message: “This is our website, and we will moderate how we see fit” (Coen). Despite the invitation to become involved in the community by nominating a comment of the day or offering tips to the editors, Jezebel staff are unmistakably in control. The guidelines detail how to receive stars from the moderators: by commenting consistently; saying clever, witty things; or offering new perspectives. Interestingly, as in Kinja’s terms of service and content guidelines, no specific statements address sexist, racist, anti-ethnic, or homophobic speech. Rather, politeness, good humor, and intelligent contributions are privileged, while personal attacks, uninformed comments, and ganging up on editors or fellow users could get users banned. Thus, “The Rules of the Road,” though not updated since 2010, encourage engagement by emphasizing which behaviors will gain the attention of the editors, who decide which comments receive stars and which comments are ignored or deleted.
Given these guidelines, the structure of the interface, and editorial oversight, one might expect less hateful or harassing commentary and a more thoughtful, less oppressive environment to emerge. From Kinja’s terms of service and content guidelines to Jezebel’s “Rules of the Road,” discourse is controlled outside of the community of users who frequent the site. Despite such discourse control, Jezebel users actively participate through commenting and responding to comments. Jezebel's genre, its community expectations, and its moderating practices work to generally prevent expressions of hate and harassment. While Jezebel users have less control over content production, or how their comments are seen and read, the continued conversation among users exhibits a sense of community missing from similar blogs and sites such as YouTube.
YouTube is to Google as Jezebel is to Gawker Media or subreddits are to Reddit: Google is the parent that composes policies to rule its multiple services, and beneath that, YouTube has specific community codes. The parent organizations tend to dictate how services are to be accessed and used while the specific services (e.g., YouTube, subreddits) regulate behavior and user discourse. While Google says, “what belongs to you stays yours,” Google makes it clear in Google Terms of Service that they control users' discourse:
When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.
Additionally, users can never opt out: “This license continues even if you stop using our Services.” While such a policy may be logistically necessary because Google cannot control what is cached in external systems, such policies raise concerns about power. Given Google’s ubiquity, it is nearly impossible for an average United States citizen with a social or work life connected to other average citizens to avoid Google services, which means that Google essentially controls a majority of discourse. Though Google’s policies may be benevolent now, extensive policies that cannot be canceled create a system whereby oppression would be not just present but potentially everlasting, and changing it could take activism turned revolution.
Our earlier discussion of Google's control over usernames in YouTube is one example of how policy and interface intertwine. It is difficult to discern whether a username is an alias or a real name, but of the 1,538 unique usernames in the comment thread to the Louie video clip, 318 were clearly aliases (e.g., Waitloss, Tru Blue, coldblackice, AlllthatYazz). Approximately 3.5% of responses included direct insults such as “You're a fucking moron.” Of the 96 posts with personal insults, 21 were made by people who used first and last names that could be real (e.g., George Beckingham, Don Crowder) and eight by possible real first names (e.g., DidDi, RonnieJ5), together about 30% of the insults. Roughly 70% of those who made personal insults, then, did not post under clearly recognizable real names, supporting the notion that a policy of allowing anonymity lends itself to a larger number of negative posts.
In another example from the migration of the old YouTube platform to Google, policies required that, in order to continue using YouTube beyond viewing videos, users had to create a channel. Channels make users publicly visible with a banner and profile. Although workarounds existed, many users were unaware of them and guidance was limited; consequently, a public user channel became a default that Google chose for users. However, these workarounds indicate that the user channel system is not simply a matter of technological capability but the result of conscious choices that become policies legislated into interfaces.
It is unclear whether the system design led the policy or vice versa, but after public resistance, Google changed its policy and interface to relinquish some control to users. Rather than automatically assigning the service to an existing Gmail account, Google now allows users to select a username that can include letters, numbers, and periods. New users can now be anonymous on YouTube. In fact, Google has a range of ways users now appear in public spaces and across services (e.g., usernames, nicknames, alternate emails, previous usernames not associated with Google, Google Talk or YouTube usernames) (Google, “Alternate”). Users can make a difference, as evidenced by the results of users’ complaints on discussion boards, passive-aggressive usernames, and emails to the company.
The “YouTube Community Guidelines” present two main rules and tips about specific kinds of discourse. Through the two main rules—Respect the YouTube Community and Don't Cross the Line—the YouTube Team describes the expected environment, one for all audiences (“nuns, the elderly, and brain surgeons”) that relies on trust and respect. The “Don't Cross the Line” rule forbids porn, bad stuff (“animal abuse, drug abuse, under-age drinking and smoking, or bomb making”), graphic or gratuitous violence, gross-out or shock videos, predatory behavior, and spam. The documentation speaks directly to users in conversational language, asking them to take the rules seriously and noting that the rules are regularly enforced. Whereas the Google terms of service, though also written for a general audience, emphasize the company's control, the YouTube guidelines place responsibility mainly on users. Together with an interface that emphasizes individuals and their discourse, these documents relinquish control over content, topics, and speech acts, except to control the volume on line-crossing and illegal acts.
To help users further understand respect in this community, thus teaching them appropriate discourse, the YouTube community guidelines briefly describe “limits and exceptions” of ten more specific guidelines, including hate speech and harassment. They define hate speech as “content that promotes hatred against members of a protected group” and explain that racist or sexist material may be understood as hateful (The YouTube Team, “YouTube Community”). The YouTube Team notes, “if you wouldn't say it to someone's face, don't say it on YouTube.” Guiding user behavior directly, they end: “And if you're looking to attack, harass, demean, or impersonate others, go elsewhere.”
To help users better recognize hate speech as they define it, they provide an example: “it is generally okay to criticize a nation, but not okay to make insulting generalizations about people of a particular nationality” (The YouTube Team, “YouTube Community”). Yet this comment thread is full of generalizations about overweight people, using phrases like “fat lady,” “overweight people,” “fat guy,” and “fatties.” These are most often connected with negative generalizations about overweight people or describe a world in which fat is unacceptable: “Plain and simple most men do not find fat women physically attractive,” “unattractive women have very little options,” and “No matter what you do, no matter what you say, you fatties are losers inside and outside.” Sentiments prevalent throughout connect fat and attractiveness or fat and health:
- “If you look like a lazy slob on the outside, you're probably one on the inside.”
- “‘It's mah genetics’ explanation isn't worth shit.”…“everywhere—big gluttonous trolls wobbling around in public places in spandex.”
- “Makes me want to puke my guts out.”
- “It's the sedentary life style and unhealthy diet of endless sugary foods -”
- “you're fat and should get a gym membership.”
What may be the most hateful, though, is the rancor displayed by users:
- “…Im agree with the guy who comment LOSE SOME WEIGHT BITCH!!!”
- “if it really sucks so bad get up off your ass and change it”
- “I'm still not dating a fat chick. No I don't want to hold hands with a fat chick. No I don't want to kiss a fat chick. No I wouldn't even have sex with a fat chick unless a lot of alcohol was involved”
- “Healthy people are just better people, they try harder, they put effort into the body they have. Fuck you for complaining about being fat.”
- “Fat chicks are subhuman. Guys, don't be fatty fuckers !! Stop encouraging entitled whales. Follow the No Fat Chicks rule.”
Profanity was used 461 times, much of it in generalized comments about weight and some of it in direct attacks. Think of this list repeated throughout 1,748 comments. This analysis indicates tension among the controls of discourse in the documentation, what users can do based on the technology's design, and users themselves, both as individuals and as a community.
Perhaps the terms of service are technically being upheld: the YouTube Team does not characterize these hateful, demeaning comments as hate speech because overweight people are not a legally protected group. Yet even if YouTube regularly deletes harassing commentary, there is enough malevolence in this thread to begin to shape beliefs that being overweight is unacceptable and that hate and harassment are acceptable. Regardless of whether the comments above are hate speech or harassment according to YouTube, the 90 direct personal insults are disrespectful, and the accumulation of negative generalizations about weight creates hostile conditions and an environment in which oppression about weight is not only accepted but normalized. There were 560 comments about weight and 504 about bodies. In the Users section, we discuss how YouTube users become activists against oppression in this environment.
Anyone can create a subreddit, sign up for an account, and post, so community guidelines and terms of service documents are incredibly important in governing discourse in Reddit. As long as the content is legal and abides by Reddit’s terms of service, a subreddit for any topic may be created, giving considerable control to users. (Subreddit names consist of an r followed by a forward slash: r/AdviceAnimals, for example, leads to a subreddit showcasing Advice Animal Internet memes.) Reddit’s relative openness of access means that the parameters of appropriate behavior, including discourse, must emerge either from the top down (Reddit’s creators) or the bottom up (subreddit participants). What tends to occur is bottom-up discussion because each subreddit determines the guidelines for its behavior. Indeed, Reddit’s FAQs state that:
Reddit does not remove posts for containing insults or negative commentary, but leaves such decisions to the moderators of particular communities. Those moderators are not employees of or retained by reddit; they are the persons who initiated the particular community and their appointees. While posts that contain such content can be distasteful, reddit is not in a position to arbitrate disputes. Posts should be consistent with the rules of the community to which they are posted.
Reddit (the main site) regulates only a few crucial behaviors necessary for preserving the integrity of its voting and karma systems; the terms of service forbid voting rings that game the system, posting private information, and spamming. Because Jezebel and YouTube each have policies about appropriate content, they legislate behavior more fully than Reddit, which leaves these decisions to the subreddits.
Similarly, the subreddits determine things like “whether people are expected to behave civilly or can feel free to be brutal” (“Reddit Frequently Asked Questions”). Leaving such choices up to individual communities results in not-safe-for-work subreddits such as r/gonewild alongside tamer ones like r/aww. These policies move the control of discourse in Reddit further away from the social site and into the hands of users. Some subreddits illustrate this control of discourse through their community guidelines. For example, r/transgender showcases a particularly rhetorical approach to posting in its “suggestions for successful submissions.” They suggest posters “consider r/transgender's topic and audience; consider your content, and what you are offering; consider your motive for posting.” They articulate that “this a free speech area, and open discussion is encouraged, with caveats. Sexism, racism, and similar ism-ing are not acceptable (‘examine your privilege’). Speak your mind, but word your arguments with a consideration of other's perspectives.” r/transgender and similar subreddits that discuss sensitive subjects (feminism, body acceptance, rape counseling, etc.) have carefully considered guidelines for both content and form; their community guides—whether intentionally or not—have incorporated rhetorical principles to remind Redditors of audience awareness while composing.
In contrast, other subreddits have less concern over users’ content and, in some cases, little concern about form as well. The subreddit r/dexter (discussing the show Dexter) controls discourse at the level of appropriate forms (no memes, no spoilers) and less so regarding content (although the community guidelines specify “keep comments civil” and “submissions must be directly related to Dexter”). A food-related subreddit, r/food, has only three rules: “Be nice to each other. Only food-related posts are allowed. No overt blogspamming.” And r/showerthoughts, devoted to “any thought you might have while carrying out a routine task like showering, driving, or daydreaming,” has three rules: “No shower ‘observations.’ No puns. Don't be a jerk.”
Ultimately, the rules of communication remain up to the individual subreddits. Few guidelines are outlined by the main site itself (Figure 7).
Instead, most community guidelines occur in a document called “Reddiquette,” described as “an informal expression of the values of many redditors, as written by redditors themselves” (“Reddiquette,” emphasis in original). This document is described as a living, breathing document and indeed, at the time of this writing, the most recent edit was eleven days ago. Divided into two sections, “please do” and “please don’t,” it guides Redditors through composing new posts and commenting on, voting for, and promoting others’ posts. Reddiquette demonstrates the site’s interest in both top-down and bottom-up control of behavior through three main disciplinary mechanisms. The first includes direct censorship by moderators or administrators; the second, which is bottom-up, includes users’ own self-discipline in not posting repetitive content; and the third (also bottom-up) includes community discipline and downvoting behaviors (Richterich). This interplay between top-down and bottom-up community guides (with a greater emphasis on community-derived control of discourse) helps ensure that Reddit remains a space where “deliberations are reasonably inclusive, remain at least procedurally, if not always in practice, open to counterarguments and various viewpoints, and do a relatively good job of respecting the rights of other deliberating Redditors” (Swift 73).
Conclusions on Documentation
These online environments are complex systems; their genres, interfaces, the company’s attitudes, and documents about behavior work together to foster environments that can alternately allow for oppressive discourse or support activism against harassment. Of course, it is difficult to know how well the terms of service and community codes mitigate public harassment and its accumulation into oppression in these sites because we don't know if comments or other interchanges have been deleted or otherwise handled by the organizations and staff.
Given that anonymity can give rise to contentious posts, one way to control discourse could be to require that people use their real names. However, this is cumbersome to police and unreasonable. As a result, given that these media are publicly available, places like Jezebel, YouTube, and Reddit are open to uncivil discourse. What helps control uncivil discourse and retain community is a particular combination of community guides, ideally constructed or contributed to by the community’s users, and interface features that reward valuable, prolific posters and allow community members to self-moderate. Reddit’s Reddiquette and Jezebel’s Rules of the Road are frameworks that control access or discourse; in YouTube, which lacks the community-driven guides these other sites feature, discourse becomes much more uncivil. Communities without a strong central core of guidelines and values—ideally fixed in a written document—may find that loose controls lend themselves to oppressive content. And communities that address head-on the kinds of activist approaches the group values more successfully encourage responses to emergent uncivil discourse.
The owners, managers, and employees of these sites control ongoing activities when they allow something to remain posted or they remove it, whether an original video post or a subreddit response. Individual users control ongoing activities through the presence of a response or the absence of one. Communicators must choose which events to make salient and, thus, meaningful: “Rhetors choose or do not choose to make salient situations, facts, events, etc. This may be the sine qua non of rhetoric, the art of linguistically or symbolically creating salience and then translating that into meaning” (Vatz 160). Ongoing responses are a way of making posts (and posters) and their responses (and responders) salient. In this section we examine how users responded to the Louis C.K. episode in these three sites, looking particularly at the evolution of discourse and considering how uncivil discourse emerged. We move from broad examinations of each of the three sites’ interfaces and documents regarding appropriate behavior into specific examinations of the discourse that occurred in response to this particular episode.
Though Jezebel is dedicated to women’s issues, its stories are promoted across Gawker’s blog sites, increasing its reach. The gender makeup of commenters is unclear because many choose gender-neutral usernames; although some self-identify as male or female in their comments, there is little consistency across the site. Many commenters on the specific post we examined identified their gender in their anecdotal commentary. Additionally, several users identified as fat; even more shared weight-related experiences in response to the television clip and accompanying blog post which, as part of its title, frames the episode and Louis C.K.’s commentary on weight as “absolutely magnificent.” The title thus influences the ways in which commenters respond, as they agree, disagree, or challenge the “magnificence” altogether.
Using the first comment as a model (Figure 8), users continue to share weight-related anecdotes (Figure 9).
Because the comment section is the one space where users most control the discourse, what is striking about these anecdotes is how they share a collective, comprehensive perspective about body shaming and discrimination. Perhaps even more significant is the low number of inflammatory comments. Most debates center on attractiveness (what attraction is; how it is constructed and judged) and humor (what makes something funny); additionally, most participants refrain from name-calling or personal attacks and instead point out flaws in logic or simply disagree by providing anecdotal evidence. Users themselves point out how surprisingly friendly the discussion is (and implicitly compare Jezebel’s tone to YouTube’s) (Figure 10).
Of course, some conversations devolve into snark and beyond, but across more than one thousand comments, the threads that include personal attacks appear to involve the same small group of users. And while those particular threads may contain negative comments, other users eventually ignore them, ending the discussion. For example, one conversation (Figure 11) in which a user brings in dictionary definitions as a defense consists of only five posts back and forth between the-one-who-posts and YogaNerdMD (both of whom use sarcasm and *eye rolling in other posts to other commenters). They are the only participants in this thread, and once ignored, they each move on to other threads.
Ultimately, comments on this particular post do not devolve into the same level of vitriol seen on other Jezebel posts, some subreddit threads, or YouTube. Though instances of name-calling occur (ranging from “moron” to expletives or references to female genitalia, as in the example above), the majority of posts instead focus on society, fat as a stigma, or personal anecdotes. Perhaps the content of the clip and the blog post that frames it invite a more personal response than other types of editorial content or entertainment news. It is telling that in the space where users control their discourse, many share stories of their own bodies rather than devolve into incivility. Taken together, such expression can be read as resistance to the dominant narratives around non-normative bodies, particularly those not falling into typical beauty standards.
In fact, a number of users explicitly call out societal pressures and expectations, underscoring a potentially activist standpoint:
- “This clip and article is exactly about that blurry, socially-decided categories of fat and non-fat, and how ‘fat’ is judged externally, and how many men make their relationship decisions with THAT in mind, NOT as an honestly-arrived-at personal preference.”
- “Society/media strongly affect the perceptions of what is considered ‘normal’ and what is considered ‘stigma worthy,’ outside of affecting attraction.”
Another response includes photos of Kim Kardashian, asking if she would be considered fat or “full-figured.” This particular thread explores the nuances of societal considerations of beauty as well as the constructed nature of such standards.
Many Jezebel users utilize the discourse space to draw attention to their own body-shaming experiences, express guilt over decisions based on societal and peer expectations (not dating someone because he/she was overweight), raise questions about attractiveness, or point out societal issues that lead to body image stigmas. One comment thread, however, takes control of the discourse by shifting the topic from weight-related anecdotes and comments to the general topic of how we, both culturally and individually, define humor. The thread begins in response to the initial post in which the Jezebel staff writer, Madeleine Davies, calls Louis C.K. “one of the greatest comic minds of our generation.” User resmarted expresses concern over Louis C.K.’s use of what could be interpreted as a homophobic slur, embedding a YouTube clip of Louis’ standup as evidence. Davies posts a link to a clip from an earlier season of Louie where the topic is confronted. Other users respond by questioning the use of racial slurs in comedy and, eventually, the nature of what makes something funny. Though this thread could be seen as tangential, it leads to a robust discussion on humor. Furthermore, as a shift in topic, it can be read as a way of controlling discourse. The thread challenges the Jezebel author’s belief that Louis C.K. is a comic genius, and as other users respond, challenges notions of humor. As a whole, it emphasizes a topic important to users rather than allowing the conversation to only be about weight. In these ways, the thread resists the “preaching to the choir” that can happen when the same voices of users who typically agree with one another control comment threads.
Despite the YouTube community guidelines’ primary rule to be respectful, our analysis demonstrates considerable disrespect in the comment thread we analyzed. YouTube’s design allows for this because unmoderated threads distribute control of discourse among individuals. The most common ways users act against disrespect are directly questioning someone's behavior or thinking, insulting, offering a counterpoint or counter-interpretation, and directing that others do or think differently (about 10%, 7%, 2%, and 1% of the comments overall, respectively).
In the 1,748 posts, the most common way commenters redirect negative expressions is to question someone's behavior or thinking:
- “Why do we treat people like shit based on appearance, when we are far more forgiving of shitty attitudes and actions?”
- “What if she likes being fat, but simply doesn't like how people react to that?”
- “Anxiety is ‘easy’ to treat? You mean numb up with some meds?”
- “It's okay to be fat as long as you hate yourself? What?”
- “What kind of person would you have to be to shame, shun, belittle, and insult those people?”
Insults appear throughout the threads, but most occur when commenters respond to rude remarks, the second most common way YouTube commenters act against normalizing weight discrimination. When RonnieJ5 tries to explain reasons people struggle with weight aside from being lazy or “liking shitty foods,” including “stress, anxiety, mental problems, childhood trauma,” user Halitosis ByOsmosis replies directly, saying to stop normalizing unhealthy behaviors: “I think people like you are disgusting for encouraging and dismissing those unhealthy behaviors as normal.” RonnieJ5 replies that “overweight people have enough to deal with just by their original reasons for why they overeat, and people like you kick them while they're down, which only exacerbates their vicious eating cycle. Honestly, I don't think they're disgusting. I think people like you are.” In another comment, Perilous Moo lashes back at Jags Domain Crew: “you see, the real problem with society isnt with my fat, it's with your mind. Your mind is way worse than my fat can ever be, because my fat only affects me, but your cruel stupidity affects us all.” Perilous Moo's comment recognizes the socially institutionalized nature of fat-shaming: “It's not society's problem if I'm fat. It is society's problem, however, when assholes like you run around abusing people because of their weight.”
Commenters also act against negative comments with direct and indirect counter-interpretations or counterpoints, examples of which include:
- “It's not about getting people to find you attractive, it's about getting people to treat you like an equal human being”
- “this post wasn't about suggesting what changes need to be made, it's simply pointing out the inequity which exists, but I have observed, in my 65 years, that social change begins with social awareness”
- “What a person finds attractive can change”
- “Fat in the human body is not as simple as you think: in fact, being fat isn't even necessarily unhealthy!”
- “I think the key point of this episode that everyone is missing is that Louie WAS attracted to this woman- it was her size and the perception tied to that that was holding him back.”
Fewer people act against disrespect by being directive, but those who do say things such as, “Not liking a sketch or character doesn't give you the right to call the actress a ‘fat whore’ or any other disgustingly mean insults I have seen in the comments,” “Direct criticism to [Louis C.K.] if you dislike the content or message of this scene,” “I think you'll find that searching for beauty everywhere you look will be far more rewarding,” and “Alright guys. Let's try being more polite. We aren't being very helpful to each other.”
As might be inferred from Reddit’s insistence on community-oriented guidelines that serve the subreddits’ needs, the discourse on weight in response to the episode was bounded partly by subreddit rules and partly by Reddiquette. r/videos, where this discussion was hosted, has eight rules, only one of which directly addresses tone: “No commenting with racial slurs.” Despite the dearth of rules bounding discourse in this subreddit, the 456 comments stayed generally civil. Nearly half (46%) of the comments focused on weight, with 32% overall offering commentary on bodies (e.g., “It's about making your mirror image feel inferior to you so that you can hold yourself up to some level in the dating world that is not completely realistic” or “Why would a sapient individual, as opposed to the number of non-sapient and non-sentient lifeforms on this planet, choose an unfit partner when better options are available?”). This tight focus on weight- or body-related commentary illustrates that the conversation generally stayed on topic, given the theme of the episode being discussed.
We tracked profanity and personal insults directed at other users. Less than ten percent of overall comments (9.4%) direct personal insults at commenters. “Dude, no one is being defensive, u cant talk out of ur ass and not expect get called on it,” states arghmonkey, while The_Psychopath retaliates “How about don't stalk me? Creep” when another user points out “Wow, you made the same comment in /r/Louie yesterday.” General profanity is more common, with one hundred uses across the 456 comments, a higher percentage than in the YouTube thread. Profanity isn’t always used in reference to another individual—many comments simply sprinkle profanity throughout, such as “oh shit i guess i have to go to nature court soon, ive never once factored ‘genetically and phycially fit’ into my relationships” or “It's sort of depressing though, because when I see shit like the reaction here, it has broader more troubling implications.”
Despite the use of profanity and occasional personal insults levied at particular users, conversation emerges with little overt incivility. Figure 12 shows a typical threaded discussion with an initial response and two subsequent responses that quote and engage with the initial post.
Users engaging in dialogue that pushes the boundaries of argumentative discourse frequently address other users’ rhetorical tactics—approximately 9.8% of comments focus on elements such as logic, lack of sources, etc.:
- “I think there is some fault in your reasoning.”
- “I'm certainly not ‘talking out of my ass’ just because you don't like what I have to say. You have no logic to your argument.”
- “All I'm saying is your google search to prove a point turned up an astroturfing shill website, and for you to promote it up and down on this thread as a legitimate source is silly.”
However, even these posts remain generally respectful of the other individual in the dialogue.
Few posts devolve into uncivil discourse. In Figure 13 below, two users’ conversation begins to devolve, with one complaining that the other relies on circular logic after posting the same website repeatedly as “proof” about obesity myths.
This thread ends quickly after the users exchange a total of seventeen posts. Like the comments in Jezebel between the-one-who-posts and YogaNerdMD, who talk back and forth to each other while others ignore them, arghmonkey and baaaaaaaaaaaaaah address each other until the conversation rapidly fades. baaaaaaaaaaaaaah engages with another user in a separate portion of the conversation and explicitly calls out the user’s communication tactics:
These conclusions are from how you state your argument and jump to name calling and assumptions on me as a person, again, simply because you disagree with me. You can tell a lot about someone from how they debate/argue. “You must have no friends” is not a very mature thing to say to someone because you disagree with them. It's irrelevant, accusatory, and doesn't add to the argument. I would've had a lot of respect for you if you had responded with a counter argument on why you think the character was right to place the blame on society. Instead, all you offer is playground insults and childish argument tactics.
Even when discourse verges on incivility, users engage in a kind of meta-commentary on posts, pointing out rhetorical weaknesses in others’ arguments or critiquing their approach.
Overall, several participants engage with others in what we might call “educational reframing,” moments that illustrate one individual attempting to teach another or guide them toward resources to learn more about a topic. As an activist tactic, educational reframing attempts to control discourse by articulating explicitly the values of a community space; it also suggests participants learn more about a subject before speaking so as to adopt a more informed position. Thirty-nine comments educationally reframe, such as “You have missed a big point, and that is that sex and sexual identity is very biological” or “Look at my [user] history. I fight the theory of the social gender a lot, because I think it's an unproven and wrong hypothesis.”
An overt moment of educational reframing comes from a user who states that “while I understand that you may benefit as an individual from treating the problems of others as insignificant, this is a toxic and counterproductive way for a society to behave. We would do better to attempt understanding of each other's pain and encourage open communication about sources of strife, no matter what those sources might be.” This comment reflects Reddit’s overall ethos as a space to encourage open communication and focus on societal or individual concerns. Reddit’s long history of activism is spurred on by its users. After the 2010 Haiti earthquake, Redditors raised over $185,000 in donations for those affected. Redditors have also engaged in symbolic activism, such as the entire site going dark for a day in protest over the 2012 Stop Online Piracy Act (SOPA), with administrators noting that “blacking out Reddit is a hard choice, but we feel focusing on a day of action is the best way we can amplify the voice of the community” (The Reddit Admins). This collectivist, activist ethos permeates Reddit and guides the discursive choices that many users make within the site.
Conclusions on Users
Though users of Jezebel, YouTube, and Reddit are afforded a range of discourse controls, incivility is present in each site to varying degrees. However, disruptions of uncivil discourse occur, whether through silence, changes of topic, educational reframing, or directly questioning a user’s behavior or thinking. Community guides (e.g., terms of service or etiquette documents) help bound discourse. In communities with more lax guides, like YouTube, discourse is more frequently uncivil; those with stricter controls or clearer guides created by community members themselves appear to support more collaborative, even-handed conversations. Users share an awareness of what is expected, and responses either acknowledge a sense of belonging or, in the case of comments falling outside of community guides (model comments, terms of service, or community standards), a lack thereof. The greater the sense of collectivism, the more users appear to feel comfortable sharing personal experiences (as in Jezebel) and engaging in informed conversations (as in Reddit). Users in communities without a strong sense of collectivism more frequently attack others, and conversations more rapidly devolve into profanity, personal attacks, and oppressive discourse. And as recent incidents like GamerGate remind us, this matters especially for women; Susan C. Herring has shown through her body of research that women are more frequently the targets of online harassment at the hands of men (see Herring 2002; Herring, Job-Sluder, Scheckler, and Barab 2002). Collective communities can do more to make women feel safe in the community and promote civil discourse among all users.
Institutionalized discourse normalizes thought and behavior by first controlling access to discourse, then controlling discourse itself, and, from there, controlling thought. As Wysocki notes, it is from what we see but do not notice in the new media texts we encounter that we are “most likely to learn, without noticing, what to value and how to behave” (13). Thus, if we are interested in disrupting the discursive power of uncivil discourse about weight—or any other topic—in online comment threads, we must understand how institutionalization of discourse happens in online social sites. To do so, we examined three main locations, what we call “community guides,” for controlling access, discourse, and thought: interfaces, documentation, and users. Our analysis reveals that in the sites examined in this study, because discursive power is a complex of practices distributed across a variety of communicative acts, addressing uncivil and then oppressive practices depends upon if and how the community guides are developed, implemented, and enforced.
In all three sites, content is freely viewable, but each parent organization controls access to discourse through the community guides of technology and documentation: by policy and technological necessity of system design, users need an account to participate. Each site appears to give away free access in exchange for users’ agreement to abide by terms of service. In everyday practice, control goes unnoticed until an instance (such as Google's forced migration to a new system and regulation of usernames) reminds users of an organization's control. Because control, like interfaces, is naturalized, it often goes unnoticed; however, it is no small affordance. In all three sites, the community guide of documentation (through the terms of service) requires participants to agree that anything users post can be retained by the sites. Google even keeps control after users stop using its services.
Access to participate in discourse is most tightly controlled through the community guide of technology in Jezebel. The genre of a blog privileges a single author or entity—in this case, Jezebel staff writers. Most users can reply but not author posts; Jezebel staff’s posts model behaviors, attitudes, and beliefs. In response to recent concerns over uncivil visual discourse, Kinja is currently updating Jezebel's community guide: the technology will be redesigned so that moderators—some staff and some user representatives—will approve content. These actual controls (versus symbolic or naturalized) will mitigate incivility.
From the standpoint of the community guide of technology, YouTube and Reddit provide free access to post within the expectations stated in the community guide of documentation. Users post in threaded conversations that make it easy to follow conversations and highlight users who begin conversations, although the focus on the individual is more obvious in YouTube because profile photos are larger. Jezebel comments are not threaded and, in comparison, demonstrate that the conversations in YouTube and Reddit emphasize individuals and individual conversations, whereas Jezebel's list of comments keeps the focus on the post. From a design perspective, comment lists as opposed to threads may be one avenue toward guiding communities toward a tighter control of discourse, whereas threaded comments guide communities toward more open participation and more conversational responses. This emphasis on design as a community guide reinforces Jane’s assertion that structural aspects of the technological medium are important in studies of incivility online, such as her analysis of Google’s autocorrect feature that often surfaces racist and sexist suggested searches based on other users’ actual search practices (541).
These sites have another technological feature that influences discourse. In YouTube, a direct reply emphasizes the individual again by drawing visual attention to the name and linking to the person’s channel. In Reddit, direct replies work similarly; a copy of the comment appears in the user’s message inbox, and clicking on a user’s name shows all comments made across all subreddits. In Kinja services, clicking on a user's name shows all comments that user has made across its services, effectively interfering with a person's ability to participate in rhetorically savvy ways: addressing a particular audience, in a particular situation, for a particular purpose and effect, in a kairotic moment. Displaying user comments in aggregate may highlight users and their words, but it also keeps them in check by establishing a central public self. The interface has implications for the ways individuals see their own and others’ participation. This is also demonstrated in the use of voting (summarized in the Technologies section). Voting gives more discursive control to users, enough so that, as stated, Reddit has a policy against voting rings.
Once in the sites, various controls of discourse emerge. Each site has community guides of documentation in the form of terms of service for the organization overall and then expectations for the community at the service level. The service-level documentation seems to most influence daily practices, although in our study that did not necessarily translate into greater control of discourse, which points to a complex of factors involved. While Reddit’s expectations are the most permissive (stating that posts can be brutal and that Reddit will not moderate), YouTube showed the greatest incivility despite stating that it values respect. Jezebel's star system works along with its documentation to guide discourse; in Reddit, karma is awarded for specific behaviors, including linking beneficial content and composing worthwhile comments. These elements reward users for appropriate discourse. That Reddit shows a higher percentage of profanity and nearly the same percentage of insults as YouTube, and yet YouTube exhibits more incivility, may indicate that rewarding defined behaviors helps control discourse. Jezebel's documentation states “you own the story,” which gives the impression that users are in control; in combination with the technology's limits on their abilities to interact and author, however, the story really is mostly whatever the staff posts and allows to be posted.
Critical discourse analyses often concern themselves with institutionalized discursive practices because of unequal power relations (e.g., doctor/patient). In our analysis, it would seem that while technology and policies might shape discourse, social sites are more like utility companies than agents of discourse and thought control. They provide a telephone line, so to speak, and within the limits of the law, it often does not matter to them what happens there. But it ultimately does matter to them if they want to keep people there. The day-to-day discursive power on these sites, in the form of speech acts as well as topic and volume control, is distributed among their public users. If users turn the volume up on weight discrimination, then it becomes normalized that being overweight is unacceptable, unhealthy, a sign of laziness, a sign of a lack of discipline, and, further, that hate and harassment are acceptable behaviors. Companies may not care about the particular points of view, but they do care about the law and harassment.
Underlying our discussion is an assumption that a primary goal is to minimize uncivil discourse; however, the results indicate that doing so may mean giving up some freedoms, which makes solutions more complex than they may seem here. Future studies should continue to examine this balance between user freedoms and site governance by organizations or other controlling entities, particularly focusing on the interplay of legal (i.e., terms of service), moral (i.e., netiquette), and cultural elements to best support effective discourse.
Balsamo, Anne Marie. Technologies of the Gendered Body: Reading Cyborg Women. Durham: Duke UP, 1996. Print.
Beausoleil, Natalie. “An Impossible Task?: Preventing Disordered Eating in the Context of the Current Obesity Panic.” Biopolitics and the “Obesity Epidemic”: Governing Bodies. New York: Routledge, 2009. 93-107. Print.
Blanchard, Anita L. “Developing a Sense of Virtual Community Measure.” Cyberpsychology & Behavior 10.6 (2007): 827-30. Print.
Bordo, Susan. Unbearable Weight: Feminism, Western Culture, and the Body (10th Anniversary Edition). Berkeley: U of California P, 2003. Print.
Brooks, Rosa. “What the Internet Age Means for Female Scholars.” The Yale Law Journal Pocket Part 116 (2006): 46-52. Print.
Brossard, Dominique, and Dietram A. Scheufele. “This Story Stinks.” The New York Times Sunday Review. The New York Times, 2 Mar. 2013. Web. 31 Aug. 2014.
Chou, Wen-ying Sylvia, Abby Prestin, and Stephen Kunath. “Obesity in Social Media: A Mixed Methods Analysis.” Translational Behavioral Medicine 4.3 (2014): 314-23. Print.
C. K., Louis. “Last Scene from ep 3 Season 4 of LOUIE on FX 'So did the Fat Lady' Louis CK.” YouTube. 13 May 2014. Web. 23 May 2014.
Coen, Jessica. “Commenting on Jezebel: Rules of the Road.” Jezebel. Gawker Media, 27 Aug. 2010. Web. 31 Aug. 2014. jezebel.com
Davies, Madeleine. “Louis C.K.'s Rant on Fat Girls is Absolutely Magnificent.” Jezebel. Jezebel.com, 13 May 2014. Web. 31 Aug. 2014.
Drushel, Bruce. “‘Homosexual Depravity’ on Film or Social Media Camp: The Evolving Framing of a Men’s Room Sex Sting.” Technoculture 4 (2014). Web. 28 Nov. 2014.
Edell, Dana, Lyn Mikel Brown, and Deborah Tolman. “Embodying Sexualisation: When Theory Meets Practice in Intergenerational Feminist Activism.” Feminist Theory 14.3 (2013): 275-84. Print.
The Engaging News Project: Journalist Involvement in Comment Sections. The Annette Strauss Institute for Civic Life. The University of Texas at Austin. 1-21. Web. 14 July 2014. democracyfund.org
Fairclough, Norman. Language and Power. London: Longman, 1989. Print.
Google. “Alternate Usernames.” Google Accounts Help. Google, 2014. Web. 29 Aug. 2014. support.google.com
Google. “Google Terms of Service.” Google Privacy & Terms. Google, 14 Apr. 2014. Web. 29 Aug. 2014. google.com
Herring, Susan C. “Cyber Violence: Recognizing and Resisting Abuse in Online Environments.” Asian Women 14 (2002): 187–212. Print.
Herring, Susan C., Kirk Job-Sluder, Rebecca Scheckler, and Sasha Barab. “Searching for Safety Online: Managing ‘Trolling’ in a Feminist Forum.” The Information Society 18.5 (2002): 371–84. Print.
Huckin, Thomas, Jennifer Andrus, and Jennifer Clary-Lemon. “Critical Discourse Analysis and Rhetoric and Composition.” College Composition and Communication 64.1 (2012): 107-29. Print.
Jane, Emma C. “Your a Ugly, Whorish, Slut.” Feminist Media Studies 14.4 (2014): 531-46. Print.
Jezebel Staff. “What Gawker Media is Doing About Our Rape Gif Problem.” Jezebel. Gawker Media, 13 Aug. 2014. Web. 16 Aug. 2014.
Johnson, Steve. “Jezebel: A Few Words With the Editor.” Chicago Tribune Lifestyles, 25 July 2007. Web. 14 July 2014. articles.chicagotribune.com
Keller, Jessalyn Marie. “Virtual Feminisms: Girls’ Blogging Communities, Feminist Activism, and Participatory Politics.” Information, Communication & Society 15.3 (2012): 429-47. Print.
Kinja Legal. n.d. Web. 31 Aug. 2014. legal.kinja.com
Kinser, Amber E. “Gendered Performances in Employment Interviewing: Interpreting and Designing Communication Research.” The Journal of Business Communication 39.2 (2002): 245-56. Print.
Kumparak, Greg. “Reddit Starts Listing Trending Subreddits To Get More Users Into Its Smaller Communities.” TechCrunch. TechCrunch.com, 10 Apr. 2014. Web. 31 Aug. 2014.
LaBarre, Suzanne. “Why We’re Shutting Off Our Comments.” PopularScience.com. Popular Science, 24 Sept. 2013. Web. 31 Aug. 2014.
Lauterer, Jock. Community Journalism: Relentlessly Local. Chapel Hill: University of North Carolina Press, 2006. Print.
marcperton. “Read Fine Print Or GameStation May Own Your Soul.” Consumerist. Consumerist.com, 16 Apr. 2010. Web. 31 Aug. 2014.
Marling, Raili. “The Intimidating Other: Feminist Critical Discourse Analysis of the Representation of Feminism in Estonian Print Media.” NORA: Nordic Journal of Feminist and Gender Research 18.1 (2010): 7-19. Print.
McLuhan, Marshall, and Quentin Fiore. The Medium is the Massage: An Inventory of Effects. New York: Random House, 1967. Print.
McGannon, Kerry R., and John C. Spence. “Exploring News Media Representations of Women’s Exercise and Subjectivity through Critical Discourse Analysis.” Qualitative Research in Sport, Exercise and Health 4.1 (2012): 32-50. Print.
McIntyre, Meredith, Karen Francis, and Ysanne Chapman. “Critical Discourse Analysis: Understanding Change in Maternity Services.” International Journal of Nursing Practice 18.1 (2012): 26-43. Print.
Mills, Richard. “Researching Social News—Is Reddit.com a Mouthpiece for the ‘Hive Mind’, or a Collective Intelligence Approach to Information Overload?” ETHICOMP 2011 Proceedings. Sheffield: Sheffield Hallam University, 2011: 1-16. Print.
Morrison, Donn, and Conor Hayes. “Here, Have an Upvote: Communication Behaviour and Karma on Reddit.” In Proceedings of MAMA'13: Workshop on Metrics, Analysis and Tools for Online Community Management, 2013: 2258-68. Print.
Perez, Michelle Salazar, and Eloise Williams. “Black Feminist Activism: Theory as Generating Collective Resistance.” Multicultural Perspectives 16.3 (2014): 125-32. Print.
Rail, Geneviève. “Canadian Youth's Discursive Constructions of Health in the Context of Obesity Discourse.” Biopolitics and the “Obesity Epidemic”: Governing Bodies. New York: Routledge, 2009. 141-56. Print.
Rapp, Laura, Deeanna M. Button, Benjamin Fleury-Steiner, and Ruth Fleury-Steiner. “The Internet as a Tool for Black Feminist Activism: Lessons From an Online Antirape Protest.” Feminist Criminology 5.3 (2010): 244-62. Print.
Reader, Bill. “Free Press vs. Free Speech? The Rhetoric of ‘Civility’ in Regard to Anonymous Online Comments.” Journalism & Mass Communication Quarterly 89.3 (2012): 495-513. Print.
The Reddit Admins. “Stopped They Must Be.” Blog.Reddit. Reddit.com, 10 Jan. 2012. Web. 31 Aug. 2014.
Reddiquette. Reddit. Reddit.com, n.d. Web. 31 Aug. 2014. reddit.com
Reddit Frequently Asked Questions. Reddit. Reddit.com, n.d. Web. 31 Aug. 2014. reddit.com
Richterich, Annika. “‘Karma, Precious Karma!’ Karmawhoring on Reddit and the Front Page’s Econometrisation.” Journal of Peer Production 4, Jan. 2014. Web. 24 Aug. 2014.
Selfe, Cynthia L., and Richard J. Selfe, Jr. “The Politics of the Interface: Power and Its Exercise in Electronic Contact Zones.” College Composition and Communication 45.4 (1994): 480-504. Print.
“So Did the Fat Lady.” Louie. FX. 12 May 2014. Television.
Sutin, Angelina R., Yannick Stephan, Henry Carretta, and Antonio Terracciano. “Perceived Discrimination and Physical, Cognitive, and Emotional Health in Older Adulthood.” American Journal of Geriatric Psychiatry, in press. DOI: 10.1016/j.jagp.2014.03.007
Swift, Jeffrey Charles. Flash Publics: A Rhetorical Recuperation of Public Sphere Theory in a Digital Age. Diss. North Carolina State University, 2014. Ann Arbor: UMI, 2014. Print.
Van Dijk, Teun A. “Critical Discourse Analysis.” Handbook of Discourse Analysis. Eds. Deborah Tannen, Deborah Schiffrin, and Heidi E. Hamilton. Oxford: Blackwell, 2001. 352-71. Print.
Vatz, Richard E. “The Myth of the Rhetorical Situation.” Philosophy & Rhetoric 6.3 (Summer 1973): 154-61. Print.
Vie, Stephanie. “‘You Are How You Play’: Privacy Policies and Data Mining in Social Networking Games.” Computer Games and Technical Communication: Critical Methods and Applications at the Intersection. Eds. Jennifer deWinter and Ryan Moeller. Burlington, VT: Ashgate, 2014. 171-87. Print.
Wasike, Ben S. “Framing Social News Sites: An Analysis of the Top Ranked Stories on Reddit and Digg.” Southwestern Mass Communication Journal 27 (Fall 2011): 57-67. Print.
Weninger, Tim. “An Exploration of Submissions and Discussions in Social News: Mining Collective Intelligence of Reddit.” Social Network Analysis and Mining 4 (2014): 1-19. Print.
Widdowson, Harry G. Text, Context, Pretext: Critical Issues in Discourse Analysis. Oxford: Blackwell, 2004. Print.
Wysocki, Anne Frances. “Opening New Media to Writing: Openings and Justifications.” Writing New Media: Theory and Applications for Expanding the Teaching of Composition. Eds. Anne Frances Wysocki, Johndan Johnson-Eilola, Cynthia L. Selfe, and Geoffrey Sirc. Logan, UT: Utah State UP, 2004. 1-42. Print.
Wysocki, Anne Frances, and Julia I. Jasken. “What Should Be an Unforgettable Face…” Computers and Composition 21.1 (2004): 29-48. Print.
The YouTube Team. “YouTube Community Guidelines.” YouTube. YouTube, n.d. Web. 29 Aug. 2014. youtube.com
The YouTube Team. “YouTube Hits a Billion Monthly Users.” YouTube Official Blog. YouTube, 20 Mar. 2013. Web. 24 Aug. 2014. youtube-global.blogspot.com
Stephanie Vie is an Associate Professor of Writing and Rhetoric at the University of Central Florida in Orlando. Her research focuses on online social networking and computer games, particularly how these technologies affect literate practices and the composition classroom. She is a Reviews Co-Editor for Kairos: A Journal of Rhetoric, Technology, and Pedagogy and a Consulting Editor for the Community Literacy Journal at Michigan Technological University. Her work has appeared in First Monday; Computers and Composition; e-Learning; and Computers and Composition Online. Her textbook E-Dentity (Fountainhead Press, 2011) examines the impact of social media on twenty-first century literacies.
Deb Balzhiser is an Associate Professor of English at Texas State University in San Marcos where she is the Writing Center Director. Her research examines effects of systems, structures, discourse, methods, and media. Her work has appeared in College Composition and Communication; Kairos: A Journal of Rhetoric, Technology, and Pedagogy; the Journal of Business and Technical Communication; and Technical Communication Quarterly.
Devon Fitzgerald Ralston is a Visiting Assistant Professor of Rhetoric at Miami University in Oxford, Ohio. Her research explores intersections of identity, rhetoric, and cultural practices in online spaces, as well as the influence of those practices on writing studies. Her work on how users employ place-based and geolocation sites to convey who they are through their “whereness” can be found in The New Work of Composing, published by Computers and Composition Digital Press. Her most recent research examines digital activism in unexpected locations or unexpected forms, such as memes, satirical reviews on Amazon.com, and Tumblr blogs.
© 2014 Stephanie Vie, Deb Balzhiser, and Devon Fitzgerald Ralston, used by permission