Robert Louis Stevenson’s tale of one man with two personalities was a prophecy that the internet has unfortunately fulfilled.
The old saying “never judge a book by its cover” was coined long before the internet, yet it applies all too well to modern-day keyboard warriors and online trolls. The internet sometimes offers a glimpse behind the curtain of someone’s true nature, and that nature is not always how the person outwardly appears in the real world. Many people are braver behind a keyboard and will say offensive things in an anonymous online setting that they would likely never say in person. Unlike Dr. Jekyll, who needed a personality-altering serum to become Mr. Hyde, many online commenters require no formula to change into their version of a beast, taking their frustrations out on the world with no catalyst at all. Where the line blurs, however, is when online comments and harassment violate real-world laws. Even so, opponents of policing online speech are not in favor of comment regulation. In a 2017 Pew Research poll of 4,248 people, 56% of respondents said that people take offensive content online too seriously (Duggan). However, words matter. People must learn to temper their online rhetoric with empathy and kindness or face the consequences. Until then, people who engage in proven online bullying, harassment, and other demeaning behavior must be held accountable by the social platforms they linger on, and possibly again by law enforcement, regardless of the online mask they hide behind.
One way people choose to represent their online persona is by using an avatar instead of a real photo of themselves. Avatars are digital representations that can be nearly anything imaginable. Some are cartoon versions of the real person. Others can be pets, scenery, fictional characters, and the like. Dr. Wendy Patrick writes of selecting avatars, “You are choosing a character with traits you either perceive as similar to your own or represent the way you would like to be perceived.” While some might select an avatar out of self-consciousness about their appearance, others use them to hide their true identity and escape reprisal for their online actions.
In her essay “Why Good People Turn Bad Online,” Gaia Vince states, “They [social media platforms] offer physical distance, relative anonymity and little reputational or punitive risk for bad behavior: if you’re mean, no one you know is going to see.” This is especially true when users hide behind a created avatar or a nom de guerre rather than a personal photo and real name. In these cases, it is imperative that the platform’s managers, whether of a video game, a team chat, or a social media outlet like Twitter or Facebook, take responsibility for their content and root out bad actors. That system fails when those bad actors use false names or photos to conceal their identities. By the time a troll’s identity is discovered, they have already moved on to other platforms and personas to continue their path of digital destruction. Platforms’ policing efforts are usually too little, too late.
To unravel why people choose to represent themselves online differently from their real-life personalities, consider a message board user going by the name Orcos, who explains why he plays an orc in the video game World of Warcraft:
“Morals are more interesting. I hate having to run around being the lawful good person whose motives are ‘be a good person at all times.’ That’s how I try to be IRL, but it’s boring when playing games.”
– Orcos, World of Warcraft player (“Why Did You Choose”)
Obviously, Orcos does not look like an orc in real life, yet he has chosen that persona to represent himself to others, explaining that he does not want to feel obligated to be “lawful good” all the time. With lawful morals dismissed as boring, many might argue that modern society is a lost cause; that people hide their true selves behind a digital mask to act on their real feelings. One of the most difficult pieces of this puzzle is that many internet trolls and online bullies are otherwise good people in real life.
However, much like in the premise of the film The Purge, good people seize on moments when they can commit crimes with no repercussions. While the film depicts good people doing horrible things in real life, some could argue that society is living through an early, digital form of a “Purge” right now. Writing about The Purge, Neda Ulaby states:
“Numerous polls have found that Americans are feeling more divided than ever — so a story about losing common humanity feels relevant. Regardless of politics, The Purge movies share a sense of a decay of the American dream.”
– Neda Ulaby, NPR
That decay easily translates to lashing out online. With such a thin line between the digital world and the real one, those who agree with Ulaby may argue that people such as Orcos are merely a hair’s breadth from bringing their digital transgressions into reality.
Opponents could say that punitive scrutiny of online content in public forums infringes on the constitutional right to free speech, crossing into the predictive-policing territory portrayed in the 2002 film Minority Report. Social media companies already mine trend data from personal profiles to target advertisements. They also use personal data to infer likely personality traits from users’ comments and the people they interact with. These tactics suggest that social media platforms have already profiled users into categories of forecasted behavior. Much like the precrime unit in the movie, providers could build a case from biases found in social media comments, predict who will become an internet troll, and then potentially punish that person before any offense has been committed.
CNN Money reporter Matt McFarland states, “If machines are trained on biased data, they too will become biased. Communities with a history of being heavily policed will be disproportionately affected by predictive policing.” While many reprehensible forms of speech are protected, one still cannot falsely yell “fire” in a crowded theater. And if someone wishing to do harm knew they could get away with it, the anonymity of the internet might embolden them to act in ways they never would in the real world. Proponents of online comment policing would likely applaud a predictive algorithm that sniffs out the pre-offender and lets platforms or authorities restrict them before any offense occurs. There will always be calls to regulate free speech as long as people cannot self-regulate their rhetoric into civil discourse.
A proposed preemptive method for combating the hidden bad behavior of otherwise good people online is the use of bots. Bots are computer-generated users that analyze comments, predict when a person is about to say something offensive, and counteract the comment with an empathetic rebuttal or an opposing position. For example, Gaia Vince states, “A typical bot response to a racist tweet would be: ‘Hey man, just remember that there are real people who are hurt when you harass them with that kind of language.’ Simply cultivating a little empathy in such tweeters reduced their racist tweets almost to zero for weeks afterwards.”
While that may seem beneficial on the surface, engineering the social interactions of real people with predictive algorithms is a slippery slope. The effort may temporarily mask the symptom of online bullying, but it does little to root out the human condition that drives a person to become someone they are not online in the endless pursuit of likes and shares. Vince nonetheless endorses the tactic, stating, “…bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allow society as a whole to cooperate more” (Vince). While that is not a punitive measure, she is arguing that bot-generated subterfuge to alter personality traits is permissible if the ends justify the means. Even when the initial outcome is favorable, deploying psychological manipulation should never be taken lightly, and it must be closely monitored.
In The Strange Case of Dr. Jekyll and Mr. Hyde, Robert Louis Stevenson wrote, “And yet when I looked upon that ugly idol in the glass, I was conscious of no repugnance, rather of a leap of welcome. This, too, was myself…This, as I take it, was because all human beings, as we meet them, are commingled out of good and evil.” Dr. Jekyll recognized what he had become in the mirror but was not afraid of what he saw. He accepted that he was a man of two halves: a good half and an evil one. Many who see what they write in online forums, or how they choose to represent themselves digitally, are not afraid of what they see either, and therein lies the problem. The difference is that Jekyll had a mirror and saw Hyde’s face, and he witnessed firsthand the destruction Hyde left behind. Modern humans have a screen and see only their words; unlike Dr. Jekyll, they rarely see the destruction those words cause on the other side of the screen. Only a select few will filter their words for the hurt and pain they will cause and delete them. The best form of online policing, now and in the future, is self-policing, if only users could be responsible enough to do it instead of hiding their Hyde-like nature behind a keyboard, a screen, or an avatar. Stevenson’s famous 1886 fable of humankind’s two faces was a prophecy that the internet has fulfilled, and until society can reconcile the modern right to free speech with the moral obligation to be kind, the world inches closer to enacting The Purge every day.
For the record, I did receive an A on this paper. Take it to heart and be good to each other out there.
– Lyle
Duggan, Maeve. “Online Harassment 2017.” Pew Research Center: Internet, Science & Tech, 11 July 2017, https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/. Accessed 19 Sept. 2022.
McFarland, Matt. “‘Minority Report’ Warned About Predicting Crime. 15 Years Later, the Lesson Has Been Ignored.” CNNMoney, 23 June 2017, money.cnn.com/2017/06/23/technology/future/minority-report-15-years/index.html?sr=twcnni062417minority-report-15-years0531AMStoryLink&linkId=39046369.
Patrick, Wendy L. “Use an Avatar Online? Here is What it Says About You.” Psychology Today, 4 June 2018, https://www.psychologytoday.com/us/blog/why-bad-looks-good/201806/use-avatar-online-here-is-what-it-says-about-you/. Accessed 19 Sept. 2022.
Stevenson, Robert Louis. The Strange Case of Dr. Jekyll and Mr. Hyde. Samoa Edition, New York, 1944.
Ulaby, Neda. “The Success of Society Run Amok: What Does ‘the Purge’ Say about Us?” NPR, 4 July 2018, https://www.npr.org/2018/07/04/625683415/the-success-of-society-run-amok-what-does-the-purge-say-about-us.
Vince, Gaia. “Why Good People Turn Bad Online.” Humanities LibreTexts, 15 Sept. 2019, https://human.libretexts.org/Bookshelves/Literature_and_Literacy/Book%3A_88_Open_Essays_A_Reader_for_Students_of_Composition_and_Rhetoric_(Wangler_and_Ulrich)/Open_Essays/77%3A_Why_Good_People_Turn_Bad_Online_.
“Why Did You Choose Horde/Alliance?” World of Warcraft Forums, 22 Sept. 2020, https://us.forums.blizzard.com/en/wow/t/why-did-you-choose-hordealliance/652110/59?page=3.