First, let me say that it has been a real struggle this semester, and I have pushed blogging way down my “next actions” list. I should be committing to write here much more, and I intend to get back on the weekly schedule I maintained in the past, but the schedule for Spring 2018 has been a tough one.
Despite this, I have been considering the topic of today’s post for a while now, and with the ongoing news about how social media (mainly Facebook, also Twitter) played a large role in the nightmare that was the 2016 election, the time is right to unpack how this relates to the teaching of literature, writing, and the humanities more generally.
For years, a major component of the college freshman writing class has been source evaluation. The most famous technique for teaching this is probably the CRAAP test, although I think both librarians and composition instructors may be moving away from this model. In its most basic form, CRAAP is a mnemonic device that reminds a learner to ask critical questions to determine whether a source is credible, or at least, usable in a college essay:
- C – currency (is it current information, or is it outdated?)
- R – relevance (is it appropriate for the topic at hand? does it address your argument?)
- A – accuracy (is the information accurate?)
- A – authority (what or who is the source for this information? are they experts on this topic?)
- P – purpose (what is the function or reason for this source to exist?)
I think most people tend to apply this rubric quickly to the things we read, and then we either discard the source/story, or we acknowledge any problems the source has and read it anyway, because we're really invested in whatever perspective the source provides (or we just really feel like getting into an Internet Fight that day, for some reason, and need ammo). What can be difficult when applying something like the CRAAP test to everything one reads is discovering that everything–EVERYTHING–written has an agenda. This blog has an agenda. The New York Times has an agenda. That scientific abstract has an agenda. None of us is free from biases or personal opinions, and it is foolish to assume we can shed those every time we write or read any kind of content. The critical practice that needs to be further cultivated is discernment: how can we tell what the bias of each text is, and how can we then assess whether the source is useful or credible, even though we've uncovered its agenda?
One answer to those questions is, simply, to read as much as possible. Read widely, both in breadth and depth. I readily acknowledge that grad school ruined my attention span as well as my childhood love of reading, but in exchange, my training gave me the ability to quickly assess sources and other kinds of content. And that really comes with practice. Reading widely helps you become familiar with things like tone, word choice, and other rhetorical markers that indicate the credibility or appropriateness of a source/story. However, many of us are failing as critical readers and thinkers these days. Many people have been taken advantage of by a variety of nasty Internet problems, from Russian troll farms to clickbait from gossipy news sites to the worst thing going nowadays: the Facebook news feed.
I won’t go into the long history of the problems with the Facebook news feed. I am sure you are all familiar with what it looks like and how it works. Over time, the feed morphed from a list of your friends’ status updates to a list of shared materials from your friends, advertisements, and other content that was either “liked” or commented on by your friends, but which they did not directly share. What’s happened as a result of the “algorithm,” as Facebook calls the programming which decides what shows up in your news feed, is that everything blurs together and all the content looks the same. This flattening of content makes it harder for individuals to assess the stories and articles that come across their screen. This article from Splitsider unpacks this phenomenon, within the context of comedy, but the same process is affecting other types of media, as well. The CEO of the comedy website Funny or Die, Matt Klinman, offers the following analysis:
> This writer John Herrman writes about this a lot — he used to write for The Awl, rest in peace — he talks about how Facebook flattens everything out and makes it the same. That’s how we have a Russian propaganda problem. An article from something like, I don’t know, Rebel Patriot News written by a Macedonian teen or something looks exactly the same as a New York Times article. It’s the same for comedy websites. There’s a reason that Mad magazine looks different from Vanity Fair. They need to convey a different aesthetic and a different tone for their content to really pop. Facebook is the great de-contextualizer. There’s no more feeling of jumping into a whole new world on the internet anymore — everything looks exactly the same.
The interview covers other important aspects of this issue, such as ad revenue and business ethics, but I want to focus on this idea of "de-contextualizing" that Klinman discusses. It's harder and harder to apply something like, say, the CRAAP test to content that shows up in your Facebook news feed, because 1) it all looks alike and 2) it's often quite overwhelming. Thus, we tend to ignore or passively click on videos or stories we see, but never leave the news feed itself. We don't get a sense of the person or platform behind the video or story, eliminating a few of the CRAAP test's questions. We may also ignore those inflammatory stories that our weird friend posts constantly, but the reach of each like or share grows whether we read them or not. Even clicking one of the "emoji" buttons, such as the angry face or the sad face, doesn't prevent such stories from spreading; in fact, any engagement with the post ensures it will continue to live online and upset someone else, regardless of whether it is accurate, credible, recent, etc.
Why should this even matter, one might ask? Is Facebook really so powerful that it can directly affect everything from our daily reality to the future of our society? Well, the answer to that is yes. The phony content pumped into our social media networks may have had a direct impact on the 2016 election. At the very least, we know that the "Internet Research Agency" (IRA) inserted itself into Americans' online lives and made a mess of things. If you're interested in reading how, check out the Special Counsel's indictment of several Russian nationals working for the IRA who interfered in just this way.
So is there anything we can do to prevent such online interference? Can we hope that the Trump administration, teetering as it is on the brink of more indictments and chaos, will do something to help citizens protect themselves from such meddling? Well, probably not. What we can do is read more widely. Read everything you can, from a variety of sources. We may not all have great intuition about whether a particular story is bogus, but we can look for more information on any topic, rather than accepting the first article that shows up when we log in to Facebook in the morning. Check articles against one another. Look for confirmation of a suspicious claim in other media. Barring all of that, you can always deactivate your Facebook account. I did that at the beginning of the year, and I think I'll write a post about the effects of that next week. I keep a "professional" Facebook account so I can maintain the department Facebook page I moderate, but my personal account is blank. Because Facebook refuses to take responsibility for what happened with the election, and because having a Facebook page is nothing but anxiety and hassle these days, you might consider revising your own relationship with Facebook, or with social media more generally. I do think it has made me, and many of us, more passive consumers of content, and that's not a good thing. So give it some thought. Or at least, apply the CRAAP test to what you see.