Dave Eggers’ latest novel, The Circle, follows Mae Holland, a liberal arts graduate, who takes a job at a gargantuan technology corporation (think Facebook, Twitter, and Google combined) and promptly gives up her soul. As Mae sinks further into the morass of free gourmet lunches and unquestioning fealty to Big Data, her ex-boyfriend Mercer, a sensitive sculptor of deer antler lamps, becomes the novel’s voice of reason:

You know what I think, Mae? I think you think that sitting at your desk, frowning and smiling somehow makes you think you’re actually living some fascinating life. You comment on things, and that substitutes for doing them. You look at pictures of Nepal, push a smile button, and you think that’s the same as going there. I mean, what would happen if you actually went? Your CircleJerk ratings or whatever-the-fuck would drop below an acceptable level! Mae, do you realize how incredibly boring you’ve become?

Technology magazine Wired’s review of The Circle appeared under the subheading: “What the Internet Looks Like if You Don’t Understand It.” In the New York Times, Ellen Ullman stated that the book brought “little of substance to the debate,” while the New Republic called it “galling.” Others wondered why Eggers, who admitted to doing no research, had bothered to write about the Internet at all. But bad reviews and vitriol (including complaints about Eggers’ lackluster prose) weren’t enough to stop The Circle from earning rave reviews in Booklist Online and the New York Review of Books, along with the honor of Best Business Book of the Year from the Harvard Business Review’s blog. More important, Eggers inspired mainstream outlets like Forbes and Wired to review a novel, which, in this age of rapidly dwindling interest in fiction, is no mean feat. Well-researched or not, Eggers’ work struck a nerve.

After an excerpt of The Circle appeared in the New York Times Magazine, former Facebook employee Kate Losse accused Eggers of plagiarism on her blog, arguing that he’d stolen lines almost verbatim from her memoir, The Boy Kings. In her book, Losse, who holds an English degree from Wesleyan University and a Master of Arts in English from Johns Hopkins University, details her experience as a customer service representative in the social networking site’s early days:

Though I didn’t quite realize it on this first day at Facebook, I was in possession of a skill set—that of the English major—that was woefully unscalable as far as Facebook was concerned, more of a liability than an asset. When I perused Mark’s profile on Facebook after we had become virtual friends, I noticed that in the Favorite Books field he wrote, “I don’t read.” Okay, I thought, gearing up for a long battle to be appreciated in my new role, this job might work out in the end but it is not going to be as easy as I…thought.

Losse later removed the accusatory blog post, but whether Eggers plagiarized her or not (and the evidence suggests he did not), both authors’ books achieve a similar end, tapping into one of the prevailing anxieties among readers of our time: the fate of the humanist in the age of Google.

One thing critics on both sides seem to have missed is that while Eggers’ Mae and Losse herself both turn the experience of working at Facebook into cautionary fodder, many writers and scientists foresee a melding of technology and the arts that stands to benefit even those of us who can’t code. In “Science Is Not Your Enemy,” his August 2013 essay (one of the New Republic’s best of the year), Steven Pinker reminds us that before the era of specialization, humanists were scientists, and an understanding of technology was required for a fuller understanding of the world. He argues that Enlightenment thinkers such as Descartes, Hobbes, Locke, Hume, and Kant based much of their work on studies of neuroscience, psychology, and physics. Today, Pinker writes, we have more knowledge at our fingertips than Descartes et al. ever imagined, and it’s imperative that artists and writers not mistake their own lack of understanding of technology for justified technophobia. Humanism, Pinker writes, is “inextricable from…scientific understanding.” He continues:

The humanities are the domain in which the intrusion of science has produced the strongest recoil. Yet it is just that domain that would seem to be most in need of an infusion of new ideas. A consilience with science offers the humanities countless possibilities for innovation in understanding. Art, culture, and society are products of human brains. They originate in our faculties of perception, thought, and emotion, and they cumulate and spread through the epidemiological dynamics by which one person affects others. Shouldn’t we be curious to understand these connections? Both sides would win.

Last year, in the Los Angeles Review of Books, Matthew Wilkens reviewed a book by a literary scholar attempting to do just that, applying Big Data analysis to books. In Macroanalysis, author Matthew L. Jockers details research that Wilkens argues has the potential to provide an alternative to the standard Western canon, with its dated emphasis on white British men and New Englanders. Thanks in part to Jockers, the technology now exists for literary scholars to comb through vast amounts of literature without having to read it all themselves. Wilkens begins his review with a caveat:

If the idea of studying literature without reading it strikes you as somewhere between bizarre and dangerous, you’re not alone. There’s a whole cottage industry devoted to dismissing such projects as hopeless (or trivial, or both) or denouncing them as the death of the humanities. But it’s worth asking what they entail and what they allow before we resign ourselves to living with the tremendous limitations of reading alone.

“Reading alone,” as defined here, means judging a book according to its individual merits, rather than as a point on the literary evolutionary continuum. Wilkens is rightly wary of the promises of such interconnectedness. He writes: “As much as suggestions of future progress as transformative results in their own right…skeptics could be forgiven for wondering how long the data-driven revolution can remain just around the corner.” But Wilkens is an author himself and, as such, hardly immune to Jockers’ findings. He marvels at the idea that it’s now possible to guess “the author, text identity, national origin, author gender, and genre of about 100 19th-century novels by nearly 50 different authors at rates much higher than chance just by examining the frequency of certain common words in those books.” And while Jockers’ research sometimes simply confirms long-held prejudices about 19th-century literature (women wrote about the home, men about seafaring), the fact that it’s now possible to provide a concrete basis for these biases could prove revolutionary for archivists in the field. Particularly relevant for academics may be Jockers’ research into the common themes or topics shared by novels of different eras, as well as how those topics tend to increase or decrease in prominence over time.
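
The intuition behind that result is simple enough to sketch. The toy example below (a hypothetical illustration, not Jockers’ actual method; the texts, the word list, and the scikit-learn pipeline are all assumptions) counts a handful of very common words in each text and hands those counts to an ordinary classifier:

```python
# Hypothetical sketch of word-frequency stylometry, not Jockers' pipeline.
# Assumes scikit-learn is installed; the corpus is a toy stand-in for
# full 19th-century novels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "It is a truth universally acknowledged that a single man in possession "
    "of a good fortune must be in want of a wife.",
    "Call me Ishmael. Some years ago, never mind how long precisely, having "
    "little or no money in my purse, I thought I would sail about a little.",
    "It was the best of times, it was the worst of times, it was the age of "
    "wisdom, it was the age of foolishness.",
]
authors = ["Austen", "Melville", "Dickens"]

# Only very common "function" words are counted; the claim is that their
# rates carry a stylistic signature independent of subject matter.
function_words = ["the", "of", "and", "to", "in", "that", "it", "was",
                  "he", "she", "not", "but", "with", "as", "at"]

model = make_pipeline(
    CountVectorizer(vocabulary=function_words),  # count only these words
    LogisticRegression(max_iter=1000),           # simple linear classifier
)
model.fit(texts, authors)

print(model.predict(["It was the season of Light, it was the season of "
                     "Darkness."]))  # likely ['Dickens'] on this toy data
```

On real corpora one would normalize counts by text length and test on held-out books, but the shape of the procedure is the same.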

Meanwhile, Wendy Earle presents the flip side of the intersection of technology and the humanities in her piece, arguing for an end to technology’s encroachment on museums. Citing a CNN article calling for museums to increase interactive engagement with audiences, Earle argues the opposite, imploring directors to stop over-mediating the visitor experience with audio guides and screens. She writes that “Museums do themselves no favours by trying to compete with the multimedia attractions of contemporary culture by imitating them….When it comes to technology in museums, less really is more.” Monitors and audio guides now compete with apps and tweets for the viewer’s attention, obscuring, Earle argues, the original purpose of museums: the display of art and artifacts. These tactics, she says, also present a more insidious problem: “Technological reconstructions often make it appear as if curators already have the answers, even though their reconstructions are based on elaborate guesswork and the answers are not absolute.”

In spite of Earle’s arguments, museum directors and others hoping to keep their artistic organizations viable aren’t likely to heed her warnings any time soon. As Sue Halpern writes in the New York Review of Books, “We are living, we are told, in the age of Big Data and it will….‘transform how we live, work, and think.’” Already, Amazon’s algorithms can predict which recommended books customers will buy more accurately than human editors can, while, Halpern tells us, “a company called Narrative Science has an algorithm that produces articles for newspapers and websites by wrapping current events into established journalistic tropes—with no pesky unions, benefits, or sick days required.” The more streamlined these services become, the more the landscape of writing and art will be forced to change to keep pace with the rest of the world.
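
At its crudest, the kind of algorithm Halpern describes can be sketched in a few lines: structured data poured into a stock journalistic trope. The example below is entirely hypothetical (the data, the wording, and the trope selection are invented; Narrative Science’s actual system is proprietary and far more elaborate):

```python
# Hypothetical sketch of template-based story generation: structured game
# data is wrapped in a canned journalistic trope chosen by the score margin.
game = {
    "winner": "Sharks", "loser": "Comets",
    "winner_score": 5, "loser_score": 2,
    "star": "Ana Ruiz", "star_goals": 3,
}

def recap(g: dict) -> str:
    margin = g["winner_score"] - g["loser_score"]
    verb = "edged" if margin == 1 else "cruised past"  # pick a trope
    return (f"The {g['winner']} {verb} the {g['loser']} "
            f"{g['winner_score']}-{g['loser_score']} on Saturday, "
            f"led by {g['star']}'s {g['star_goals']} goals.")

print(recap(game))
# The Sharks cruised past the Comets 5-2 on Saturday, led by Ana Ruiz's 3 goals.
```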

For those of us still happily living analog lives, however, all is not lost. After all, the National Science Foundation provided funds this year for poet Jynne Dilling Martin to spend the winter in Antarctica as the continent’s artist-in-residence. Martin has published gorgeous poems and translated into lay terms some of the work scientists there are doing. Consider also, as the writer Eric G. Wilson put it in an article for the Chronicle of Higher Education, that “poetry makes you weird.” By this he means that poetry allows for “a going out of our nature, and an identification of ourselves with the beautiful which exists in thought, action, or person, not our own.”

That’s a direct quote from Percy Bysshe Shelley’s A Defence of Poetry. Show us an algorithm for a social networking site that can come up with a line like that, Mr. Eggers, and then we might start to worry.


Image credit: IBM Curiosity Shop via flickr