Thursday, March 31, 2011

Carr's The Shallows - our plastic minds


A summary of:

Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton & Company.

(Warning: Internet causes massive brain damage?)


Having read Carr’s earlier book, The Big Switch: Rewiring the World, from Edison to Google, I must say that I am now wholly convinced that Google is a very sinister company. Not only do they aspire to world domination, but they seem to have a highly objectionable vision of what their world would look like, if they got their way. They seem to have no concern for the environmental impacts of their projects, nor for their ethical implications, both evidenced by their new baby, Google Book Search.


More worrying than this is their utter disregard for – and massive devaluation of – the human mind. To them, it is an imperfect machine; and I suppose the fact that their technologies may or may not be damaging it only further proves to them its need for augmentation… or replacement.


But I do care about what’s happening to our minds. And while saying so gets you immediately branded a Luddite, I think the Internet may be hurting us. I think it is having two kinds of effects on our minds: cognitive impairment (which you would have to weigh against the clear cognitive augmentation it also provides), and damage to our emotional and psychological wellbeing. I shall take these one at a time.


Carr argues – and he is not alone in proposing this – that the Internet is “chipping away my capacity for concentration and contemplation” (7). He continues, “Whether I’m online or not, my mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski” (6–7). Or, stated elsewhere: “Our use of the Internet involves many paradoxes, but the one that promises to have the greatest long-term influence over how we think is this one: the Net seizes our attention only to scatter it” (118). He understands this to be a result of technology’s power over us. He writes that he began to sense that the computer “was more than just a simple tool that did what you told it to do. It was a machine that, in subtle but unmistakable ways, exerted an influence over you. The more I used it, the more it altered the way I worked” (13), so that, for example, “In using the word processor, I had become something of a word processor myself” (13). He also cites the interesting case of Nietzsche: when he adopted a kind of typewriter known as a writing ball, his writing style changed perceptibly. A friend of his commented that his “prose had become tighter, more telegraphic. There was a new forcefulness to it, too, as though the machine’s power – its ‘iron’ – was, through some mysterious metaphysical mechanism, being transferred into the words it pressed into the page” (Carr’s words, 18).


But the Internet is not a writing tool. It is a thinking tool. As such, it changes the way we think. Carr muses on his distracted mind, “But my brain, I realized, wasn’t just drifting. It was hungry. It was demanding to be fed the way the Net fed it – and the more it was fed, the hungrier it became. Even when I was away from my computer, I yearned to check e-mail, click links, do some Googling. I wanted to be connected. Just as Microsoft Word had turned me into a flesh-and-blood word processor, the Internet, I sensed, was turning me into something like a high-speed data-processing machine, a human HAL” (16). This is an interesting way of understanding what I have come to call information pleonexia. Our insatiable desire for information is in a way determined by the technology’s desire for information working through – and using – us.


There is another way of understanding this phenomenon, and that is to look at our neurobiology. Carr spends a great deal of time explaining what is called ‘neuroplasticity’, i.e. the ability of the brain to change and adapt. By studying simpler nervous systems, like that of the large sea slug, neuroscientist Eric Kandel showed how readily the brain can be altered. Carr writes: “Kandel… found that if you touch a slug’s gill, even very lightly, the gill will immediately and reflexively recoil. But if you touch the gill repeatedly, without causing any harm to the animal, the recoiling instinct will steadily diminish. The slug will become habituated to the touch and learn to ignore it. By monitoring slugs’ nervous systems, Kandel discovered that ‘this learned change in behavior was paralleled by a progressive weakening of the synaptic connections’ between the sensory neurons that ‘feel’ the touch and the motor neurons that tell the gill to retract. In a slug’s ordinary state, about ninety percent of the sensory neurons in its gill have connections to motor neurons. But after its gill is touched just forty times, only ten percent of the sensory cells maintain links to the motor cells. The research ‘showed dramatically,’ Kandel wrote, that ‘synapses can undergo large and enduring changes in strength after only a relatively small amount of training’” (27–28).
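Kandel’s finding can be caricatured in a few lines of code. To be clear, this is only an illustrative toy model, not Kandel’s data or methodology: the synapses are given arbitrary, evenly spaced strengths, every harmless touch weakens all of them by a fixed factor (the decay rate is a hand-tuned assumption), and the reflex is assumed to fire only through synapses above a strength threshold.

```python
# Toy model of habituation in the sea slug (illustrative only):
# repeated harmless touches multiplicatively weaken the sensory-motor
# synapses, so ever fewer of them stay above the strength threshold
# needed to help drive the gill-withdrawal reflex.

def surviving_fraction(touches, n_synapses=100, decay=0.0535, threshold=0.1):
    """Fraction of synapses still strong enough to drive the reflex."""
    # Hypothetical evenly spaced initial strengths in (0, 1].
    strengths = [(i + 1) / n_synapses for i in range(n_synapses)]
    weakening = (1 - decay) ** touches   # cumulative effect of the touches
    active = [s for s in strengths if s * weakening > threshold]
    return len(active) / n_synapses

print(surviving_fraction(0))    # before any touches: 0.9 of synapses active
print(surviving_fraction(40))   # after forty touches: 0.1 of synapses active
```

With the decay rate tuned to 0.0535, forty touches reproduce the drop Carr reports – from about ninety percent of sensory neurons connected down to about ten percent. The mechanism (graded weakening rather than deletion) is the point here, not the particular numbers.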


What’s important for us to understand from this is that it is very possible, in fact very likely, that the Internet is changing the physical structure of our brains. Carr cites some other interesting examples.


· “In 2008, Small and two of his colleagues carried out the first experiment that actually showed people’s brains changing in response to Internet use…. / The most remarkable part of the experiment came when the tests were repeated six days later. In the interim, the researchers had the novices spend an hour a day online, searching the Net. The new scans revealed that the area in their prefrontal cortex that had been largely dormant now showed extensive activity – just like the activity in the brains of the veteran surfers. ‘After just five days of practice, the exact same neural circuitry in the front part of the brain became active in the Internet-naïve subjects,’ reports Small. ‘Five hours on the Internet, and the naïve subjects had already rewired their brains.’ He goes on to ask, ‘If our brains are so sensitive to just an hour a day of computer exposure, what happens when we spend more time [online]?’” (120–21)


· “In 2003, a Dutch clinical psychologist named Christof van Nimwegen began a fascinating study of computer-aided learning that a BBC writer would later call ‘one of the most interesting examinations of current computer use and the potential downsides of our increasing reliance on screen-based interaction with information systems.’ Van Nimwegen had two groups of volunteers work through a tricky logic puzzle on a computer…. One of the groups used software that had been designed to be as helpful as possible. It offered onscreen assistance during the course of solving the puzzle, providing visual cues, for instance, to highlight permitted moves. The other group used a bare-bones program, which provided no hints or other guidance (214). / In the early stages of solving the puzzle, the group using the helpful software made correct moves more quickly than the other group, as would be expected. But as the test proceeded, the proficiency of the members of the group using the bare-bones software increased more rapidly. In the end, those using the unhelpful program were able to solve the puzzle more quickly and with fewer wrong moves. They also reached fewer impasses – states in which no further moves were possible – than did the people using the helpful software. The findings indicated, as van Nimwegen reported, that those using the unhelpful software were better able to plan ahead and plot strategy, while those using the helpful software tended to rely on simple trial and error. Often, in fact, those with the helpful software were found ‘to aimlessly click around’ as they tried to crack the puzzle. / Eight months after the experiment, van Nimwegen reassembled the groups and had them again work on the colored-balls puzzle as well as a variation on it. He found that the people who had originally used the unhelpful software were able to solve the puzzles nearly twice as fast as those who had used the helpful software.
In another test, he had a different set of volunteers use ordinary calendar software to schedule a complicated series of meetings involving overlapping groups of people. Once again, one group used helpful software that provided lots of on-screen cues, and another group used unhelpful software. The results were the same. The subjects using the unhelpful program ‘solved the problems with fewer superfluous moves [and] in a more straightforward manner,’ and they demonstrated greater ‘plan-based behavior’ and ‘smarter solution paths’” (215).


Notice that the latter of these studies seems to directly contradict Jane McGonigal’s euphoric assessment of the ways in which gaming helps us solve problems. And the former study raises an even more urgent question: what exactly are we training our brains FOR? It seems to me that we are training ourselves to think like machines think… which is a futile endeavor, given that we will always be sub-par machines. Why not train our brains to think like better humans? We can excel at that!


Of course, we can’t deny that the Web helps us tremendously in doing very specific kinds of thinking. And we seem excited at the prospect of freeing up some real estate in our brains. But just as we made the mistake of thinking that labor-saving devices would free up time, we made the mistake of thinking that cognitive tools would free up our brains to do more exciting things (see p.181 for evidence of this mistake). I was surprised in particular by the study by James Evans at the University of Chicago, who looked at citations in journals from 1945 to 2005. Carr writes: “He analyzed the citations included in the articles to see if patterns of citation, and hence of research, have changed as journals have shifted from being printed on paper to being published online. Considering how much easier it is to search digital text than printed text, the common assumption has been that making journals available on the Net would significantly broaden the scope of scholarly research, leading to a much more diverse set of citations. But that’s not at all what Evans discovered. As more journals moved online, scholars actually cited fewer articles than they had before. And as old issues of printed journals were digitized and uploaded to the Web, scholars cited more recent articles with increasing frequency. A broadening of available information led, as Evans described it, to a ‘narrowing of science and scholarship’. / In explaining the counterintuitive findings in a 2008 Science article, Evans noted that automated information-filtering tools, such as search engines, tend to serve as amplifiers of popularity, quickly establishing and then continually reinforcing a consensus about what information is important and what isn’t” (217). So just because doing something seems easier (I often think how difficult it would be for me to do a PhD without the Internet), that doesn’t make the product better.
Perhaps humans are better versions of themselves when they have to struggle toward their goals.
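The “amplifier of popularity” mechanism Evans invokes is essentially a rich-get-richer dynamic, and a toy simulation makes the narrowing vivid. This sketch is my own illustration, not Evans’s methodology, and all the numbers in it are arbitrary: when each new citation is drawn in proportion to existing citation counts rather than uniformly, a small elite of articles captures a disproportionate share of attention.

```python
# Illustrative "popularity amplifier" sketch: compare citations handed out
# uniformly (browsing the stacks) with citations handed out in proportion
# to existing counts (a ranking tool that surfaces already-popular work).
import random

def top_decile_share(prefer_popular, n_articles=1000, n_citations=20000, seed=0):
    """Share of all citations captured by the most-cited 10% of articles."""
    rng = random.Random(seed)
    counts = [1] * n_articles            # every article starts equally visible
    for _ in range(n_citations):
        if prefer_popular:
            # Filtered search: probability proportional to current popularity.
            article = rng.choices(range(n_articles), weights=counts)[0]
        else:
            # Unfiltered browsing: every article equally likely.
            article = rng.randrange(n_articles)
        counts[article] += 1
    counts.sort(reverse=True)
    return sum(counts[: n_articles // 10]) / sum(counts)

print(top_decile_share(False))   # uniform: top 10% get only a bit over a tenth
print(top_decile_share(True))    # preferential: top 10% get several times more
```

The same “broadening of available information,” once filtered through popularity rankings, concentrates the citations – which is one way to picture the ‘narrowing of science and scholarship’ Evans describes.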


But again, some cognitive functions are undeniably improved by the Web. Carr writes: “While experimental evidence is sparse, it seems only logical that Web searching and browsing would also strengthen brain functions related to certain kinds of fast-paced problem solving, particularly those involving the recognition of patterns in a welter of data” (139). / “Other studies suggest that the kind of mental calisthenics we engage in online may lead to a small expansion in the capacity of our working memory. That, too, would help us to become more adept at juggling data. Such research ‘indicates that our brains learn to swiftly focus attention, analyze information, and almost instantaneously decide on a go or no-go decision,’ says Gary Small. He believes that as we spend more time navigating the vast quantity of information available online, ‘many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed attention’” (139–40). But then he is quick to put these gains in perspective, writing, “…it would be a serious mistake to look narrowly at the Net’s benefits and conclude that the technology is making us more intelligent. Jordan Grafman, head of the cognitive neuroscience unit at the National Institute of Neurological Disorders and Stroke, explains that the constant shifting of our attention when we’re online may make our brains more nimble when it comes to multitasking, but improving our ability to multitask actually hampers our ability to think deeply and creatively” (140). The thing that I must reconcile in my own research is this: while I am on the one hand bemoaning what the Internet is doing to certain cognitive functions (such as creativity and attention), I am also questioning the goal of cognitive augmentation that underlies computer technology in general. So I am effectively suggesting that we shouldn’t be aiming to improve our cognitive functions.
If that’s true, then do I have a right to complain that certain ones seem to be getting weaker? I think I do, actually. It’s the old Hippocratic Oath: Do no harm.


There are three other points to make about neuroplasticity:


1) It would explain why things that are anathema to us initially come to feel like human nature. We literally change. In social psychology terms, this is known as the foot-in-the-door phenomenon; i.e. as we make small concessions, we allow more and more unwanted things through the door. So while we may cringe initially at the thought of ubiquitous computers, they become acceptable through our repeated exposure to them, as our brains change and come to react less and less to these unwanted stimuli (like the sea slug no longer recoiling at touch).


2) As Carr says, “What we’re not doing when we’re online also has neurological consequences. Just as neurons that fire together wire together, neurons that don’t fire together don’t wire together. As the time we spend scanning Web pages crowds out the time we spend reading books, as the time we spend exchanging bite-sized text messages crowds out the time we spend composing sentences and paragraphs, as the time we spend hopping across links crowds out the time we devote to quiet reflection and contemplation, the circuits that support those old intellectual functions and pursuits weaken and begin to break apart. The brain recycles the disused neurons and synapses for other, more pressing work. We gain new skills and perspectives but lose old ones” (120).


3) Given the changeability of our brains, it is not far-fetched to suggest that we are damaging our wellbeing through our exposure to the Internet, potentially hardwiring in anxiety, addiction, apathy, etc. This is different from saying that technology increases our anxiety (etc.). This is saying that technology is encoding anxiety (etc.) into our brains.


So let’s get to the wellbeing implications of the Internet.


We are increasingly addicted to information. E.g., “The near-continuous stream of new information pumped out by the Web also plays to our natural tendency to ‘vastly overvalue what happens to us right now,’ as Union College psychologist Christopher Chabris explains. We crave the new even when we know that ‘the new is more often trivial than essential’” (134). This is similar to Postman’s argument about the information-action ratio: that we feel increasingly disempowered to do anything with or about the information we receive. We are, like all good addicts, consuming compulsively; and we need more and more information to satisfy our cravings, so that ultimately we keep consuming just to feel ‘normal’. In other words, it is not making us happy; it is not satisfying our human needs.


Secondly, we are overloaded with information, which as we know from studies affects our ability to be compassionate, or to respond to information in emotionally appropriate ways. Carr writes, “Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is ‘available to us than ever before,’ writes Levy, ‘but there is less time to make use of it – and specifically to make use of it with any depth of reflection.’ Tomorrow, the situation will be worse still” (170).


Thirdly, technology is making it more and more difficult to be contemplative. And it is very likely that humans need these opportunities to feel fulfilled: “When carried to the realm of the intellect, the industrial ideal of efficiency poses, as Hawthorne understood, a potentially mortal threat to the pastoral ideal of meditative thought. That doesn’t mean that promoting the rapid discovery and retrieval of information is bad. It’s not. The development of a well-rounded mind requires both an ability to find and quickly parse a wide range of information and a capacity for open-ended reflection. There needs to be time for efficient data collection and time for inefficient contemplation, time to operate the machine and time to sit idly in the garden. We need to work in Google’s ‘world of numbers,’ but we also need to be able to retreat to Sleepy Hollow. The problem today is that we’re losing our ability to strike a balance between those two very different states of mind. Mentally, we’re in perpetual motion” (168).


This really does sound like I’m just a Luddite. But I don’t think it’s crazy to weigh the downsides of technologies against their supposed benefits. Otherwise, how can we give our informed consent to the objects in our world? And without this consent, of course these technologies will appear to us as techno-demons, i.e. as forces confronting us that are beyond our control. But they are in our control! And the key is to begin by questioning. This is no easy task, especially since it is very difficult to appreciate the effects technologies are having on us while we are being affected by them (it’s easier for an outsider to study this). And there is an element of denial as well, given the Sisyphean challenge of trying to stop the continued proliferation of technologies and the growing power they have over us (for example, we can’t stop Google Book Search, no matter how damaging it may be). But I think if we face up to the difficult truths, we can start to improve our situation, and we’ll be better off.

Saturday, March 26, 2011

Jane McGonigal – Gaming Can Make A Better World



In his book, Carr writes, “The importance of such skills [the ones fostered by computer game play] shouldn’t be taken lightly. As our work and social lives come to center on the use of electronic media, the faster we’re able to navigate those media and the more adroitly we’re able to shift our attention among online tasks, the more valuable we’re likely to become as employees and even as friends and colleagues” (140). Jane McGonigal takes this argument one step further, arguing that the skills developed in online game play are the skills that can help us save the world. Or, more correctly, she believes that games reinforce behaviors and attitudes that may prove necessary for solving our real-world problems.

The first such attitude is what she describes as “urgent optimism.” Unfortunately, she admits, so far this urgent optimism translates into the belief that one can change the virtual world only, and not the real world (the very thing she hopes to change with her games). But the point is that, whereas in the real world we feel increasingly disempowered to make real change, paralyzed by the seeming impossibility of affecting big problems, in the game world, these players feel like nothing is impossible. I would agree that we certainly need more of this.

The second behavior is that gamers are really good at weaving a social fabric. I might suggest that the weaving of this fabric is a game in itself – strategic alliances. My real concern here is twofold. First, it reinforces the slightly sick transformation of ‘friendship’ that is occurring with our social media. Second, this notion of geographically boundless collaboration itself has environmental implications, when we consider the resources that go into maintaining these connections (see Mobile Lives). But I am torn here, because in my work I’m arguing that we need greater social connections, and these gamers are building them. I suppose the difference is that I’m suggesting that the connections themselves be better (i.e. more fulfilling and meaningful), whereas she is implying that these connections need to be more numerous and more organized, which itself betrays a worldview bias.

The third attitude is what she calls “blissful productivity.” This is the idea that humans are most human when they are being productive, and that we feel good about this. I find this funny, in that it seems like the latest transformation of the Protestant Ethic to fit our modern world. But yes, it would be nice if these productive efforts were focused on changing the world for the better, rather than on making money (and in turn, working to perpetuate the world as it is now).

And finally, she identifies “epic meaning” as something that gamers are drawn to. When describing the appeal of her game, World Without Oil, she says blithely that “nobody wants to change how they live because it’s good for the world, or because they’re supposed to,” but that if you immerse people in an epic game, they can in a sense be tricked into doing what’s good for the world (though she didn’t phrase it that way). I find this really sad on the one hand, because I can’t fathom why our sustainability problems are not ‘epic’ enough to engender this motivation. On the other hand, my research argues something not too dissimilar! – namely, that people need to be re-immersed in (in some cases, reminded of) a meaning-filled world, one that is passed down through great myths and spiritual traditions, in order that they adopt these more harmonious behaviors. (See Walker’s Sustainable by Design.) As she said, those who participated in the epic game continued the practices they had learned years after the game ended. This suggests that meaning is likely the key, and that I am on the right track. You have to provide people with a greater reason to want to change the world. I just sort of wish it wasn’t by having them play games online.

McGonigal has to overcome many people’s kneejerk reaction to immersion. People might argue that immersion is bad – or if not inherently bad, that this amount of time immersed in a fictional world is to the detriment of real-world health, happiness, productivity, etc. I don’t think immersion is bad (see Ryan post). And I don’t think that the amount of time spent means that it is necessarily bad either. It all depends on what you are gaining from that engagement. My issue is that the games are built within the same paradigm that seems to be producing an unsustainable relationship between humans and the planet. They reinforce particular kinds of thinking (as I mentioned before about collaboration/information exchange, for example), and they increase our addiction to computer technologies, which has both psychological and environmental implications. For example, she quickly mentions how many more gamers will be using mobile devices to connect to these games, and dismisses the environmental impact by saying that the power they use will be increasingly ‘green’. I worry that there will be a point at which these lines cross on the graph: as we use more and more devices and energy, we are running out of time to use these games to solve these huge problems – or indeed to do anything to solve them. Increased gaming accelerates the crises in some ways, just in terms of the energy cost of doing so.

And as for the ‘escapism’ arguments against immersion, she is not advocating exodus into virtual reality. She doesn’t want to make better games so that we have a better place to escape to. She wants to make better games that help us make our real world better. I think this is a great mission.

Finally, I must come back to the question she asks, which is: In all of the zillions of hours we spend gaming, what exactly are we training for? She quotes a figure that the average young person will spend 10,000 hours gaming, which, according to the expertise research Malcolm Gladwell popularized, means that we are producing a fleet of gaming ‘virtuosos’. But really, are these individuals ‘virtuosos’ or ‘idiot savants’? I guess I think that being a great computer gamer may not be that helpful when the time comes that we disengage from computer technology. If we are creating a generation of people who are excellent at solving problems specifically with computers – or, as she suggests, we are evolving to think like this – then we may be in big trouble. Because what happens when we try to unplug? Do we know how to function without these tools? We are not preparing ourselves for Power Down. And as Carr shows, using computers to help us solve problems does not make us better problem solvers, but worse….

Carr's The Shallows - the Internet is changing us

A summary of:

Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton & Company.

(Be Warned: The Internet is changing us)


The main message Carr sends in this new book is that our technology – indeed, every new medium – changes us. This is not a new idea. McLuhan, whom he cites frequently, is famous for saying that the “medium is the message”, alluding to the “transformative power of new communication technologies” (2). But this aphorism was also, according to Carr, a warning “about the threat the power poses – and the risk of being oblivious to the threat” (2). Carr suggests that oftentimes the introduction of a new technology sparks debate about the wrong thing, namely the content the technology conveys. Carr writes: “What both enthusiast and skeptic miss is what McLuhan saw: that in the long run a medium’s content matters less than the medium itself in influencing how we think and act. As our window onto the world, and onto ourselves, a popular medium molds what we see and how we see it – and eventually, if we use it enough, it changes who we are, as individuals and as a society” (3). Just as “When a carpenter picks up a hammer, the hammer becomes, so far as his brain is concerned, part of his hand” (208), when we use the Internet, the Internet becomes an extension of our brains.


So how does the Internet change us? Well, some cognitive skills are strengthened as we engage with the Web. These include things like “hand-eye coordination, reflex response, and the processing of visual cues” (139). As a 2003 Nature study shows, game players are more deft at shifting their visual focus and can identify more items in their visual fields. The authors of this study concluded that “‘although video-game playing may seem to be rather mindless, it is capable of radically altering visual attentional processing’” (139). A similar argument has been made by Jane McGonigal (see post). But the question is, how useful are these skills in an age when computers no longer exist? If we are training ourselves to be better and better at using computers – i.e. thinking like computers – aren’t we getting less and less good at NOT using computers? (Of course, not only are there cognitive tradeoffs to be considered with our increased Web use, there are negative psychological consequences as well – which is the topic of my next blog.)


Another thing that becomes clear in reading this book is that the tenets that underpin technology are self-reinforcing. For example, Carr identifies the Internet as one of many “intellectual technologies,” in that they seek to “extend or support our mental powers” (44). In using the Internet, we tacitly accept the correctness of this endeavor; and the more we use the Internet, the more we come to value its cause. Nowhere is this more evident than in Google itself. Carr quotes Richard Koman, who said that Google “‘has become a true believer in its own goodness, a belief which justifies its own set of rules regarding corporate ethics, anti-competition, customer service and its place in society’” (164). Here we see in action the reshuffling of our hierarchy of values. Whereas once honesty, community, and the like may have been at the top, here they are subordinated to freedom of information. (For evidence of this, note the many lawsuits against Google Book Search and Google’s dismissive, cavalier attitude toward them – Eric Schmidt, for instance, saying: “‘Imagine the cultural impact of putting tens of millions of previously inaccessible volumes into one vast index, every word of which is searchable by anyone, rich or poor, urban or rural, First World or Third, en toute langue – and all, of course, entirely for free’” (162).) The Google Book Project vaults “data” and “information” to the top of our priorities, asserting yet again “the machine” over “the garden” (167). It is little surprise, then, that Westerners see their great philanthropic mission as increasing access to information for supposedly ‘disadvantaged’ populations (e.g. the well-meaning but entirely misguided One Laptop Per Child initiative – as if poverty were measured in diminished informational resources, rather than diminished social bonds, happiness, etc.).


What is even more worrying – and frankly insulting – is the assertion by these intellectual technologies (the Web more so than others, I would argue) that we would be “‘better off’ if our brains were supplemented, or even replaced, by artificial intelligence” (173). When societies were becoming literate, there was similar debate about the impact writing would have on people’s minds. Socrates feared that writing would teach people to become less dependent on their own memory, and to remember things “not from within themselves, but by means of external marks” (177). And he was right. But this is ever more true with the Internet, which encourages people to offload their memory completely to the Web, to save up precious brain space. Socrates’ wariness should prove to us just how far our line in the sand has moved, how much we have been changed by our technology. If writing was at one time anathema to us, and now many of us see nothing wrong with deferring by default to Google’s search results, it should suggest to us that perhaps we need to rethink the seemingly unquestionable mission of intellectual technologies.


Another way that technology changes us is that it homogenizes us culturally, so that we lose all rich, human ways of doing as we replace them with specifically technological ways of doing. (This is another reason for my aversion to One Laptop Per Child – pushing Western values to non-Western countries.) Carr writes, “Culture is more than the aggregate of what Google describes as ‘the world’s information.’ It’s more than what can be reduced to binary code and uploaded onto the Net. To remain vital, culture must be renewed in the minds of the members of every generation. Outsource memory, and culture withers” (197). He quotes Richard Foreman, who argues that offloading memory threatens both the depth of culture and the depth of self: “‘I come from a tradition of Western culture,’ he wrote, ‘in which the ideal (my ideal) was the complex, dense and ‘cathedral-like’ structure of the highly educated and articulate personality – a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.’ But now, he continued, ‘I see within us all (myself included) the replacement of complex inner density with a new kind of self – evolving under the pressure of information overload and the technology of the ‘instantly available.’ As we are drained of our ‘inner repertory of dense cultural inheritance,’ Foreman concluded, we risk turning into ‘pancake people – spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button’” (196).


The final way that technology changes us is that we design our societies to function around its capabilities; or as Weizenbaum says, “Such technologies become part of ‘the very stuff out of which man builds his world’” (206). Carr quotes Weizenbaum further: “‘The computer was not a prerequisite to the survival of modern society in the post-war period and beyond,’ Weizenbaum argued; ‘its enthusiastic, uncritical embrace by the most “progressive” elements of American government, business, and industry made it a resource essential to society’s survival in the form that the computer itself had been instrumental in shaping’” (207). Carr explains further: “Computers would come to mediate the activities that define people’s everyday lives – how they learn, how they think, how they socialize. What the history of intellectual technologies shows us, he warned, is that ‘the introduction of computers into some complex human activities may constitute an irreversible commitment.’ Our intellectual and social lives may, like our industrial routines, come to reflect the form that the computer imposes on them” (207). Just as Carr warned from the beginning: “The computer screen bulldozes our doubts with its bounties and conveniences. It is so much our servant that it would seem churlish to notice that it is also our master” (4). This is because, once adopted, such technologies “can never be abandoned, at least not without plunging society into ‘great confusion and possibly utter chaos’” (206). An intellectual technology, Weizenbaum wrote, “becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure” (207).


My concern is this: Everything we know as humans (from our spiritual traditions, social taboos, etc.) should tell us to resist the mesmerizing effect of these intellectual technologies, and yet we sacrifice those values because adopting the technologies is easier. But like every empire, the Age of the Computer will one day fall (just as the car will, when fossil fuels dry up and nuclear power cannot supply enough energy to the grid for an electric-car society), and when that day comes, I’m afraid we’ll be very ashamed of having eschewed our human values for technological ones.


I want to end on a more positive note. The real question about the power of technology should be this: How do we use it to our advantage? Can we see hope in the transformative effects of media? Firstly, as Langdon Winner pointed out, “technologies are not merely aids to human activity, but also powerful forces acting to reshape that activity and its meaning” (47). This should indicate to us that if we design responsibly, we can have a huge impact on our very worldview. So just as the mechanical clock “helped bring into being the scientific mind and the scientific man” (44), it is possible that a spiritualized cyberspace might create an altogether different man, perhaps an incarnation more sensitive to issues of human/nature harmony. And secondly, if we recognize that the addition of a new medium affects all other media – in McLuhan’s words, ‘It never ceases to oppress the older media until it finds new shapes and positions for them’ (89) – then if we create a new Internet, it has the power to affect the Internet we have now. In other words, we are not in competition with previous, highly popular incarnations of the Web. By innovating cyberspace creatively, we may ameliorate the negative impacts of the Web as we know it today simply by opening eyes to an alternative.