A summary of:
Lanier, J. (2010). You Are Not A Gadget: A Manifesto. Allen Lane: London.
Chapter 2: An Apocalypse of Self-Abdication
Lanier interestingly proposes that "if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring" (25). Are we designing from a place of fear, on some level accepting our eventual slavery to our own creations? We seem to believe that bits are alive... and should be respected, and (absurdly) free. But they are bits! They are not conscious; we are. We deserve to be free; they don't (28). When we revere technology, we idolize it and try to become as much like it as possible. It's as if we wish we were machines. And if that's not the case, if that statement is abhorrent, then we need to reconfigure our technology to reflect and reassert our humanness.
Consider this ridiculous desire to rid us of the need for authors (i.e. people, creative ones!) and turn the internet into one massive, global book (Kevin Kelly). And how offensive is Chris Anderson's proposal that "science should no longer seek theories that scientists can understand, because the digital cloud will understand them better anyway" (26)! Why are we so defeatist? Why do we assume that the computer knows more than us? Yet we do. For example, when we accept Word's automation of the document, we tacitly accept the idea that the technology understands us better than we understand ourselves. Where is our self-esteem? Can we not assert ourselves? Why can't we recognize how much better we are at (certain) things than computers? For example, "we don't hold sentence-comprehension tournaments, because we find that task too easy, too ordinary" (34).
This is all symptomatic of a new religion that deifies information in the hopes that we can achieve immortality by uploading into a computer (29).
The question Lanier wants us to ask is: How does technology change people? For example, what does it do to empathy? "I fear that we are beginning to design ourselves to suit digital models of us, and I worry about a leaching of empathy and humanity in that process" (39). While not fully explicating the reasons for this fear, he seems to imply that digital technologies blur the lines we use to draw "circles of empathy" around things - e.g. is a digital person a person worthy of empathy? - which then threatens morality, so we must actively reconstruct our morality to ensure it reflects our values meaningfully.
The alternative - to be a bit hyperbolic - is that we unwittingly facilitate an army of zombies: people who appear human but are empty of any real humanity. Lanier takes a defiant stand against such zombie-hood:
"Humans are free. We can commit suicide for the benefit of a Singularity. We can engineer our genes to better support an imaginary hive mind. We can make culture and journalism into second-rate activities and spend centuries remixing the detritus of the 1960s and other eras from before individual creativity went out of fashion.
Or we can believe in ourselves. By chance, it might turn out we are real."
Another great quote from this chapter is one from Turing's 1950 paper: "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates" (31).
I would humbly suggest that technology should not fill that void in us that we have traditionally filled with spirituality. It is not equivalent. It, in itself, will not uplift us. The question, however, is whether we can assert our humanness over and above technological values, and in turn find ourselves reflected in our technology.