Thursday, September 29, 2011

Alone Together or Alone/Together (?)

In the first few chapters of Alone Together, Turkle invites us to think about robots, specifically what robots do to disrupt the way we see things such as subjectivity, emotion, and love.
The very first chapter begins with robots as companions, rooted in a conversation Turkle had with a reporter from Scientific American, who accused her of prejudice akin to opposition to same-sex marriage because of her position that people should not marry robots. The reporter claimed that Turkle wasn't granting robots the same rights as humans, and that prejudice against one group necessarily transfers to other groups. Angered by this moment, Turkle sets out to write about our relationship to technology and the loss of the "authenticity" of things. This will eventually relate to our exchanges with computers in communication, but Turkle spends the first half of the book on our relationship with robots.
Authenticity, as a concept, seems to be Turkle's biggest concern. Early in the first half of the book, Turkle talks about the difference between toys and robots, or, more precisely, the conflation of the two. Children have always played with toys, some even with anthropomorphic tendencies, but robots' ability to react, to move, and, seemingly, to learn changes the whole experience. Both older generations and kids purchase robots as a toy/companion, but the object often merges into being just a companion. Part of the problem, then, is that these virtual pets don't die. The pets give the owner only the illusion of responsibility: a person can take care of these toys, but they can also turn them off and walk away.
But the will to think about these robots as having underlying human characteristics isn't just a matter of robots; it seems to have to do with language itself. Turkle cites the famous conversational computer program ELIZA. When people wrote into the program, it was only a matter of time until they started to share very personal information about themselves, looking for the computer to care. It is on this "caring" that Turkle harps, because the computer doesn't actually have a sense of empathy, and she feels we are being enchanted in order to be deceived. The deception of care might lead us to believe that we're having social lives, but it will also keep us from having to think about what it "really" means to care as a communicative act between two humans.
Later in the section, Turkle discusses the many ways that we create an "I and thou," which seems to be a way of saying that the robot has personal characteristics. Sometimes this appears in the way children like to make up for the robot's inadequacies. Seeing that the robot had a speech malfunction, or that a limb wasn't working, people would still try to "care" for the robot in ways that treated it as "hurt" or "sick." When the robot couldn't talk, some children tried to use sign language to communicate with it. At other times, children would take offense at the robot's deficiencies, to the point where the researchers had to consider what they should do if the robot's inabilities had profound negative effects on the children.
One of the strongest chapters in the book so far is the one on "Communion," where a performance artist tried to perform the movements of a robot in order to understand its mindset and actions. She studied the robot and its trainer, put on a performance in which she made those movements, and then talked to Turkle about her experiences. In order to inhabit the mindset of the robot, the performance artist had to give the movements an emotional background. Sometimes this related to a "love" for the operator, and other times it was simply a mood behind the movement. In the end, Turkle arrives at a discussion of how our only way to think about movement might be in terms of emotion, emotion being a factor that we can't simply get away from.
In another great part of the chapter, a researcher pushes back against Turkle's claim that machine emotions are inauthentic while human emotions are authentic. In reality, we can't know when a person is being genuine with us or just putting on a performance; we can only take it in stride and accept that we are seeing some kind of emotion. This is similar, then, to robots, where we believe that the emotion is not authentic (we don't think the machine can feel), but at the same time we really can't know. Turkle closes the chapter by stepping out of this argument: it's not so much that we should privilege one over the other, but that we should still consider the consequences of the change and the results of the shift.


Questions: 


What does Turkle's argument mean for our considerations of subjectivity? That is, how do we know that the emotions of others are authentic? Do we have any better proof that robots' emotions are not?


Does everything that enchants also deceive? What is the full implication of this statement, and how do we reconcile it beyond just robots and technology? 


In what ways do we project emotional capacity onto more than just robots or technology, but onto all kinds of trivialized areas of our lives?
