Another thought on Natalie Angier’s exegesis of cuteness. (Recall: "The human cuteness detector is set at such a low bar, researchers said, that it sweeps in and deems cute practically anything remotely resembling a human baby…")
The anthropomorphism of robots is especially revealing of our instincts and cognition. Eight years ago I went to a talk titled "Emergent, Situated, and Embodied: alternative AI and the aesthetics of behavior." (I got a woody from the title alone. I know, I’m a dork.) Here’s what I wrote about it afterward:
Simon Penny, an artist and engineer, came to speak at Brown a few weeks ago, and he presented some of the robotic projects he has worked on. While showing a video of people interacting with one of his odd-looking robots, he told us that he had tried to avoid making the robot look like any existing animal, because empathetic interpolation clouds our perceptions of the machine. Despite this effort, people still played with it as if it were alive. I turned to the person next to me and whispered, "Oh, it’s so cute!", because it was, well, really cute.
That’s from an online essay I wrote about consciousness as simulacrum, titled "Ghost in the Black Box."
Yesterday, I came across Simon’s essay titled "Embodied Cultural Agents: at the intersection of Art, Robotics and Cognitive Science." It explains Petit Mal, the wobbling, long-necked, two-wheeled robot he showed us. Here’s the relevant bit:
I wanted to avoid anthropomorphism, zoomorphism or biomorphism. It seemed all too easy to imply sentience by capitalising on the suggestive potential of biomorphic elements such as eyes, ears, legs, arms etc. I did not want this ‘free ride’ on the experience of the viewer. I wanted to present the viewer with a phenomenon which was clearly sentient, while also being itself, a machine, not masquerading as a dog or a president.
Though, to be honest, Petit Mal does resemble the lovechild of an ostrich and FDR.
Okay, now what happens when a roboticist TRIES to capitalize on people’s baby detectors? (Forget Furby; we’re talking science, not sales.) Ask Cynthia Breazeal, creator of Kismet and Leonardo. She wanted these robots to look cute so that people would engage with them, play with them, and even invest in them emotionally. (I have no doubt that if they pretended to yawn, I would yawn too.) The robots even use social cues to develop a pseudo-theory-of-mind. Using this strategy, they can learn from people in a natural way, enabling fluid human-robot cooperation.
I asked Cynthia a couple years ago what got her into robots. "I saw Star Wars when I was really young," she said. "I really really adored R2D2 and C3PO."
Previously: Have you hugged a colon today? (re: cuteness)
Previously: Mirror Mirror (re: robots, theory of mind, yawning, empathy)