


Body and Mind

Jon Rogers

This builds on a year-long conversation with Shannon Dosemagen that started at the Mozilla/Rockefeller Bellagio retreat in June 2018. One of the central themes that came out of that conversation was that we need to frame a new relationship between technology and ourselves: our bodies have many ways of talking to us and to other people, including (of course) but not limited to talking in the traditional sense of actual words. Our body is in fact an incredible sensor. Shannon has used this to frame much of her activist work with Public Lab, and it is something we wrote about following an event with our friends at Unbox Festival in Bangalore in late 2017.

“As Nick Shapiro, a colleague and collaborator of Shannon’s who has tracked FEMA trailers and indoor air quality, wrote in Attuning to the Chemosphere, many times the first sign of changes to our environments and personal health comes from a ‘sensorial skewing’ (Shapiro, 2015: 368). These early experiences with community monitoring, where the body is our first form of environmental sensor and analog monitoring tools prompt deeper understanding and narrative development of environment and health issues, have become embedded values in the way Shannon approaches her work.” Shannon Dosemagen, 2017 [1]

Putting the reality of our bodies before the artificiality of machines is, as a starting point, a compelling alternative to the heavily commercial interests of Silicon Valley’s tech monoculture. It also speaks to a threat of AI that the American philosopher Daniel Dennett has described: the misguided placing of power into machines’ hands through the anthropomorphizing of machines by the hands that want us to use them [REF]. People are being fooled by the illusion of intelligence into handing over more data than they would if they really understood these systems’ machine qualities rather than their artificially human qualities. This plays out clearly in the example that Google holds up in the demonstration of its Google Duplex voice system, which fools a hairdresser into believing that they are talking to a human [2]. This is not just about voice recognition and speech generation; it is about providing “natural sounds” that don’t change the information being communicated, they change the tone. Specifically, they change the tone from machine-like to human-like.

Google states that transparency is at the core of this. However, is making a machine harder to understand in order to make it sound more human a good thing? Would a clearly talking machine be something we want? Or would that freak people out?

I’m now going to share an experience that I’ve not shared before. I’m very private about a job that I did when I was younger; in fact it was my first job after graduating. I worked as Stephen Hawking’s “graduate assistant”, and I got a very generous credit from him in a recent update of his bestseller A Brief History of Time. Stephen had very particular views on AI, and with his sad passing this year I wanted to share a story, one that I feel puts a bit more context around my concerns about the quest for AI that mimics people rather than AI that solves real problems for people. It was the summer of 1994. I’d just graduated and was back at my parents’ house applying for every job possible. One of those jobs was advertised in New Scientist. It had the kind of title that you really couldn’t not apply for: “Graduate Assistant to Professor Stephen Hawking”. Long story short: I got the job. I had a wonderful, life-changing year, and I am the person I am today very much in part due to that incredible year with Stephen. I was living in various houses in Cambridge - I couldn’t quite find the right one. I think it must have been my sixth and final one that I stayed in for the longest… and this time I forgot to mention to my flatmates the job that I had. One day I heard a scream from down by the phone. “Somebody’s faking Stephen Hawking and saying they want to speak to you” came the reply. “Ah” I said. I had some explaining to do.

Stephen of course found this mildly amusing; he was very used to this kind of voice interaction. He would often phone a restaurant to make a booking. At home in Cambridge it was very much something that people expected; when away, less so. But he wanted to phone. It was his voice, he didn’t want to change it, and he wanted to use it his way. Of course he did. It was about him being him and about his authenticity as a person. Indeed, when an offer was made to update his voice synthesiser to make it more modern, he refused. One of the greatest minds of our time had a voice that sounded like a machine to those that didn’t know him. To those that did, it was very clearly and dearly him. It is this that Google Duplex is trying to remove. That their artificially “deep” mind needs to imitate notions of humanness, when one of the greatest minds was confident in being who he was with an electronically produced voice, says so much about the health of a system that is about deception rather than authenticity.

What I am trying to say here is the obvious: the relationship between us and machines is not simple. Artificiality is contextual. That context changes. It has always changed.

Let’s take the hairdresser call a few logical steps further. Why would we want to automate just the conversation with a hairdresser? Would we not want to automate hairdressing itself? Is the technology we need to cut hair not, after all, already in place in current digital production technologies, somewhere between a vinyl cutter and a CNC machine? If we can 3D print the likeness of our faces to incredible accuracy, can we not cut our hair to the same precision? Could hair be managed by machines? “Hey Google Hairplex, shape my hair like Esquire’s top Instagram feed.” A daily hair routine by machines. We would not have a hair out of place. What’s more, we wouldn’t need to decide any more. Wouldn’t it be better if our mirrors decided we needed a trim? That the trim was styled based on our Instagram likes? Cutting out the middle-magazine? The distance between social media, the internet and our physical appearance could be shortened to quite literally a hair’s width.

Why stop at hair? Our teeth could be polished and filled by increasingly sensitive and accurate machines, 3D printing our incisors to give us the perfect set of choppers. Wait a minute, though. “I’m rather proud of my British teeth,” I hear you cry. Ridiculed by American friends and their pearly whites, British teeth are very much part of the history of who I am. They are authentically me. Maintaining them would be like the Siri voice interface, where you can choose English (British) or English (American). Would AppleTeeth give us a setting for Teeth (British) or Teeth (American)? The very essence of who we are decided by a few simple sliders tucked away at the back of a phone’s settings. Artificiality with settings.

References:


  1. https://issuu.com/helloqs/docs/future_of_human_ecologies_digital
  2. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html