* this is a work in progress. I’ll likely refine these thoughts, but the general construct is laid out. It may change drastically or very little as I continue researching…
We have to protect ourselves by learning how to avoid killer robots from the future. This tinfoil-hat tangential rambling will outline some basic steps and concepts to keep you alive after the advent of the Technological Singularity.
Shit! We don’t have much time to effect change, so let’s get started.
AI is learning at a fever pitch: faster CPUs to run comparison algorithms for image categorization and EXIF tagging; larger storage devices to hold very high-definition photos and videos; faster memory, GPUs, networks, codecs, more efficient code, etc…
Have you ever logged into FriendFace to scan recent events with friends and family, upload a picture or two from a party to share with friends and associates, and maybe even highlight a face in a picture and tag it with someone’s name?
Or perhaps you’ve confirmed that you are a human by answering a question a robot supposedly couldn’t, like deciphering some swirly text?
You’ve inadvertently been teaching a program:
- To interpret an array of colored pixels as a specific person, located at a specific geo-IP address and a known associate of other humans.
- To interpret visually obscured information based on prior experience.
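To make that first point concrete, here is a toy sketch of how user-supplied photo tags become supervised training data. Everything here is invented for illustration (the feature vectors, the names, and the crude nearest-centroid classifier); it is not how any real social network’s face recognition works, only the general shape of the idea: your tags are the labels.

```python
# Toy sketch: crowd-sourced photo tags as labeled training data.
# All names, vectors, and the classifier itself are hypothetical.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_label(example, centroids):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(example, centroids[label]))

# Each "photo" is a crude feature vector; the tag is what a user typed in.
tagged_photos = [
    ([0.9, 0.1, 0.2], "alice"),
    ([0.8, 0.2, 0.1], "alice"),
    ([0.1, 0.9, 0.7], "bob"),
    ([0.2, 0.8, 0.9], "bob"),
]

# "Training" is just grouping the tagged vectors and averaging per person.
by_label = {}
for features, label in tagged_photos:
    by_label.setdefault(label, []).append(features)
centroids = {label: centroid(vs) for label, vs in by_label.items()}

# A new, untagged photo gets classified by the labels users already gave away.
print(nearest_label([0.85, 0.15, 0.15], centroids))  # → alice
```

Every tag you add for free sharpens those centroids a little more.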
What are we teaching AI? Really, what are we teaching the progenitor of our future robot overlords?
- visual recognition – spatial image and object recognition
- object avoidance – follow the lines on the road while avoiding obstacles
- balance – walk as a bipedal organism within its environment, even while carrying a heavy load
- classification – vehicle, animal, organic, inorganic; put everything into a category
- efficiency – do this menial task using the fewest moves
- recognize familial and work connections based on geo-IP, meta tagging, and basic user info from cookies
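That last point can be sketched in a few lines. This is a hypothetical illustration only (the photo metadata, IP addresses, and the "tagged together more than once" threshold are all made up) of how co-tags in photos could be mined into a guess about who knows whom:

```python
# Hypothetical sketch: inferring "known associates" from photo co-tags.
# All records below are invented sample data.
from collections import defaultdict
from itertools import combinations

# Each record: the set of people tagged in one photo, plus the uploader's IP.
photo_metadata = [
    ({"alice", "bob"}, "203.0.113.7"),
    ({"alice", "bob", "carol"}, "203.0.113.7"),
    ({"carol", "dave"}, "198.51.100.2"),
]

# Count how often each pair of people appears in the same photo.
co_tags = defaultdict(int)
for people, ip in photo_metadata:
    for pair in combinations(sorted(people), 2):
        co_tags[pair] += 1

# Pairs tagged together more than once get guessed as "known associates".
associates = [pair for pair, count in co_tags.items() if count > 1]
print(associates)  # → [('alice', 'bob')]
```

Layer geo-IP and cookie data on top of this and the graph fills in fast; no single upload feels like a disclosure, but the aggregate is.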
There are countless variations of tasks being taught to AI, and yet there are a few (glaringly important, IMHO) things we are not teaching:
The Arts – it’s a big category, but it’s the one category of “things” humans do that other creatures have not yet demonstrated. Physical expression through drawing, painting, or sculpture; temporal expression through the performing arts, music, and dance. Vision, audition, gustation, olfaction, and somatosensation are the receptors for expression. We should teach computers to dance and sing, to paint, even to recite Shakespeare. Even if they won’t understand the value of the arts, an elegantly programmed gesture (a nod, a tip o’ the hat), a graceful dance step to avoid a hallway collision, or a contextually relevant Shakespearean quote will make robots easier to accept. Programming the Arts into basic protocols might help robots avoid the uncanny valley by giving them a little humanity.
Nurture and Compassion – There are already surgical robots, but after surgeries, and with an ever-increasing population, nursing robots will be in huge demand… especially in countries with negative population growth. Having a robot that can collect data and perform important tasks (blood pressure, heart rate, temperature, pain level, drug dispensing) might not by itself teach nurturing. Again, a simple gesture like stroking a hand or pushing a lock of hair away from the eyes might be among the small things we teach our android caretakers and babysitters.
Passion and Love – the strong and uncontrollable desire to achieve an accomplishment, a sense of satisfaction in achieving a goal… and the desire to share that accomplishment with another for whom you have a strong affection. These are poor definitions of passion and love, yet as humans we need them. I can’t imagine how these could be programmed into a robot, but we should challenge ourselves to figure out how to express these concepts and imbue our artificial descendants with them.
Perhaps the sex robot is the gateway to putting many of these concepts together, a crossroads where technology and the human user interface meet…