Instructor, train thyself; AI, defeat yourself.
AI researchers have taught an AI to beat human players at the game of Go, thought to be one of man's oldest documented board games, dating back to roughly 2500 BCE.
Then the researchers had this champion Go AI train a second-generation AI. It basically trained a version of itself to be a supreme Go master.
WTF, AI scientists! Why not take the oldest human strategy game and teach a machine to be superhuman at it?
This academic paper is basically a road map showing our robot overlords (or evil scientists) what is required to eliminate humans. Some milestone tasks will likely be achieved quickly, such as learning to play Angry Birds better than a human, which is a purely virtual achievement (all in software). Others are a bit harder, such as beating a human in a bipedal 5k race (software and hardware: balance, stereoscopic vision, image recognition, etc.). Curious that AI researchers themselves have the longest projected viability in the human job market…
The two most interesting sections of the paper are in the Supplementary Tables: S5, Description of AI Milestones, and curiously S3, Demographic differences between respondents and non-respondents, which shows that most of the respondents (90+%) were male. Unfortunate for the respondents, AI-based sex skills are not listed among the AI milestones.
When Will AI Exceed Human Performance? Evidence from AI Experts
We've seen this in movies and cop shows…
The fuzzy surveillance camera captures a perp in the commission of a crime, and the detective asks the IT guy to zoom in on a certain area and enhance the picture. The result is a sharp image that identifies the perp beyond a doubt, and off they go to catch the criminal…
Working with digital images, you soon learn that zooming in on an image will eventually show you the actual pixels, and that increasing the resolution doesn't add any detail; it only increases the number of pixels that were formerly represented by a single pixel. Granted, there are image manipulation techniques to blend, soften, sharpen, or edge-detect an image, but nothing close to the fiction we've been shown.
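To make the point concrete, here's a minimal sketch (plain Python, no image libraries, hypothetical function names) of naive 2x upscaling: each pixel just becomes a 2x2 block of copies, so the image gets bigger but carries zero new information.

```python
def upscale_2x(image):
    """Nearest-neighbor 2x upscale: duplicate every pixel into a 2x2 block.

    `image` is a 2-D list of pixel values; the result has twice the
    width and height but contains no detail the input didn't have.
    """
    result = []
    for row in image:
        wide_row = []
        for pixel in row:
            wide_row.extend([pixel, pixel])   # duplicate horizontally
        result.append(wide_row)
        result.append(list(wide_row))         # duplicate vertically
    return result

# A tiny 2x2 "surveillance frame" of grayscale values:
blurry = [[10, 200],
          [30,  40]]
zoomed = upscale_2x(blurry)
# Every "new" pixel is a copy of an old one. Zooming in further just
# shows you larger blocks, never the perp's face.
```

Real resamplers (bilinear, bicubic) blend neighboring values instead of copying them, which looks smoother but still invents nothing; that's exactly the gap the Google Brain work tries to fill by guessing.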
That is, until Google Brain. Their AI algorithm attempts to (intelligently) guess what detail to fill in when increasing the pixel count. This is actually a super interesting application of AI, but also scary-ass shit.
An immediate application that comes to mind would be improving the quality of surveillance photos and videos. But what if the AI gets it wrong? Would the enhanced image be admissible in court? Maybe the enhancement was 90% accurate, and the face it produced wasn't yours but your doppelgänger's?
I'm sure that with testing and practice, AI enhancement algorithms will get more accurate… but where do we draw the line? Eyewitness accuracy is fallible. AI-enhanced photography could have similar failure rates.