Tuesday, April 19, 2016

Is AlphaGo really such a big deal?

In some ways, yes, as described in this Quanta article with the same title as this post. But one point is not mentioned often enough:

For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts; that’s about what the brain takes, it’s comparable to a light bulb.
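The gap in that comparison is easy to make concrete with a bit of arithmetic. The figures below are illustrative only (the quote itself hedges on the exact power draw); I assume "hundreds of kilowatts" means roughly 200 kW:

```python
# Rough energy comparison from the quote above.
# Both figures are assumptions, not measurements:
alphago_watts = 200_000  # "hundreds of kilowatts" -- taken here as ~200 kW
brain_watts = 30         # roughly what a human brain consumes

ratio = alphago_watts / brain_watts
print(f"AlphaGo's assumed power draw is about {ratio:,.0f}x the brain's")
```

Under these assumed numbers, the machine is spending several thousand times more power than its human opponent, which is the efficiency gap the quote is pointing at.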

The quote above is from Geoff Hinton. Also interesting is this article on Demis Hassabis, one of the founders of DeepMind, on what the future might hold:

Most AI systems are “narrow”, training pre-programmed agents to master a particular task and not much else. So IBM’s Deep Blue could beat Garry Kasparov at chess, but would struggle against a three-year-old in a round of noughts and crosses. Hassabis, on the other hand, is taking his inspiration from the human brain and attempting to build the first “general-purpose learning machine”: a single set of flexible, adaptive algorithms that can learn – in the same way biological systems do – how to master any task from scratch, using nothing more than raw data.
