Sounds to me like a Chess and Go AI both are working the same way
I oversimplified quite a bit to make it semi-understandable for non-technical people. I specifically avoided any mention of deep artificial neural networks, since I can't really explain them better than "algorithms that kind of simulate how neurons behave and make a bunch of statistical guesses."
But you can pick up the crucial differences between AlphaGo and Deep Blue just by checking their Wikipedia pages:
AlphaGo:
https://en.wikipedia.org/wiki/AlphaGo#Algorithm
Deep Blue:
https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)#Deep_Blue_versus_Kasparov
Of course, Deep Blue isn't very representative of modern state-of-the-art chess AI, but the core difference is that AlphaGo significantly reduces the need for computational power through its use of neural networks and Monte Carlo methods- so the massive number of possible combinations in Go is no longer daunting.
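If "Monte Carlo methods" sounds mysterious: the basic trick is just to estimate how good a position is by playing a bunch of random games from it and counting how often you win. A rough Python sketch of that idea (my own toy illustration, not AlphaGo's code- the "position" object and its methods are assumed stand-ins for a real Go engine):

import random

def monte_carlo_estimate(position, n_playouts=1000):
    # Estimate the win probability for the player to move by finishing
    # the game with random moves many times and counting the wins.
    # Assumes play() returns a new position rather than modifying in place.
    wins = 0
    for _ in range(n_playouts):
        p = position
        while not p.is_over():
            p = p.play(random.choice(p.legal_moves()))  # random playout
        if p.winner() == position.to_move():
            wins += 1
    return wins / n_playouts

Random playouts like this are roughly how the strongest pre-AlphaGo Go programs already worked, and on their own they're nowhere near enough- the neural networks are what cut down how much of this brute-force guessing has to happen.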
An article from The Verge explains this better:
The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a "policy" network to help AlphaGo predict the next moves, which in turn trains a "value" network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.
source:
http://www.theverge.com/2016/3/9/11185030/google-deepmind-alphago-go-artificial-intelligence-impact
(As you can read from one of Deep Blue's creators in the article, AlphaGo doesn't emphasize search nearly as much as a chess AI does; instead, it focuses on building and testing intuition.)
Basically, it's got two separate networks- one to narrow down which moves are worth looking at, and another to evaluate the resulting positions without simulating a whole game. Needless to say, this is far more advanced than the "candidate move" strategy used by chess AI- and because it keeps training against itself, AlphaGo keeps getting better and better over time.
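In very rough terms, the search ends up looking something like the sketch below. To be clear, this is just my own illustration of the general idea, not DeepMind's actual algorithm- policy_net, value_net, and the position interface are all made-up stand-ins:

import random

def policy_net(position):
    # Stand-in for the trained policy network: a guess at how promising
    # each legal move is. Here it's just a uniform placeholder.
    moves = position.legal_moves()
    return {m: 1.0 / len(moves) for m in moves}

def value_net(position):
    # Stand-in for the trained value network: an estimated win probability
    # for the player to move, with no playout at all.
    return random.random()

def search(position, depth, top_k=5):
    # Look a few moves ahead, but only down branches the policy network
    # likes (less breadth), and stop early by trusting the value network
    # instead of simulating the game to the end (less depth).
    if depth == 0 or position.is_over():
        return value_net(position)
    priors = policy_net(position)
    best_moves = sorted(priors, key=priors.get, reverse=True)[:top_k]
    # After we move it's the opponent's turn, so their win chance is
    # 1 minus ours; pick the move that leaves them worst off.
    return max(1.0 - search(position.play(m), depth - 1, top_k)
               for m in best_moves)

The real system uses Monte Carlo tree search rather than a fixed-depth loop like this, but the division of labor is the same one the Verge quote describes: the policy network trims the breadth, the value network trims the depth.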
As far as what a neural network is (oh boy): neurons (in your body) fire when an activation threshold is met. You can do something a little bit analogous with statistics as an efficient way to aggregate a bunch of complicated data- having a bunch of simulated neurons "fire" when some statistical thresholds are met, and scaling up through a bunch of layers until you get a good statistical guess about how likely you are to win with a particular playstyle, for example. Or whether a picture depicts a dog. Or whether someone has cancer. Or whether you're likely to buy something.
Basically there's some unknown function P(x) = f(a, b, c, ..., z), and your neural network (like a lot of other ML algorithms) tries to approximate that function based on whatever patterns it can pick up from existing data.
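To make that concrete, here's about the smallest "neuron" you could write- again just a toy illustration with made-up inputs and weights, not anything from AlphaGo:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed into a 0..1 "firing strength"
    # by a sigmoid, which acts like a soft activation threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Made-up example: guess P(win) from three hypothetical board features.
features = [0.7, 0.2, 0.9]   # the a, b, c inputs
weights = [1.5, -0.8, 2.0]   # training is the process of adjusting these
bias = -1.0
print(neuron(features, weights, bias))  # roughly 0.84

A real network is just a huge number of these stacked into layers, with the weights adjusted automatically until the outputs line up with the training data.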
It just highlights the huge difference machine learning makes- and why a lot of people think it's our best shot at a general artificial intelligence.