A potential minimax tree for Go would likely dwarf any Warlight tree (as it dwarfs any chess tree, by many orders of magnitude). On every turn in Go one can place a stone on essentially any unoccupied point (19x19 = 361 points in total), and even after discarding obviously wrong moves there are, for a large part of the game, around 200+ valid choices - every single turn. Even if we count attacking with 1, 2, 3, ... 10 armies as different moves (which they are), a Warlight game on ME will not offer that many distinct moves until very late in the game, and after discounting obviously wrong moves (e.g. attacking a large enemy stack with half of your own large stack, or deploying all of your income on one spot in a backwater) there are usually even fewer moves worth considering.
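For a rough sense of the numbers, here is a back-of-envelope sketch; the branching factors (~250 for Go, ~35 for chess, ~30 guessed for a typical WarLight turn on ME) and the 10-ply depth are illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison of full-width search tree sizes.
# All branching factors below are rough illustrative guesses, not measured values.
GAMES = {
    "Go (early/mid game)": 250,
    "Chess": 35,
    "WarLight on ME (guess)": 30,
}

DEPTH = 10  # look-ahead depth in plies, also an arbitrary assumption

for name, branching in GAMES.items():
    leaves = branching ** DEPTH
    print(f"{name}: ~{branching}^{DEPTH} = {leaves:.2e} leaf positions")
```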
...however, AlphaGo does not just run minimax, as doing that effectively is impossible for Go (in addition to the size of the tree, evaluating a position is extremely difficult, unlike in chess - and in WarLight!). See any of the first few links at
https://www.google.ca/search?q=how+alphago+works. Unlike with a decision tree or a minimax tree, there is no easy human-readable explanation of what the parameter values of the neural networks it trains actually mean. That is kind of the beauty of it: the people who programmed AlphaGo were nowhere near as good Go players as it is, and not just because they could not scan as many plies deep, but because they did not even know which strategies to use in which situations - AlphaGo learned those for itself. I'm sure that if a similar algorithm were applied to WarLight (with some tweaks to the structure of the networks, and maybe some pre-/post-processing) it would do very well. Yes, it would take time to train, but once it is trained (and the fact that all games remain accessible in machine-readable form should ease the technical difficulties of doing so) it should perform well.
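For the curious: according to the public write-ups, the core of AlphaGo's search is a Monte Carlo tree search guided by a policy network (move priors) and a value network (position estimates), rather than exhaustive minimax. Below is a minimal sketch of the PUCT-style selection rule such a search uses to decide which move to explore next; the candidate moves and all the numbers are made-up placeholders standing in for a hypothetical WarLight adaptation, not anything from AlphaGo itself:

```python
import math

def puct_score(q, prior, child_visits, parent_visits, c_puct=1.5):
    """Average value so far (q) plus an exploration bonus that favours moves
    the policy network likes (high prior) but that have few visits so far."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Hypothetical search statistics for three candidate WarLight moves at one node.
candidates = {
    "attack with 7 armies": {"q": 0.55, "prior": 0.40, "visits": 12},
    "attack with 4 armies": {"q": 0.48, "prior": 0.35, "visits": 8},
    "deploy and hold":      {"q": 0.60, "prior": 0.25, "visits": 30},
}
parent_visits = sum(c["visits"] for c in candidates.values())

best = max(
    candidates,
    key=lambda m: puct_score(candidates[m]["q"], candidates[m]["prior"],
                             candidates[m]["visits"], parent_visits),
)
print("next move to explore:", best)
```

The point of the rule is exactly the "learned for itself" part: the priors and value estimates come from networks trained by self-play, so the search spends its limited simulations on moves the trained networks consider promising instead of expanding every legal move.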