Bloomberg reports:
Goldman Sachs’ statistical model for the World Cup sounded impressive: The investment bank mined data about the teams and individual players, used artificial intelligence to predict the factors that might affect game scores and simulated 1 million possible evolutions of the tournament. The model was updated as the games unfolded, and it was wrong again and again. It certainly didn’t predict the France–Croatia final on Sunday.
The failure to accurately predict the outcome of soccer games is a good opportunity to laugh at the hubris of elite bankers, who use similar complex models for investment decisions. Tom Pair, founder of the Upper Left Opportunities Fund, a hedge fund, recently poked fun at it on Twitter.
Of course, past data don’t always predict the future; Goldman Sachs never tells clients to make decisions solely on the basis of its models’ findings. And in any case, the model only generated probabilities of winning a game and advancing, and no team was given more than an 18.5 percent chance of winning the World Cup. The moral of the story is probably that buzz-generating technologies such as big data and AI don’t necessarily make statistical forecasting more accurate...
Goldman Sachs’ economists fed oodles of data about teams and individual players into four different types of machine-learning models to figure out the statistics’ predictive power [in 2018]. Then they ran simulations to compute the most likely score of each game. The first results of adding player-level variables, such as an athlete’s average ranking on the team and measures of his defensive and offensive abilities, looked encouraging. Thanks to the use of more granular data, made possible by AI, this year’s model should have worked better than the 2014 one.
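Goldman's actual models are proprietary, but the general technique described above — simulate many match outcomes, then count how often each side comes out ahead — can be sketched with a toy Monte Carlo in which each team's goals are drawn from a Poisson distribution. The expected-goal rates here are made-up illustrative parameters, not anything from Goldman's model:

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw one sample from a Poisson(lam) distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_match(lam_a: float, lam_b: float, rng: random.Random):
    """Simulate one match: each side's score is Poisson with its own rate."""
    return poisson_sample(lam_a, rng), poisson_sample(lam_b, rng)

def win_probability(lam_a: float, lam_b: float,
                    n: int = 100_000, seed: int = 0) -> float:
    """Estimate P(team A wins in regulation) by Monte Carlo simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        goals_a, goals_b = simulate_match(lam_a, lam_b, rng)
        if goals_a > goals_b:
            wins += 1
    return wins / n
```

Even with the model specified exactly, the output is only a probability; a team given an 18.5 percent chance of winning is expected to lose most of the time, which is why "wrong again and again" is not by itself evidence that the model was miscalibrated.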
If anything, it worked worse.
Read Hayek, Goldman:
-RW