TL;DR: I made a Naive Bayes based difficulty estimator for the game Sentinels of the Multiverse.
I really like the card game Sentinels of the Multiverse by +Christopher Badell, +Paul Bender and +Adam Rebottaro. It's a cooperative superhero game, but different combinations of heroes, villains, and environments can make for a somewhat hard-to-predict difficulty level.
Fortunately, some folks started a project to log details of a large number of plays of Sentinels, providing the raw data needed to build a difficulty estimator. I wrote a Naive Bayes win/loss "classifier" to yield a scoring system that lets you predict the likely difficulty of a particular combination.
http://x.gray.org/sentinels-of-the-multiverse-difficulty-scores.html
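The scoring approach described above can be sketched roughly as follows. This is not the author's actual code, just a minimal Naive Bayes log-odds sketch, assuming each logged game lists its heroes, villain, and environment plus a win/loss flag (all function and component names here are hypothetical):

```python
import math
from collections import Counter

def train_scores(games, smoothing=1.0):
    """Compute a per-component log-odds score from logged games.

    games: iterable of (components, won) pairs, where components is a
    collection of hero/villain/environment names and won is a bool.
    Positive scores favor a win; negative scores favor a loss.
    Laplace smoothing keeps rarely seen components from blowing up.
    """
    wins, losses = Counter(), Counter()
    n_win = n_loss = 0
    for components, won in games:
        for c in components:
            (wins if won else losses)[c] += 1
        if won:
            n_win += 1
        else:
            n_loss += 1
    scores = {}
    for c in set(wins) | set(losses):
        p_win = (wins[c] + smoothing) / (n_win + 2 * smoothing)
        p_loss = (losses[c] + smoothing) / (n_loss + 2 * smoothing)
        scores[c] = math.log(p_win / p_loss)
    return scores

def score_combo(scores, components):
    """Under the naive-independence assumption, a combination's score
    is just the sum of its components' scores; more negative = harder."""
    return sum(scores.get(c, 0.0) for c in components)
```

Summing the independent per-component contributions is exactly what makes a Naive Bayes classifier double as an additive difficulty score, which is presumably how a single game can land at a number like -337.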
1) It really requires a lot of data. The 2000 games in the training set are enough, but not by a wide margin, given the number of parameters.
2) It is hard to compensate for change over time (e.g., rules adjustments during development).
And, no I haven't heard about anyone doing this kind of tuning. I've heard of people running their game in simulation to tune certain parameters, but that's a bit different.
Even if it's not much, taking an analytical approach (i.e., recording all the details of how each test plays out) for later analysis seems like a cool idea. Perhaps very tedious, but hey ;)
That makes our most recent play a little crazy:
Miss Information + Final Wasteland + 5 Players + Team Leader Tachyon + Omnitron-X + Haka + Chrono Ranger + Ra. And that scores a ridiculous -337. And, of course, we lost...
Most importantly, we played on advanced, and the (very, very sparse) data suggests that Miss Information takes a huge step up (about 100 difficulty points) going to advanced, compared to the typical villain, who seems to go up about 75. We also played Time Cataclysm, not Final Wasteland, bumping it up a bit, but still. I get a total difficulty for that game of -180, and we lost, and it wasn't that close. But with some slightly different card draws, and maybe nuking that first Clue, the game might have gone differently.
In my reshare on +mkgray Board Games, I posted some further thoughts on that particular game.
Given how (relatively) badly we lost, I don't want it to be true. ;)
Still...
Anyway, all that to say that any tool that can help with the analysis of a large number of recorded games would definitely be useful to me. :)
http://x.gray.org/sentinels.json
Screenshots here: http://imgur.com/a/ndx0K
APK: http://idunnolol.com/tmp/SentinelsRandomizer.apk