Superbowl Squares is pretty simple. (And I’m going to analyse an especially simple version.) Take a 10×10 grid, assign the numbers 0-9 to the rows and the columns (at random), and assign one team to the rows and another team to the columns.

The winner is the person who picked the square matching the last digit of each team’s score at the end of the match (or, alternatively, at the end of each quarter). e.g. if the score was 18-12 to the Patriots, then the person who picked the orange “x” cell wins the pot.
Typically you pick the squares before knowing which number is assigned to each column. Why? Because some squares are probably more likely than others – it’s impossible for the score to be 1-0, so the cell (1,0) is probably less likely to win. How much more or less likely? Well, that’s what we’re going to find out.
Spoilers: tl;dr If you get a choice, pick one of 7-0, 7-4, 4-7 to the favoured team.
Calculation 1 – Simple calculation
We can look at a long history of NFL games, and check out what the final scores were in each of those games. Then calculate the frequency of each square. Using the data from pro-football-reference.com (PFR), we can calculate this fairly simply.
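The frequency calculation can be sketched as follows. The `scores` list stands in for the final scores scraped from PFR (the values below are made up for illustration):

```python
import numpy as np

# Stand-in for the PFR dataset: (winner_points, loser_points) final scores.
scores = [(20, 17), (27, 24), (23, 20), (20, 17), (31, 14)]

grid = np.zeros((10, 10))
for a, b in scores:
    # An all-games dataset doesn't know which team is "rows", so count
    # each game in both orientations -- this is what makes the grid symmetric.
    grid[a % 10, b % 10] += 1
    grid[b % 10, a % 10] += 1
grid /= grid.sum()  # convert counts to square probabilities
```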

The best squares are those with scores of the form x7-x0. (For example, the most popular score is 20-17, which occurs in 260 games).
However, we should be able to do quite a bit better than this. For one thing, this grid is symmetric, and given that the Patriots are 2-1 on (~66% chance of winning), we might expect the true grid to be less symmetric than this.
Calculation 2 – using a simple winner model

We create the same grid as before, but conditioning on “Patriots win”. Comparing this with the symmetric grid, we see that conditioning on winning makes 4-3 relatively more likely and 3-4 relatively less likely. (7-1 is relatively less likely; since this is just last digits, we shouldn’t necessarily expect “larger digit – smaller digit” scores to become more likely.)
A simple model would be to take the odds for the match, and do a weighted average of the winner grid and the loser grid. This appears as follows:
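A sketch of the weighted-average model, with `win_grid` and `lose_grid` standing in for the grids conditioned on “Patriots win” / “Patriots lose” (random placeholders here, since the real grids come from the PFR data):

```python
import numpy as np

rng = np.random.default_rng(0)
win_grid = rng.random((10, 10))
win_grid /= win_grid.sum()      # a stand-in "conditioned on win" grid
lose_grid = win_grid.T          # the loser grid is the winner grid transposed

p_win = 2 / 3                   # Patriots at 2-1 on, ~66%
blended = p_win * win_grid + (1 - p_win) * lose_grid
```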

This still looks fairly similar to the original grid, so let’s look at the differences to see which squares are most improved.

So the biggest winners (red) are 1-0, 4-0, 3-0, 8-7, 4-3, 1-7. (NB, the difference matrix is skew-symmetric by construction*).
It appears as though the loser scoring a multiple of 10 (including 0) is more likely. I might look into why at a later date.
Calculation 3 – adjusting using market odds
So far we’ve not done anything especially complicated. From here on we go down a rabbit hole.
Total score

Looking at the Betfair market for Total Score, we can see that, compared to our data source, the market expects more points to be scored. (A simpler way to see this: the median total in our data set is 40; Betfair implies 48.) (Note to self – is this particular to these teams, or have score lines increased more recently?)
We can then approximate the full Betfair points distribution by taking the PFR shape and fitting it to the Betfair values.
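One simple way to do this fitting: rescale the empirical totals so their median moves from 40 to the market-implied 48, keeping the shape otherwise. (The totals below are made up; the real ones come from PFR.)

```python
import numpy as np

# Stand-in sample of PFR total scores, chosen so the median is 40.
totals = np.array([37, 40, 44, 33, 51, 40, 46, 38, 55, 40])

market_median = 48  # Betfair-implied median total
scaled = totals * market_median / np.median(totals)
```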
Points difference
We can learn more from Betfair than just that the Patriots are favourites. Betfair has odds on score differences with handicaps from Eagles -10 to Eagles +15. (These odds give the market probability that the Eagles win after you add “x” to the Eagles’ scoreline.)
Firstly, let’s take a look at the empirical points distribution. This appears to be very similar to a logistic distribution, but with quite a bit of noise close to zero. (This is not too surprising, given what we’ve seen before with the last digits across all matches).
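One cheap way to fit the logistic is moment matching: a logistic with location mu and scale s has mean mu and standard deviation s·π/√3, so both parameters fall out of the empirical points differences (the diffs below are made up for illustration):

```python
import numpy as np

# Stand-in sample of points differences (favourite minus underdog).
diffs = np.array([3, -7, 10, 4, -3, 14, 7, -10, 3, 6])

mu = diffs.mean()                      # logistic location
s = diffs.std() * np.sqrt(3) / np.pi   # logistic scale, from std = s*pi/sqrt(3)

def logistic_cdf(x):
    """P(points difference <= x) under the fitted logistic."""
    return 1 / (1 + np.exp(-(x - mu) / s))
```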

Now let’s look at the Betfair market:
The blue dots are Betfair probabilities, which match the empirically-fitted logistic fairly closely; i.e. a shifted logistic function is a reasonable approximation for both the empirical distribution and the distribution for this match. (And we can compute odds-ratios between the two using these functions.)
Putting it all together
We have the frequencies of scores from our empirical distribution and we are interested in the frequencies of scores given our new information. Using Bayes, (and crossing our fingers and hoping the correlation between total score and points difference doesn’t matter too much), the probability of each score should now be:
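Under that independence assumption, the update amounts to multiplying the empirical probability by the two odds-ratios. Writing $P_{\text{emp}}$ for the PFR distribution and $P_{\text{mkt}}$ for the Betfair-implied one, the formula is (up to normalisation) a guess at the following shape:

```latex
P_{\text{new}}(h, a) \;\propto\; P_{\text{emp}}(h, a)\cdot
\frac{P_{\text{mkt}}(h + a)}{P_{\text{emp}}(h + a)}\cdot
\frac{P_{\text{mkt}}(h - a)}{P_{\text{emp}}(h - a)}
```

where $h + a$ is the total score and $h - a$ the points difference.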


The differences – fairly similar to the simpler model although slightly more extreme in places.

The more informative way to plot this is not the absolute difference (which is what you might care about if you were marking your P&L on this), but the ratio of the change (to see where the new probability mass is appearing). This looks as follows (using the log ratio).

Looking at this, it appears that losing with a score ending in 5 is hard. This is presumably because both 5 and 15 are hard totals to reach, and by the time you get to 25 you are much more likely to be winning.
This whole exercise has got me wondering: could a toy model match these results? (Each play selected at random from the scoring plays, at the frequencies achieved in the league. What would happen as you increase the expected number of plays? What happens in the limit as the number of plays goes to infinity?)
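A minimal version of that toy model might look like this. The scoring-play values and their frequencies below are made-up placeholders, not league data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scoring plays (TD+XP, FG, TD, safety, TD+2pt) and frequencies.
play_points = np.array([7, 3, 6, 2, 8])
play_probs = np.array([0.45, 0.35, 0.10, 0.03, 0.07])

def simulate_game(n_plays=8):
    """Return the (row_team, col_team) last digits for one simulated game."""
    pts = rng.choice(play_points, size=n_plays, p=play_probs)
    team = rng.integers(0, 2, size=n_plays)   # which team scored each play
    a, b = pts[team == 0].sum(), pts[team == 1].sum()
    return a % 10, b % 10

grid = np.zeros((10, 10))
for _ in range(10_000):
    i, j = simulate_game()
    grid[i, j] += 1
grid /= grid.sum()
```

Varying `n_plays` would then show how the square probabilities flatten out (or don’t) as the number of plays grows.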
* Writing W for the winner grid (so the loser grid is Wᵀ and the symmetric grid is (W + Wᵀ)/2), the difference is

p·W + (1 − p)·Wᵀ − (W + Wᵀ)/2 = (p − ½)(W − Wᵀ),

which is clearly skew-symmetric.