## Tuesday, March 13, 2012

### EigenBracket 2012: Using Graph Theory to Predict NCAA March Madness Basketball

Last year one of the research associates in my lab asked if I wanted to make a bracket for the NCAA March Madness tournament.  Since I don't follow sports (other than what Jenny's interest in K-State sports exposes me to), I thought I wouldn't do very well just quasi-randomly picking teams.  Instead, I thought it would be a lot more fun to perform a mathematical analysis on the Win-Loss records for all the teams and use that data to "rationally" choose the bracket.

Last year, I had mixed results.  On the positive side, my bracket out-competed the members of my lab.  However, in the NCAA bracket group my family ran, my mom - who I am pretty sure made rather random choices - beat my bracket.

This year, I'm hoping to improve on that (limited) success.  Here's my strategy:

One of the most basic statistics about a team is their win-loss record.  For example, this year Kansas State had a win-loss record of 21-10.  Intuitively, we expect teams with favorable win-loss records to beat those with poor win-loss records.

Win-Loss record isn't the whole story, though, because all wins are not created equal.  Defeating a bad team is easy (and tells us very little), but defeating a good team is indicative of skill.  Therefore, a simple (though recursive) definition of a good team is: a team that is capable of winning games against other good teams.

Our major project, then, is to identify good teams without any a priori knowledge of team quality.
To find these teams, I modeled the win-loss history as a weighted, directed network.  To do this, I analyzed 5242 games played between Division I teams in the 2011-2012 season.

Win-loss history is simply a list of game results (e.g. K-State beat Charleston Southern 72-67, then beat Loyola 74-61, then ... lost to West Virginia 80-85, ... etc).  A directed network is simply a collection of nodes (representing teams) connected by arrows, called directed edges.  Every time a team defeated another, an arrow was drawn from the losing team's node to the winning team's node to represent that game.
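This bookkeeping can be sketched in a few lines of Python, using the K-State results above.  (A minimal sketch: the dictionary-of-lists representation is just one reasonable choice, not necessarily the one I actually used.)

```python
from collections import defaultdict

# Each result is (winner, loser, winner_pts, loser_pts), using the
# K-State games mentioned above.
games = [
    ("Kansas St.", "Charleston Southern", 72, 67),
    ("Kansas St.", "Loyola", 74, 61),
    ("West Virginia", "Kansas St.", 85, 80),
]

# Directed network: an arrow from the loser's node to the winner's node
# for every game played.  We keep the point margin for weighting later.
graph = defaultdict(list)
for winner, loser, w_pts, l_pts in games:
    graph[loser].append((winner, w_pts - l_pts))
```

Each node's outgoing edges now point at the teams that beat it, which is exactly the orientation the quality-points procedure below needs.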

Note though that all wins are not created equal.  A game decided by a 20-point margin ("nuked from orbit", as I always say) is a lot different than a game decided by a single point.  In the former case, one team is clearly superior, while in the latter, the teams are roughly equal.  So let's revise our definition of a good team to be: a team that is capable of winning games against other good teams, by a large margin.

To account for this, each arrow is weighted between 0 and 1 to describe the magnitude of the victory.  Winning a game by a large margin results in an arrow with weight 1 being drawn from the loser to the winner.  By contrast, for games decided by only a few points, we'll draw arrows from both loser to winner AND winner to loser, each weighted about 0.5 (with the arrow pointing at the loser weighted slightly less).

Mathematically, edge weights can be calculated using a logistic function centered at zero.  We'll need to choose an operational definition of "winning by a large margin" (which corresponds to choosing how large a margin is required before a weight of 0.9 is assigned to an edge).  Some experimentation (guided by a tiny amount of intuition) suggests that about 6 points is enough to definitively resolve that one team is better than another.
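A minimal sketch of that weighting: the loser-to-winner edge gets weight w(margin), the winner-to-loser edge gets w(-margin), and the steepness constant is solved directly from the "6 points maps to 0.9" condition above.

```python
import math

# Solve 1 / (1 + e^(-6k)) = 0.9 for the steepness k, so that a 6-point
# margin yields an edge weight of 0.9.
K = math.log(9) / 6

def edge_weight(margin):
    """Logistic weight, centered at zero, for the loser->winner edge of a
    game decided by `margin` points (negative margin gives the reverse edge)."""
    return 1.0 / (1.0 + math.exp(-K * margin))
```

By symmetry of the logistic function, `edge_weight(m) + edge_weight(-m) == 1`, so a one-point game produces a pair of edges weighted just above and just below 0.5, as described above.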

To find good teams, I computed each team's eigenvector centrality in the network.  For those who like math (if not, skip ahead to the next paragraph): any network can be represented as an adjacency matrix, and the dominant eigenvector of that matrix can be found using power iteration.  The score assigned to the team associated with the i'th row of the adjacency matrix is given by the i'th element of the dominant eigenvector.

A simplified (and mostly accurate) way to think about this is that every team starts out with an equal number of "quality points".  Every time the computer says "Go", teams distribute their quality points to all the teams that beat them.  Thus, good teams get more quality points than they gave away (and vice versa for bad teams).  After a few rounds of this procedure, each team's quality points converge to a stable value.

[ Detail note: to prevent all the quality points from getting totally absorbed by undefeated [or nearly-so] teams, every team gives a small fraction of their points to every other team, just to keep some points in circulation. ]
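The whole quality-points procedure is, in effect, power iteration with damping (the same trick PageRank uses).  Here's a minimal pure-Python sketch on a made-up three-team network; the team names, damping factor, and iteration count are illustrative choices, not the actual values from my analysis.

```python
def rank_teams(edges, teams, damping=0.85, iters=100):
    """Power iteration with damping on a weighted loser->winner network.

    edges: dict mapping (loser, winner) -> edge weight in (0, 1]
    Returns a dict mapping each team to its converged quality score.
    """
    n = len(teams)
    # Total outgoing weight per team: the points it must give away each round.
    out_total = {t: 0.0 for t in teams}
    for (loser, _), w in edges.items():
        out_total[loser] += w

    scores = {t: 1.0 / n for t in teams}
    for _ in range(iters):
        # Every team keeps a small fraction circulating to everyone...
        new = {t: (1 - damping) / n for t in teams}
        # ...and distributes the rest to the teams that beat it,
        # in proportion to the edge weights.
        for (loser, winner), w in edges.items():
            new[winner] += damping * scores[loser] * w / out_total[loser]
        # Undefeated teams have no outgoing edges; spread their points
        # evenly so nothing gets absorbed.
        stuck = damping * sum(scores[t] for t in teams if out_total[t] == 0.0)
        for t in teams:
            new[t] += stuck / n
        scores = new
    return scores

# Toy network: A beat B and C decisively, and B beat C decisively.
teams = ["A", "B", "C"]
edges = {("B", "A"): 1.0, ("C", "A"): 1.0, ("C", "B"): 1.0}
scores = rank_teams(edges, teams)
ranking = sorted(teams, key=scores.get, reverse=True)
```

The total number of quality points is conserved each round (it always sums to 1), and sorting by the converged scores gives the ranked list described next.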

Using this procedure, we can get a list of teams ranked in order of quality.

The top 10 ranked teams are (in order):
1. Kentucky
2. North Carolina
3. Syracuse
4. Missouri
5. Michigan St.
6. Ohio St.
7. Belmont
8. VCU
9. Kansas
10. Louisville

Note that the top 5 teams are all #1 or #2 seeds, even though the algorithm had no a priori knowledge of seeding.  This provides a high-level validation of the algorithm.  Other highly-ranked teams are seeded much lower.  For example, Belmont is a #14 seed, but has been talked about as an upset risk in their first game against Georgetown.  [ My analysis apparently does not think much of Georgetown, assigning it a rank of 65, which is in stark contrast to its #3 seed status. ]

If you were wondering, the 5 lowest-scoring Division I teams - according to this particular analysis anyway - are (getting worse as you read down the list):
1. Fairleigh Dickinson
2. Binghamton
3. Tenn-Martin
4. Towson