By Chris Baker (@ChrisBakerAM) and Steve Shea (@SteveShea33)
March 16, 2015
“We played well tonight, but the shots didn’t fall.” – Every coach ever
We hate the use of “random” or “luck” when describing the outcomes of athletic events. When Curry makes a three, it’s not the same as walking up to a slot machine, pulling the lever and winning the jackpot. One event requires skill, and the other does not.
Of course, Curry has spent nearly his entire life preparing to knock down a jumper when given the opportunity, but the nonrandom contributions extend further. Any specific 3-point attempt can be the result of the efforts of several skilled Golden State players. Perhaps Klay Thompson picked off a pass and started the transition that led to an open shot for Curry on the wing. Maybe Andrew Bogut set a perfect screen to spring Curry in the corner. Maybe the defense simply made a mistake. Curry doesn’t get open “by chance.”
Basketball is not random. However, the difference between making and missing can be so small that we can’t necessarily fault a player for missing a shot. Even Curry will miss the occasional open catch-and-shoot corner 3. (I think…maybe…Is he human?)
One of the major themes in sports analytics is to evaluate the process and not be fooled by the results. Sometimes good process can produce poor results, and vice versa. For example, a team can prepare exceptionally well for the draft and still miss on the prospect they select. Another team can throw darts blindfolded to pick their prospect and hit. It doesn’t mean that throwing darts is the better draft preparation.
Players miss good shots and make bad shots. Yet, we assume that the player that went 8 for 10 from the field played well, and the player that went 3 for 10 did not. The first player may have made 5 contested mid-range jumpers where he usually shoots 28%. The second player may have worked hard to get open corner 3s where he usually shoots 46%, but tonight he went 0 for 4. Perhaps we should be praising the second player and not the first.
An analogous situation occurs from the defensive perspective. We shouldn’t necessarily judge a defender based on how many shots his opponent makes. Instead, we should focus on the shots he forced his opponent to take. If those were bad shots, even if his opponent made an unusually high percentage of them, it was a solid defensive performance.
To shift the attention from the results (make or miss) to the process (the quality of the shot), we need an expected points model.
Expected points model
NBA.com used to provide shot logs for all players. These shot logs detailed each shot's distance to the hoop, the distance to the closest defender, and whether or not the player dribbled into the shot. We used this information to calculate the average shooting percentage in each situation for every player over each of the previous two seasons.
With this information, we can calculate how many points a player is expected to score given his shot opportunities in a given game. We can then aggregate the expected points across players to arrive at an expected total for the team. (We can calculate totals as well as rates, such as expected points per shot.)
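As a rough sketch of how such a lookup might be assembled: bucket each attempt by situation, then multiply the player's historical percentage in that bucket by the shot's point value. All bucket boundaries, player keys, and percentages below are illustrative assumptions, not the model we actually fit.

```python
# Illustrative expected-points lookup built from shot-log fields like those
# NBA.com once provided (shot distance, closest-defender distance, dribbles).

def situation_key(shot_dist_ft, defender_dist_ft, off_dribble):
    """Bucket a shot attempt into a coarse situation (hypothetical cutoffs)."""
    if shot_dist_ft < 8:
        zone = "rim"
    elif shot_dist_ft < 22:
        zone = "midrange"
    else:
        zone = "three"
    contest = "tight" if defender_dist_ft < 4 else "open"
    return (zone, contest, "dribble" if off_dribble else "catch")

# Hypothetical per-player shooting percentages by situation, as would be
# estimated from the previous two seasons of shot logs.
fg_pct = {
    ("curry", ("three", "open", "catch")): 0.46,
    ("curry", ("three", "tight", "dribble")): 0.38,
}

def expected_points(player, shot_dist_ft, defender_dist_ft, off_dribble):
    key = situation_key(shot_dist_ft, defender_dist_ft, off_dribble)
    value = 3 if key[0] == "three" else 2
    return fg_pct[(player, key)] * value

# An open catch-and-shoot three at 46% is worth 0.46 * 3 expected points.
print(round(expected_points("curry", 24, 6, False), 2))  # 1.38
```

Summing `expected_points` over every attempt a player took in a game gives his expected total, and summing across the roster gives the team's.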
Team analysis
No player is working in isolation. When one player gets open, he can often credit his teammates for creating the space. Similarly, defense is a team activity with rotations, switches and help. Thus, the team level may be the most appropriate domain to apply an expected points model. At least, it’s a good place to start.
The following plots the Spurs’ expected points per shot versus their actual points per shot by game in the 2014-15 season. (Note that the team total is the aggregation of the individual totals. In other words, the formula for expected points per shot pays attention to who’s taking the shot.)
As expected, actual points per shot varies more than expected points per shot. There are games when the team's actual points per shot far exceeds their expected points per shot and vice versa. For example, on April 15, 2015 against New Orleans, the Spurs had an expected points per shot of 1.04. That's a poor number by their standards. However, that night the shots fell. They scored 1.22 points per shot. That was quite different from what took place March 17, 2015 against the New York Knicks. That night, the Spurs' expected points per shot was 1.13. That shows that San Antonio was able to get (and chose to take) good shots. However, the shots didn't fall against New York. The team scored 0.94 points per shot.
The question for San Antonio is how do they want to evaluate their offense? Do they want to praise the performance against New Orleans (April 15) because the shots fell, or are they going to be more pleased with the offense against New York (March 17) where they were able to create better looks?
The situation is similar on the defensive side. On December 10, 2014, the Spurs held the Knicks to an expected points per shot of 0.93. However, that night the shots fell for New York. They scored 1.14 points per shot. On March 27, 2015, the Spurs allowed 1.08 expected points per shot from Dallas. However, the Mavs only scored 0.84 points per shot. Based on the actual points per shot, it appears as though the Spurs played much better defense against the Mavs. The expected numbers tell a different story.
It’s important to note that the opponents’ expected points per shot are based on the opposing players’ average numbers on the season. Thus, they are not restricted to performances against a particular team. For example, New York’s expected points per shot of 0.93 against the Spurs on December 10th was based on the shots each player got against San Antonio and what those players typically shoot in those situations (against the Spurs or not).
Not all contested shots are equal. When Serge Ibaka contests a shot at the rim, it looks very different than when Isaiah Thomas contests a shot at the rim. Thus, opponents' expected points per shot and actual points per shot may differ.
Last season, Houston, Golden State, Oklahoma City, Chicago, and Milwaukee saw the biggest average per-game difference between opponents' expected PPS and opponents' actual PPS. For Houston, the difference was about 3 points per 100 shots (where 0.44 FTA counts as a "shot"). In other words, Houston's opponents scored about 3 fewer points per 100 shots than their expected points per shot suggested. Golden State, Oklahoma City, and Chicago were all around 2 points per 100 shots. Milwaukee was close to 1.5.
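The arithmetic behind the "shot" definition and the per-100-shots difference can be sketched as follows, using Houston's 2014-15 averages from the table below; the helper names are ours, not from the original model.

```python
# A "shot" here is a field-goal attempt plus 0.44 of a free-throw attempt,
# the standard approximation for possessions used by a trip to the line.

def shots(fga, fta):
    return fga + 0.44 * fta

def points_per_shot(pts, fga, fta):
    return pts / shots(fga, fta)

# Houston 2014-15: opponents' expected PPS 1.085, actual PPS 1.054
# (from the season table in this article).
expected_pps = 1.085
actual_pps = 1.054
diff_per_100 = (expected_pps - actual_pps) * 100
print(round(diff_per_100, 1))  # 3.1 fewer points per 100 shots than expected
```

A positive difference means opponents converted their looks at a lower rate than their own season-long situational percentages would predict.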
All five of the teams at the top of this metric were known for having great “length” and “positional versatility” on defense. Some may wonder why the Houston Rockets, which put a great emphasis on perimeter shooting in their offense, would go for players like Josh Smith, Corey Brewer and K.J. McDaniels. We’re seeing part of the answer in these numbers. Great length and quickness can certainly influence expected points. It can mean running more players off the 3-point line, or coaxing players to pull up in mid-range (as opposed to challenging the length at the hoop). However, what we’re capturing in this metric is that these types of defenders are doing even more.
At the other end, Minnesota gave up roughly 6 more points per 100 shots than the expected model predicted. New York and Orlando were next at around 3 points per 100 shots.
San Antonio didn’t see a significant difference between their opponents’ expected points per shot and actual points per shot on average.
The table below displays the averages for all teams in each of the last two seasons.
Opponents' Expected vs. Actual Points Per Shot (PPS)
Season | Team | Opp. Exp. PPS | Opp. Act. PPS | Difference (Exp. - Act.) |
---|---|---|---|---|
2015 | MIN | 1.085 | 1.142 | -0.057 |
2015 | NYK | 1.081 | 1.114 | -0.033 |
2015 | ORL | 1.075 | 1.108 | -0.032 |
2015 | BRK | 1.068 | 1.092 | -0.024 |
2015 | LAL | 1.097 | 1.119 | -0.022 |
2015 | DEN | 1.078 | 1.099 | -0.021 |
2015 | DET | 1.067 | 1.087 | -0.020 |
2015 | CLE | 1.058 | 1.073 | -0.015 |
2015 | TOR | 1.075 | 1.090 | -0.015 |
2015 | CHO | 1.041 | 1.053 | -0.012 |
2015 | MIA | 1.080 | 1.092 | -0.012 |
2015 | BOS | 1.057 | 1.068 | -0.011 |
2015 | SAC | 1.081 | 1.091 | -0.010 |
2015 | PHO | 1.076 | 1.084 | -0.009 |
2015 | ATL | 1.054 | 1.060 | -0.006 |
2015 | MEM | 1.054 | 1.059 | -0.005 |
2015 | DAL | 1.081 | 1.085 | -0.004 |
2015 | SAS | 1.042 | 1.045 | -0.003 |
2015 | UTA | 1.060 | 1.061 | -0.001 |
2015 | LAC | 1.076 | 1.075 | 0.001 |
2015 | NOP | 1.074 | 1.070 | 0.003 |
2015 | WAS | 1.047 | 1.043 | 0.004 |
2015 | PHI | 1.096 | 1.088 | 0.008 |
2015 | IND | 1.056 | 1.046 | 0.010 |
2015 | POR | 1.040 | 1.029 | 0.011 |
2015 | MIL | 1.080 | 1.065 | 0.016 |
2015 | CHI | 1.041 | 1.020 | 0.020 |
2015 | OKC | 1.083 | 1.061 | 0.022 |
2015 | GSW | 1.058 | 1.036 | 0.022 |
2015 | HOU | 1.085 | 1.054 | 0.031 |
2014 | ORL | 1.059 | 1.091 | -0.032 |
2014 | MIL | 1.090 | 1.121 | -0.031 |
2014 | PHI | 1.105 | 1.136 | -0.031 |
2014 | UTA | 1.093 | 1.122 | -0.029 |
2014 | DET | 1.094 | 1.120 | -0.026 |
2014 | ATL | 1.066 | 1.092 | -0.026 |
2014 | MIN | 1.077 | 1.101 | -0.024 |
2014 | BRK | 1.084 | 1.104 | -0.020 |
2014 | NYK | 1.100 | 1.120 | -0.020 |
2014 | CLE | 1.076 | 1.093 | -0.017 |
2014 | WAS | 1.077 | 1.092 | -0.015 |
2014 | BOS | 1.082 | 1.097 | -0.015 |
2014 | MIA | 1.087 | 1.100 | -0.014 |
2014 | DAL | 1.102 | 1.113 | -0.011 |
2014 | SAC | 1.098 | 1.108 | -0.010 |
2014 | SAS | 1.032 | 1.040 | -0.008 |
2014 | MEM | 1.068 | 1.074 | -0.006 |
2014 | CHA | 1.050 | 1.056 | -0.005 |
2014 | NOP | 1.117 | 1.120 | -0.003 |
2014 | LAL | 1.092 | 1.094 | -0.003 |
2014 | PHO | 1.087 | 1.086 | 0.001 |
2014 | TOR | 1.082 | 1.078 | 0.004 |
2014 | POR | 1.059 | 1.055 | 0.004 |
2014 | DEN | 1.094 | 1.087 | 0.006 |
2014 | CHI | 1.043 | 1.022 | 0.021 |
2014 | GSW | 1.066 | 1.044 | 0.022 |
2014 | HOU | 1.082 | 1.058 | 0.024 |
2014 | LAC | 1.087 | 1.057 | 0.030 |
2014 | OKC | 1.090 | 1.059 | 0.032 |
2014 | IND | 1.050 | 1.008 | 0.042 |
While actual and expected averages for opponents do not always align over a season, the expected model can still be quite useful for evaluating team defensive performance. For one, it reflects the extent to which a team forced difficult shots (e.g., contested low-percentage opportunities).
Also, if not faced with significant injuries or trades, teams tend to keep player minutes and usage consistent. Thus, when comparing performances for a particular team, any added benefit from defenders that contest “better” remains close to constant.
Individual analysis
The chart below plots James Harden's actual versus expected points per shot by game for the 2014-15 season. Similar to the team level, an individual's actual points per shot varies much more than his expected points per shot.
On December 31, 2014, Charlotte held Harden to 1.11 expected points per shot. Harden’s average game that season was 1.21 points per shot. (Again, “shot” includes 0.44 free-throw attempts.) Part of Charlotte’s success was that they held Harden to just 4 free-throw attempts.
Unfortunately for Charlotte, Harden scored 1.73 points per shot. (It helped that he went 8 for 11 on threes.) Charlotte should certainly review the game film to see what they could have done better. However, the expected numbers imply that Charlotte did a better job defending Harden than the actual numbers suggest.
On March 12, 2015, Utah held Harden to 0.80 points per shot. However, the expected model reveals that it may have been more a result of Harden having an off night than anything exceptional from Utah’s defense. Utah allowed Harden to get 1.31 expected points per shot that night.
In spite of the poor performance from Harden against Utah, the Jazz might want to go back and revise how they defended him. If they continue to allow 1.31 expected points per shot from Harden, he’s going to score more than 0.80 points per shot.
Final thoughts
Good process can occasionally yield poor results. Poor process can occasionally yield good results. When teams focus too much on the results, they can be misled. For example, if the shots aren't falling for three straight nights, a coach might think he needs to mix things up. These changes may not be necessary, and the expected points model would go a long way toward determining whether the lack of offensive efficiency is due to something systematic or just a run of "bad luck."
Additional notes
- We’re not sure what an “expected turnover” looks like, but actual turnovers could be added to both the expected and actual shot production to get an expected and actual offensive rating for the team or player.
- Here, we used an entire season as the baseline to judge games in that season. Teams would likely want to see expected production following each game as the season progresses. To do this, teams could use a rolling window of the prior 60 to 82 games (dating back to the previous season) as the baseline. Rookies would likely need an artificial prior until a decent sample could be gathered from their NBA minutes.
- More detailed information about player locations could help this model. For example, a defender contesting a shot at the rim from behind the offensive player is much different from one contesting from in front of him. The shot logs did not contain this level of detail.
- There were some glitches in the data, but we did not find anything that would dramatically influence the numbers presented in this article.
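The rolling baseline from the second note above could be sketched like this; the class name, 60-game window, and field layout are illustrative choices, not a prescribed implementation.

```python
# Maintain each player's shooting baseline over a rolling window of recent
# games (the note suggests 60 to 82; 60 is used here for illustration).
from collections import deque

WINDOW = 60  # games

class RollingBaseline:
    def __init__(self):
        # (makes, attempts) per game; deque drops the oldest game
        # automatically once the window is full.
        self.games = deque(maxlen=WINDOW)

    def add_game(self, makes, attempts):
        self.games.append((makes, attempts))

    def fg_pct(self):
        makes = sum(m for m, _ in self.games)
        attempts = sum(a for _, a in self.games)
        # None signals "no sample yet" -- e.g., a rookie needing an
        # artificial prior until real NBA minutes accumulate.
        return makes / attempts if attempts else None

baseline = RollingBaseline()
baseline.add_game(5, 10)
baseline.add_game(3, 10)
print(baseline.fg_pct())  # 0.4
```

Updating the window after every game lets each night's expected numbers reflect the player's recent form rather than a fixed full-season average.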