With NASCAR taking a couple of weekends off because of the Olympics, this is the perfect time to analyze how my simulations and usual handicapping techniques have performed since the pandemic. COVID-19 has thrown a complete wrench into typical activities at the track each weekend — specifically, qualifying and practice sessions have been removed from most schedules. I have felt a difference in the consistency of my wagering, and I suspect that analyzing the simulations will reveal what has happened.
I have compiled all the simulations and results in a database for races run since COVID put a halt to the 2020 season. Since the Darlington event May 17 last year, which essentially restarted competitive sporting events, 54 Cup Series races have been run. But only six have hosted practice or qualifying sessions. In the rest, qualifying was done by car owner points or by a preset formula designed to benefit the drivers who performed best in the previous race. To me, this has caused the greatest difference in handicapping strategy. I, and surely many of you, used to rely heavily on starting spots earned at a specific track and performance marks that set drivers apart in practice runs.
I hoped to learn the importance of the practice and qualifying data at the six races that used them, the importance of the starting spot when it was essentially earned at a previous event, which of my other factors had best predicted success for drivers recently, and how my simulation projections compared with previous years. Hopefully this will give us a better handle on wins and losses in handicapping NASCAR.
Overall projection of winners by the simulations
Over the last 54 races, my simulations have projected the winner nine times for an average of once every six races. When I last studied this information and tweaked my simulation formulas before 2020 in preparation for a season of “Racers Odds” with Brad Gillie and Jeff Hammond, the average was about one winner for every 5.2 races, so the frequency of picking winners has dropped. I can only assume this is due mostly to the lack of qualifying and practice data to bolster the formulas. Here is a breakdown of the winners of the last 54 races and where my simulations projected them:
Projected first — 9 times
Projected second — 9 times
Projected third — 3 times
Projected fourth — 1 time
Projected fifth — 5 times
Projected sixth — 4 times
Projected seventh — 10 times
Projected eighth — 2 times
Projected 11th — 3 times
Projected 14th — 1 time
Projected 15th — 3 times
Projected 17th — 1 time
Projected 20th — 1 time
Projected 21st — 1 time
Projected 24th — 1 time
The 24th-place projection winner was the upset at Kentucky last summer, when Cole Custer passed likely winner Martin Truex Jr. on a late restart and went on to win. The 21st-place projection winner was Michael McDowell, who won a typically unpredictable Daytona 500 in February. If you do the math, you’ll see that the winner came from my top two projections in 33% of the races and from my top five in exactly half of them. Although that sounds pretty good on the surface, both figures are a bit lower than what I became used to before the pandemic.
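For those who want to check the math, the hit rates fall straight out of the breakdown above. A quick sketch (the counts are copied from the list; nothing here is new data):

```python
# Where my simulation projected each of the 54 winners (counts from the list above).
projection_counts = {1: 9, 2: 9, 3: 3, 4: 1, 5: 5, 6: 4, 7: 10,
                     8: 2, 11: 3, 14: 1, 15: 3, 17: 1, 20: 1, 21: 1, 24: 1}

total = sum(projection_counts.values())                          # 54 races
top2 = sum(n for p, n in projection_counts.items() if p <= 2)    # winners projected 1st or 2nd
top5 = sum(n for p, n in projection_counts.items() if p <= 5)    # winners projected top 5

print(total)        # 54
print(top2, top2 / total)   # 18 of 54, or 33%
print(top5, top5 / total)   # 27 of 54, exactly half
```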
Matching up top-3 and top-5 finishes with projections
Since DraftKings offers top-3 finishing odds for each driver, it’s appropriate to go beyond the winners and look at the results that follow. Over the last 54 races there have, of course, been 162 actual top-3 finishers. My simulations projected top-5 finishes for 48 of those drivers, or 29.6%. That is also a touch below what I had become used to.
However, when you expand to top-5 finishes vs. top-5 projections — or the 108 total fourth- and fifth-place positions — my simulator has projected 54, exactly half. That is beyond my usual expectations and something to consider in the future. In all, of the 270 actual top-5 finishers, my simulations have projected 102 over the last 54 races, good for 37.8%. This is something to consider as a wagering option, or in using the top five projected drivers in matchups of your choice, as they have been fairly accurate. Additionally, of the 168 drivers I projected in the top five who didn’t finish there, 21 experienced some sort of mechanical trouble and were unable to finish the race, something impossible to predict.
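These percentages are easy to audit with the counts quoted above. Note that 102/270 rounds to 37.8%, and subtracting the 102 hits from the 270 projected top-5 slots leaves 168 projected drivers who missed:

```python
races = 54
top3_slots = races * 3      # 162 actual top-3 finishers over the span
fourth_fifth = races * 2    # 108 fourth- and fifth-place positions
top5_slots = races * 5      # 270 actual top-5 finishers

hit_top3 = 48               # top-3 finishers my simulation had projected top 5
hit_45 = 54                 # 4th/5th-place finishers projected top 5
hit_top5 = 102              # all top-5 finishers projected top 5

print(round(hit_top3 / top3_slots * 100, 1))   # 29.6
print(hit_45 / fourth_fifth)                   # 0.5
print(round(hit_top5 / top5_slots * 100, 1))   # 37.8
print(top5_slots - hit_top5)                   # 168 projected top-5 drivers who missed
```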
Overall finishes vs. projections
Before the start of the 2020 season, I ran a three-year study on my simulations, and the absolute average difference between my projections and the actual finishes was 6.49. I calculate this figure using only drivers who complete 95% or more of the laps. For the last 54 races, that difference has increased to 6.93. I am certain this decline in accuracy is due primarily to the change in qualifying and practice, along with the introduction of a handful of new tracks to the circuit. But if you consider only the races since May 2020 that hosted qualifying and practice, the projection difference is 8.08. The tracks that have had qualifying and practice sessions have been the three new tracks (COTA, Nashville and Road America) plus the flagship events (Daytona 500, Coca-Cola 600). Of those, the only race in which it could be assumed that teams knew going in what they had for car setup would be Charlotte.
To summarize, the introduction of new tracks on the circuit and the lack of practice and qualifying in most races have led to a higher level of unpredictability.
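For readers who want to replicate this bookkeeping on their own data, the projection-difference figure used throughout is simply a mean absolute error computed over drivers who completed at least 95% of the laps. A minimal sketch, with hypothetical field names (`projected`, `finished`, `laps_run`, `race_laps` are my invention for illustration, not the actual database schema):

```python
def projection_mae(results, lap_threshold=0.95):
    """Mean absolute difference between projected and actual finish,
    counting only drivers who completed >= 95% of the race laps."""
    diffs = [abs(r['projected'] - r['finished'])
             for r in results
             if r['laps_run'] >= lap_threshold * r['race_laps']]
    return sum(diffs) / len(diffs)

# Made-up sample rows, purely to show the filter at work:
sample = [
    {'projected': 3,  'finished': 7,  'laps_run': 400, 'race_laps': 400},
    {'projected': 10, 'finished': 2,  'laps_run': 398, 'race_laps': 400},
    {'projected': 5,  'finished': 20, 'laps_run': 250, 'race_laps': 400},  # DNF, filtered out
]
print(projection_mae(sample))  # (4 + 8) / 2 = 6.0
```

The same function works for any predictor: feed it starting spot, last-race finish, or a track rating in place of the simulation projection and compare the averages, which is exactly the comparison made in the sections that follow.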
How much has formulated starting spot mattered?
In most races nowadays, the starting lineup is set by a predetermined formula that rewards the drivers who performed best in the previous race. This is significantly different from the customary qualifying procedures, in which a driver earned his starting spot with his speed at that track.
Fundamentally, this has meant a huge difference in the importance of where a driver starts, as it can easily throw off a bettor’s perception of how good or bad that driver might be in that particular race, with no sound logic or data to support the thought. In the recent 48 races without qualifying and only a formula to set the lineup, the absolute difference in starting and finishing spot of drivers who completed 95% of the laps was 7.74. My simulation projection difference was 6.93, nearly a full position better for projecting performance.
Absolute projection/finish difference for other factors
So far, we know that my overall simulation has produced an average absolute projection/actual finish difference of 6.93. We should judge future handicapping strategies using this figure as the basis for comparison. I also just showed that the absolute difference for formulated starting spot to finish is 7.74.
Let’s look at four other easily measurable statistics and how they compare with finishing position. For each race, I use these factors as part of my simulation formulas. Again, note that these calculations take into account only races in which drivers completed 95% of laps:
— Last race finish vs. actual finish: 8.45
— My formulated track rating for that track vs. actual finish: 7.35
— My formulated track designation rating (that track and similar ones) vs. actual finish: 7.09
— My formulated last-10-races rating vs. actual finish: 6.88
What have we learned here? First, bettors must recognize that what happened in the last race is nowadays the least important of the factors I use in my simulations. In most cases, the tracks at consecutive races are entirely different, meaning what worked in one race typically won’t work in the next. Second, the most important factor appears to be momentum, and in previous studies I have found that 10 races’ worth of data is the ideal window for measuring it. Third, the track and track designation ratings are useful in their own right, but not as useful as their weight in the overall simulation formula would suggest.
Which tracks have proven most and least predictable?
Here is a ranking of the average absolute finish vs. my simulation projections over the last year and a half of Cup Series racing. The Handicap-ability Grades you see on the race simulations in Point Spread Weekly are determined using this logic. The grades were determined before the 2020 season using a three-year average. Obviously that means that these differences are not yet accounted for in the grades and will have an impact next time I conduct the overall study, perhaps before next season.
The tracks are listed from most predictable by my simulation to least predictable, again using only the 95% laps-completed criterion:
Rank, Track, Projection Difference
1. Richmond International Raceway: 4.47
2. Phoenix International Raceway: 4.54
3. Sonoma Raceway: 5.27
4. New Hampshire Motor Speedway: 5.28
5. Michigan International Speedway: 5.45
6. Atlanta Motor Speedway: 5.54
7. Dover International Speedway: 5.61
8. Las Vegas Motor Speedway: 5.64
9. Martinsville Speedway: 5.95
10. Charlotte Motor Speedway: 5.95
11. Kansas Speedway: 6.17
12. Pocono Raceway: 6.55
13. Darlington Raceway: 6.57
14. Charlotte Motor Speedway Roval: 6.58
15. Kentucky Speedway: 6.80
16. Texas Motor Speedway: 7.25
17. Homestead-Miami Speedway: 7.45
18. Nashville Superspeedway: 7.77
19. Circuit of the Americas: 7.81
20. Bristol Dirt Track: 8.13
21. Road America: 8.49
22. Bristol Motor Speedway: 9.38
23. Indianapolis Motor Speedway: 10.28
24. Daytona Road Course: 10.72
25. Talladega Superspeedway: 11.20
26. Daytona International Speedway: 11.70
Pretty simple analysis — the flatter and shorter the track, the more predictable the races have been. This is easily explained: at the shorter flat tracks, the cream typically rises to the top and quickly puts lesser competitors laps down. It should be noted that of the top nine tracks, five still have events to come on the 2021 schedule, including the top two. Interestingly, the championship race for 2021 is at Phoenix, the track shown as the second most predictable since the pandemic began. Perhaps this would be a good race to boost your bankroll using my simulation.
Which track designations have proven to be most and least predictable?
Using the same logic as before, here is a ranking of the track designations in terms of their level of predictability since May 2020:
Rank, Designation, Projection Difference
1. Flat short track: 5.13
2. Wide 2-mile speedway: 5.45
3. Fast superspeedway: 6.09
4. Cookie-cutter speedway: 6.45
5. Intermediate speedway: 6.57
6. High-banked concrete: 7.08
7. Flare superspeedway: 7.18
8. Dirt track: 8.13
9. Road course: 8.30
10. (Former) restrictor-plate track: 11.39
Again, the shorter flat tracks have been the most predictable, while the road courses and former restrictor-plate tracks like Daytona and Talladega have been the wild cards. Most often those tracks produce the most calamity, a natural obstacle to betting success for those who rely on historical data as the basis for their handicapping. The series has added four road courses to the circuit over the last year.
Which drivers have proven to be most and least predictable?
The last question is not a measure of which drivers have performed best, just which have performed most predictably. Consider this as you handicap the rest of the season. Consistency and predictability should be a fundamental factor in which drivers you choose to back, fade or avoid each week.
Rank, Driver, Projection Difference
1. Kevin Harvick: 5.47
2. Anthony Alfredo: 5.5
3. Matt DiBenedetto: 5.9
4. Kurt Busch: 5.96
5. Brad Keselowski: 5.96
6. Chris Buescher: 6.19
7. Austin Dillon: 6.36
8. Daniel Suarez: 6.48
9. Aric Almirola: 6.77
10. Ryan Newman: 6.92
11. Chase Briscoe: 7.00
12. Joey Logano: 7.04
13. Denny Hamlin: 7.06
14. Kyle Larson: 7.14
15. Kyle Busch: 7.17
16. Ryan Blaney: 7.22
17. Ricky Stenhouse: 7.29
18. Ryan Preece: 7.31
19. Corey LaJoie: 7.35
20. Michael McDowell: 7.37
21. Cole Custer: 7.46
22. William Byron: 7.58
23. Alex Bowman: 7.63
24. Martin Truex Jr.: 7.94
25. Darrell Wallace: 7.98
26. Chase Elliott: 8.04
27. Tyler Reddick: 8.14
28. Erik Jones: 8.63
29. Christopher Bell: 8.9
30. Ross Chastain: 11.00
Kevin Harvick has been the most predictable driver by my recent simulations. Furthermore, three of the top five drivers on this list are veterans with over a decade of experience. Consistency has been a hallmark of their careers. At the same time, some less experienced drivers like Ross Chastain, Christopher Bell, Erik Jones and even Chase Elliott have proven to be among the least predictable. Using this data should help you pinpoint drivers who can be trusted more.
After conducting this study, I see no reason to make any tweaks to the simulation formulas. Still, I would endorse a strategy of relying more heavily on the simulations at certain tracks than at others, and devoting more of your wagering to underdogs at the less predictable tracks. The odds set by bookmakers typically don’t reflect these differences in unpredictability.