Posted on July 30, 2023

MLB Baseball Betting Cards details:

I dug into the model further and have some interesting adjustments to share.

Model 9.3:

Yesterday I highlighted the current model’s backtested data and where the results stood with the Model 9.2 parameters.

Today I examined it for logic flaws. Sometimes I tune the parameters to best fit the data even when some of the changes don’t really make logical sense, so I worked on removing those changes and reviewed where the model ended up.

The Adjustor:

  • The Adjustor has been set to the Last 90 (L90) days for review. I wanted a large sample to see how teams are performing compared to Fangraphs’ median projections and adjust accordingly. I do like the large sample size, but I think it may be a tad too large.
  • I’m going to try reducing it to 60 days to better capture new team dynamics with callups. This should represent about a 2% swing each day, as there are about 50 games played every 60 days and 1 game result out of 50 is 2% (see the sketch after this list).
  • It will now be an L60 Adjustor.
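To make the Adjustor concrete, here’s a minimal Python sketch of how an L60 window like this could work. The post doesn’t spell out the exact math, so the field names, the `l60_adjustment` helper, and the 0.5 blend weight are my assumptions rather than the model’s actual implementation.

```python
from datetime import timedelta

def l60_adjustment(games, today, window_days=60, weight=0.5):
    """Size of the bump applied to a team's Fangraphs projection.

    games: list of dicts with 'date' (datetime.date), 'projected_wp', and
    'won' (bool) for that team. The 0.5 blend weight is an assumption.
    """
    cutoff = today - timedelta(days=window_days)
    recent = [g for g in games if g["date"] >= cutoff]
    if not recent:
        return 0.0
    actual_rate = sum(g["won"] for g in recent) / len(recent)
    projected_rate = sum(g["projected_wp"] for g in recent) / len(recent)
    # With ~50 games in a 60-day window, one new result moves actual_rate by
    # about 1/50 = 2%, which is the daily swing mentioned above.
    return weight * (actual_rate - projected_rate)

def adjusted_wp(base_projection, games, today):
    """Nudge the Fangraphs base projection by the team's L60 over/under-performance."""
    return min(max(base_projection + l60_adjustment(games, today), 0.0), 1.0)
```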

Other Forced Adjustments:

  • I’ve recently had a 2% addition factor applied to the Away team’s adjusted WP%, and I’ve also reduced RUNT bets’ WP% by an additional 2%.
    • These were adjustments tailored to the model’s results, as they helped improve overall units won in backtesting.
  • The problem with these is that they were fitted, and they didn’t really make logical sense. I was only testing bets that qualified based on the arbitrary available lines from the books and backtesting my data to see how to best qualify my bets and maximize units won. This is a good start, but long term the books can change how they do things and my model would be worse off.
  • I’m removing these adjustments, and the removal also makes sense from an overall macro view. Looking at Away teams this season, they have won about 47.5% of roughly 1600 games. My final adjusted WP% had inflated Away teams up to 50% on average; this removal corrects that flaw and gets them back in line with the actual season totals.
  • On the RUNTS side I was just being extra cautious with qualifiers, but that caution can be handled in the overall eROI criteria instead.

The Coin Flip Factor:

  • This might be my favorite addition to the recent models. It doesn’t make sense at first pass from a logical point of view, as I’ve already taken Fangraphs’ base projections and adjusted them on performance. However, the Coin Flip Factor is what I think really starts to drive the model, as it acknowledges the fact that stats alone don’t win games. Sometimes things go completely off script: the chaos factor, or as I call it, the coin flip factor.
  • Look over at the Reddit MLB daily betting forums and you’ll see everyone bemoan how the games are just coin flips. Well, I’m embracing that fact here, because yes, we can see that’s true.
  • To add in the Coin Flip Factor I take my adjusted WP% after accounting for the L60 performance, add 50% to that projection, then divide by 2 (see the sketch after this list). Overall this just pulls the games closer to 50%, so that I’m less likely to bet a game when the odds are steep to the negative side and luck is always lurking to flip the script.
  • This helps put me in a position to take advantage of the idea that games do sometimes behave like a coin flip.
  • Best of all, it proves to be an accurate adjustment, as my Away teams were showing a projected WP% average of 45% with all the other adjustments. The Coin Flip Factor raises them to 47.5%, right in line with the actual season Away team win percentage.
  • It’s logical in that it helps account for unknowns by pulling the odds for either team closer to 50%, and it’s also paired with what we would statistically expect, bringing a more balanced distribution of outcomes.
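The arithmetic itself fits in a couple of lines; here’s a small sketch of the formula described above (the function name is just for illustration):

```python
def coin_flip_wp(adjusted_wp):
    """Pull the L60-adjusted win probability halfway toward a 50/50 coin flip."""
    return (adjusted_wp + 0.50) / 2

# Example from the post: Away teams averaging a 45% adjusted WP% move to 47.5%,
# right in line with the actual season-long Away win rate.
assert abs(coin_flip_wp(0.45) - 0.475) < 1e-9
```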

The Results:

  • After implementing the L60 Adjustor and removing the 2% adjustments, I was able to test out eROI cutoffs for MAIN and RUNT bets (see the sketch after this list).
  • I eventually settled on an eROI of 13%+ for RUNTS, and here is the big shocker… the removal of MAINS as a bet.

  • After the changes there was no viable cutoff range for MAIN bets. Every eROI level showed negative units won, and the count of qualified bets got increasingly smaller as the cutoff rose.
    • @ 13%+ it was showing -62 units won on 42 qualified bets since May 1st.
  • The benefits, though, can all be seen on the RUNTS side.
    • Units bet are up to 1687 and units won are at 394.
    • Compare that to yesterday’s Model 9.2 Total Bet data of 1050 units bet and 364 units won.
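For anyone following along, here’s a rough sketch of what testing an eROI cutoff against backtest data could look like. I’m assuming eROI means expected profit per unit staked, computed from the model’s final WP% and the best available American line; the 13% cutoff comes from the post, but the formula and field names are my assumptions.

```python
def american_to_net_payout(odds):
    """Net profit per 1 unit staked on a winning bet at American odds."""
    return odds / 100 if odds > 0 else 100 / abs(odds)

def expected_roi(win_prob, odds):
    """Assumed eROI: expected profit per unit staked."""
    return win_prob * american_to_net_payout(odds) - (1 - win_prob)

def backtest_cutoff(bets, cutoff=0.13):
    """bets: list of dicts with 'wp', 'odds', 'won' (bool), 'stake' (units)."""
    qualified = [b for b in bets if expected_roi(b["wp"], b["odds"]) >= cutoff]
    units_bet = sum(b["stake"] for b in qualified)
    units_won = sum(
        b["stake"] * american_to_net_payout(b["odds"]) if b["won"] else -b["stake"]
        for b in qualified
    )
    return len(qualified), units_bet, units_won
```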

Does it make sense to remove MAINS?

MAIN review:

  • It seems odd, but previously I had the model fitted around what I would consider MAIN bets: bets expected to win 50%+ of the time.
  • Since May 1st, the new parameters have MAIN picks (qualified and unqualified) winning 473 out of 912 games, a 51.8% record. So far so good; they are accurately living up to their name of winning 50%+.
    • Those with 60%+ WP% won 44 out of 57 games, an outstanding 77%.
      • The Coin Flip Factor is suppressing the WP% at this high end, which is fine given the small sample size up here.
    • Needless to say, the accuracy holds up here: MAINS live up to their name.

RUNTS Review:

  • RUNTS with 40%+ WP% won 428 out of 855 games, which is 50%.
    • This is where the fun in the model is: after the Coin Flip Factor is in place, these games are truly winning more than their baseline. The average projected WP% of all 855 RUNTS here was 46.5% (see the calibration sketch after this list).
  • RUNTS are also proving accurate and winning more than expected.
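Here’s a hypothetical sketch of the kind of calibration check behind the MAIN and RUNT reviews above: bucket picks by their final projected WP% and compare each bucket’s average projection to its actual win rate. The bucket width and field names are assumptions.

```python
from collections import defaultdict

def calibration_buckets(picks, width=0.05):
    """picks: list of dicts with 'wp' (final projected WP%) and 'won' (bool)."""
    buckets = defaultdict(list)
    for p in picks:
        # Floor each projection to its bucket's lower edge (e.g. 0.47 -> 0.45).
        buckets[round(p["wp"] // width * width, 2)].append(p)
    report = {}
    for lower_edge, group in sorted(buckets.items()):
        avg_projected = sum(p["wp"] for p in group) / len(group)
        actual_rate = sum(p["won"] for p in group) / len(group)
        report[lower_edge] = (len(group), avg_projected, actual_rate)
    return report
```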

The problem with sports betting is that we can’t control the lines available to bet against; we can only choose our spots. And that’s the rub here: my model is showing accuracy in the WP% calculations, but when it comes time for qualification, the model takes a loss on units with MAINS.

My theory:

The books can’t win both sides, so right now my theory is that the books have intentionally shaded the lines higher on likely winners, the MAINS / 50%+ projected teams. As a result, most of the value has been sucked out of trying to make money on teams that SHOULD win the majority of the time. This is why the model has driven me toward betting exclusively big underdogs; that’s where the value is according to my backtested data, as there’s no profit to be made betting favorites of any kind.

It could be psychology, and the books know it: people like to pick winners and have winning records. Well, to me the only proof of winning is positive units. If I show a 45% win rate on bets projected to win 40% of the time, I’ll take that if it means I win big units as a result. I don’t need the 50%+ win rate on my record; I just want the profit, as that’s all that matters here.

Bankroll Management adjustment:

  • I’ve been trying to clean up how I display my units bet and factor in adjustments since the season start as I scale up bet sizes. Going forward I’m just going to display the unit sizes as what they are, at their larger values.
  • I’m removing the Adjustment Value concept on the 1st of each month and I’m replacing it with the Bankroll Churn statistic that you can see on each daily card.
  • I’ve been referring to churn as the number of times the current bet size fits into my current bankroll. On today’s card, my 7 unit bet size could be made 22.5 times before exhausting my current bankroll. Going forward, unit values on a given day will be determined by where the previous day’s card ended up in regards to this churn value (see the sketch after this list).
    • My goal is to maintain a churn level of 20-30 games, so if a card ends with the churn value under 20 games I will lower the next day’s bets by 1 unit. If it ends above 30 games, the unit size will increase by 1 unit the next day.
    • This method should keep the bankroll tuned for success, as I’ll be betting roughly the same percentage of my current bankroll each day. The 30 and 20 game churn values work out to somewhere between 3.3% and 5% per bet.
    • On April 12th I lost 11 out of 12 games, but that was on a much earlier version of the model, and it is quite rare these days for me to find that many games with qualified bets. The 20-30 churn level is something I feel comfortable with to withstand variance based on the current model parameters.
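A minimal sketch of that churn rule, assuming churn is simply the bankroll divided by the current bet amount and that the step up or down is one unit per day; the 20-30 band comes from the post, while the function and parameter names are mine.

```python
def churn(bankroll, unit_value, units_per_bet):
    """How many times the current bet size fits into the bankroll."""
    return bankroll / (unit_value * units_per_bet)

def next_units_per_bet(bankroll, unit_value, units_per_bet, low=20, high=30):
    c = churn(bankroll, unit_value, units_per_bet)
    if c < low:                      # under 20 bets of runway: step down a unit
        return max(units_per_bet - 1, 1)
    if c > high:                     # over 30 bets of runway: step up a unit
        return units_per_bet + 1
    return units_per_bet

# Today's card example: a 7-unit bet with ~22.5x churn sits inside the 20-30
# band (roughly 3.3%-5% of bankroll per bet), so the size stays at 7 units.
```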

Game Quantity:

  • Lastly, I’m making a commitment to game quantity each day. For each card I create, my goal is to have at least 25% (rounded up) of the daily slate on the Final Card. That’s a 4 game minimum on a full slate day of 15 games (see the sketch after this list).
  • The new parameters show an uptick in qualified units to 1687 from 1050. That’s 241 qualified games in the backtested data compared to Model 9.2’s 150.
  • That’s a helpful increase in quantity, but if the card has less than 25% before I declare it final, I will add stretch games, the next closest games by eROI, until the minimum is met. I will label these as something special for tracking purposes.
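A quick sketch of the 25% rule, assuming the stretch games are simply the highest-eROI picks that just missed the cutoff; the helper names are hypothetical.

```python
import math

def minimum_card_size(slate_size):
    """At least 25% of the daily slate, rounded up (15 games -> 4)."""
    return math.ceil(0.25 * slate_size)

def build_card(qualified, near_misses, slate_size):
    """near_misses: unqualified picks, assumed sorted by eROI (highest first)."""
    need = minimum_card_size(slate_size) - len(qualified)
    stretch = near_misses[:max(need, 0)]   # labeled separately for tracking
    return list(qualified), stretch
```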

That is all for today’s model changes, thanks for reading!

Current Missing Lines/Close Lines:

These are games currently missing lines, or games I’m looking to check back on because the best lines I’ve found so far are close to qualifying for a bet. The line indicated is what will qualify the bet; the odds in parentheses are the closest I’ve found (see the sketch below for how a qualifying line can be backed out of the eROI cutoff).

0 games missing lines:

3 games are close to qualifying:

  • KCR +150 (+155)
  • CHC +120 (+130)
  • MIL +130 (+140)

A couple of games like KCR and MIL are relatively close to qualifying, but I’ve already reached the 4 game, 25% threshold for today’s card, so I’m calling it final.
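For reference, here’s a hedged sketch of how a qualifying underdog line could be backed out of the 13% eROI cutoff, reusing the assumed eROI formula from earlier; the 45% WP% in the example is hypothetical, not today’s actual KCR projection.

```python
def min_qualifying_odds(win_prob, cutoff=0.13):
    """Smallest American underdog odds clearing the assumed eROI cutoff.

    expected_roi = p * b - (1 - p) >= cutoff  =>  b >= (cutoff + 1 - p) / p,
    where b is the net payout per unit staked.
    """
    b = (cutoff + 1 - win_prob) / win_prob
    return round(b * 100)   # convert net payout back to positive American odds

print(min_qualifying_odds(0.45))  # -> 151, i.e. roughly +151 or better
```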

Last checked @ NOON. The card is FINAL.

DAILY CARD:

*See Glossary for details to help explain terms and other recent model changes.

___________________________________
