How AI Consensus Betting Works: The Vector 4 Method
Methodology
Vector 4 is the framework behind every Otterline pick. It combines four independent model signals, measures how well each combination has historically performed, compares those signals against live market pricing, and classifies the result into a confidence tier. The goal is a repeatable, transparent process — not hunches, not one-off locks.
The Four Vectors: Independent Model Sources
“Vector 4” refers to four separate prediction sources that are polled independently before any aggregation occurs. Keeping sources independent is critical — if models share the same input data or methodology, their agreement carries no additional signal.
Otterline Model
Proprietary NHL/NBA prediction model trained on historical game outcomes, schedule, travel, rest, and situational factors.
Dunkel Index
Long-running third-party power-rating system with a documented public track record across multiple sports and seasons.
CBS Sports Expert
Curated daily selections from the CBS Sports editorial staff, adding a human-informed signal that often captures injury and lineup news faster than stat models.
Mladek Model
An independent quant model with an underdog-value bias. Particularly effective at surfacing spots where the market has overreacted to public narrative.
Consensus Scoring: Agreement + Combo Performance
For each matchup, we count how many of the four models pick the same side (the “consensus percentage”). But raw agreement isn't enough. We also look up the historical win rate for the specific combination of agreeing models — not all pairs or triples perform equally.
| Agreement level | Example | Strength |
|---|---|---|
| 100% (4/4 models) | All four pick the same team | Strongest |
| 75% (3/4 models) | Three agree, one dissents | Strong |
| 50% (2/4 models) | Split between two pairs | Moderate — watch combo record |
| <50% | Mostly disagreement | Low — typically filtered out |
Combo win rates are tracked over rolling and all-time windows. A combo that has historically won 62% of the time when it agrees carries more weight than a combo winning at 51%.
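The scoring step above — count the agreeing models, then look up that specific combination's historical record — can be sketched as follows. The model names and the combo win rates in the lookup table are illustrative assumptions, not the production data:

```python
# Hypothetical historical win rates for specific agreeing-model combos.
# Keys are frozensets of model names; values are assumed, illustrative rates.
COMBO_WIN_RATE = {
    frozenset({"otterline", "dunkel", "cbs", "mladek"}): 0.62,
    frozenset({"otterline", "dunkel", "mladek"}): 0.58,
    frozenset({"otterline", "dunkel"}): 0.51,
}

def score_consensus(picks: dict[str, str]) -> dict:
    """picks maps model name -> side it selected, e.g. {'otterline': 'BOS'}."""
    sides: dict[str, set] = {}
    for model, side in picks.items():
        sides.setdefault(side, set()).add(model)
    # The side backed by the most models is the consensus side.
    side, agreeing = max(sides.items(), key=lambda kv: len(kv[1]))
    pct = len(agreeing) / len(picks)
    # None when the combo has no tracked record (e.g. sample too small).
    combo_rate = COMBO_WIN_RATE.get(frozenset(agreeing))
    return {"side": side, "agreement": pct, "combo_win_rate": combo_rate}

result = score_consensus(
    {"otterline": "BOS", "dunkel": "BOS", "cbs": "NYR", "mladek": "BOS"}
)
# 3/4 agree on BOS, so the Otterline+Dunkel+Mladek combo record is consulted.
```

The key design point is that the lookup is keyed on *which* models agree, not merely how many — mirroring the observation that not all triples perform equally.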
Market Comparison: Implied Probability vs. Model Confidence
A strong consensus doesn't automatically mean a good bet — the market may already agree. We compare model confidence against market-implied probability (sourced from Polymarket and book lines) to identify spots where the models are ahead of the market.
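For book lines, the market-implied probability comes from a standard conversion of the posted price (Polymarket prices are already probabilities). A minimal sketch for American moneyline odds:

```python
def implied_prob_american(odds: int) -> float:
    """Convert an American moneyline price to its implied win probability.
    Note this includes the book's vig; no de-vigging is attempted here."""
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

# A -150 favorite implies 60%; a +120 underdog implies ~45.5%.
assert round(implied_prob_american(-150), 3) == 0.6
assert round(implied_prob_american(120), 3) == 0.455
```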
- Value zone — model confidence > implied probability by 4–8 pts
- Neutral zone — model confidence ≈ implied probability (within 3 pts)
- Avoid zone — model confidence < implied probability (market leads)
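The zone thresholds translate directly into a classifier. This sketch follows the bands as stated; since the text leaves the gap between 3 and 4 points (and edges above 8 points) undefined, those cases are flagged for manual review here, which is an assumption:

```python
def classify_zone(model_conf: float, implied: float) -> str:
    """Classify a pick by its edge over the market-implied probability.
    Inputs are probabilities in [0, 1]; thresholds follow the zones above."""
    edge = (model_conf - implied) * 100  # edge in percentage points
    if abs(edge) <= 3:
        return "neutral"
    if 4 <= edge <= 8:
        return "value"
    if edge < 0:
        return "avoid"
    return "review"  # edge in the undefined 3-4 pt gap or above 8 pts

assert classify_zone(0.58, 0.52) == "value"    # +6 pts
assert classify_zone(0.50, 0.51) == "neutral"  # -1 pt
assert classify_zone(0.45, 0.55) == "avoid"    # -10 pts
```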
The Mladek Spotlight pick specifically targets the value zone — spots where the model's confidence exceeds implied odds by a meaningful margin, particularly on underdogs or mid-range pricing where the market gap is most actionable.
Confidence Tiers and Pick Buckets
After scoring, picks are sorted into tiers and assigned to curated buckets. Each bucket has its own agreement threshold, combo win rate minimum, and (where applicable) price constraints.
Power Plays (NHL)
56%+ consensus · combo win rate ≥ 55%
High-frequency daily plays with consistent historical backing. Focus on reliability and volume.
Diamond Picks (NHL)
56%+ consensus · combo win rate ≥ 55% · avoid heavy chalk
Stronger filter than Power Plays. Requires both model agreement and a documented combo edge. Price-aware.
Mladek Spotlight
Mladek signal + value vs. market + underdog lean
One pick per day. The clearest model vs. market value spot. Bet at 0.5–1 unit.
Verified Value (NBA)
Model consensus + price-aware filter
NBA's top bucket combining multi-model agreement with a value angle versus closing lines.
NBA Sniper
Strict consensus + 40–59% Polymarket-implied window
Low volume, high bar. Targets a specific market-implied range where models show clear disagreement with public pricing.
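Bucket assignment is a set of threshold gates layered on the scores computed earlier. A minimal sketch, where the exact cutoffs (the 0.75 implied-probability chalk line, matching a -300 price, and the 3-of-4 "strict consensus" reading) are assumptions for illustration:

```python
def assign_buckets(pick: dict) -> list[str]:
    """pick carries sport, consensus %, combo win rate, and implied prob.
    Returns every bucket whose gates the pick clears (illustrative rules)."""
    buckets = []
    if pick["sport"] == "NHL":
        if pick["consensus"] >= 0.56 and pick["combo_win_rate"] >= 0.55:
            buckets.append("Power Plays")
            # Diamond adds a chalk filter: a -300 favorite implies 75%.
            if pick["implied"] < 0.75:
                buckets.append("Diamond Picks")
    elif pick["sport"] == "NBA":
        # Sniper: strict consensus (read here as at least 3/4 models)
        # inside the 40-59% market-implied window.
        if pick["consensus"] >= 0.75 and 0.40 <= pick["implied"] <= 0.59:
            buckets.append("NBA Sniper")
    return buckets

pick = {"sport": "NHL", "consensus": 0.75,
        "combo_win_rate": 0.58, "implied": 0.62}
assert assign_buckets(pick) == ["Power Plays", "Diamond Picks"]
```

One pick can land in multiple buckets; the stricter buckets are subsets of the looser ones, which is why Diamond is nested inside the Power Plays gate here.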
Risk Controls and Quality Gates
Raw model output is filtered through a set of hard and soft constraints before any pick is published.
- Minimum sample constraint — combo win rates are only used once a combination has appeared at least N times in historical data.
- Overexposure guardrail — extreme chalk (e.g. -300+ favorites) is excluded from Diamond Picks and Verified Value regardless of consensus score.
- Market synchrony check — if consensus score and market probability are within a narrow window (both very high), the pick is downgraded to reduce correlated exposure.
- Injury and lineup flag — late scratch or lineup change information can override model output, particularly for NBA.
- Single-game maximum — users are advised to cap exposure at 0.5–1 unit per game regardless of confidence tier.
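The gates above chain naturally into a pre-publication filter. This sketch simplifies in two labeled ways: every gate is treated as a hard drop (the text says market synchrony only *downgrades* a pick), and the minimum sample size N and the synchrony thresholds are assumed values:

```python
MIN_COMBO_SAMPLES = 30  # assumed stand-in for the unspecified N

def passes_gates(pick: dict) -> bool:
    """Apply the quality gates in order; any failure drops the pick.
    (Simplified: downgrades are treated as drops here.)"""
    if pick["combo_samples"] < MIN_COMBO_SAMPLES:
        return False                       # combo record not yet trustworthy
    if pick["implied"] >= 0.75:            # -300 or heavier chalk
        return False
    if pick["consensus"] > 0.9 and pick["implied"] > 0.9:
        return False                       # synchrony: edge already priced in
    if pick.get("late_scratch"):
        return False                       # injury / lineup override
    return True

assert passes_gates({"combo_samples": 40, "implied": 0.55,
                     "consensus": 0.75, "late_scratch": False})
assert not passes_gates({"combo_samples": 10, "implied": 0.55,
                         "consensus": 0.75, "late_scratch": False})
```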
What Vector 4 Does Not Do
- It does not guarantee outcomes. All sports betting involves variance and every pick can lose.
- It does not account for in-game developments — picks are pre-game and not updated once posted.
- It does not replace bankroll discipline. A 60% win rate on max bets is worse than 55% on flat 1-unit sizing.
- It does not make parlay recommendations. The framework is designed for straight moneyline or equivalent single-game bets.
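The bankroll point can be made concrete with expected log-growth of the bankroll (Kelly-style accounting — an assumption, since the text does not specify a model). Assuming even (+100) odds and reading "max bets" as staking half the bankroll, a 60% win rate actually shrinks the bankroll long-run, while 55% at flat 1% stakes grows it:

```python
import math

def log_growth(p: float, f: float) -> float:
    """Expected per-bet log-growth of bankroll at even (+100) odds when
    betting fraction f of it: p*ln(1+f) + (1-p)*ln(1-f)."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

aggressive = log_growth(0.60, 0.50)  # 60% winner, half-bankroll "max bets"
flat = log_growth(0.55, 0.01)        # 55% winner, flat 1-unit (1%) stakes

# The aggressive sizing has negative expected growth despite the better
# win rate; the flat sizing compounds slowly but positively.
assert aggressive < 0 < flat
```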