College Football Ranking System: The Logical Strength Rating (LSR)

What This Is

This is a college football ranking system built on philosophical logic rather than statistical correlations. Instead of asking "what patterns predict wins?", we ask a more fundamental question: "What does it mean for one team to be superior to another?"

The result is the Logical Strength Rating (LSR) — a ranking system that combines what teams have actually accomplished on the field (roughly 65%) with the structural realities of talent and resources in college football (roughly 35%).

Why We Need This

The Problem with Existing Rankings

Most ranking systems fall into two camps:

  1. Pure Results Systems (like raw win-loss records): A 12-0 team from the Sun Belt automatically ranks above an 11-1 SEC team, even if the Sun Belt team hasn't played anyone ranked and the SEC team lost by 3 points to the #2 team in the country.
  2. Pure Predictive Systems (like computer models): These try to predict who would win head-to-head matchups, often relying on statistical correlations (e.g., "teams that rush for 200+ yards win 73% of the time"). But correlation isn't causation, and these models can contradict what actually happened on the field.

Our Approach: Logic Over Statistics

We built this system from first principles — starting with the fundamental definition of what a game is, what winning means, and what makes a team superior. No shortcuts, no statistical tricks. Just pure logical reasoning applied to the structure of college football.

The Core Philosophy: Six Logical Principles

1. Victory is the Goal (But Margin Matters Too)

The Logic: The fundamental purpose of a team is to win games. A win is better than a loss, period. But how you win provides additional evidence.

Solution: Margin of victory counts, but it is capped at ±31 points — and the cap itself scales with opponent quality (a dynamic cap).

Why dynamic? Beating Ohio State 31-0 is genuinely more impressive than beating Morgan State 60-0. The dynamic cap rewards quality performance, not stat padding.
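A minimal sketch of the dynamic cap idea. The document states only the endpoints (full 31-point credit against elite opponents, roughly 12 against the weakest); the linear interpolation and the 0-to-1 `opponent_quality` scale here are assumptions for illustration:

```python
def dynamic_margin_cap(opponent_quality: float,
                       full_cap: float = 31.0,
                       floor_cap: float = 12.0) -> float:
    """Scale the margin cap by opponent quality in [0, 1].

    Hypothetical linear interpolation: an elite opponent (quality 1.0)
    allows the full 31-point cap; a terrible opponent (quality 0.0)
    allows only ~12 points of credit.
    """
    q = min(max(opponent_quality, 0.0), 1.0)
    return floor_cap + q * (full_cap - floor_cap)

def capped_margin(point_diff: float, opponent_quality: float) -> float:
    """Clamp a game's point differential to the dynamic cap."""
    cap = dynamic_margin_cap(opponent_quality)
    return max(-cap, min(cap, point_diff))

# Beating an elite team by 31 keeps full credit; a 60-point blowout
# of a weak team is clamped near 12.
print(capped_margin(31.0, 1.0))  # 31.0
print(capped_margin(60.0, 0.0))  # 12.0
```

Under this sketch, running up the score against an overmatched opponent earns almost nothing beyond the floor.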

2. Strength of Schedule Matters (Recursively)

The Logic: Beating a great team means more than beating a bad team. But how do we know which teams are great? We can't just look at their records—we need to consider who they played too.

Solution: We define each team's strength in terms of all other teams' strength. This creates a system of equations that must be solved simultaneously: every team's rating depends on the ratings of everyone it has played.

This is recursive—your opponents' strength depends on their opponents' strength, and so on. The system iterates until all the ratings stabilize into a logically consistent solution.
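The iteration can be sketched as a simple fixed-point loop. Everything here — the `avg_margin + avg_opp` update, the re-centering, the iteration count — is illustrative, not the production LSR formula:

```python
from typing import Dict, List, Tuple

Game = Tuple[str, str, float]  # (team, opponent, capped margin for team)

def solve_ratings(games: List[Game], iterations: int = 100) -> Dict[str, float]:
    """Iterate team ratings until they stabilize: each team's rating is
    its average margin plus the average rating of its opponents."""
    teams = {t for g in games for t in (g[0], g[1])}
    ratings = {t: 0.0 for t in teams}
    schedule: Dict[str, list] = {t: [] for t in teams}
    for team, opp, margin in games:
        schedule[team].append((opp, margin))
        schedule[opp].append((team, -margin))
    for _ in range(iterations):
        new = {}
        for t in teams:
            opps = schedule[t]
            avg_margin = sum(m for _, m in opps) / len(opps)
            avg_opp = sum(ratings[o] for o, _ in opps) / len(opps)
            new[t] = avg_margin + avg_opp
        # Re-center so ratings stay comparable across iterations.
        mean = sum(new.values()) / len(new)
        ratings = {t: r - mean for t, r in new.items()}
    return ratings

# A beats B, B beats C, A beats C: the loop recovers A > B > C.
r = solve_ratings([("A", "B", 10.0), ("B", "C", 10.0), ("A", "C", 10.0)])
```

Each pass feeds updated opponent ratings back into every team's rating; because each update is a contraction, the loop settles into a single consistent solution.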

3. Recent Performance Matters More (But Not Too Much)

The Logic: Teams change over time. A team in Week 1 is not the same team in Week 10—players improve, get injured, coaching adjusts, chemistry develops.

Solution: We weight games using a square root progression. Game 1 gets weight 1.0, Game 2 gets weight 1.41, Game 3 gets 1.73, and so on. By Game 12, recent games matter about 3.5x more than your first game.

Why square root? It's the Goldilocks solution: the weights grow fast enough that recent form dominates, but slowly enough that early-season games never stop counting. A linear or exponential schedule would all but erase September; a flat schedule would ignore development entirely.
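The weight progression above is just sqrt(k) for game number k, which a few lines of Python reproduce:

```python
import math

def game_weights(num_games: int) -> list:
    """Square-root recency weights: game k gets weight sqrt(k)."""
    return [math.sqrt(k) for k in range(1, num_games + 1)]

w = game_weights(12)
print(round(w[0], 2), round(w[1], 2), round(w[2], 2))  # 1.0 1.41 1.73
print(round(w[-1] / w[0], 2))  # 3.46 -> game 12 counts ~3.5x game 1
```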

4. Context Must Be Neutralized (Home Field Advantage)

The Logic: If every home team in the league wins by an average of 3 points more than they "should," then winning at home by 3 doesn't prove you're 3 points better—it just proves you played at home.

Solution: We calculate the league-wide Home Field Advantage (HFA) by averaging all games. Then we subtract that advantage from every home team's performance and add it to every away team's performance.

This "neutralizes" all games to equivalent conditions.
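A sketch of the neutralization step, assuming the HFA is a single league-wide average of home-team margins (the real system may weight or filter games differently):

```python
from typing import List, Tuple

def league_hfa(home_margins: List[float]) -> float:
    """League-wide home-field advantage: the average margin of home
    teams across all games (neutral-site games excluded upstream)."""
    return sum(home_margins) / len(home_margins)

def neutralize(home_margin: float, hfa: float) -> Tuple[float, float]:
    """Return (home_adjusted, away_adjusted) margins on a neutral field.

    Subtract the HFA from the home team's margin; the away team's
    adjusted margin is the mirror image.
    """
    home_adj = home_margin - hfa
    return home_adj, -home_adj

# If home teams win by 3 on average, a 10-point home win is worth
# only 7 points on a neutral field.
hfa = league_hfa([3.0, 7.0, -1.0, 3.0])  # 3.0
print(neutralize(10.0, hfa))  # (7.0, -7.0)
```

Note the symmetry: the away team is credited with exactly what the home team loses, so neutralization never changes the total evidence in a game, only who it belongs to.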

5. Sample Size Matters (Confidence Adjustment)

The Logic: A team that has played 1 game provides minimal evidence. A team that has played 12 games provides much more reliable evidence.

Solution: We apply a piecewise linear confidence adjustment. Teams with fewer games get their rating "pulled toward zero" (the league average) until they accumulate sufficient evidence.
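A minimal sketch of the shrinkage idea. The actual breakpoints aren't given in this document, so the single `full_at = 6` threshold below is an assumption:

```python
def confidence_factor(games_played: int, full_at: int = 6) -> float:
    """Piecewise linear confidence: 0 games -> 0.0, ramping linearly
    to 1.0 at `full_at` games. The breakpoint is hypothetical; the
    production system's thresholds may differ."""
    return min(games_played / full_at, 1.0)

def adjusted_rating(raw: float, games_played: int) -> float:
    """Shrink a raw rating toward 0 (the league average) by confidence."""
    return raw * confidence_factor(games_played)

print(adjusted_rating(20.0, 3))   # 10.0 (half confidence)
print(adjusted_rating(20.0, 12))  # 20.0 (full confidence)
```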

6. Structural Advantage is Real (The Talent Component)

The Logic: College football differs from professional sports. In the NFL, salary caps and draft systems create competitive parity. In college football, there is systematic, persistent inequality in talent and resources.

The Reality: A handful of programs sign top-ranked recruiting classes year after year, and those advantages persist — they do not reset each season.

Our Solution: Hybrid Approach (65% Demonstrated + 35% Structural)

We calculate two separate ratings:

Demonstrated LSR (65% weight)
What the team has actually accomplished on the field this season—wins, margins, strength of schedule, adjusted for context.

Structural Advantage (35% weight)
The measurable, persistent advantages a team possesses — its recruited talent and program resources.

We combine these into a Talent Index (0-100 scale), then transform it to a Structural Advantage score (±10 scale).
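One way to sketch the Talent Index → Structural Advantage transform. The document specifies only the two scales (0-100 in, ±10 out); the linear map centered on an average index of 50 is an assumption:

```python
def structural_advantage(talent_index: float) -> float:
    """Map a 0-100 Talent Index onto a +/-10 Structural Advantage scale.

    Assumes a simple linear transform centered on a league-average
    index of 50; the production transform may be nonlinear.
    """
    return (talent_index - 50.0) / 5.0

print(structural_advantage(100.0))  # 10.0
print(structural_advantage(50.0))   # 0.0
print(structural_advantage(0.0))    # -10.0
```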

The Final Formula:

Final LSR = (0.6313 × Demonstrated Results) + (0.3687 × Structural Advantage)
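The formula is a one-line weighted sum; plugging in rows from the sample rankings table later in the document reproduces the Final column:

```python
W_DEM, W_STR = 0.6313, 0.3687  # the empirically optimized weights

def final_lsr(demonstrated: float, structural: float) -> float:
    """Blend demonstrated results with structural advantage."""
    return W_DEM * demonstrated + W_STR * structural

# Ohio State's row from the sample rankings table:
print(round(final_lsr(23.11, -0.24), 2))  # 14.5
# Indiana: a near-equal resume, but a real talent deficit:
print(round(final_lsr(22.33, -5.56), 2))  # 12.05
```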

How These Weights Were Determined:

Rather than choosing weights arbitrarily, we derived them empirically through optimization:

  1. Objective: Maximize prediction accuracy on historical head-to-head matchups
  2. Method: Golden section search minimizing log-loss
  3. Data: 531 games split into training (75%) and test (25%) sets
  4. Result: Optimal weights = 63.13% demonstrated, 36.87% structural
  5. Validation: 87.12% test accuracy
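Golden-section search itself is a standard bracketing method for one-dimensional unimodal objectives. A self-contained sketch, with a toy quadratic standing in for the held-out log-loss (the real objective and data are not reproduced here):

```python
import math

def golden_section_min(f, lo: float = 0.0, hi: float = 1.0,
                       tol: float = 1e-5) -> float:
    """Golden-section search for the minimum of a unimodal f on [lo, hi].

    Each step shrinks the bracket by the golden ratio, keeping the
    interior point that yields the lower objective value.
    """
    inv_phi = (math.sqrt(5) - 1) / 2  # ~0.618
    a, b = lo, hi
    while abs(b - a) > tol:
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Toy unimodal objective standing in for held-out log-loss as a
# function of the demonstrated-results weight w:
w_opt = golden_section_min(lambda w: (w - 0.6313) ** 2)
print(round(w_opt, 3))  # 0.631
```

In the real pipeline the lambda would be replaced by a function that re-fits the blend at weight w and returns log-loss on the held-out 25% split.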

Why This Balance Works: Demonstrated results carry nearly twice the weight, so the field decides; structural advantage then separates teams with comparable resumes.

Example: How to Read the Rankings

Rank  Team            Record  Dem LSR  SA     Final
1     Ohio St         7-0     23.11    -0.24  14.50
2     Indiana         7-0     22.33    -5.56  12.05
3     Alabama         6-1     16.34     0.00  10.32
14    South Florida   6-1     15.10    -7.44   6.79

(Dem LSR = demonstrated rating; SA = Structural Advantage on the ±10 scale; Final = 0.6313 × Dem LSR + 0.3687 × SA)

Ohio State (#1): The nation's best resume (23.11 demonstrated) with a near-neutral structural score, so the final rating is essentially resume-driven.

Indiana (#2): Nearly Ohio State's equal on the field, but a significant talent deficit (SA = -5.56) trims the final rating.

The Story: Indiana's dominance on the field keeps them at #2 despite the talent gap. Alabama's elite talent keeps them #3 despite a loss. South Florida's G5 talent prevents them from cracking the top 10 despite matching Alabama's record. The system balances what you've done (63%) with what you have (37%).

Philosophy: Why This Approach Matters

The Problem with Pure Empiricism

If you only rank based on game results, you get absurdities: a 12-0 team from a weak conference outranks an 11-1 power-conference team whose only loss came by 3 points to the #2 team in the country.

The Problem with Pure Realism

If you only rank based on talent/resources, you get different absurdities: a struggling blue blood outranks an undefeated upstart simply because its roster is more talented.

The Hybrid Solution

By weighting demonstrated results at roughly 65% and structural advantage at roughly 35%, we acknowledge both realities:

  1. What happens on the field matters most (empiricism)
  2. But structural inequality is real and measurable (realism)

This isn't "giving blue bloods a bonus"—it's acknowledging that college football has extreme talent stratification that creates genuine capability differences. The team that wins the national championship almost always has a top-10 recruiting class. That's not bias; that's empirical reality.

Transparency & Reproducibility

Everything about this system is transparent: the principles, the formulas, the weights, and the procedure that produced the weights are all laid out above.

You can run this yourself, modify the weights, and see how it changes. That's the difference between a logical framework and a black box.

Frequently Asked Questions

Q: Why not 50-50 demonstrated vs. talent?
A: We optimized the weights empirically by maximizing prediction accuracy on historical games. The data showed that 63-37 produces the best predictions. This makes demonstrated results almost twice as important as talent.

Q: Why cap point differential at ±31, and why is it dynamic?
A: Because winning 70-0 vs. 31-0 against the same opponent doesn't prove you're twice as good. But the cap adjusts based on opponent quality—full credit (31 pts) vs. elite teams, minimal credit (~12 pts) vs. terrible teams.

Q: Doesn't this favor big-name programs?
A: No—it favors programs with elite talent, which happen to be big-name programs. The correlation isn't arbitrary; it reflects decades of recruiting success. And demonstrated results still count almost twice as much (63% vs 37%).

Q: What if a G5 team goes undefeated?
A: They'll rank highly if they dominate their competition. An undefeated G5 team with strong demonstrated LSR (say, 20.0) and mid-tier talent (SA = -5.0) would typically rank top 10-15. That's fair—they get credit for dominating while acknowledging talent limitations.

Conclusion: Logic Over Luck

This ranking system is built on a simple premise: superiority should be defined logically, not statistically.

We started with first principles—what is a game, what is victory, what makes a team better—and built upward from there. The result is a system that rewards winning, accounts for schedule strength and context, discounts small samples, and acknowledges the sport's structural realities.

It's not perfect—no ranking system is. But it's honest about what it measures and rigorous about how it measures it.

In a sport where controversy over rankings dominates the conversation every year, we offer something different: a philosophically coherent answer to the question "who's better?"