I read a good bit on this, some positives, some negatives. If the NCAA does it right, it should be an overall positive. Many people are so hesitant because it includes a lot of "trust us" statements.
Obviously RPI had issues. The new model tries to account for everything the better prediction models consider:
1) "Team Value Index" --- seems a lot like RPI
2) Team Efficiency -- points scored/allowed per 100 possessions
3) Wins -- risks double counting, since the Team Value Index already includes wins, but this is apparently the "just win baby" portion of the formula
4) Adjusted Winning Percentage -- basically gives extra credit for road wins and extra punishment for home losses
5) Scoring Margin -- run up the score and get more credit, capped at 10 points. Interesting relationship with efficiency: a slow, steady team like UVA or Princeton is unlikely to run the score up much, and teams that go into desperation mode and start fouling may see their loss margins inflate. "They" say they evaluated the margins and found 10 to be the cap that was statistically significant without encouraging anyone to crush people's souls.
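The actual NET formula is proprietary, so nobody outside the NCAA can write it down. But two of the inputs above are simple enough to sketch. Here's a hypothetical illustration of the efficiency calculation (points per 100 possessions) and the capped scoring margin; the function names and example numbers are mine, not the NCAA's:

```python
# Hypothetical sketch of two NET-style inputs. The real NET formula is
# proprietary; names and numbers here are made up for illustration.

def efficiency_per_100(points, possessions):
    """Points scored (or allowed) per 100 possessions."""
    return 100.0 * points / possessions

def capped_margin(points_for, points_against, cap=10):
    """Scoring margin, capped at +/- 10 as the NCAA describes."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

# Example: a 72-65 win on 68 possessions
print(round(efficiency_per_100(72, 68), 1))  # offensive efficiency 105.9
print(capped_margin(84, 60))                 # a 24-point blowout still counts as +10
print(capped_margin(60, 84))                 # and a 24-point loss bottoms out at -10
```

Note how the cap interacts with the efficiency number: the blowout stops helping your margin at +10, but every extra point still pads your efficiency.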
Overall, I think it's a neat new tool. But since it has major implications, I would really have liked to see what the new system WOULD HAVE said about the last 3 years' worth of games. Better yet, hand it off to the stat nerds. Do all mid-majors get punished for some reason, while anyone who plays 5 top-ten teams (even if they lose all the games) gets a big bump? Is there a scheduling secret that helps climb the rankings? Is that nun from Loyola factored in properly?
Why is the formula proprietary? When there were 5 potential black boxes and humans contributing to a ranking, it made a little more sense (FBS). But this is more like a gymnastics competition: to build the best routine, I need to know how much the difficulty score matters, which elements weigh the most, how many points I get if I stick the landing, and whether there are certain elements I shouldn't risk doing. (I feel like my summer Olympics watching is really paying off.)
The machine learning portion is interesting. There are many seasons of data already; presumably the machine used those to tweak whatever predictive formula was the starting point. While basketball clearly evolves year by year, making some stats more relevant to outcomes, I wouldn't think there is a strong need to "learn" midseason. Why not just state that the formula will be updated to be more predictive at the end of each season?
Better yet, just show us how much better NET is than RPI at predicting how good a team is. Show us the trial runs. We want to believe.
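The trial run I'm asking for isn't complicated in principle. A toy version: take historical game results, and for each ranking measure how often the higher-ranked team actually won. Everything below (team names, rankings, results) is invented just to show the shape of the comparison:

```python
# Toy backtest: given two rankings and historical results, which ranking's
# higher-rated team wins more often? All data here is made up.

def predictive_accuracy(ranking, games):
    """ranking: dict team -> rank (lower is better).
    games: list of (winner, loser) results.
    Returns the fraction of games the higher-ranked team won."""
    correct = sum(1 for winner, loser in games if ranking[winner] < ranking[loser])
    return correct / len(games)

games = [("A", "B"), ("A", "C"), ("C", "B"), ("B", "D"), ("C", "D")]
rpi_like = {"A": 1, "B": 2, "C": 3, "D": 4}  # misses the C-over-B result
net_like = {"A": 1, "C": 2, "B": 3, "D": 4}  # calls all five games

print(predictive_accuracy(rpi_like, games))  # 0.8
print(predictive_accuracy(net_like, games))  # 1.0
```

Run that over three real seasons of games with the real RPI and NET rankings, publish the two numbers, and a lot of the "trust us" problem goes away.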
CBS has a good write up on it:
https://www.cbssports.com/college-b...long-overdue-overhaul-on-an-outdated-process/
So does Ben Snider at SB Nation:
https://www.anonymouseagle.com/2018...ll-selection-committee-metric-ranking-rpi-net
NCAA story:
https://www.ncaa.com/news/basketbal...-mens-basketball-committee-adopts-new-ranking