A comprehensive report by Jim Mancari
When Ivy League rivals Harvard and Yale met in a rowing race in 1852, it marked the beginning of intercollegiate sports. As other sports such as football and basketball gained popularity around the turn of the 20th century, a system of organization became desperately needed. In 1905, President Theodore Roosevelt called a series of special White House meetings to address the rising number of injuries and even deaths in college football. Representatives from Harvard, Yale and Princeton—known as the “Big Three” at the time—attended the meetings. As a result, 62 institutions became charter members of the Intercollegiate Athletic Association of the United States in 1906. The organization was renamed the National Collegiate Athletic Association (NCAA) in 1910.
In its more than 100-year history, the NCAA has undergone extensive reforms, especially in forming conferences, acquiring lucrative sponsorships and negotiating television deals.
One practice, however, has remained constant throughout the NCAA’s history: student-athletes play for free.
Each year, more than 400,000 student-athletes compete on nearly 18,000 teams at over 1,000 schools across three NCAA Divisions (DI, DII and DIII). The schools involved bring in huge sums of money each year from their athletic programs, especially those schools that advance to national championships.
It’s easy to assume that since these schools make so much money, the athletes—the actual performers in the competition—would be entitled to a share of the earnings. But as it stands, the NCAA views student-athletes as non-professionals who represent their school, not themselves, during athletic contests.