What began as a suspicious in-person bet on an Alabama-LSU baseball game in May has grown into a crusade that goes far beyond a single collegiate ballgame.
The National Collegiate Athletic Association (NCAA) unveiled a partnership December 13 with Signify Group, a data science company whose AI Threat Matrix will monitor and analyze online abuse and threats with the aim of making student-athletes and staff members safer.
The service goes live this month with a particular emphasis on X (formerly Twitter), Instagram and TikTok.
A pilot program during the 2023-24 NCAA championships will analyze abuse and other threats in 35 languages, examine their relationship to sports betting, and establish procedures for escalating cases to primary contacts, from team members to police.
The data collected during this initial phase will serve as a baseline for future championships.
The NCAA said, "With the growth of sports betting in the U.S. in recent years there have been many instances of college sports representatives receiving targeted abuse and threats."
The project, the largest of its kind in North America, is intended to help the NCAA address suspected instances of abuse and to update its policies in response, according to a press release.
“Engaging Signify to monitor NCAA championships reflects our resolute commitment to college athlete safety and well-being,” NCAA president Charlie Baker told Yogonet Gaming News. “This pilot is just the start of much broader online protection measures the NCAA will put in place to guide our longer-term strategy.”
Jonathan Hirshler, CEO of Signify Group, said the next few months will shed a lot of light on the problem.
“I am confident that we will not only unearth deep insights into online abuse and threats in college sports but also help drive real action in this space in partnership with the NCAA and law enforcement agencies.”