AI needs oversight—almost all tech and data execs have said as much—and new research from data intelligence firm Collibra affirms this, finding that virtually all respondents (99 percent) cite threats necessitating AI regulation in the U.S. However, the survey data also reveals that many decision-makers, regardless of organization, are skeptical of the U.S. government's current approach to AI regulation.
The firm's new research, based on a Harris Poll survey of over 300 U.S. data execs employed full-time as data management, privacy, and/or AI decision-makers at their current companies, found that those polled are calling on the U.S. government to update copyright laws to protect against AI (84 percent) and for Big Tech companies to compensate people for the use of their data in AI training models (81 percent). Many also support both federal (76 percent) and state (75 percent) regulations to oversee the technology's evolution.
“Without regulations, the U.S. will lose the AI race long term,” said Collibra co-founder and CEO Felix Van de Maele, in a news release. “While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk, and ultimately will hinder the adoption of AI.”
According to IDC, 60 percent of governments worldwide will adopt a risk management approach to framing their AI and generative AI policies by 2028.
The European Union continues to play a leading role globally in the AI race with the AI Act, the first-ever legal framework on AI, which became law on August 1, 2024. The AI Act addresses the risks of AI and aims to provide AI developers and deployers with clear requirements and obligations regarding AI usage.
Notably, Van de Maele believes the U.S. should learn from the likes of the EU and seek a balance between developing the rules needed to regulate AI and avoiding oversight so heavy that it inhibits future innovation. The new survey cites privacy concerns (64 percent) and safety and security risks (64 percent), followed by misinformation (57 percent) and ethical use and accountability (57 percent), as the biggest threats necessitating AI regulation in the U.S. today.
On a more positive note, the new survey also found that nearly 9 in 10 decision-makers (88 percent) say they have a lot or a great deal of trust in their own company's approach to directing the future of AI.
Three-quarters (75 percent) agree that their company prioritizes the need for AI training and upskilling across the business, with decision-makers at large companies (1,000+ employees) more likely than those at small companies (1-99 employees) to agree (87 percent vs. 55 percent).
“As we look to the future, we need our governments to set clear and consistent rules while also creating an environment that enables innovation and bolsters data quality and integrity,” added Van de Maele.
This survey was conducted online within the United States by The Harris Poll on behalf of Collibra from July 9-12, 2024, among 307 U.S. adults aged 21+ who are employed full-time as data management, privacy, and/or AI decision-makers (director level or higher) at their current company. The sampling precision of Harris online polls is measured using a Bayesian credible interval. For this study, the full sample data is accurate to within +/- 5.7 percentage points at a 95 percent confidence level.