Unrestrained AI could usher in a new dark age.

Tech law experts told Business Insider that new laws and regulations surrounding the quickly developing technology are needed now that the perils of generative artificial intelligence are starting to become apparent. One expert cautioned that if the sector remains mostly uncontrolled, a new “dark age” may be approaching.

Currently, there is no uniform federal law addressing the use of AI in the US. Frank Pasquale, a law professor at Cornell Tech and Cornell Law School, emphasized the urgency of regulation. “If this thing is allowed to sort of run away, without regulation and without compensation to those whose work it’s using, it basically is a new dark age,” Pasquale said. “It pre-stages a new dark age or a new sort of like just a complete evisceration of incentives to create knowledge in many fields—and that’s very troubling,” he added.

With the growing popularity of AI tools like OpenAI’s ChatGPT and Google’s Gemini, experts argue that the largely unregulated landscape of social media nearly three decades ago should serve as a cautionary tale for AI. One major issue is the use of copyrighted work to train the technology. Authors, visual artists, news outlets, and computer coders have already filed lawsuits against AI companies like Microsoft-backed OpenAI, arguing that their original works were used to train AI tools without permission.

In the absence of federal legislation, some states have already passed their own laws on AI usage, and Congress is exploring ways to regulate the technology. Pasquale believes that regulation could prevent many of the problems that could pave the way for a new dark age. “If uncompensated and uncontrolled expropriation of copyrighted works continues, many creatives are likely to be further demoralized and eventually defunded as AI unfairly outcompetes them or effectively drowns them out,” he said.

Pasquale warned that many will perceive low-cost automated content as a “cornucopian gift” until it becomes clear that AI itself is dependent on the ongoing input of human-generated works to improve and remain relevant in a changing world. “At that point, it may be too late to reinvigorate creative industries left moribund by neglect,” he cautioned.

Mark Bartholomew, a University at Buffalo law professor, shares concerns about AI’s future impact. He worries about AI generating so much content—from artworks to advertising copy to TikTok videos—that it overwhelms contributions from real human beings. For now, however, Bartholomew is more concerned about AI being used to distribute misinformation, create political and pornographic deepfakes, and scam people.

Bartholomew emphasized the urgency of implementing regulations now. The consequences of delaying comprehensive AI regulation could include misinformation infecting elections, the spread of deepfakes, and people being defrauded by scammers using AI to simulate voices. At the same time, he acknowledged the limits of acting too aggressively: “It would be dangerous to say we know now in 2024 exactly how to handle AI,” Bartholomew noted, adding that too many regulations too soon could stifle a promising new technology.

However, Bartholomew asserted, “My personal opinion is that the dangers are enough now that we need to come in and at least have some specific regulations to deal with things that I think we’re already realizing are real problems.” He argued that “AI will not shrivel up and die if we put real teeth into laws saying you can’t use AI for political deepfakes.”

US intellectual property laws related to copyright infringement and state-level publicity rights are among the main legal frameworks being used to potentially regulate AI in the country. Harry Surden, a professor of law at the University of Colorado Law School, agrees that new federal laws should be created to specifically govern AI, but he warned against doing so too hastily. “We’re really bad at predicting how these technologies come out and the problems that arise,” said Surden, who is also the associate director of Stanford University’s CodeX Center for Legal Informatics. “You don’t want to do this quickly or politically or haphazardly.”

Surden added, “You might wind up hurting all the good along with the bad.” He pointed out that early discussions about regulating social media largely failed to anticipate the issues many now consider most significant, such as its effects on young people’s mental health and the spread of disinformation and misinformation.
