Dangers of AI in criminal activity examined by House Judiciary Subcommittee
(The Center Square) - As artificial intelligence transforms every corner of society, the House Judiciary Subcommittee on Crime and Federal Government Surveillance explored its darker side during a hearing on Wednesday.
Lawmakers and witnesses discussed the growing threat of AI-enabled crime, ranging from fraud and identity theft to the exploitation of children. They also assessed whether U.S. law enforcement agencies and policies are equipped to keep pace with the wave of digital criminality.
The hearing follows a recent incident in which an individual used AI to mimic the voice of Secretary of State Marco Rubio, raising concerns about impersonation and disinformation.
Zara Perumal, co-founder and Chief Technology Officer of Overwatch Data, outlined the far-reaching implications of AI-enabled crime for businesses, everyday users, and the evolving criminal ecosystem.
“One of the most immediate changes is how generative AI lowers the barrier to learning, executing, and scaling cybercrime and fraud,” Perumal said.
She emphasized that AI threats are even more dangerous because of their ability to mimic real people using a blend of voice, image, and video technologies.
“These scams target our instinctive human trust in hearing a loved one’s voice, and our willingness to do anything to help them,” she said.
As AI grows more advanced, it is also being weaponized to carry out increasingly precise cyberattacks.
“For instance, AI-powered tools can autonomously scan for weaknesses in critical sectors such as energy grids, hospitals, communication networks, and global financial systems,” said Ari Redbord, global head of policy at TRM Labs.
According to Andrew Bowne, professorial lecturer of Law at The George Washington University Law School, “a comprehensive legislative framework that both regulates AI development and deployment and protects potential victims” is a step toward meaningful action.
“Congress could require companies developing AI systems with potential criminal applications to conduct and publish impact assessments evaluating the potential for misuse,” Bowne said.
In response to these threats, the experts offered solutions aimed at strengthening protections and ensuring accountability.
“Congress should create a federal right for individuals to demand the removal of non-consensual AI-generated content depicting them, with civil and criminal penalties for non-compliance,” Bowne said.