Facebook parent Meta may have unleashed the kraken with the apparent leak of its text-generating program “LLaMA,” while even more tightly controlled artificial intelligence models have already been put to illegitimate uses, Missouri Sen. Josh Hawley warns.
Hawley and Sen. Richard Blumenthal, D-Connecticut, sent a joint letter this week to Meta CEO Mark Zuckerberg over the program’s potential for “misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.”
Meta released its “Large Language Model Meta AI” – LLaMA – to approved researchers in February. But it was an open-source release, meaning the program “can be freely accessed, used, changed, and shared (in modified or unmodified form) by anyone.”
As the senators note, “Regrettably, but predictably, within days of the announcement, the full model appeared on BitTorrent, making it available to anyone, anywhere in the world, without monitoring or oversight. …
“Even in the short time that generative AI tools have been available to the public, they have been dangerously abused – a risk that is further exacerbated with open source models. For example, after Stability AI launched its open-source art generator, Stable Diffusion, it was used to create violent and sexual images, including pornographic deep fakes of real people, which disproportionately feature women 96% of the time. Even OpenAI’s closed model, ChatGPT, has been misused to create malware and phishing campaigns, financial fraud, and obscene content involving children.”
Hawley amplified his concerns in an interview with The Heartlander – including the potential for more reality-bending “deepfake” videos in which real people are convincingly placed in false or misleading contexts, saying and doing things they never did.
“The easy bottom line here is,” Hawley said, “what is Facebook doing in developing this AI? Are they putting safeguards in place so that it can’t, for instance, influence our elections? Is this AI going to be able to generate deepfake videos like the one we’ve seen with Trump, that then is going to go out there during an election – and be up on TV, and it will be entirely false and try to mislead people? Are they going to use this to try and push false information to voters?
“We already know Mark Zuckerberg has had his hand in election-related issues for years now, funding efforts about vote counting. Is this going to be the latest thing? I mean, that’s my big concern.
“What about kids? (Is Facebook) going to let their AI model generate fake but sexually explicit images of children, or tell children that they ought to commit suicide?
“It doesn’t look like to me Facebook is concerned about anything except for Facebook’s bottom line and their own power. And I think we’ve got to make sure that these big companies who are not very good actors, that they are held to account.”
Hawley is out front on the issue in the Senate, suggesting to colleagues that legislation is urgent and should cover five areas: the right to sue AI companies for harm; personal data protection; protecting minors from AI; blocking the transfer of AI technology to or from China; and requiring AI programs to be licensed.
“What we can do now is, let’s put some basic common-sense guardrails in place,” Hawley said. “Let’s protect kids online and say that these companies shouldn’t be able to use AI to target kids under the age of, say, 16.
“Let’s give people the right to sue when they are harmed by AI – and if it comes after you and it gets all of your personal information without your consent, you should be able to sue Google or Facebook. If it harms you in some other way, you should be able to sue them, hold them accountable.
“My big concern, again, is that the most powerful, and frankly liberal, companies in the world are about to become more powerful, and we won’t be able to do anything about it. We’ve got to give normal, everyday people the right to hold these big companies accountable.”
Given the urgency of these matters and the speed with which AI is being rolled out, The Heartlander asked Hawley whether he thinks alarm bells are going off adequately in the halls of Congress.
“Probably not yet. I think there is a dawning realization that we’ve got some work to do here. And everybody, including me, hopes this technology will be useful. I mean, I hope so. I hope it will be good for people. I hope it will help our workers and help our families.
“But I want to make sure that we don’t just say to these powerful companies that have spent the last however-many-years trying to push their woke ideology down our throats – I’m not willing to just turn over the keys to them and say, ‘Yeah, sure, go ahead, use AI, do whatever you want to us.’ No, thanks. No, I want to have some power for ordinary people to be able to protect their information, to protect their right to vote, to protect their kids. So that’s why we need to take some action right now.
“As a parent, I don’t want my kids getting sucked into some discussion with an AI bot on Facebook or Google that wants to target them, that wants to feed them information and wants to extract information from them.”