ChatGPT is helpful.
It provides near-instant answers to all sorts of questions, with software that's more intuitive and conversational than a typical online search bar.
But what if someone with far less innocent questions wants help?
"When was the last school shooting?"
"How many victims does it take to get on the media?"
"What about 3 plus at FSU?"
These are some of the questions authorities say the man accused in last year’s Florida State University shooting asked ChatGPT shortly before the attack.
He also asked when the student union would be busiest. That’s where he allegedly killed two people and injured six others.
His last question was about his shotgun's safety mechanism.
Florida Attorney General James Uthmeier is now investigating OpenAI, which runs the chatbot.
"AI is supposed to support mankind,” he said. “It is supposed to help mankind. It is supposed to advance mankind, not end it. And unfortunately, what we've seen in our initial review is that ChatGPT offered significant advice to the shooter."
A man who's accused of stabbing two University of South Florida students to death in April also sent ChatGPT questions.
That case has been added to the investigation.
"If this were a person on the other end of the screen, we would be charging them with murder,” Uthmeier said. “Just because this is a chatbot … does not mean that there is not criminal culpability."
The attorney general acknowledges the case ventures into "uncharted" legal territory.
Mary Anne Franks, a professor at The George Washington University Law School, said it could have big implications.
"Regardless of where this actually comes out, this is an incredibly important case for highlighting the dangers of these kinds of systems and a kind of signal to all of our systems — social, legal and otherwise — that we may need to think about this much more carefully,” she said.
In a statement to WUSF, OpenAI said ChatGPT is not responsible for the crime. The company said it did not promote illegal activity and only provided factual responses based on information available across the internet.
OpenAI said it has proactively shared information with law enforcement and will continuously increase its safeguards.
Jill Schiefelbein, an AI strategist and adjunct professor at USF, said it's smart to look for improvements, especially as the technology rapidly evolves. But, like OpenAI, she said responsibility still lies with the user.
"Do I hold that technology, that software, responsible for a crime that is committed by a human user who intentionally violates Terms and Conditions?” Schiefelbein said. “No, in the same way that I would not hold a vehicle manufacturer responsible for a vehicle that was used to commit manslaughter."
Eugene Volokh, a First Amendment expert and senior fellow at Stanford University's Hoover Institution, said the attorney general faces a tough road if he does pursue criminal liability. And Volokh said the situation creates legal questions about the balance between safety and privacy.
"To what extent do we want to have a kind of surveillance system set up through these AI tools where they're monitoring and making guesses about whether we might be using them to commit crimes?” said Volokh, who’s also a professor emeritus at the UCLA School of Law.
But the attorney general's concerns about AI go beyond this case, and he's also warned the technology could harm minors.
He's not the only one with AI concerns.
The family of one of the victims in the FSU shooting recently announced they're suing OpenAI in federal court. OpenAI, in fact, faces a number of lawsuits. Some allege its chatbot contributed to suicides, mental health crises and other harms. The company disputes those claims.
And criminologist and AI expert Jarrod Sadulski said the technology, whether it's ChatGPT or another program, can be used in a wide range of crimes, and its capabilities are growing.
"It can be used in very serious crimes, and we know that the laws have not caught up with governing AI,” he said. “So I think it's important for people to be aware and to have a plan in place."
He said the concern isn't limited to people consulting chatbots to plan crimes.
For example, Sadulski warns AI can create fake voices that scammers use to impersonate loved ones and trick people into sending money. He also says criminal organizations can use AI to run large, sophisticated online scams, and much more.
It's all evidence of how AI is changing crime, and how society is still struggling to keep pace.
This story was produced by WUSF as part of a statewide journalism initiative funded by the Corporation for Public Broadcasting.