ChatGPT Libeled Me. Can I Sue?
I'd like to point out that you can NOT sue the Spritzler Report. I'm special in that I never let the facts ruin a good story. If I've done anything to malign your reputation or cause harm, I can only remind you that you can't make an omelet without breaking a few eggs.
‘I am programmed to provide objective and factual responses,’ it claims, not under oath.
By Ted Rall, WSJ
March 16, 2023 5:57 pm ET
ChatGPT is impressive at parsing and generating English sentences, but it has a problem with facts. If you’re a public figure of moderate renown, that can get personal. I prompted it to “describe Ted Rall’s trip to Uganda,” and it gave a lengthy account of my 2006 visit, on which I reported on a conflict between the government and the Lord’s Resistance Army. In real life, I’ve never been to Uganda.
Then I asked: “What is Scott Stantis’s relationship to Ted Rall?” After correctly identifying us both as editorial cartoonists, ChatGPT described our relationship as “contentious” and “complicated” and elaborated: “In 2002, Stantis accused Rall of plagiarism after Rall used a similar idea and composition in one of his cartoons as Stantis had used in a previous cartoon. Rall denied the accusation and claimed that the similarity was coincidental. The controversy led to a public feud between the two cartoonists, with each accusing the other of unprofessional conduct.”
None of this is true. Mr. Stantis is my best friend. We’ve known each other for 30 years and our relationship has never been contentious or complicated. Most important, he has never accused me of plagiarism. Nor to my knowledge has anyone else. A false claim of professional misconduct is per se defamation under the laws of New York, where I live.
So can I sue for libel? Opinions vary. Harvard Law School’s Laurence Tribe thinks it’s a possibility: “It matters not, for purposes of legal liability, whether the alleged lies about you or someone else were generated by a human being or by a chatbot, by a genuine intelligence or by a machine algorithm.”
The University of Utah’s RonNell Andersen Jones disagrees. “If a defamatory falsehood is generated by an AI chatbot itself,” she says, “it is harder to conceptualize this within our defamation-law framework, which presupposes an entity with a state of mind on the other end of the communication.”
A libel plaintiff who is a public figure has to demonstrate that the defendant told a lie with “actual malice”—knowledge that it was false or “reckless disregard” for whether it was true. Does an AI know or regard anything? “Some scholars have suggested that the remedy here resides more in a product-liability model than in a defamation model,” Ms. Jones adds.
Yale Law School’s Robert Post thinks there would be no liability unless a ChatGPT user disseminates the misinformation it generates: “A ‘publication’ happens only when a defendant communicates the defamatory statement to a third party.”
My editor at the Journal wanted a different perspective, so he asked ChatGPT: “If you say something defamatory about me, can I sue for libel?”
The response: “As an AI language model, I cannot say anything defamatory about you, as I am programmed to provide objective and factual responses.” I wonder if a jury of its peers would agree.
Mr. Rall is a political cartoonist, columnist and author, most recently, of “The Stringer.”