Perplexity did not respond to requests for comment.
However, OpenAI has faced its own trademark dilution allegations. In New York Times v. OpenAI, the Times claims that ChatGPT and Bing Chat attribute fabricated quotes to the paper, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times had called red wine (in moderation) a "heart-healthy" food when it had not; the Times says its actual reporting debunked claims about the health benefits of moderate drinking.
"Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI," said NYT director of external communications Charlie Stadtlander. "We welcome this lawsuit by Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this type of misuse."
Some legal experts doubt that the false designation of origin and trademark dilution claims will succeed. Intellectual property attorney Vincent Allen, a partner at Carstens, Allen & Gourley, believes the copyright infringement claims in this case are stronger, and says he would be "surprised" if the false designation claim survives. Both Allen and James Grimmelmann, a professor of digital and internet law at Cornell University, believe a landmark trademark case, Dastar Corp. v. Twentieth Century Fox Film Corp., could foreclose that line of attack. (In that decision, which concerned a dispute over old World War II footage, the Supreme Court held that "origin" under trademark law refers to the producer of tangible goods, such as a counterfeit handbag, not to the author of the ideas or content those goods embody.) Grimmelmann is also skeptical that a trademark dilution claim would hold water. "Dilution is about harm to the distinctiveness of a well-known mark, and I just don't see that here," he says.
If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face "enormous difficulties," according to Matthew Sag, a professor of law and artificial intelligence at Emory University.
"It's absolutely impossible to guarantee that a language model won't hallucinate," Sag says. In his view, the way language models work, by predicting words that sound like plausible responses to a prompt, means they are always hallucinating in a sense; some outputs simply sound more plausible than others.
“We only call it a hallucination if it doesn’t match our reality, but the process is exactly the same whether we like the result or not.”
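Sag's point can be made concrete with a toy sketch. The probability table and token names below are purely illustrative stand-ins for a real model's learned distribution, not any actual system's output; the point is that the sampling step is the same whether the continuation happens to be true or false.

    import random

    # Hypothetical next-token probabilities, standing in for a trained
    # language model's output distribution. Illustrative values only.
    NEXT_TOKEN_PROBS = {
        ("red", "wine"): [("is", 0.6), ("was", 0.3), ("tastes", 0.1)],
        ("wine", "is"): [("heart-healthy", 0.5), ("popular", 0.3), ("acidic", 0.2)],
    }

    def sample_next(context):
        """Sample the next token from the model's distribution.

        The procedure is identical whether the sampled continuation is
        factual or not; "hallucination" is a label applied afterward.
        """
        candidates = NEXT_TOKEN_PROBS.get(context, [("<end>", 1.0)])
        tokens, weights = zip(*candidates)
        return random.choices(tokens, weights=weights, k=1)[0]

    generated = ["red", "wine"]
    for _ in range(2):
        token = sample_next(tuple(generated[-2:]))
        if token == "<end>":
            break
        generated.append(token)

    print(" ".join(generated))  # e.g. "red wine is heart-healthy"

Nothing in this loop checks the claim against reality: the model can emit "red wine is heart-healthy" with the same mechanics it uses for any accurate sentence, which is why a guarantee against hallucination is so hard to give.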