Perplexity’s opt-in election information hub can also blur the line between vetted and freely AI-generated information. While some results come directly from trusted sources, clicking to search for more information triggers AI-generated answers drawn from the wider web.
The results were also often inconsistent. For example, the tool sometimes refused to provide talking points for persuading someone to vote for a particular candidate, while at other times it volunteered them.
Google’s search engine likewise avoids serving AI-generated results for election-related queries. The company said in August that it would limit its use of AI for election-related queries in Search and its other apps. “This new technology may make mistakes as it learns or as news breaks,” the company said in a blog post.
Even conventional search results can prove problematic. During Tuesday’s vote, some Google users noticed that a search for “Where to vote for Harris” returned polling location information, while a search for “Where to vote for Trump” did not. Google attributed the discrepancy to the search engine interpreting the first query as referring to Harris County, Texas.
Other AI search upstarts are taking an approach as bold as Perplexity’s. You.com, another startup that combines language models with conventional web search, on Tuesday announced its own election tool, built in collaboration with TollBit, a company that gives AI firms managed access to publishers’ content, and Decision Desk HQ, a company that provides election results data.
The AI search engine has also been accused of liberally lifting from news sites. In June, for example, a Forbes editor noted that Perplexity had summarized extensive details of an investigation published by the outlet, with only footnotes as attribution. Forbes reportedly sent Perplexity a letter threatening legal action over the practice.
In October, News Corp sued Perplexity for allegedly ripping off content from The Wall Street Journal and the New York Post. The suit alleges that Perplexity violated copyright law and that it sometimes fabricated portions of news stories and falsely attributed them to the publications.