The Chicago Tribune has sued the AI search engine Perplexity, alleging unauthorized use of its content. According to the complaint, the Tribune’s attorneys contacted Perplexity in mid-October about its use of the paper’s content; Perplexity’s legal team responded that the company does not train its models on the Tribune’s materials but acknowledged it might generate “non-verbatim factual summaries.”
The Tribune’s attorneys counter that Perplexity reproduces content verbatim, or nearly so, from the original articles. A central point of contention is Perplexity’s use of Retrieval Augmented Generation (RAG), a technique that reduces inaccuracies by grounding generated answers in retrieved source documents. The Tribune claims this process involves scraping its content without consent, and argues further that Perplexity’s Comet browser circumvents the newspaper’s paywall to produce detailed summaries of its articles.
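The RAG pattern at the center of the dispute can be sketched in a few lines: retrieve relevant documents at answer time, then place them into the model's prompt. Below is a minimal toy version, assuming a keyword-overlap retriever and an illustrative two-document corpus; the names and logic are hypothetical and do not reflect Perplexity's actual pipeline.

```python
# Toy sketch of Retrieval Augmented Generation (RAG).
# CORPUS stands in for a publisher's articles; a real system would
# fetch and index pages from the web.
CORPUS = {
    "doc1": "The Chicago Tribune filed a lawsuit against Perplexity.",
    "doc2": "RAG systems retrieve source documents before generating answers.",
}

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(
        corpus.values(),
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, corpus: dict) -> str:
    """Prepend retrieved passages to the question before it reaches the model.
    This retrieval step is the one at issue: the injected context is the
    source's own text, fetched at answer time rather than learned in training."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do RAG systems retrieve documents?", CORPUS))
```

The key property this illustrates is that a RAG system's answer quality depends on pulling in source text at query time, which is why publishers focus on the retrieval step rather than on model training.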
The Tribune is not alone in its legal efforts; it is part of a coalition that includes 17 other publications from MediaNews Group and Tribune Publishing, which previously filed a lawsuit against OpenAI and Microsoft in April concerning training materials for AI models. A separate lawsuit involving nine publications from the same groups was launched in November against both the AI model creator and its cloud service provider.
These ongoing legal battles signify a growing concern among content creators regarding the implications of AI technologies like RAG. As various lawsuits unfold, the legal responsibilities surrounding AI content usage remain a critical issue awaiting judicial clarification.
Key points to consider:
– Chicago Tribune alleges Perplexity uses its content without permission.
– Legal complexities surround RAG technology and content sourcing.
– The Tribune is part of a broader coalition challenging AI model creators over similar issues.
