Keeping the Human in the Loop

AI Curation Is Broken: Why Every Model Compromises Independence

Every morning, somewhere between the first coffee and the first meeting, thousands of AI practitioners face the same impossible task. They need to stay current in a field where biomedical information alone doubles every two months, where breakthrough papers drop daily on arXiv, and where vendor announcements promising revolutionary capabilities flood their inboxes with marketing claims that range from genuinely transformative to laughably exaggerated. The cognitive load is crushing, and the tools they rely on to filter signal from noise are themselves caught in a fascinating evolution.

The landscape of AI content curation has crystallised around a fundamental tension. Practitioners need information that's fast, verified, and actionable. Yet the commercial models that sustain this curation, whether sponsorship-based daily briefs, subscription-funded deep dives, or integrated dashboards, all face the same existential question: how do you maintain editorial independence whilst generating enough revenue to survive?

When a curator chooses to feature one vendor's benchmark claims over another's, when a sponsored newsletter subtly shifts coverage away from a paying advertiser's competitor, when a paywalled analysis remains inaccessible to developers at smaller firms, these editorial decisions ripple through the entire AI ecosystem. The infrastructure of information itself has become a competitive battleground, and understanding its dynamics matters as much as understanding the technology it describes.

Speed, Depth, and Integration

The AI content landscape has segmented into three dominant formats, each optimising for different practitioner needs and time constraints. These aren't arbitrary divisions. They reflect genuine differences in how busy professionals consume information when 62.5 per cent of UK employees say the amount of data they receive negatively impacts their work, and 52 per cent of US workers agree the quality of their work decreases because there's not enough time to review information.

The Five-Minute Promise

Daily brief newsletters have exploded in popularity precisely because they acknowledge the brutal reality of practitioner schedules. TLDR AI, which delivers summaries in under five minutes, has built its entire value proposition around respecting reader time. The format is ruthlessly efficient: quick-hit news items, tool of the day, productivity tips. No lengthy editorials. No filler.

Dan Ni, TLDR's founder, revealed in an AMA that he draws on between 3,000 and 4,000 online sources to curate content, filtering through RSS feeds and aggregators with a simple test: “Would my group chat be interested in this?” As TLDR expanded, Ni brought in domain experts: freelance curators paid $100 per hour to identify compelling content.

The Batch, Andrew Ng's weekly newsletter from DeepLearning.AI, takes a different approach. Whilst still respecting time constraints, The Batch incorporates educational elements: explanations of foundational concepts, discussions of research methodologies, explorations of ethical considerations. This pedagogical approach transforms the newsletter from pure news consumption into a learning experience. Subscribers don't merely stay informed; they develop deeper AI literacy.

Import AI, curated by Jack Clark, co-founder of Anthropic, occupies another niche. Launched in 2016, Import AI covers policy, geopolitics, and safety framing for frontier AI. Clark's background in AI policy adds crucial depth, examining both technical and ethical aspects of developments that other newsletters might treat as purely engineering achievements.

What unites these formats is structural efficiency. Each follows recognisable patterns: brief introduction with editorial context, one or two main features providing analysis, curated news items with quick summaries, closing thoughts. The format acknowledges that practitioners must process information whilst managing demanding schedules, with too little time to give every development personalised attention.

When Subscription Justifies Depth

Whilst daily briefs optimise for breadth and speed, paywalled deep dives serve a different practitioner need: comprehensive analysis that justifies dedicated attention and financial investment. The Information, with its $399 annual subscription, exemplifies this model. Members receive exclusive articles, detailed investigations, and access to community features like Slack channels where practitioners discuss implications.

The paywall creates a fundamentally different editorial dynamic. Free newsletters depend on scale, needing massive subscriber bases to justify sponsorship rates. Paywalled content can serve smaller, more specialised audiences willing to pay premium prices. Hell Gate's approach, offering free access alongside paid tiers at $6.99 per month, generated over $42,000 in monthly recurring revenue from just 5,300 paid subscribers. This financial model sustains editorial independence in ways that advertising-dependent models cannot match.

Yet paywalls face challenges in the AI era. Recent reports show AI chatbots accessing paywalled content, whether because paywall technology loads too slowly or because automated crawlers are treated differently from human visitors. When GPT-4 or Claude can summarise articles behind subscriptions, the value proposition of paying for access diminishes. Publishers have responded by implementing harder paywalls that block crawling, but this creates tension with discoverability and growth.

The subscription model also faces competition from AI products themselves. OpenAI's ChatGPT Plus subscriptions were estimated to bring in roughly $2.7 billion annually as of 2024. GitHub Copilot had over 1.3 million paid subscribers by early 2024. When practitioners already pay for AI tools, adding subscriptions for content about those tools becomes a harder sell.

Dynamic paywalls represent publishers' attempt to thread this needle. Frankfurter Allgemeine Zeitung utilises AI and machine learning to predict which articles will convert best. Business Insider reported that AI-based paywall strategies increased conversions by 75 per cent. These systems analyse reader behaviour, predict engagement, and personalise access in ways static paywalls cannot.
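How such systems work is easier to see in miniature. The sketch below is a toy propensity model, assuming a handful of invented reader-behaviour features and scikit-learn's off-the-shelf logistic regression; the production systems at FAZ and Business Insider are proprietary and certainly more sophisticated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behaviour features per visit:
# [articles read this month, seconds on page, arrived via search (0/1)]
X = np.array([
    [12, 240, 0],
    [ 2,  30, 1],
    [ 8, 180, 0],
    [ 1,  15, 1],
    [20, 300, 0],
    [ 3,  45, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = this reader later subscribed

model = LogisticRegression().fit(X, y)

# Score a fresh visit and gate access on predicted conversion propensity.
visit = np.array([[10, 200, 0]])
propensity = model.predict_proba(visit)[0, 1]
print("hard paywall" if propensity > 0.5 else "free preview")
```

The point is not the model but the policy it drives: access becomes a per-reader decision rather than a blanket rule.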

The Aggregation Dream

The third format promises to eliminate the need for multiple newsletters, subscriptions, and sources entirely. Integrated AI dashboards claim to surface everything relevant in a single interface, using algorithms to filter, prioritise, and present information tailored to individual practitioner needs.

The appeal is obvious. Rather than managing dozens of newsletter subscriptions and checking multiple sources daily, practitioners could theoretically access a single dashboard that monitors thousands of sources and surfaces only what matters. Tools like NocoBase enable AI employees to analyse datasets and automatically build visualisations from natural language instructions, supporting multiple model services including OpenAI, Gemini, and Anthropic. Wren AI converts natural language into SQL queries and then into charts or reports.

Databricks' AI/BI Genie allows non-technical users to ask questions about data through conversational interfaces, getting answers without relying on expert data practitioners. These platforms increasingly integrate chat-style assistants directly within analytics environments, enabling back-and-forth dialogue with data.

Yet dashboard adoption among AI practitioners remains limited compared to traditional newsletters. The reasons reveal important truths about how professionals actually consume information. First, dashboards require active querying. Unlike newsletters that arrive proactively, dashboards demand that users know what questions to ask. This works well for specific research needs but poorly for serendipitous discovery of unexpected developments.

Second, algorithmic curation faces trust challenges. When a newsletter curator highlights a development, their reputation and expertise are on the line. When an algorithm surfaces content, the criteria remain opaque. Practitioners wonder: what am I missing? Is this optimising for what I need or what the platform wants me to see?

Third, integrated dashboards often require institutional subscriptions beyond individual practitioners' budgets. Platforms like Tableau, Domo, and Sisense target enterprise customers with pricing that reflects organisational rather than individual value, limiting adoption among independent researchers, startup employees, and academic practitioners.

The adoption data tells the story. Whilst psychologists' use of AI tools surged from 29 per cent in 2024 to 56 per cent in 2025, this primarily reflected direct AI tool usage rather than dashboard adoption. When pressed for time, practitioners default to familiar formats: email newsletters that arrive predictably and require minimal cognitive overhead to process.

Vetting Vendor Claims

Every AI practitioner knows the frustration. A vendor announces breakthrough performance on some benchmark. The press release trumpets revolutionary capabilities. The marketing materials showcase cherry-picked examples. And somewhere beneath the hype lies a question that matters enormously: is any of this actually true?

The challenge of verifying vendor claims has become central to content curation in AI. When benchmark results can be gamed, when testing conditions don't reflect production realities, and when the gap between marketing promises and deliverable capabilities yawns wide, curators must develop sophisticated verification methodologies.

The Benchmark Problem

AI model makers love to flex benchmark scores. But research from European institutions identified systemic flaws in current benchmarking practices, including construct validity issues (benchmarks don't measure what they claim), gaming of results, and misaligned incentives. A comprehensive review highlighted further problems: not knowing how, when, and by whom benchmark datasets were made; failure to test on diverse data; tests designed as spectacle to hype AI for investors; and tests that haven't kept pace with the state of the art.

The numbers themselves reveal the credibility crisis. In 2023, AI systems solved just 4.4 per cent of coding problems on SWE-bench. By 2024, that figure jumped to 71.7 per cent, an improvement so dramatic it invited scepticism. Did capabilities actually advance that rapidly, or did vendors optimise specifically for benchmark performance in ways that don't generalise to real-world usage?

New benchmarks attempt to address saturation of traditional tests. Humanity's Last Exam shows top systems scoring just 8.80 per cent. FrontierMath sees AI systems solving only 2 per cent of problems. BigCodeBench shows 35.5 per cent success rates against human baselines of 97 per cent. These harder benchmarks provide more headroom for differentiation, but they don't solve the fundamental problem: vendors will optimise for whatever metric gains attention.

Common vendor pitfalls that curators must navigate include cherry-picked benchmarks that showcase only favourable comparisons, non-production settings where demos run with temperatures or configurations that don't reflect actual usage, and one-and-done testing that doesn't account for model drift over time.

Skywork AI's 2025 guide to evaluating vendor claims recommends requiring end-to-end, task-relevant evaluations with configurations practitioners can rerun themselves. This means demanding seeds, prompts, and notebooks that enable independent verification. It means pinning temperatures, prompts, and retrieval settings to match actual hardware and concurrency constraints. And it means requiring change-notice provisions and regression suite access in contracts.
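As a concrete illustration, here is a minimal sketch of the kind of rerunnable harness that recommendation implies. Everything in it is an assumption for illustration: the configuration fields, the `run_eval` helper, and the dummy model call stand in for whatever SDK a vendor actually exposes.

```python
import json
import random

# Pin every setting a benchmark claim depends on, so anyone can rerun it.
EVAL_CONFIG = {
    "model": "example-model-2025-01",  # hypothetical model identifier
    "temperature": 0.0,                # pinned: no sampling variance
    "top_p": 1.0,
    "seed": 1234,                      # fixed seed where the API supports one
    "max_tokens": 1024,
    "prompt_template": "Q: {question}\nA:",
    "retrieval": {"enabled": False},   # pin retrieval settings too
}

def run_eval(tasks, config, call_model):
    """Score exact-match accuracy under a pinned config.

    call_model(model, prompt, config) -> str is supplied by the caller,
    keeping the harness independent of any particular vendor SDK.
    """
    random.seed(config["seed"])  # pin harness-side randomness as well
    correct = sum(
        call_model(config["model"],
                   config["prompt_template"].format(question=t["question"]),
                   config).strip() == t["expected"]
        for t in tasks
    )
    return correct / len(tasks)

if __name__ == "__main__":
    dummy = lambda model, prompt, cfg: "42"  # stand-in for a real API call
    tasks = [{"question": "What is 6 * 7?", "expected": "42"}]
    score = run_eval(tasks, EVAL_CONFIG, dummy)
    # Publish the exact config next to the score, so the claim is auditable.
    print(json.dumps({"config": EVAL_CONFIG, "accuracy": score}, indent=2))
```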

The Verification Methodology Gap

According to February 2024 research from First Analytics, between 70 and 85 per cent of AI projects fail to deliver desired results. Many failures stem from vendor selection processes that inadequately verify claims. Important credibility indicators include vendors' willingness to facilitate peer-to-peer discussions between their data scientists and clients' technical teams. This openness for in-depth technical dialogue demonstrates confidence in both team expertise and solution robustness.

Yet establishing verification methodologies requires resources that many curators lack. Running independent benchmarks demands computing infrastructure, technical expertise, and time. For daily newsletter curators processing dozens of announcements weekly, comprehensive verification of each claim is impossible. This creates a hierarchy of verification depth based on claim significance and curator resources.

For major model releases from OpenAI, Google, or Anthropic, curators might invest in detailed analysis, running their own tests and comparing results against vendor claims. For smaller vendors or incremental updates, verification often relies on proxy signals: reputation of technical team, quality of documentation, willingness to provide reproducible examples, and reports from early adopters in practitioner communities.

Academic fact-checking research offers some guidance. The International Fact-Checking Network's Code of Principles, adopted by over 170 organisations, emphasises transparency about sources and funding, methodology transparency, corrections policies, and non-partisanship. Peter Cunliffe-Jones, who founded Africa's first non-partisan fact-checking organisation in 2012, helped devise these principles that balance thoroughness with practical constraints.

AI-powered fact-checking tools have emerged to assist curators. Team CheckMate, a collaboration between journalists from News UK, dpa, Data Crítica, and the BBC, developed a web application for real-time fact-checking of video and audio broadcasts. Facticity won TIME's Best Inventions of 2024 Award for multilingual social media fact-checking. Yet AI fact-checking faces a familiar recursion problem: how do you verify AI claims using AI tools? The optimal approach combines both: AI tools for initial filtering and flagging, human experts for final judgement on significant claims.
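A toy sketch of that division of labour, with invented thresholds and fields standing in for whatever signals a real pipeline would use:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    significance: float      # 0..1: how much the claim matters to readers
    model_confidence: float  # 0..1: an AI checker's verification confidence

def triage(claims, sig_threshold=0.7, conf_threshold=0.8):
    """AI filtering handles the bulk of claims; anything significant or
    uncertain is escalated to a human expert for final judgement."""
    return [
        c for c in claims
        if c.significance >= sig_threshold or c.model_confidence < conf_threshold
    ]

queue = triage([
    Claim("Vendor X: 95% on SWE-bench", 0.9, 0.6),
    Claim("Minor release notes for tool Y", 0.2, 0.95),
])
print([c.text for c in queue])  # only the benchmark claim reaches a human
```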

Prioritisation in a Flood

When information doubles every two months, curation becomes fundamentally about prioritisation. Not every vendor claim deserves verification. Not every announcement merits coverage. Curators must develop frameworks for determining what matters most to their audience.

TLDR's Dan Ni uses his “chat test”: would my group chat be interested in this? This seemingly simple criterion embodies sophisticated judgement about practitioner relevance. Import AI's Jack Clark prioritises developments with policy, geopolitical, or safety implications. The Batch prioritises educational value, favouring developments that illuminate foundational concepts over incremental performance improvements.

These different prioritisation frameworks reveal an important truth: there is no universal “right” curation strategy. Different practitioner segments need different filters. Researchers need depth on methodology. Developers need practical tool comparisons. Policy professionals need regulatory and safety framing. Executives need strategic implications. Effective curators serve specific audiences with clear priorities rather than attempting to cover everything for everyone.
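In code terms, curation is a filter parameterised by audience rather than a single universal function. A toy sketch, with invented tags:

```python
from typing import Callable

Item = dict  # e.g. {"title": str, "tags": set[str]}

def make_filter(priorities: set[str]) -> Callable[[Item], bool]:
    """Each audience gets its own filter; there is no universal one."""
    return lambda item: bool(item["tags"] & priorities)

researcher = make_filter({"methodology", "benchmarks"})
policy     = make_filter({"regulation", "safety", "geopolitics"})

item = {"title": "EU AI Act guidance updated", "tags": {"regulation"}}
print(researcher(item), policy(item))  # False True
```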

AI-powered curation tools promise to personalise prioritisation, analysing individual behaviour to refine content suggestions dynamically. Yet this technological capability introduces new verification challenges: how do practitioners know the algorithm isn't creating filter bubbles, prioritising engagement over importance, or subtly favouring sponsored content? The tension between algorithmic efficiency and editorial judgement remains unresolved.

The Commercial Models

The question haunting every serious AI curator is brutally simple: how do you make enough money to survive without becoming a mouthpiece for whoever pays? The tension between commercial viability and editorial independence isn't new, but the AI content landscape introduces new pressures and possibilities that make traditional solutions inadequate.

The Sponsorship Model

Morning Brew pioneered a newsletter sponsorship model that has since been widely replicated in AI content. The economics are straightforward: build a large subscriber base, sell sponsorship placements based on CPM (cost per thousand impressions), and generate revenue without charging readers. Morning Brew reached over $250 million in lifetime revenue by Q3 2024.

Newsletter sponsorships typically price between $25 and $250 CPM, with the industry standard around $40 to $50. This means a newsletter with 100,000 subscribers charging a $50 CPM generates $5,000 per sponsored placement. Multiple sponsors per issue, multiple issues per week, and the revenue scales impressively.
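The underlying arithmetic is simple enough to check in a few lines; the sketch below restates the worked example, with an invented schedule of two sponsors per issue and three issues per week.

```python
def placement_revenue(subscribers: int, cpm: float) -> float:
    """CPM is the price per 1,000 impressions, so divide subscribers by 1,000."""
    return subscribers / 1000 * cpm

per_placement = placement_revenue(100_000, 50.0)  # -> 5000.0

# Scale it up: 2 sponsors per issue, 3 issues per week, 52 weeks a year.
annual = per_placement * 2 * 3 * 52               # -> 1,560,000.0
print(f"${per_placement:,.0f} per placement; ${annual:,.0f} per year")
```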

Yet the sponsorship model creates inherent tensions with editorial independence. Research on native advertising, compiled in Michelle Amazeen's book “Content Confusion,” delivers a stark warning: native ads erode public trust in media and poison journalism's democratic role. Studies found that readers almost always confuse native ads with real reporting. According to Bartosz Wojdynski, director of the Digital Media Attention and Cognition Lab at the University of Georgia, “typically somewhere between a tenth and a quarter of readers get that what they read was actually an advertisement.”

The ethical concerns run deeper. Native advertising is “inherently and intentionally deceptive to its audience” and perforates the normative wall separating journalistic responsibilities from advertisers' interests. Analysis of content from The New York Times, The Wall Street Journal, and The Washington Post found that in just over half of cases where outlets created branded content for corporate clients, their news coverage of that corporation declined steeply. This “agenda-cutting effect” represents a direct threat to editorial integrity.

For AI newsletters, the pressure is particularly acute because the vendor community is both the subject of coverage and the source of sponsorship revenue. When an AI model provider sponsors a newsletter, can that newsletter objectively assess the provider's benchmark claims? The conflicts aren't hypothetical; they're structural features of the business model.

Some curators attempt to maintain independence through disclosure and editorial separation. The “underwriting model” involves brands sponsoring content attached to normal reporting that the publisher was creating anyway. The brand simply pays to have its name associated with content rather than influencing what gets covered. Yet even with rigorous separation, sponsorship creates subtle pressures. Curators naturally become aware of which topics attract sponsors and which don't. Over time, coverage can drift towards commercially viable subjects and away from important but sponsor-unfriendly topics.

Data on reader reactions to disclosure provides mixed comfort. Sprout's Q4 2024 Pulse Survey found that 59 per cent of social users say the “#ad” label doesn't affect their likelihood to engage, whilst 25 per cent say it makes them more likely to trust content. A 2024 Yahoo study found that disclosing AI use in advertisements boosted trust by 96 per cent. However, Federal Trade Commission guidelines require clear identification of advertisements, and the problem worsens when content is shared on social media where disclosures often disappear entirely.

The Subscription Model

Subscription models offer a theoretically cleaner solution: readers pay directly for content, eliminating advertiser influence. Hell Gate's success, generating over $42,000 monthly from 5,300 paid subscribers whilst maintaining editorial independence, demonstrates viability. The Information's $399 annual subscriptions create a sustainable business serving thousands of subscribers who value exclusive analysis and community access.

Yet subscription models face formidable challenges in AI content. First, subscriber acquisition costs are high. Unlike free newsletters that grow through viral sharing and low-friction sign-ups, paid subscriptions require convincing readers to commit financially. Second, the subscription market fragments quickly. When multiple curators all pursue subscription models, readers face decision fatigue. Most will choose one or two premium sources rather than paying for many, creating winner-take-all dynamics.

Third, paywalls create discoverability problems. Free content spreads more easily through social sharing and search engines. Paywalled content reaches smaller audiences, limiting a curator's influence. For curators who view their work as public service or community building, paywalls feel counterproductive even when financially necessary.

The challenge intensifies as AI chatbots learn to access and summarise paywalled content. When Claude or GPT-4 can reproduce analysis that sits behind subscriptions, the value proposition erodes. Publishers responded with harder paywalls that prevent AI crawling, but this reduces legitimate discoverability alongside preventing AI access.

The Reuters Institute's 2024 Digital News Report found that across surveyed markets, only 17 per cent of respondents pay for news online. This baseline willingness-to-pay suggests subscription models will always serve minority audiences, regardless of content quality. Most readers have been conditioned to expect free content, making subscription conversion inherently difficult.

Practical Approaches

The reality facing most AI content curators is that no single commercial model provides perfect editorial independence whilst ensuring financial sustainability. Successful operations typically combine multiple revenue streams, balancing trade-offs across sponsorship, subscription, and institutional support.

A moderate publication frequency helps strike a balance: twice-weekly newsletters stay top-of-mind yet preserve content quality and advertiser trust. Transparency about commercial relationships provides a crucial foundation. Clear labelling of sponsored content, disclosure of institutional affiliations, and honest acknowledgement of potential conflicts enable readers to assess credibility themselves.

Editorial policies that create structural separation between commercial and editorial functions help maintain independence. Dedicated editorial staff who don't answer to sales teams can make coverage decisions based on practitioner value rather than revenue implications. Community engagement provides both revenue diversification and editorial feedback. Paid community features like Slack channels or Discord servers generate subscription revenue whilst connecting curators directly to practitioner needs and concerns.

The fundamental insight is that editorial independence isn't a binary state but a continuous practice. No commercial model eliminates all pressures. The question is whether curators acknowledge those pressures honestly, implement structural protections where possible, and remain committed to serving practitioner needs above commercial convenience.

Curation in an AI-Generated World

The central irony of AI content curation is that the technology being covered is increasingly capable of performing curation itself. Large language models can summarise research papers, aggregate news, identify trends, and generate briefings. As these capabilities improve, what role remains for human curators?

Newsweek is already leaning on AI for video production, breaking news teams, and first drafts of some stories. Most newsrooms spent 2023 and 2024 experimenting with transcription, translation, tagging, and A/B testing headlines before expanding to more substantive uses.

Yet this AI adoption creates familiar power imbalances. A 2024 Tow Center report from Columbia University, based on interviews with over 130 journalists and news executives, found that as AI-powered search gains prominence, “a familiar power imbalance” is emerging between news publishers and tech companies. As technology companies gain access to valuable training data, journalism's dependence becomes entrenched in “black box” AI products.

The challenge intensifies as advertising revenue continues falling for news outlets. Together, five major tech companies (Alphabet, Meta, Amazon, Alibaba, and ByteDance) commanded more than half of global advertising investment in 2024, according to WARC Media. As newsrooms rush to roll out automation and partner with AI firms, they risk sinking deeper into ethical lapses, crises of trust, worker exploitation, and unsustainable business models.

For AI practitioner content specifically, several future scenarios seem plausible. In one, human curators become primarily editors and verifiers of AI-generated summaries. The AI monitors thousands of sources, identifies developments, generates initial summaries, and flags items for human review. Curators add context, verify claims, and make final editorial decisions whilst AI handles labour-intensive aggregation and initial filtering.

In another scenario, specialised AI curators emerge that practitioners trust based on their training, transparency, and track record. Just as practitioners currently choose between Import AI, The Batch, and TLDR based on editorial voice and priorities, they might choose between different AI curation systems based on their algorithms, training data, and verification methodologies.

A third possibility involves hybrid human-AI collaboration models where AI curates whilst humans verify. AI-driven fact-checking tools validate curated content. Bias detection algorithms ensure balanced representation. Human oversight remains essential for tasks requiring nuanced cultural understanding or contextual assessment that algorithms miss.

The critical factor will be trust. Research shows the share of surveyed psychologists who never used AI tools in their practice fell from 71 per cent in 2024 to 44 per cent in 2025. This growing comfort with AI assistance suggests practitioners might accept AI curation if it proves reliable. Yet the same research shows 75 per cent of customers worry about data security with AI tools.

The gap between AI hype and reality complicates this future. Sentiment towards AI among business leaders dropped 12 per cent year-over-year in 2025, with only 69 per cent saying AI will enhance their industry. Leaders' confidence about achieving AI goals fell from 56 per cent in 2024 to just 40 per cent in 2025, a 29 per cent decline. When AI agents powered by top models from OpenAI, Google DeepMind, and Anthropic fail to complete straightforward workplace tasks by themselves, as Upwork research found, practitioners grow sceptical of expansive AI claims including AI curation.

Perhaps the most likely future involves plurality: multiple models coexisting based on practitioner preferences, resources, and needs. Some practitioners will rely entirely on AI curation systems that monitor custom source lists and generate personalised briefings. Others will maintain traditional newsletter subscriptions from trusted human curators whose editorial judgement they value. Most will combine both, using AI for breadth whilst relying on human curators for depth, verification, and contextual framing.

The infrastructure of information curation will likely matter more rather than less. As AI capabilities advance, the quality of curation becomes increasingly critical for determining what practitioners know, what they build, and which developments they consider significant. Poor curation that amplifies hype over substance, favours sponsors over objectivity, or prioritises engagement over importance can distort the entire field's trajectory.

Building Better Information Infrastructure

The question of what content formats are most effective for busy AI practitioners admits no single answer. Daily briefs serve practitioners needing rapid updates. Paywalled deep dives serve those requiring comprehensive analysis. Integrated dashboards serve specialists wanting customised aggregation. Effectiveness depends entirely on practitioner context, time constraints, and information needs.

The question of how curators verify vendor claims admits a more straightforward if unsatisfying answer: imperfectly, with resource constraints forcing prioritisation based on claim significance and available verification methodologies. Benchmark scepticism has become essential literacy for AI practitioners. The ability to identify cherry-picked results, non-production test conditions, and claims optimised for marketing rather than accuracy represents a crucial professional skill.

The question of viable commercial models without compromising editorial independence admits the most complex answer. No perfect model exists. Sponsorship creates conflicts with editorial judgement. Subscriptions limit reach and discoverability. Institutional support introduces different dependencies. Success requires combining multiple revenue streams whilst implementing structural protections, maintaining transparency, and committing to serving practitioner needs above commercial convenience.

What unites all these answers is recognition that information infrastructure matters profoundly. The formats through which practitioners consume information, the verification standards applied to claims, and the commercial models sustaining curation all shape what the field knows and builds. Getting these elements right isn't peripheral to AI development. It's foundational.

As information continues doubling every two months, as vendor announcements multiply, and as the gap between marketing hype and technical reality remains stubbornly wide, the role of thoughtful curation becomes increasingly vital. Practitioners drowning in information need trusted guides who respect their time, verify extraordinary claims, and maintain independence from commercial pressures.

Building this infrastructure requires resources, expertise, and a commitment to editorial principles that often conflicts with short-term revenue maximisation. Yet the alternative, an AI field navigating rapid development whilst drinking from a firehose of unverified vendor claims and sponsored content posing as objective analysis, presents risks that dwarf the costs of proper curation.

The practitioners building AI systems that will reshape society deserve information infrastructure that enables rather than impedes their work. They need formats optimised for their constraints, verification processes they can trust, and commercial models that sustain independence. The challenge facing the AI content ecosystem is whether it can deliver these essentials whilst generating sufficient revenue to survive.

The answer will determine not just which newsletters thrive but which ideas spread, which claims get scrutinised, and ultimately what gets built. In a field moving as rapidly as AI, the infrastructure of information isn't a luxury. It's as critical as the infrastructure of compute, data, and algorithms that practitioners typically focus on. Getting it right matters enormously. The signal must cut through the noise, or the noise will drown out everything that matters.

References & Sources

  1. American Press Institute. “The four business models of sponsored content.” https://americanpressinstitute.org/the-four-business-models-of-sponsored-content-2/

  2. Amazeen, Michelle. “Content Confusion: News Media, Native Advertising, and Policy in an Era of Disinformation.” Research on native advertising and trust erosion.

  3. Autodesk. (2025). “AI Hype Cycle | State of Design & Make 2025.” https://www.autodesk.com/design-make/research/state-of-design-and-make-2025/ai-hype-cycle

  4. Bartosz Wojdynski, Director, Digital Media Attention and Cognition Lab, University of Georgia. Research on native advertising detection rates.

  5. beehiiv. “Find the Right Email Newsletter Business Model for You.” https://blog.beehiiv.com/p/email-newsletter-business-model

  6. Columbia Journalism Review. “Reuters article highlights ethical issues with native advertising.” https://www.cjr.org/watchdog/reuters-article-thai-fishing-sponsored-content.php

  7. DigitalOcean. (2024). “12 AI Newsletters to Keep You Informed on Emerging Technologies and Trends.” https://www.digitalocean.com/resources/articles/ai-newsletters

  8. eMarketer. (2024). “Generative Search Trends 2024.” Reports on 525% revenue growth for AI-driven search engines.

  9. First Analytics. (2024). “Vetting AI Vendor Claims February 2024.” https://firstanalytics.com/wp-content/uploads/Vetting-Vendor-AI-Claims.pdf

  10. IBM. (2025). “AI Agents in 2025: Expectations vs. Reality.” https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality

  11. International Fact-Checking Network (IFCN). Code of Principles adopted by over 170 organisations. Developed with contribution from Peter Cunliffe-Jones.

  12. JournalismAI. “CheckMate: AI for fact-checking video claims.” https://www.journalismai.info/blog/ai-for-factchecking-video-claims

  13. LetterPal. (2024). “Best 15 AI Newsletters To Read In 2025.” https://www.letterpal.io/blog/best-ai-newsletters

  14. MIT Technology Review. (2025). “The great AI hype correction of 2025.” https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/

  15. Newsletter Operator. “How to build a Morning Brew style newsletter business.” https://www.newsletteroperator.com/p/how-to-build-a-moring-brew-style-newsletter-business

  16. Nieman Journalism Lab. (2024). “AI adoption in newsrooms presents 'a familiar power imbalance' between publishers and platforms, new report finds.” https://www.niemanlab.org/2024/02/ai-adoption-in-newsrooms-presents-a-familiar-power-imbalance-between-publishers-and-platforms-new-report-finds/

  17. Open Source CEO. “How (This & Other) Newsletters Make Money.” https://www.opensourceceo.com/p/newsletters-make-money

  18. Paved Blog. “TLDR Newsletter and the Art of Content Curation.” https://www.paved.com/blog/tldr-newsletter-curation/

  19. PubMed. (2024). “Artificial Intelligence and Machine Learning May Resolve Health Care Information Overload.” https://pubmed.ncbi.nlm.nih.gov/38218231/

  20. Quuu Blog. (2024). “AI Personalization: Curating Dynamic Content in 2024.” https://blog.quuu.co/ai-personalization-curating-dynamic-content-in-2024-2/

  21. Reuters Institute. (2024). “Digital News Report 2024.” Finding that 17% of respondents pay for news online.

  22. Sanders, Emily. “These ads are poisoning trust in media.” https://www.exxonknews.org/p/these-ads-are-poisoning-trust-in

  23. Skywork AI. (2025). “How to Evaluate AI Vendor Claims (2025): Benchmarks & Proof.” https://skywork.ai/blog/how-to-evaluate-ai-vendor-claims-2025-guide/

  24. Sprout Social. (2024). “Q4 2024 Pulse Survey.” Data on “#ad” label impact on consumer behaviour.

  25. Stanford HAI. (2025). “Technical Performance | The 2025 AI Index Report.” https://hai.stanford.edu/ai-index/2025-ai-index-report/technical-performance

  26. TDWI. (2024). “Tackling Information Overload in the Age of AI.” https://tdwi.org/Articles/2024/06/06/ADV-ALL-Tackling-Information-Overload-in-the-Age-of-AI.aspx

  27. TLDR AI Newsletter. Founded by Dan Ni, August 2018. https://tldr.tech/ai

  28. Tow Center for Digital Journalism, Columbia University. (2024). Felix Simon interviews with 130+ journalists and news executives on AI adoption.

  29. Upwork Research. (2025). Study on AI agent performance in workplace tasks.

  30. WARC Media. (2024). Data on five major tech companies commanding over 50% of global advertising investment.

  31. Yahoo. (2024). Study finding AI disclosure in ads boosted trust by 96%.

  32. Zapier. (2025). “The best AI newsletters in 2025.” https://zapier.com/blog/best-ai-newsletters/


Tim Green
UK-based Systems Theorist & Independent Technology Writer

Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.

His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.

ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk
