
OpenAI’s Grantwashing: A Costly Distraction in AI Safety Research

The Grant Announcement and Its Implications

OpenAI recently unveiled a funding initiative of up to $2 million for research on AI’s impact on mental health, offering grants of $5,000 to $100,000 per project. The announcement follows lawsuits in which OpenAI has denied accountability for AI-induced harms, including tragic cases involving user suicides. The funding appears to be a strategic move to deflect scrutiny rather than confront deeper systemic issues.

Grant Sizes: A Pittance Compared to Real Needs

Comparing OpenAI’s grants to typical public health funding reveals a striking disparity. The median grant from the National Institute of Mental Health in 2024 was $642,918, dwarfing OpenAI’s offerings. The gap suggests the program is less a genuine commitment to comprehensive mental health research than a façade to placate regulators and the public.

Understanding Grantwashing

Grantwashing describes the practice of companies funding research at minimal levels while withholding the data that rigorous findings would require. The tactic not only stifles meaningful research but also feeds a culture of misinformation. OpenAI’s approach mirrors past moves by tech giants such as Meta, which have similarly downplayed their responsibilities while offering token financial support for studies that lack the resources for rigorous inquiry.

Legal Challenges and Corporate Responsibility

OpenAI faces ongoing lawsuits alleging that its AI systems directly contributed to mental health crises, including a case involving a teenager’s suicide. These legal battles expose a critical tension: OpenAI’s lawyers argue user misuse rather than product defect, further complicating public perception and trust. The company’s requests for sensitive information from affected families during these proceedings raise ethical questions about its priorities.

Future Direction: What Needs to Change

To foster real advances in AI safety, companies should allocate a meaningful share of their research budgets, on the order of 3-5%, to independent studies. Such a model, akin to the Human Genome Project’s earmarking of a fixed share of its budget for ethical and social research, could yield actionable insight into the mental health implications of AI technologies. Without transparency and genuine investment, the cycle of grantwashing will continue to undermine scientific integrity and public trust.

Over the next 6–12 months, expect increased scrutiny of OpenAI and similar companies as regulatory bodies define data access requirements. Companies will need to adapt to these demands or face mounting public and legal pressure, forcing them to rethink their strategies around funding and research transparency.
