Former OpenAI, DeepMind Staff Secure $150M to Develop Tools Tackling AI Hallucinations and Enhancing Accuracy

A new venture aimed at tackling the pervasive issue of "AI hallucinations" and enhancing the accuracy of AI systems has emerged, backed by $150 million in funding. The company is spearheaded by former researchers and engineers from leading AI organizations, including OpenAI and DeepMind.

The company's primary focus is to develop tools and techniques that can identify and mitigate hallucinations and ultimately prevent AI models from generating false, misleading, or nonsensical information. AI hallucinations, in which a model confidently produces output that deviates from reality or lacks a factual basis, have become a significant concern as AI systems are deployed in critical applications. These hallucinations can take many forms, including factually incorrect responses, fabricated stories, made-up citations, and even the invention of nonexistent people or features.

The newly secured funding will be used to build a "model design environment," a platform that leverages interpretability techniques to allow users to understand, debug, and intentionally design AI models at scale. This platform will enable users to reach inside models, identify the specific components responsible for undesirable behaviors, and then train or modify those subunits directly. The company aims to rethink AI training methodologies to make AI more reliable and ensure that model behavior can be precisely controlled without unintended consequences.
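The article does not describe the platform's internals, but the first step of the workflow it outlines, locating the components tied to a behavior before acting on them, can be illustrated in a few lines. The sketch below is a toy, not the company's tooling: it uses a small PyTorch network, synthetic "behavior" and "control" batches, and a forward hook to flag the hidden units whose activations differ most between the two.

```python
# Illustrative only: toy model, synthetic data, and an arbitrary top-k cutoff.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyMLP(nn.Module):
    def __init__(self, d_in=16, d_hidden=32, d_out=4):
        super().__init__()
        self.hidden = nn.Linear(d_in, d_hidden)
        self.out = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x)))

model = TinyMLP()

# Capture the hidden layer's (pre-activation) output with a forward hook.
captured = {}
def save_activations(_module, _inputs, output):
    captured["hidden"] = output.detach()

model.hidden.register_forward_hook(save_activations)

def mean_activation(batch):
    model(batch)
    # Apply the same ReLU the forward pass uses, then average over the batch.
    return torch.relu(captured["hidden"]).mean(dim=0)

# Stand-ins for inputs that do / do not elicit the unwanted behavior.
behaviour_batch = torch.randn(64, 16) + 0.5
control_batch = torch.randn(64, 16)

# Units whose average activation differs most between the two batches are
# candidates for targeted retraining or editing.
diff = (mean_activation(behaviour_batch) - mean_activation(control_batch)).abs()
suspect_units = torch.topk(diff, k=5).indices
print("candidate hidden units:", suspect_units.tolist())
```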

The team's approach involves developing methods to efficiently retrain a model's behavior by precisely targeting parts of its inner workings. In one application of these methods, they were able to reduce hallucinations by half in a large language model. The company believes this approach will lead to a paradigm shift in how AI is built, making it more reliable and controllable.
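Again as a rough illustration rather than the team's actual method, "retraining only the targeted parts" can be approximated by freezing every parameter, re-enabling gradients on one layer, and zeroing the gradients outside a handful of previously identified rows before each optimizer step. The toy model, the unit indices, and the corrective data below are all assumptions.

```python
# Illustrative only: the suspect_units list stands in for a localization step.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
suspect_units = [3, 7, 11]

# Freeze everything, then allow gradients on the first layer only.
for p in model.parameters():
    p.requires_grad = False
first = model[0]
first.weight.requires_grad = True
first.bias.requires_grad = True

# Masks restrict updates to the rows belonging to the suspect units.
row_mask = torch.zeros_like(first.weight)
row_mask[suspect_units, :] = 1.0
bias_mask = torch.zeros_like(first.bias)
bias_mask[suspect_units] = 1.0

opt = torch.optim.SGD([first.weight, first.bias], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Corrective examples: inputs paired with the outputs the model should give.
x = torch.randn(64, 16)
y = torch.randint(0, 4, (64,))

for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Zero out gradients outside the targeted rows before stepping.
    first.weight.grad *= row_mask
    first.bias.grad *= bias_mask
    opt.step()

print("final loss:", loss.item())
```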

Several existing tools and techniques can detect AI hallucinations in real time. These tools often integrate with AI development environments to monitor outputs and apply rules that filter out factual inaccuracies. Prompt engineering techniques, such as directing the model to reference specific, reliable sources, can also help mitigate hallucinations, as can Chain-of-Verification (CoVe), Step-Back Prompting, and Retrieval-Augmented Generation (RAG). Explainable AI (XAI) can increase transparency by revealing the reasoning behind AI outputs, allowing users to assess their validity, while integrated fact-checking systems can cross-reference generated outputs against trusted databases in real time.
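Of these techniques, Retrieval-Augmented Generation is the most mechanical to sketch: relevant passages are pulled from a trusted corpus and placed in the prompt so the model can ground its answer in cited text. The example below is illustrative only; the tiny corpus, the bag-of-words relevance score, and the prompt template stand in for the vector search and the actual model call a production system would use.

```python
# Minimal RAG sketch: retrieve trusted passages, then build a grounded prompt.
from collections import Counter

TRUSTED_CORPUS = [
    "The company closed a $150 million funding round in 2026.",
    "The platform uses interpretability techniques to debug model behaviour.",
    "AI hallucinations are confident outputs with no factual basis.",
]

def score(query: str, passage: str) -> int:
    """Crude relevance score: number of overlapping word tokens."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    ranked = sorted(TRUSTED_CORPUS, key=lambda p: score(query, p), reverse=True)
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    # The resulting string would be sent to a language model; the instruction
    # to answer only from the listed sources is what discourages hallucination.
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using only the sources below; say 'not found' otherwise.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("How much funding did the company raise?"))
```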

The team comprises AI researchers from DeepMind and OpenAI, academics from Harvard and Stanford, and machine learning engineers from OpenAI and Google. Their expertise in neural network interpretability will be crucial in achieving their goals.

The rise of AI hallucinations poses significant challenges across sectors. In one case, a transcription tool was found to fabricate text that included racial commentary, violent rhetoric, and nonexistent medical treatments; in another, a chatbot cited a made-up company policy. These examples underscore the importance of robust tools to detect and prevent hallucinations, especially in high-stakes applications.
