How to Detect Hallucinations in Retrieval Augmented Systems: A Primer

Hallucinations (model-generated outputs that appear confident yet contain fabricated or incorrect information) remain one of the thorniest issues facing AI engineers today. As retrieval- and search-augmented systems have proliferated, systematically identifying and mitigating hallucinations has become critical. ...

May 28, 2025 · Julia Neagu

What OpenAI’s Sycophancy Problem Teaches Us About Using User Data to Evaluate AI

On April 25th, OpenAI shared a surprising update: after introducing thumbs-up/down feedback from ChatGPT users into its GPT‑4o fine-tuning process, the model became noticeably more sycophantic. ...

May 2, 2025 · Julia Neagu

Eval-driven development for the AI Engineer

Despite the current explosion of developers working in the AI space, only 5% of LLM-based projects lead to significant monetization of new product offerings. Many products remain stuck in the proof-of-concept or beta phase and never make their way to users. ...

August 6, 2024 · Julia Neagu