Report finds newer AI reasoning models hallucinate nearly half the time, while experts warn of unresolved flaws, deliberate deception and a long road to human-level AI reliability
Yeah, I think workarounds with o3 are where we're at until Altman figures out that just saying the latest oX mini high is “great at coding” is bad marketing when it can’t accomplish the task.