KDD '25 AI Reasoning Day keynote: Improving AI Reasoning through Intent, Interaction, and Inspection

Published January 9, 2026, 0:57
ai-reasoning.github.io

AI models are increasingly capable of solving sophisticated tasks that require reasoning. But how do we improve the quality of that reasoning, especially when the models operate as black boxes? In this talk, Sumit Gulwani shares practical strategies for improving AI reasoning in the domain of code and structured tasks.

A first idea is to capture richer forms of user intent. Input-output examples not only enable post-hoc validation but also guide the model toward correct generations up front. Temporal context (such as recent user actions) can help infer evolving intent and keep users in flow. A second idea is to give the model an escape mechanism, allowing it to abstain or initiate collaborative interaction when it lacks sufficient information. This raises new challenges in evaluating interactive workflows, which we address through rubric-based assessments of conversation quality (grounded in principles like the Gricean maxims) and automation using simulated user proxies. Finally, we can strengthen reasoning via automated inspection. Symbolic checkers or programmatic validators can uncover hallucinations and inconsistencies in both online and offline settings. These signals can then guide the model through iterative refinement or prompt updates. Sumit illustrates these ideas through real-world applications spanning spreadsheet tasks and software development, highlighting how AI reasoning can be improved using structured intent, collaborative interaction, and systematic inspection.
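
To make the inspection-and-refinement idea concrete, here is a minimal Python sketch combining two of the strategies above: input-output examples used as a programmatic validator, and validator feedback folded back into the prompt for another attempt. It is an illustration under stated assumptions, not the system described in the talk: generate_code(prompt) stands in for an assumed black-box call to a code model, and solve is the hypothetical function name a candidate program is expected to define.

# Sketch of validator-guided iterative refinement (illustrative only).
# Assumption: generate_code(prompt) -> str queries a black-box code model; it is not defined here.

def run_examples(code, examples):
    """Programmatic validator: run the candidate on input-output examples
    and return a description of each failing check."""
    failures = []
    namespace = {}
    try:
        exec(code, namespace)          # define the candidate function `solve`
        solve = namespace["solve"]
    except Exception as e:
        return [f"candidate does not define a runnable `solve`: {e}"]
    for x, expected in examples:
        try:
            got = solve(x)
        except Exception as e:
            failures.append(f"solve({x!r}) raised {e!r}, expected {expected!r}")
            continue
        if got != expected:
            failures.append(f"solve({x!r}) = {got!r}, expected {expected!r}")
    return failures

def refine(task, examples, max_rounds=3):
    """Iterative refinement: regenerate with validator feedback appended to the prompt."""
    prompt = f"{task}\nWrite a Python function `solve(x)`."
    for _ in range(max_rounds):
        code = generate_code(prompt)              # assumed black-box model call
        failures = run_examples(code, examples)   # automated inspection
        if not failures:
            return code                           # all examples pass
        prompt += "\nYour previous attempt failed these checks:\n" + "\n".join(failures)
    return None                                   # abstain rather than return unchecked code

Returning None after the final round mirrors the escape mechanism from the second idea: rather than presenting code that failed inspection, the system can abstain or hand control back to the user.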