
Large Language Models: Better Reasoners or Storehouses of Knowledge?

Speaker: Ayush Agrawal, Research Fellow, Microsoft Research India
When: Oct 26, 2023, 02:00 PM to 03:00 PM
Where: Ground Floor Lecture Hall 006

Abstract: Large Language Models (LLMs) are increasingly being applied in areas ranging from mathematics to information retrieval. In this talk, I will take a tour of two fascinating areas where LLMs have proved helpful: formal theorem proving and factual question answering. Although LLMs show increasing promise in both fields, they suffer from several problems that limit their adoption.

Formal theorem proving involves generating proofs that are machine-checkable using software like Lean. I will primarily focus on leveraging AI to address two major problems: the formalization of natural proofs into a formal language (autoformalization) and finding a verifiable proof for a theorem statement (proof search).
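As a small illustration (not from the talk) of what "machine-checkable" means here, a statement and its proof can be written in Lean and verified mechanically by the kernel; autoformalization aims to produce such code from natural-language mathematics:

```lean
-- A formalized statement whose proof Lean's kernel checks automatically:
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A natural-language claim ("addition is commutative") rendered formally;
-- proof search must find a term or tactic proof such as this one:
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

If Lean accepts the file, the proofs are correct by construction, which is what distinguishes this setting from informal proof writing.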

Factual question answering involves searching for answers to factual questions. Here I will focus on a prevalent phenomenon, known as hallucination, in which existing LLMs fabricate facts, using the reference-generation task as a case study.

While formal theorem proving requires genuine reasoning, factual QA predominantly relies on memorization. Through this talk, I aim to give an overview of how effective these models are in the two areas, and to emphasize the pressing need for a deeper understanding of their capabilities in order to make them more reliable and beneficial.

