You know how some AI systems can reason through problems (like chain-of-thought prompting), while others can take actions in an environment (like generating plans)? The ReAct paper shows what happens when you combine the two.
The Core Idea
ReAct stands for Reasoning + Acting. Instead of just thinking OR doing, the AI alternates between the two:
- Reason: Think about what to do next
- Act: Take an action (like querying a knowledge base)
- Observe: See what happened
- Repeat: Reason about the new information and act again
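The loop above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the `llm` and `search` functions are hypothetical stand-ins (stubbed here so the sketch runs end to end), and the `Action: Search[...]` syntax is just one way to format tool calls.

```python
# Minimal ReAct-style loop: alternate Reason -> Act -> Observe until an answer.
# `llm` and `search` are hypothetical stubs standing in for a real model and
# a real knowledge-base/search tool.

def search(query):
    # Stub knowledge base standing in for a real Wikipedia/search tool.
    facts = {
        "2018 Winter Olympics location": "The 2018 Winter Olympics were held in Pyeongchang, South Korea.",
    }
    return facts.get(query, "No results found.")

def llm(transcript):
    # Stub model: a real system would generate the next Thought/Action here,
    # conditioned on the whole transcript so far.
    if "Pyeongchang" not in transcript:
        return 'Thought: Find the host city. Action: Search["2018 Winter Olympics location"]'
    return "Answer: Pyeongchang, South Korea"

def react(task, max_steps=5):
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript)          # Reason: model emits a Thought/Action
        transcript += step + "\n"
        if step.startswith("Answer:"):  # the model decided it is done
            return step.removeprefix("Answer:").strip()
        if "Action: Search[" in step:   # Act: dispatch the tool call
            query = step.split("Search[")[1].rstrip("]").strip('"')
            transcript += f"Observation: {search(query)}\n"  # Observe
    return None

print(react("Where were the 2018 Winter Olympics held?"))
```

The key design point is that the observation is appended back into the transcript, so the next reasoning step is conditioned on real retrieved text rather than on the model's own guesses.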
Why It Matters
This simple pattern tackles a huge problem: hallucination. When an AI just reasons without checking facts, it makes things up. When it can act (like searching Wikipedia), it stays grounded in reality.
The results are impressive:
- On HotpotQA (question answering): ReAct grounds its answers in retrieved facts, and combining it with chain-of-thought beats either approach alone
- On ALFWorld and WebShop (interactive tasks): it outperforms imitation and reinforcement learning baselines by an absolute 34% and 10% in success rate, respectively
- All this with just 1-2 examples in the prompt
The Pattern in Practice
Here's what a ReAct trace looks like:
Task: What's the elevation of the mountain where the 2018 Winter Olympics were held?
Thought: I need to find where the 2018 Winter Olympics were held.
Action: Search "2018 Winter Olympics location"
Observation: The 2018 Winter Olympics were held in Pyeongchang, South Korea.

Thought: Now I need to find mountains near Pyeongchang.
Action: Search "Pyeongchang mountains"
Observation: The games were held near Mount Taebaek.

Thought: Now I need the elevation of Mount Taebaek.
Action: Search "Mount Taebaek elevation"
Observation: Mount Taebaek has an elevation of 1,567 meters.
Answer: 1,567 meters
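In practice, a solved trace like this does double duty as a few-shot example: you prepend one or two of them to the prompt so the model imitates the Thought/Action/Observation format on a new task. A hypothetical sketch of that prompt assembly (the exact wording and `build_prompt` helper are illustrative, not the paper's official prompt):

```python
# Assemble a ReAct few-shot prompt: one solved example trace plus a fresh task.
# The Thought/Action/Observation line format follows the trace above; the "..."
# marks steps elided for brevity in this illustration.

EXAMPLE_TRACE = """\
Task: What's the elevation of the mountain where the 2018 Winter Olympics were held?
Thought: I need to find where the 2018 Winter Olympics were held.
Action: Search "2018 Winter Olympics location"
Observation: The 2018 Winter Olympics were held in Pyeongchang, South Korea.
...
Answer: 1,567 meters"""

def build_prompt(task, examples=(EXAMPLE_TRACE,)):
    # Few-shot prompting: solved traces first, then the new task, ending with
    # "Thought:" so the model continues in the same format.
    shots = "\n\n".join(examples)
    return f"{shots}\n\nTask: {task}\nThought:"

print(build_prompt("What year were the first Winter Olympics held?"))
```

This is why the paper's "1-2 examples in the prompt" claim matters: the whole agent behavior is specified in-context, with no fine-tuning required.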
Why This Works
By interleaving reasoning and acting:
- The AI can check its assumptions
- It handles exceptions better (what if the first search doesn't work?)
- It's way more interpretable (you can see the AI's thinking)
- It stays factual by grounding in external sources
The Takeaway
ReAct shows that the best AI systems aren't just thinkers or just doers - they're both. And that's exactly how modern AI agents work today (like the ones in LangChain).
Read the full paper: ReAct: Synergizing Reasoning and Acting in Language Models (arXiv:2210.03629)
Authors: Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao
Project site: https://react-lm.github.io/