Tin Rabzelj

Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Paper Notes

8/26/2025

https://arxiv.org/abs/2305.10601

Introduces "Tree of Thoughts" (ToT), a framework for LM inference that enhances problem-solving abilities on complex tasks requiring exploration and strategic lookahead. The key idea is to move away from the linear, token-level decision-making of methods like Chain of Thought (CoT) prompting. ToT allows LMs to explore a tree of possible reasoning paths, where each "thought" is a coherent unit of text representing an intermediate step in problem-solving.

It enables the model to consider multiple different paths of reasoning, evaluate their progress, and decide which path to pursue next. It makes more global and strategic choices. It's analogous to how humans often explore different possibilities when solving a difficult problem.

Thoughts are generated by the LM itself, guided by specific prompts. The goal is to create a set of potential next steps (thoughts) from the current state of the problem. They talk about two strategies, sampling independently and proposing sequentially.
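The two generation strategies can be sketched as follows. This is a minimal illustration, not the paper's code: `llm` is a hypothetical placeholder for any text-completion call, and the prompts are invented for the example.

```python
def llm(prompt: str, n: int = 1) -> list[str]:
    # Placeholder: a real implementation would call a language model here.
    return [f"thought-{i}" for i in range(n)]

def sample_thoughts(state: str, k: int) -> list[str]:
    # Sample k thoughts i.i.d. from the same prompt. Works well when the
    # thought space is rich (e.g. a paragraph in creative writing).
    return [llm(f"Continue the reasoning:\n{state}")[0] for _ in range(k)]

def propose_thoughts(state: str, k: int) -> list[str]:
    # Ask the model for k distinct proposals in one call. Works well when
    # the thought space is constrained (e.g. a few equations in Game of 24),
    # since it avoids duplicate samples.
    return llm(f"Propose {k} distinct next steps:\n{state}", n=k)
```

Sampling trades diversity for independence; proposing lets the model avoid repeating itself within one generation.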

They test the ToT framework on three novel tasks:

  • Game of 24: GPT-4 with standard CoT prompting solved only 4% of the tasks, while the ToT method achieved a 74% success rate.
  • Creative Writing: ToT-generated passages were judged as more coherent.
  • Mini Crosswords: ToT achieved a 60% word-level success rate, compared to less than 16% for other methods, and completely solved 4 out of 20 games.

They argue that ToT is a more general and flexible framework for problem-solving with LMs. It can use different search algorithms (like breadth-first and depth-first search) and heuristics. It's a step towards enabling LMs to perform more deliberate and systematic "System 2" reasoning, alongside their fast, intuitive "System 1" capabilities.
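A breadth-first variant of the search loop might look like this. Everything here is an assumed sketch, not the authors' implementation: `generate` and `evaluate` are hypothetical stand-ins for the LM-driven thought generator and value heuristic.

```python
def generate(state: str, k: int) -> list[str]:
    # Placeholder thought generator; a real version would prompt an LM.
    return [f"{state} -> step{i}" for i in range(k)]

def evaluate(state: str) -> float:
    # Placeholder value heuristic in [0, 1]; a real version would ask
    # the LM to judge how promising the partial solution is.
    return 1.0 / (1 + len(state))

def tot_bfs(root: str, depth: int, breadth: int, keep: int) -> str:
    frontier = [root]
    for _ in range(depth):
        # Expand every frontier state into candidate thoughts.
        candidates = [c for s in frontier for c in generate(s, breadth)]
        # Prune to the `keep` most promising states; this evaluation step
        # is what gives ToT its global, lookahead-style decision-making.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:keep]
    return frontier[0]
```

Swapping the frontier for a stack gives the depth-first variant with backtracking; the paper picks the search strategy per task.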
