Chain-of-Thought Prompting


The chain-of-thought (CoT) prompting method enables LLMs to explain their reasoning while enhancing their computational capabilities and understanding of complex problems.

What is Chain-of-Thought Prompting? 

While prompt engineering is a maturing field focused on designing and refining the text prompts given to transformer-based large language models (LLMs), the chain-of-thought prompting method goes further by enabling LLMs to explain their reasoning. The CoT approach enhances a model's computational capabilities and its handling of complex problems.

The paper 'Chain-of-Thought Prompting Elicits Reasoning in Large Language Models' (Wei et al., 2022) presents these four properties of the CoT approach:

  1. Chain-of-thought prompting breaks multi-step problems down into intermediate steps, allowing the model to perform additional computation whenever it is required. It is a good fit for intricate computational problems that a single-step prompt cannot solve.
  2. The method provides insight into the model's behavior, showing users how it arrived at a specific answer and allowing them to debug or correct the reasoning path when it goes wrong.
  3. CoT applies to tasks such as math word problems, symbolic manipulation, commonsense reasoning, and other NLP tasks.
  4. CoT reasoning can be elicited in sufficiently large off-the-shelf language models simply by including example thought sequences in few-shot prompts.
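To make the last property concrete, here is a minimal Python sketch contrasting a standard few-shot prompt with its chain-of-thought counterpart. The tennis-ball exemplar and the cafeteria question come from the Wei et al. paper; the exact Q:/A: layout is an illustrative assumption, not a fixed format.

```python
# Sketch: a standard few-shot prompt vs. the same prompt with CoT reasoning.
# The exemplar is from Wei et al. (2022); the Q:/A: formatting is an
# illustrative choice, not a requirement.

QUESTION = (
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
)

# Standard prompting: the exemplar shows only the final answer.
standard_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
    f"Q: {QUESTION}\nA:"
)

# CoT prompting: the same exemplar, but the answer now includes the
# intermediate reasoning steps the model should imitate.
cot_prompt = standard_prompt.replace(
    "A: The answer is 11.",
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.",
)
print(cot_prompt)
```

Sent to a sufficiently large model, the CoT version tends to elicit a step-by-step answer rather than a direct, and often wrong, guess.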

An Example Explaining CoT

The picture below compares a standard prompt with a CoT prompt; the latter arrives at the correct answer on a math reasoning problem.

Standard Prompting vs. CoT Prompting | Source

 

Why Is Chain-of-Thought Prompting Essential? 

CoT prompting is an excellent approach to complex tasks and supports many use cases involving arithmetic, commonsense, and reasoning tasks. It overcomes the limitations of few-shot prompting for complex problems and specifically benefits larger models.

This approach brings greater visibility into the model's reasoning process, making it easier to understand, interpret, and debug. Overall, CoT prompting helps AI researchers and practitioners better understand a model's decision-making process, a step toward more trustworthy AI systems.

Techniques Used In CoT Prompting

Crucial techniques applied in CoT prompting include: 

Few-shot CoT

This approach provides a small number of worked examples to guide a language model's thought process. Instead of relying on a single instruction, you supply a few exemplars that cover different aspects or perspectives of a topic, helping the model generalize and reason over new inputs.

An Example of Few-Shot CoT Prompting
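As a sketch, a few-shot CoT prompt can be built by concatenating worked exemplars (question, reasoning, final answer) before the new question. The exemplar below is from the Wei et al. paper; the helper function and its formatting are illustrative assumptions.

```python
# Sketch of building a few-shot CoT prompt. The exemplar is from
# Wei et al. (2022); the builder function and its layout are illustrative.

FEW_SHOT_EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 more cans of "
                    "tennis balls. Each can has 3 tennis balls. How many "
                    "tennis balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 tennis "
                     "balls each is 6 tennis balls. 5 + 6 = 11.",
        "answer": "11",
    },
]

def build_few_shot_cot_prompt(exemplars, new_question):
    """Concatenate worked examples (question + reasoning + answer)
    before the new question so the model imitates the reasoning style."""
    parts = []
    for ex in exemplars:
        parts.append(f"Q: {ex['question']}\nA: {ex['reasoning']} "
                     f"The answer is {ex['answer']}.")
    parts.append(f"Q: {new_question}\nA:")
    return "\n\n".join(parts)

prompt = build_few_shot_cot_prompt(
    FEW_SHOT_EXEMPLARS,
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?",
)
print(prompt)
```

Adding more exemplars to the list extends the prompt with further reasoning demonstrations without changing the builder.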

Self-consistency Prompting

This approach combines few-shot CoT with diverse reasoning paths to find the answer with the highest consistency, and it performs well on arithmetic and commonsense reasoning problems. Instead of greedily decoding a single reasoning path, self-consistency samples a diverse set of paths and then selects the most consistent answer by marginalizing over the sampled paths.

An Example Illustrating Self-consistency Prompting | Source
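The aggregation step can be sketched in a few lines: sample several reasoning paths (stubbed below as pre-written strings standing in for temperature-sampled completions), extract each path's final answer, and keep the majority answer. The answer-extraction heuristic is an illustrative assumption; real pipelines usually parse an explicit "The answer is …" pattern.

```python
# Sketch of the self-consistency aggregation step. The sample_paths list
# stands in for multiple sampled model completions; the extraction
# heuristic (last number in the text) is an illustrative assumption.
from collections import Counter
import re

def extract_final_answer(path: str):
    """Pull the last number from a reasoning path (illustrative heuristic)."""
    numbers = re.findall(r"-?\d+", path)
    return numbers[-1] if numbers else None

def self_consistent_answer(paths):
    """Majority-vote over the final answers of the sampled paths."""
    answers = [a for a in map(extract_final_answer, paths) if a is not None]
    return Counter(answers).most_common(1)[0][0]

# Three sampled reasoning paths for "16 - 3 - 4 = ?"; two agree on 9.
sample_paths = [
    "16 minus 3 is 13, minus 4 is 9. The answer is 9.",
    "16 - 3 = 13; 13 - 4 = 9. The answer is 9.",
    "16 - 4 = 12, 12 - 3 = 8. The answer is 8.",  # an inconsistent path
]
print(self_consistent_answer(sample_paths))  # majority answer: "9"
```

Marginalizing out the reasoning paths this way means an occasional faulty derivation is outvoted by the paths that agree.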

Zero-shot CoT 

Zero-shot CoT (Kojima et al., 2022) refines a zero-shot prompt by simply appending "Let's think step by step" to the original prompt.

An Example of Zero-shot CoT | Source
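In code, zero-shot CoT amounts to appending the trigger phrase to the prompt. The wrapper function below is an illustrative sketch; the juggler question is the example used by Kojima et al.

```python
# Sketch of zero-shot CoT: append the trigger phrase from
# Kojima et al. (2022) to the original prompt before querying a model.

COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(question: str) -> str:
    """Wrap a raw question with the zero-shot CoT trigger phrase."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

prompt = zero_shot_cot(
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
print(prompt)
```

No exemplars are needed; the trigger phrase alone is what nudges the model into producing intermediate reasoning steps.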

Further Reading 

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models 

Chain-of-Thought Prompting

Self-Consistency Improves Chain of Thought Reasoning in Language Models