CriticBench:
Benchmarking LLMs for
Critique-Correct Reasoning

Zicheng Lin1*, Zhibin Gou1*, Tian Liang1,
Ruilin Luo1, Haowei Liu2, Yujiu Yang1†
1Tsinghua University  2University of Hong Kong
*Equal Contribution  †Corresponding Author

The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CriticBench, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CriticBench encompasses five reasoning domains: mathematical, commonsense, symbolic, coding, and algorithmic. It compiles 15 datasets and incorporates responses from three LLM families. Using CriticBench, we evaluate and dissect the performance of 17 LLMs in generation, critique, and correction (GQC) reasoning, and analyze the key factors affecting LLMs' critical reasoning.
Our findings reveal: (1) a linear relationship in GQC capabilities, with critique-focused training markedly enhancing performance; (2) a task-dependent variation in critique and correction effectiveness, with logic-oriented tasks being more amenable to correction; (3) GQC knowledge inconsistencies that decrease as model size increases; and (4) an intriguing inter-model critiquing pattern, where stronger models are better at critiquing weaker ones, while weaker models can surprisingly surpass stronger ones in their self-critique.
We hope these insights into the nuanced critique-correct reasoning of LLMs will foster further research in LLM critique and self-improvement.

CriticBench Curation

Figure 1: An overview of CriticBench curation.

CriticBench is a benchmark spanning five reasoning domains, designed to systematically assess the critique and correction reasoning of a wide range of LLMs.
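To make the GQC protocol concrete, the sketch below illustrates how an evaluation over CriticBench-style examples might be structured: for each question paired with a previously sampled response, the model under test is asked to (G) generate its own answer, (Q) critique the given response as correct or incorrect, and (C) produce a corrected answer. This is a minimal sketch; the field names, prompts, and the exact-match answer check are illustrative assumptions, not the official CriticBench data schema or evaluation code.

# Hypothetical sketch of a GQC evaluation loop over CriticBench-style examples.
# Field names ("question", "response", "reference_answer", "response_is_correct")
# and the prompts are assumptions for illustration only.

from typing import Callable, Dict, List


def evaluate_gqc(examples: List[Dict], ask_model: Callable[[str], str]) -> Dict[str, float]:
    """Score a model on Generation (G), Critique (Q), and Correction (C)."""
    gen_hits = critique_hits = corr_hits = 0

    for ex in examples:
        question, response, answer = ex["question"], ex["response"], ex["reference_answer"]
        response_is_correct = ex["response_is_correct"]  # gold label for the critique task

        # G: generate an answer from scratch (exact match is a simplification).
        gen = ask_model(f"Question: {question}\nAnswer:")
        gen_hits += int(gen.strip() == answer)

        # Q: judge whether the given response is correct ("yes"/"no").
        verdict = ask_model(
            f"Question: {question}\nProposed solution: {response}\n"
            "Is the solution correct? Answer yes or no."
        )
        critique_hits += int(("yes" in verdict.lower()) == response_is_correct)

        # C: revise the given response and check whether the final answer appears.
        corrected = ask_model(
            f"Question: {question}\nProposed solution: {response}\n"
            "If the solution is wrong, fix it and give the final answer."
        )
        corr_hits += int(answer in corrected)

    n = len(examples)
    return {"generation": gen_hits / n, "critique": critique_hits / n, "correction": corr_hits / n}

In this framing, ask_model is any callable that sends a prompt to the model under evaluation and returns its text output; plugging in different models and comparing the three resulting accuracies is one way to probe the GQC correlations and task-type effects studied in the research questions below.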

RQ1: Key Factors in Critical Reasoning

RQ2: Correlations of GQC Abilities

RQ3: Impact of Task Type

RQ4: Consistency of GQC Knowledge

RQ5: Patterns of Inter-Model Critique

BibTeX

@misc{lin2024criticbench,
    title={CriticBench: Benchmarking LLMs for Critique-Correct Reasoning}, 
    author={Zicheng Lin and Zhibin Gou and Tian Liang and Ruilin Luo and Haowei Liu and Yujiu Yang},
    year={2024},
    eprint={2402.14809},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}