Explanation-based retrieval boosts grammatical error correction
SOURCE: EUREKALERT.ORG
JAN 25, 2026
Higher Education Press
[Image: Examples of retrieving reference examples. Credit: Higher Education Press]
Grammatical error correction (GEC) is a key task in natural language processing (NLP), widely applied in education, news, and publishing. Traditional methods mainly rely on sequence-to-sequence (Seq2Seq) and sequence-to-edit (Seq2Edit) models, while large language models (LLMs) have recently shown strong performance in this area.
A research team has now introduced a new method, called RE² (Retrieving Examples with similar Explanation), to address a long-standing challenge: finding reference examples that truly help models correct grammatical errors. The team, led by Baoxin Wang, published the work on 15 December 2025 in Frontiers of Computer Science, a journal co-published by Higher Education Press and Springer Nature.
Most existing approaches retrieve examples for correction based on surface-level text similarity. However, sentences with similar wording may contain very different types of errors, leading to mismatched references. For example, two sentences might both contain the phrase "green mountains and clear waters," yet their grammatical mistakes could be unrelated.
To solve this problem, the team proposed using grammatical error explanations instead of raw text for retrieval. By matching sentences with similar error patterns, RE² provides models with more relevant examples that directly address the mistakes in the input.
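The core idea can be illustrated with a toy retriever. The sketch below ranks stored examples by the similarity of their error explanations rather than their surface text; the bag-of-words "embedding," the example pool, and the explanation strings are all illustrative assumptions, standing in for the learned encoder and FCGEE data the paper actually uses.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; RE² would use a learned sentence encoder.
    return Counter(w.strip(".,:;") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_explanation: str, pool: list, k: int = 2) -> list:
    """Rank examples by similarity of their error *explanations* --
    the key difference from text-similarity-based retrieval."""
    q = embed(query_explanation)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex["explanation"])),
                    reverse=True)
    return ranked[:k]

# Hypothetical example bank with pre-written error explanations.
pool = [
    {"sentence": "He go to school yesterday.",
     "explanation": "subject-verb agreement error: verb should be past tense"},
    {"sentence": "She like apples.",
     "explanation": "subject-verb agreement error: verb missing -s"},
    {"sentence": "I have been to Paris last year.",
     "explanation": "tense error: present perfect used with a past time expression"},
]

hits = retrieve("subject-verb agreement error in the main clause", pool, k=1)
```

Here the query explanation pulls back an agreement-error example even though the sentences share almost no wording, which is exactly the mismatch that surface-similarity retrieval fails to avoid.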
To make this approach possible, the researchers built a large-scale dataset of Chinese grammatical error explanations, named FCGEE (Fine-grained Chinese Grammatical Error Explanation). They used GPT-4o and data from the FCGEC dataset to generate detailed explanations, and further refined them with official exam materials. This dataset enables reliable retrieval of examples based on explanation similarity.
The RE² method was tested on two benchmark datasets, FCGEC and NaCGEC, which contain errors from primary and secondary school Chinese examinations. These errors are often challenging even for native speakers. By integrating RE² into both in-context learning (ICL) and supervised fine-tuning (SFT) frameworks, the researchers demonstrated significant improvements in correction accuracy compared with text-similarity-based methods. The results confirm that explanation-driven retrieval offers more effective guidance for large language models.
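In the ICL setting, the retrieved examples are assembled into a few-shot prompt for the LLM. The sketch below shows one plausible prompt layout; the field names and template wording are assumptions for illustration, not the paper's exact format.

```python
def build_icl_prompt(input_sentence: str, examples: list) -> str:
    """Assemble a few-shot GEC prompt: each retrieved example contributes
    its erroneous sentence, error explanation, and correction, followed by
    the sentence to be corrected."""
    parts = ["Correct the grammatical error in the last sentence."]
    for ex in examples:
        parts.append(f"Sentence: {ex['source']}")
        parts.append(f"Explanation: {ex['explanation']}")
        parts.append(f"Correction: {ex['target']}")
    parts.append(f"Sentence: {input_sentence}")
    parts.append("Correction:")
    return "\n".join(parts)

# Hypothetical retrieved example with a paired correction.
demo = [{"source": "She like apples.",
         "explanation": "subject-verb agreement error: verb missing -s",
         "target": "She likes apples."}]
prompt = build_icl_prompt("He go to school yesterday.", demo)
```

Because each in-context example carries an explanation matched to the input's error type, the model sees not just a correction but the reasoning pattern it should apply.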
The team plans to expand the dataset to cover more error types and explore multilingual applications. They believe that explanation-based retrieval could make LLMs not only better at correcting grammar but also more interpretable and helpful for language learning.
Journal: Frontiers of Computer Science
Method of Research: Experimental study
Subject of Research: Not applicable
Article Title: RE^2: improving Chinese grammatical error correction via retrieving appropriate examples with explanation
Article Publication Date: 15-Dec-2025