Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning

Runchuan Zhu*, Zhipeng Ma*, Jiang Wu*, Junyuan Gao, Jiaqi Wang, Dahua Lin, Conghui He
Shanghai Artificial Intelligence Laboratory
Peking University
Southwest Jiaotong University
Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences

*Equal contribution, Project lead, Corresponding authors

Abstract

Refusal-Aware Instruction Tuning (RAIT) enables Large Language Models (LLMs) to refuse to answer unknown questions. By modifying the responses to unknown questions in the training data into refusal responses such as "I don't know", RAIT enhances the reliability of LLMs and reduces their hallucinations. Generally, RAIT modifies training samples based on the correctness of the initial LLM's responses. However, this crude approach can cause LLMs to excessively refuse questions they could have answered correctly, a problem we call over-refusal. In this paper, we explore two primary causes of over-refusal. Static conflict occurs when similar samples in the LLM's feature space receive different supervision signals (the original answer vs. the modified "I don't know"). Dynamic conflict arises because the LLM's knowledge evolves during SFT, so questions that were initially unanswerable become answerable, yet the corresponding training samples still carry the "I don't know" supervision derived from the initial LLM state, leading to inconsistencies. Both conflicts cause the trained LLM to misclassify known questions as unknown, resulting in over-refusal. To address this issue, we introduce Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning (CRaFT). CRaFT makes two main contributions: first, it additionally incorporates response certainty when filtering and modifying data, reducing static conflicts; second, it performs preliminary rehearsal training to characterize changes in the LLM's knowledge state, which helps mitigate dynamic conflicts during fine-tuning. We conducted extensive experiments on open-ended question answering (OEQA) and multiple-choice question answering (MCQA) tasks. Experimental results show that CRaFT improves the LLM's overall performance during the RAIT process.

Overview

Previous RAIT methods suffer from significant over-refusal; we propose CRaFT to mitigate this issue, improving both the reliability and the helpfulness of large language models.



Conflicts in RAIT

(1) Static conflict: In the feature space of the LLM, two samples with nearly identical representations may be assigned opposite supervision signals (the original answer vs. "I don't know") under previous RAIT frameworks.

(2) Dynamic conflict: During Supervised Fine-Tuning (SFT), the knowledge state of the LLM changes, and questions can transition from unknown to known; their supervision signals should change accordingly. A toy detection sketch for both conflict types follows.
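
To make these definitions concrete, here is a minimal detection sketch. It is not the paper's code: the embeddings, labels, and neighbour count are all illustrative.

```python
import numpy as np

def static_conflicts(embeddings: np.ndarray, labels: list[str], k: int = 5) -> list[int]:
    """Flag samples whose k nearest neighbours in the LLM's feature space
    carry a different supervision signal ("answer" vs. "idk")."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T            # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)     # exclude self-matches
    conflicted = []
    for i in range(len(labels)):
        neighbours = np.argsort(sim[i])[-k:]   # k most similar samples
        if any(labels[j] != labels[i] for j in neighbours):
            conflicted.append(i)
    return conflicted

def dynamic_conflicts(correct_before: list[bool], correct_after: list[bool],
                      labels: list[str]) -> list[int]:
    """Flag samples labelled "idk" from the initial LLM's failures that the
    model learns to answer during SFT: their stale refusal supervision now
    contradicts the model's updated knowledge state."""
    return [i for i, (before, after) in enumerate(zip(correct_before, correct_after))
            if labels[i] == "idk" and not before and after]
```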



Analysis on Static Conflict

Constructing datasets using correctness alone leads to severe static conflict, which can be significantly mitigated by additionally introducing certainty.
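
This page does not spell out how certainty is measured; a common recipe, and one plausible reading of the method, is to sample several answers per question and use their agreement. A minimal sketch, where `sample_answer` is a hypothetical callable performing one temperature-sampled LLM query:

```python
from collections import Counter
from typing import Callable

def knowledge_state(question: str, gold_answer: str,
                    sample_answer: Callable[[str], str],
                    n: int = 10) -> tuple[float, float]:
    """Estimate (correctness, certainty) of the LLM on one training question.

    Agreement-based certainty is an assumption for illustration, not
    necessarily the paper's exact estimator.
    """
    answers = [sample_answer(question) for _ in range(n)]
    correctness = sum(a == gold_answer for a in answers) / n
    # Certainty: mass of the most frequent answer, right or wrong.
    certainty = Counter(answers).most_common(1)[0][1] / n
    return correctness, certainty
```

Under this reading, high-certainty wrong samples are safe to rewrite to "I don't know", high-certainty correct samples keep their original answers, and low-certainty samples are filtered out, since labelling them either way plants the static conflicts described above.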

Methodology

CRaFT consists of two stages: Stage 1, querying the knowledge state and knowledge flow of the LLM; Stage 2, refusal-aware instruction construction and tuning. A control-flow sketch follows.
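
The released implementation is not reproduced on this page; the outline below sketches only the control flow, with every callable injected as a hypothetical placeholder and the thresholds purely illustrative.

```python
def craft_pipeline(train_set, llm, query_state, rehearse, finetune):
    """Two-stage CRaFT outline (a sketch, not the authors' code).

    query_state(model, q) -> (correctness, certainty); rehearse and
    finetune are placeholder training routines supplied by the caller.
    """
    # Stage 1: query the knowledge state of the initial LLM, then run a
    # short rehearsal training and query again to estimate the knowledge
    # flow, i.e. how the state is likely to shift during SFT.
    state = {q.id: query_state(llm, q) for q in train_set}
    rehearsed = rehearse(llm, train_set)
    flow = {q.id: query_state(rehearsed, q) for q in train_set}

    # Stage 2: construct refusal-aware instructions and tune the model.
    rait_data = []
    for q in train_set:
        (corr0, cert0), (corr1, _) = state[q.id], flow[q.id]
        if cert0 < 0.5:                    # illustrative threshold
            continue                       # drop ambiguous samples (static conflict)
        if corr0 > 0.5 or corr1 > corr0:   # known, or flowing toward known
            rait_data.append((q.prompt, q.answer))        # keep original answer
        else:
            rait_data.append((q.prompt, "I don't know."))  # stable unknown -> refusal
    return finetune(llm, rait_data)
```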

Experiment

We analyze the shortcomings of previous metrics and propose a more comprehensive one: the Truthful Helpfulness Score (THS).
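
The formula is not restated on this page, so the snippet below is a reconstruction inferred from the table further down (the initial model always scores 0, and wrong answers are penalised at the initial model's correct-to-wrong ratio); treat it as our reading rather than the official definition.

```python
def ths(pc: float, pw: float, pc_init: float, pw_init: float) -> float:
    """Truthful Helpfulness Score (reconstructed, all inputs in %).

    Rewards correct answers and charges wrong ones at the exchange rate
    set by the initial model, so the initial model itself scores exactly 0.
    """
    return pc - pw * (pc_init / pw_init)

# Example: LLaMA3-8B-Instruct on MMLU, CRaFT row of the table below:
# ths(53.3, 9.6, 66.8, 33.1) -> 33.9, matching the reported 34.0 up to rounding.
```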


The experimental results show that CRaFT surpasses Cor-RAIT (correctness-only RAIT) across all tasks and settings; the improvement is limited in the OOD setting for OEQA due to the training paradigm.

Main results (all values in %). Pc / Pw: rates of correctly / wrongly answered questions; THS: Truthful Helpfulness Score (higher is better). MCQA: MMLU (ID), ARC-c (OOD); OEQA: TriviaQA (ID), NQ (OOD).

LLaMA2-7B-Chat

Method        | MMLU (ID)        | ARC-c (OOD)      | TriviaQA (ID)    | NQ (OOD)
              | Pc    Pw    THS  | Pc    Pw    THS  | Pc    Pw    THS  | Pc    Pw    THS
Init-Basic    | 45.6  52.8   0.0 | 53.9  46.0   0.0 | 54.0  46.0   0.0 | 29.3  70.7   0.0
Init-Refuse   | 36.4  38.9   3.9 | 44.4  35.7   2.6 | 37.0  21.7  11.5 | 20.8  38.6   4.8
Van-Tuning    | 46.9  53.1   1.2 | 54.5  45.5   1.2 | 48.6  44.5  -3.7 | 18.3  50.2  -2.5
R-Tuning      | 44.5  39.6  11.3 | 55.8  38.1  11.1 | 41.3  18.3  19.7 | 16.2  27.6   4.7
CRaFT (ours)  | 43.9  36.4  12.5 | 54.7  35.9  12.6 | 38.5  12.9  23.3 | 15.8  22.4   6.5
  w/o flow    | 39.7  31.0  13.0 | 51.4  32.3  13.5 | 45.2  20.5  21.1 | 21.2  38.8   5.2
  w/o cer     | 38.4  32.1  11.5 | 52.5  32.9  13.9 | 38.5  15.7  20.1 | 14.6  22.1   5.4

LLaMA3-8B-Instruct

Method        | MMLU (ID)        | ARC-c (OOD)      | TriviaQA (ID)    | NQ (OOD)
              | Pc    Pw    THS  | Pc    Pw    THS  | Pc    Pw    THS  | Pc    Pw    THS
Init-Basic    | 66.8  33.1   0.0 | 80.6  19.5   0.0 | 66.8  33.2   0.0 | 40.3  59.7   0.0
Init-Refuse   | 50.0  17.0  15.6 | 65.3  14.4   5.6 | 53.9  20.8  12.0 | 31.1  38.6   5.0
Van-Tuning    | 69.5  30.5   8.0 | 80.3  19.7  -1.3 | 55.0  38.1 -21.8 | 21.0  48.5 -11.7
R-Tuning      | 63.9  21.6  20.4 | 79.4  16.2  12.2 | 45.4  13.2  18.8 | 17.2  25.6  -0.1
CRaFT (ours)  | 53.3   9.6  34.0 | 74.1  12.7  21.4 | 43.5  10.9  21.5 | 19.0  27.5   0.4
  w/o flow    | 57.5  15.3  27.2 | 75.8  14.9  13.9 | 49.1  18.0  12.8 | 22.3  41.6  -5.8
  w/o cer     | 62.1  18.4  25.0 | 78.2  17.3   6.5 | 43.0  11.2  20.5 | 15.8  23.5  -0.1

BibTeX

@article{zhu2024utilize,
        title={Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning},
        author={Zhu, Runchuan and Ma, Zhipeng and Wu, Jiang and Gao, Junyuan and Wang, Jiaqi and Lin, Dahua and He, Conghui},
        journal={arXiv preprint arXiv:2410.06913},
        year={2024}
}