
Zero-shot Factual Consistency Evaluation Across Domains

Code, Data, and Models for the paper Zero-shot Factual Consistency Evaluation Across Domains (arXiv)

Abstract: This work addresses the challenge of factual consistency in text generation systems. We unify the tasks of Natural Language Inference, Summarization Evaluation, Factuality Verification and Factual Consistency Evaluation to train models capable of evaluating the factual consistency of source-target pairs across diverse domains. We rigorously evaluate these against eight baselines on a comprehensive benchmark suite comprising 22 datasets that span various tasks, domains, and document lengths. Results demonstrate that our method achieves state-of-the-art performance on this heterogeneous benchmark while addressing efficiency concerns and attaining cross-domain generalization.
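As a minimal usage sketch (not the repository's official API), the trained models can be applied as pairwise sequence classifiers over (source, target) inputs. The checkpoint name below is a hypothetical placeholder; substitute one of the released checkpoints linked under Models, and note that the label set depends on that checkpoint.

```python
# Hedged sketch: score a source-target pair for factual consistency with a
# Hugging Face sequence-classification model. "MODEL_NAME" is a placeholder,
# not an actual released checkpoint identifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "path-or-hub-id-of-released-checkpoint"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

source = "The committee approved the budget on Tuesday."
target = "The budget was approved by the committee."

# Encode the (source, target) pair as a single sequence pair.
inputs = tokenizer(source, target, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities; how to read them (e.g. consistent vs.
# inconsistent) depends on the checkpoint's label mapping.
probs = torch.softmax(logits, dim=-1)
print(probs)
```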

Models:

Data:

Results:

  • Overall Results available here
  • Dataset-specific Results available here

Cite this work as follows:

@misc{agarwal2024zeroshotfactualconsistencyevaluation,
      title={Zero-shot Factual Consistency Evaluation Across Domains}, 
      author={Raunak Agarwal},
      year={2024},
      eprint={2408.04114},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.04114}, 
}