This repository is for the course: L101 Machine Learning for Language Processing.
Project: An In-depth Analysis of the Readers in Open-domain Question Answering Pipeline Systems
This repository is largely based on the original SF-QA repo.
SF-QA lets you evaluate an open-domain QA reader without building the entire open-domain QA pipeline. It supports:
- Efficient Reader Comparison
- Reproducible Research
- Knowledge Source for Applications
✨ Easy evaluation framework: built specifically for open-domain QA
✨ Pre-trained Wiki dataset: No need to train it yourself
✨ Scalable: set your own configuration and evaluate at open-domain scale
✨ Open source: Everyone can contribute
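To make the "efficient reader comparison" idea concrete, here is a minimal sketch (not the SF-QA API; all function and field names are hypothetical placeholders) of how several readers can be scored on the same cached retrieval results with an exact-match metric:

```python
# Illustrative sketch of reader comparison over cached retrieval results.
# NOTE: this is NOT the SF-QA API; every name below is a hypothetical
# placeholder used only to show the evaluation loop.

def exact_match(prediction, answers):
    """Return 1.0 if the normalized prediction matches any gold answer."""
    norm = lambda s: " ".join(s.lower().split())
    return float(norm(prediction) in {norm(a) for a in answers})

def evaluate_reader(reader, examples):
    """Average exact-match score of `reader` over cached examples."""
    scores = [exact_match(reader(ex["question"], ex["passages"]), ex["answers"])
              for ex in examples]
    return sum(scores) / len(scores)

# Toy examples standing in for a real benchmark with cached passages.
examples = [
    {"question": "Capital of France?",
     "passages": ["Paris is the capital of France."],
     "answers": ["Paris"]},
    {"question": "Largest planet?",
     "passages": ["Jupiter is the largest planet."],
     "answers": ["Jupiter"]},
]

# Two trivial stand-in "readers" to compare.
def reader_a(question, passages):
    # Answer with the first capitalized word of the top passage.
    caps = [w.strip(".") for w in passages[0].split() if w[0].isupper()]
    return caps[0] if caps else ""

def reader_b(question, passages):
    return "Paris"  # a deliberately naive baseline

for name, reader in [("reader_a", reader_a), ("reader_b", reader_b)]:
    print(name, evaluate_reader(reader, examples))
```

Because every reader sees identical cached passages, the comparison isolates reader quality from retrieval quality, which is the point of evaluating readers inside a fixed pipeline.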
Supported models:
- DrQA [code] [paper]
- BERTserini [code] [paper]
- DS-QA [code] [paper]
- Paragraph Ranker [code]
- Multi-passage BERT [code] [paper]
- Answer re-ranker [code]
- SPARTA [code] [paper]
- FiD [code] [paper]
Notes:
- I will (try to) clean up and release the full version of the code after the final grade is released.