CoQA

A Conversational Question Answering Challenge

What is CoQA?

CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation. CoQA is pronounced as "coca".

CoQA paper

CoQA contains 127,000+ questions with answers collected from 8,000+ conversations. Each conversation is collected by pairing two crowdworkers to chat about a passage in the form of questions and answers. The unique features of CoQA include 1) the questions are conversational; 2) the answers can be free-form text; 3) each answer also comes with an evidence subsequence highlighted in the passage; and 4) the passages are collected from seven diverse domains. CoQA exhibits many challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning: later questions often refer back to earlier turns, so they cannot be answered without the conversation history. A miniature example is shown below.
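
To make these features concrete, here is a miniature conversation adapted from the opening example in the CoQA paper:

    Passage: Jessica went to sit in her rocking chair. Today was her
    birthday and she was turning 80. ...

    Q1: Who had a birthday?    A1: Jessica   (evidence: "Today was her birthday")
    Q2: How old would she be?  A2: 80        (evidence: "she was turning 80")

Answering Q2 requires resolving "she" to Jessica from the earlier turn; this dependence on conversation history is what distinguishes CoQA from single-turn reading comprehension datasets such as SQuAD.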

Download

Browse the examples in CoQA:

Download a copy of the dataset in JSON format:
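
Each downloaded split is a single JSON object whose "data" list holds one entry per conversation: a passage ("story") plus parallel "questions" and "answers" lists, where each answer carries both free-form text and its evidence span. Below is a minimal Python sketch for iterating over one file; the field names follow the data format described in the paper and the file name is illustrative, so verify both against the file you download:

    import json

    # Load one CoQA split (use the path of the file you downloaded).
    with open("coqa-dev-v1.0.json") as f:
        dataset = json.load(f)

    for item in dataset["data"]:
        story = item["story"]  # the passage the conversation is about
        # Questions and answers are parallel lists, aligned by turn.
        for q, a in zip(item["questions"], item["answers"]):
            print(f"Q{q['turn_id']}: {q['input_text']}")
            print(f"A{a['turn_id']}: {a['input_text']} (evidence: {a['span_text']})")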


Evaluation

To evaluate your models, use the official evaluation script. To run the evaluation:

    python evaluate-v1.0.py --data-file <path_to_dev-v1.0.json> --pred-file <path_to_predictions>
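
The script compares your predictions against the dev-set answers. Here is a minimal sketch of producing a predictions file; the expected format (a JSON list of records with "id", "turn_id", and "answer" fields) is our reading of the evaluation script, so check it against the header of evaluate-v1.0.py, and the ids and answers below are placeholders:

    import json

    # One record per question turn: the conversation id, the turn number,
    # and the model's free-form answer string (all values are placeholders).
    predictions = [
        {"id": "example_conversation_id", "turn_id": 1, "answer": "Jessica"},
        {"id": "example_conversation_id", "turn_id": 2, "answer": "80"},
    ]

    with open("predictions.json", "w") as f:
        json.dump(predictions, f)

You can then pass predictions.json as --pred-file to the command above.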

Once you are satisfied with your model's performance on the dev set, you can submit it to get official scores on the test sets. We have two test sets: an in-domain set drawn from the same domains as the training and dev sets, and an out-of-domain set drawn from unseen domains (see the paper for more details). To preserve the integrity of the test results, we do not release the test sets to the public. Follow this tutorial on how to submit your model for an official evaluation:

Submission Tutorial

License

CoQA contains passages from seven domains. We make five of these public under the following licenses:

  • Literature and Wikipedia passages are shared under the CC BY-SA 4.0 license.
  • Children's stories are collected from MCTest, which comes with the MSR-LA license.
  • Middle/High school exam passages are collected from RACE, which comes with its own license.
  • News passages are collected from the DeepMind CNN dataset, which comes with an Apache license.

Questions?

Ask us questions at our Google group or at sivar@cs.stanford.edu or danqi@cs.stanford.edu.

Acknowledgements

We thank the SQuAD team for allowing us to use their code and templates for generating this website.

Leaderboard

All scores are F1.

Rank | Date         | Model, Organization | In-domain | Out-of-domain | Overall
-----|--------------|---------------------|-----------|---------------|--------
  -  | -            | Human Performance, Stanford University (Reddy et al. '18) | 89.4 | 87.4 | 88.8
  1  | Jan 03, 2019 | BERT + Answer Verification (single model), Sogou Search AI Group | 83.8 | 80.2 | 82.8
  2  | Jan 06, 2019 | BERT with History Augmented Query (single model), Fudan University NLP Lab | 82.7 | 78.6 | 81.5
  3  | Dec 12, 2018 | D-AoA + BERT (single model), Joint Laboratory of HIT and iFLYTEK Research | 81.4 | 77.3 | 80.2
  4  | Nov 29, 2018 | SDNet (ensemble model), Microsoft Speech and Dialogue Research Group [1] | 80.7 | 75.9 | 79.3
  5  | Dec 30, 2018 | BERT-base finetune (single model), Tsinghua University CoAI Lab | 79.8 | 74.1 | 78.1
  6  | Nov 26, 2018 | SDNet (single model), Microsoft Speech and Dialogue Research Group [1] | 78.0 | 73.1 | 76.6
  7  | Oct 06, 2018 | FlowQA (single model), Allen Institute for Artificial Intelligence [2] | 76.3 | 71.8 | 75.0
  8  | Jan 14, 2019 | RNet + PGNet + BERT (single model), Nanjing University | 74.7 | 70.0 | 73.3
  9  | Dec 30, 2018 | DrQA + marker features (single model), Stanford University | 71.6 | 65.1 | 69.7
 10  | Dec 10, 2018 | BiDAF++ (single model), Beijing University of Posts and Telecommunications | 71.1 | 65.5 | 69.5
 11  | Sep 27, 2018 | BiDAF++ (single model), Allen Institute for Artificial Intelligence [3] | 69.4 | 63.8 | 67.8
 12  | Nov 22, 2018 | Bert Base Augmented (single model), Fudan University NLP Lab | 68.4 | 61.8 | 66.5
 13  | Dec 17, 2018 | RNet_DotAtt + seq2seq with copy attention (single model), University of Science and Technology of China | 68.1 | 62.3 | 66.4
 14  | Dec 30, 2018 | Simplified BiDAF++ (single model), Peking University | 68.7 | 60.5 | 66.3
 15  | Aug 21, 2018 | DrQA + seq2seq with copy attention (single model), Stanford University [4] | 67.0 | 60.4 | 65.1
 16  | Aug 21, 2018 | Vanilla DrQA (single model), Stanford University [4] | 54.5 | 47.9 | 52.6

Paper links:
[1] https://arxiv.org/abs/1812.03593
[2] https://arxiv.org/abs/1810.06683
[3] https://arxiv.org/abs/1809.10735
[4] https://arxiv.org/abs/1808.07042