CoQA

A Conversational Question Answering Challenge

What is CoQA?

CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation. CoQA is pronounced as "coca".

CoQA paper

CoQA contains 127,000+ questions with answers, collected from 8,000+ conversations. Each conversation was collected by pairing two crowdworkers to chat about a passage in the form of questions and answers. The unique features of CoQA include: 1) the questions are conversational; 2) the answers can be free-form text; 3) each answer comes with an evidence subsequence highlighted in the passage; and 4) the passages are collected from seven diverse domains. CoQA contains many challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning.

Download

Browse the examples in CoQA, or download a copy of the dataset (training and development sets) in JSON format.
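For a sense of the data layout, here is a minimal sketch of reading the training file in Python. The file path and the field names (data, story, questions, answers, input_text, turn_id, span_start, span_end) reflect the v1.0 schema as we understand it; verify them against your downloaded copy.

    import json

    # Load the training set; the filename here is an assumption, point it
    # at wherever you saved the download.
    with open("coqa-train-v1.0.json") as f:
        dataset = json.load(f)

    # Each entry in "data" is one passage plus its conversation.
    for conversation in dataset["data"]:
        passage = conversation["story"]  # the text passage
        # "questions" and "answers" are parallel lists aligned by turn_id.
        for question, answer in zip(conversation["questions"], conversation["answers"]):
            turn = question["turn_id"]
            q_text = question["input_text"]  # the conversational question
            a_text = answer["input_text"]    # the free-form answer
            # Each answer also records its evidence span in the passage.
            evidence = passage[answer["span_start"]:answer["span_end"]]
        break  # inspect just the first conversation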


Evaluation

To evaluate your models, use the official evaluation script: python evaluate-v1.0.py --data-file <path_to_dev-v1.0.json> --pred-file <path_to_predictions>.
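The predictions file is a JSON list with one entry per question turn. The sketch below shows one way to produce a file the script can read, assuming the format used by the official baseline code (objects with id, turn_id, and answer); the conversation id shown is a placeholder, not a real dataset id, and the format should be checked against the script's reader.

    import json

    # Hypothetical model output: one predicted answer string per question turn.
    # "id" must match a conversation id from the data file; "turn_id" numbers
    # the question within that conversation, starting at 1.
    predictions = [
        {"id": "example-conversation-id", "turn_id": 1, "answer": "white"},
        {"id": "example-conversation-id", "turn_id": 2, "answer": "in a barn"},
    ]

    with open("predictions.json", "w") as f:
        json.dump(predictions, f)

    # Then score against the dev set:
    #   python evaluate-v1.0.py --data-file coqa-dev-v1.0.json --pred-file predictions.json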

Once you are satisfied with your model's performance on the dev set, you can submit it to get official scores on the test sets. We have two test sets: an in-domain set drawn from the domains present in the training and dev sets, and an out-of-domain set drawn from unseen domains (see the paper for more details). To preserve the integrity of the test results, we do not release the test set to the public. Follow this tutorial on how to submit your model for an official evaluation:

Submission Tutorial

License

CoQA contains passages from seven domains. We make five of these public under the following licenses:

  • Literature and Wikipedia passages are shared under CC BY-SA 4.0 license.
  • Children's stories are collected from MCTest which comes with MSR-LA license.
  • Middle/High school exam passages are collected from RACE which comes with its own license.
  • News passages are collected from the DeepMind CNN dataset which comes with Apache license.

Questions?

Ask us questions at our Google group or at sivar@cs.stanford.edu or danqi@cs.stanford.edu.

Acknowledgements

We thank the SQuAD team for allowing us to use their code and templates for generating this website.

Leaderboard

Entries tied on overall score share a rank.

Rank | Date | Model, Institution | In-domain | Out-of-domain | Overall
- | - | Human Performance, Stanford University (Reddy & Chen et al. TACL '19) | 89.4 | 87.4 | 88.8
1 | Mar 29, 2019 | Google SQuAD 2.0 + MMFT (ensemble), MSRA + SDRG | 89.9 | 88.0 | 89.4
2 | Mar 29, 2019 | Google SQuAD 2.0 + MMFT (single model), MSRA + SDRG | 88.5 | 86.0 | 87.8
2 | Mar 29, 2019 | ConvBERT (ensemble), Joint Laboratory of HIT and iFLYTEK Research | 88.7 | 85.4 | 87.8
3 | Mar 28, 2019 | ConvBERT (single model), Joint Laboratory of HIT and iFLYTEK Research | 87.7 | 84.6 | 86.8
3 | Jan 25, 2019 | BERT + MMFT + ADA (ensemble), Microsoft Research Asia | 87.5 | 85.3 | 86.8
4 | Jan 21, 2019 | BERT + MMFT + ADA (single model), Microsoft Research Asia | 86.4 | 81.9 | 85.0
5 | Jan 03, 2019 | BERT + Answer Verification (single model), Sogou Search AI Group (https://github.com/sogou/SMRCToolkit) | 83.8 | 80.2 | 82.8
6 | Jan 06, 2019 | BERT with History Augmented Query (single model), Fudan University NLP Lab | 82.7 | 78.6 | 81.5
7 | Jan 31, 2019 | BERT Large Finetuned Baseline (single model), Anonymous | 82.6 | 78.4 | 81.4
8 | Jan 21, 2019 | BERT Large Augmented (single model), Microsoft Dynamics 365 AI Research | 82.5 | 77.6 | 81.1
9 | Dec 12, 2018 | D-AoA + BERT (single model), Joint Laboratory of HIT and iFLYTEK Research | 81.4 | 77.3 | 80.2
10 | Aug 01, 2019 | BERT Augmented + AoA (single model), Netease Games AI Lab | 81.1 | 77.4 | 80.0
11 | Mar 10, 2019 | CNet (single model), Anonymous | 80.9 | 77.1 | 79.8
12 | Nov 29, 2018 | SDNet (ensemble), Microsoft Speech and Dialogue Research Group (https://github.com/Microsoft/SDNet) | 80.7 | 75.9 | 79.3
13 | Feb 22, 2019 | CQANet (single model), Nanjing University | 80.2 | 76.5 | 79.1
14 | May 09, 2019 | CANet (single model), Northwestern Polytechnical University | 80.1 | 75.7 | 78.9
15 | Jul 13, 2019 | BERT Finetuned Baseline (single model) | 79.7 | 76.3 | 78.7
15 | Apr 13, 2019 | BERT w/ 2-context (single model), NTT Media Intelligence Laboratories (https://arxiv.org/pdf/1905.12848) | 79.8 | 75.9 | 78.7
16 | Dec 30, 2018 | BERT-base finetune (single model), Tsinghua University CoAI Lab | 79.8 | 74.1 | 78.1
17 | Apr 19, 2019 | Bert-FlowDelta (single model), National Taiwan University MiuLab (https://arxiv.org/abs/1908.05117) | 79.2 | 74.1 | 77.7
18 | Feb 28, 2019 | GraphFlow (single model), RPI and IBM Research (https://graphreason.github.io/papers/13.pdf) | 78.4 | 74.5 | 77.3
19 | Nov 26, 2018 | SDNet (single model), Microsoft Speech and Dialogue Research Group (https://github.com/Microsoft/SDNet) | 78.0 | 73.1 | 76.6
20 | Aug 12, 2019 | Flow Framework (single model), SIAT NLP Group | 77.0 | 73.1 | 75.8
21 | Oct 06, 2018 | FlowQA (single model), Allen Institute for Artificial Intelligence (https://arxiv.org/abs/1810.06683) | 76.3 | 71.8 | 75.0
22 | Jul 16, 2019 | HisFurC + BERT (single model) | 76.0 | 70.4 | 74.4
23 | Jan 14, 2019 | RNet + PGNet + BERT (single model), Nanjing University | 74.7 | 70.0 | 73.3
24 | Jan 31, 2019 | XyzNet (single model), Beijing Normal University | 74.3 | 68.8 | 72.7
25 | Dec 30, 2018 | DrQA + marker features (single model), Stanford University | 71.6 | 65.1 | 69.7
26 | Dec 10, 2018 | BiDAF++ (single model), Beijing University of Posts and Telecommunications | 71.1 | 65.5 | 69.5
27 | Sep 27, 2018 | BiDAF++ (single model), Allen Institute for Artificial Intelligence (https://arxiv.org/abs/1809.10735) | 69.4 | 63.8 | 67.8
28 | Nov 22, 2018 | Bert Base Augmented (single model), Fudan University NLP Lab | 68.4 | 61.8 | 66.5
29 | Dec 17, 2018 | RNet_DotAtt + seq2seq with copy attention (single model), University of Science and Technology of China | 68.1 | 62.3 | 66.4
30 | Dec 30, 2018 | Simplified BiDAF++ (single model), Peking University | 68.7 | 60.5 | 66.3
31 | Aug 21, 2018 | DrQA + seq2seq with copy attention (single model), Stanford University (https://arxiv.org/abs/1808.07042) | 67.0 | 60.4 | 65.1
32 | Aug 21, 2018 | Vanilla DrQA (single model), Stanford University (https://arxiv.org/abs/1808.07042) | 54.5 | 47.9 | 52.6