by Michele Laurelli
Reading comprehension dataset with 100,000+ question–answer pairs posed by crowdworkers on Wikipedia articles.
Extractive QA: each answer is a span of text taken verbatim from the context passage. SQuAD 2.0 adds over 50,000 unanswerable questions, so models must also learn to abstain. A standard benchmark for question answering models.
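The core of extractive QA can be sketched in a few lines: a fine-tuned model (e.g. BERT) assigns each context token a start score and an end score, and the predicted answer is the span (i, j), i ≤ j, that maximizes start[i] + end[j]. A minimal sketch with made-up toy scores (the tokens and numbers below are illustrative, not real model outputs):

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return the (start, end) token indices maximizing start[i] + end[j]."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        # Only consider spans up to max_len tokens long, as is common in practice.
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

# Toy example: hypothetical per-token scores for a short context.
tokens = ["SQuAD", "was", "released", "in", "2016"]
start = [0.1, -1.0, -0.5, 0.2, 3.0]
end = [0.0, -1.2, -0.3, 0.1, 2.5]
(i, j), score = best_span(start, end)
print(" ".join(tokens[i : j + 1]))  # → 2016
```

For SQuAD 2.0, systems typically compare the best span score against a "no answer" score (often taken from the [CLS] position) and abstain when the latter wins.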
BERT fine-tuning for QA
Extractive question answering
Reading comprehension