Scientific Statement Classification over arXiv.org
Bruce R. Miller, Deyan Ginev
We introduce a new classification task for scientific statements and release a large-scale dataset for supervised learning. Our resource is derived from a machine-readable representation of the arXiv.org collection of preprint articles. We explore fifty author-annotated categories and empirically motivate a task design that groups 10.5 million annotated paragraphs into thirteen classes. We demonstrate that the task setup aligns with known success rates from the state of the art, peaking at a 0.91 F1-score via a BiLSTM encoder-decoder model. Additionally, we introduce a lexeme serialization for mathematical formulas and demonstrate that context-aware models could improve when also trained on the symbolic modality. Finally, we discuss the limitations of both data and task design, and outline potential directions towards increasingly complex models of scientific discourse beyond isolated statements.
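The idea of a lexeme serialization for formulas can be illustrated with a minimal sketch: flatten a formula into a sequence of normalized tokens that a sequence model can consume alongside ordinary words. The lexeme names below (`ID:`, `NUM`, `SUPERSCRIPT`, and so on) are illustrative assumptions, not the dataset's actual inventory.

```python
import re

# One pass over a plain-text formula, emitting a flat lexeme sequence.
# Token classes and names here are hypothetical, for illustration only.
TOKEN_RE = re.compile(r"\s*([A-Za-z]+|\d+(?:\.\d+)?|\^|_|[+\-*/=()])")

def lexemes(formula: str) -> list[str]:
    out, pos = [], 0
    while pos < len(formula):
        m = TOKEN_RE.match(formula, pos)
        if not m:
            pos += 1  # skip characters outside the toy grammar
            continue
        tok = m.group(1)
        pos = m.end()
        if tok == "^":
            out.append("SUPERSCRIPT")
        elif tok == "_":
            out.append("SUBSCRIPT")
        elif tok.isdigit() or "." in tok:
            out.append("NUM")  # normalize numeric literals to one lexeme
        elif tok.isalpha():
            out.append(f"ID:{tok}")
        else:
            out.append(f"OP:{tok}")
    return out

# e.g. "x^2 + y_i" serializes to:
# ['ID:x', 'SUPERSCRIPT', 'NUM', 'OP:+', 'ID:y', 'SUBSCRIPT', 'ID:i']
```

Such a serialization lets the symbolic modality share a single embedding vocabulary with the surrounding text, which is one plausible way a context-aware encoder could be trained on both.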
Language Resources and Evaluation Conference (LREC2020)
Miller, B. and Ginev, D., Scientific Statement Classification over arXiv.org, Language Resources and Evaluation Conference (LREC2020), Marseille, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=928554 (Accessed March 23, 2023)