Science has emerged as a dominant engine of innovation in modern society, and its rich published record allows us to understand, predict, and guide its advance like never before. Research papers are the dominant medium for state-of-the-art knowledge. Therefore, if we can develop models that understand research papers, we can greatly enhance the ability of computers to understand knowledge.
The competition provides a large dataset containing roughly 200K papers, along with paragraphs or sentences that describe them. These descriptions are drawn mainly from the text of citing papers, i.e., the passages that introduce citations.
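To make the task concrete, here is a minimal sketch of matching a citation description to the paper it refers to. The TF-IDF/cosine baseline, the field layout (plain title strings), and all names below are illustrative assumptions, not the official data format or baseline.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return "".join(c if c.isalnum() else " " for c in text.lower()).split()

def tfidf_vectors(docs):
    # docs: list of token lists; returns one sparse TF-IDF dict per doc.
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_papers(description, papers):
    # Rank candidate paper texts against one citation description,
    # highest cosine similarity first.
    vecs = tfidf_vectors([tokenize(description)] + [tokenize(p) for p in papers])
    query, cands = vecs[0], vecs[1:]
    return sorted(enumerate(cosine(query, c) for c in cands),
                  key=lambda kv: -kv[1])

# Toy example (hypothetical candidate pool):
papers = [
    "BERT pre-training of deep bidirectional transformers for language understanding",
    "Relational inductive biases deep learning and graph networks",
]
desc = "We fine-tune a deep bidirectional transformer for language understanding."
best_index = rank_papers(desc, papers)[0][0]  # index of the top-ranked paper
```

A real submission would replace the TF-IDF vectors with learned text encodings, but the ranking interface stays the same.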
An efficient baseline implementation based on BERT [1] and a graph neural network (GNN) [2] is provided.
References:
[1] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
[2] Relational Inductive Biases, Deep Learning, and Graph Networks.
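As one way such a baseline could combine the two components, the sketch below runs a single mean-aggregation message-passing step over a citation graph; the two-dimensional node features stand in for BERT text embeddings, and the fixed 0.5/0.5 self/neighbour mix replaces learned weights. This is an illustrative sketch, not the official baseline code.

```python
def gnn_layer(features, edges):
    """One mean-aggregation message-passing step.

    features: {node_id: [float, ...]} initial node embeddings
              (in the real system these would come from BERT)
    edges:    list of (src, dst) undirected citation links
    """
    neigh = {n: [] for n in features}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    dim = len(next(iter(features.values())))
    out = {}
    for n, feat in features.items():
        # Sum neighbour features, then average.
        agg = [0.0] * dim
        for m in neigh[n]:
            for i in range(dim):
                agg[i] += features[m][i]
        k = len(neigh[n]) or 1
        # Fixed 0.5/0.5 combination of self feature and neighbour mean;
        # a trained model would use learned weight matrices here.
        out[n] = [0.5 * feat[i] + 0.5 * agg[i] / k for i in range(dim)]
    return out

# Toy citation graph: paper "a" cites paper "b".
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
updated = gnn_layer(feats, [("a", "b")])  # updated["a"] == [0.5, 0.5]
```

Stacking several such layers lets each paper's representation absorb information from papers a few citation hops away.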
Sponsors: AMiner · Microsoft · biendata