ETRI Journal, Volume 41, Issue 3, pp. 371-382, 2019
pISSN: 1225-6463 / eISSN: 2233-7326
S2-Net: Machine reading comprehension with SRU-based self-matching networks
- Park, Cheoneum (Department of Computer Science, Kangwon National University)
- Lee, Changki (Department of Computer Science, Kangwon National University)
- Hong, Lynn (SKtelecom)
- Hwang, Yigyu (MindsLab)
- Yoo, Taejoon (MindsLab)
- Jang, Jaeyong (LG Uplus)
- Hong, Yunki (Naver)
- Bae, Kyung-Hoon (LG Uplus)
- Kim, Hyun-Ki (SW and Contents Research Laboratory, Electronics and Telecommunications Research Institute)
- Received: 2017.11.15
- Accepted: 2018.12.03
- Published: 2019.06.03
Abstract
Machine reading comprehension is the task of understanding a given context and finding the correct response within that context. A simple recurrent unit (SRU) is a model that, like a gated recurrent unit (GRU) and long short-term memory (LSTM), solves the vanishing gradient problem of a recurrent neural network (RNN) using neural gates; moreover, it removes the previous hidden state from its gate computations, which makes it faster than GRU and LSTM. The self-matching network used in R-Net can have an effect similar to coreference resolution because it can obtain context information of similar meaning by calculating attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an S2-Net model that applies an SRU-based self-matching network to this task.
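As background for the abstract's speed claim, the following NumPy sketch (our own illustration, not code from the paper; all names and dimensions are hypothetical) shows the SRU recurrence of Lei and Zhang: because no gate reads the previous hidden state, all matrix multiplications can be computed for the whole sequence up front, leaving only cheap element-wise operations inside the sequential loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_forward(X, W, Wf, bf, Wr, br):
    """Minimal SRU layer; illustrative sketch only.

    X is a (T, d) input sequence; W, Wf, Wr are (d, d); bf, br are (d,).
    No gate depends on the previous hidden state, so the three matrix
    products below cover every time step in one shot, which is the
    source of SRU's speed advantage over GRU and LSTM.
    """
    T, d = X.shape
    X_tilde = X @ W                  # candidate states for all steps at once
    F = sigmoid(X @ Wf + bf)         # forget gates for all steps at once
    R = sigmoid(X @ Wr + br)         # reset gates for all steps at once

    c = np.zeros(d)                  # internal cell state
    H = np.zeros((T, d))
    for t in range(T):               # only element-wise work remains here
        c = F[t] * c + (1.0 - F[t]) * X_tilde[t]          # gated cell update
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]    # highway-style output
    return H

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
T, d = 5, 8
X = rng.normal(size=(T, d))
W, Wf, Wr = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
H = sru_forward(X, W, Wf, np.zeros(d), Wr, np.zeros(d))
print(H.shape)  # (5, 8)
```

The self-matching idea can be sketched in the same style. The version below uses plain dot-product attention for brevity; R-Net's actual layer uses learned additive attention and feeds the attended result through a further gated RNN, so this is a simplification of the mechanism, not the paper's exact formulation.

```python
def self_matching(H):
    """Dot-product self-attention of a sequence H (T, d) against itself.

    Each position attends to every position of the same sequence, so
    tokens with similar meaning (for example, a pronoun and its
    antecedent) can pool each other's context, which is why the
    abstract likens self-matching to coreference resolution.
    """
    scores = H @ H.T                                     # (T, T) similarity matrix
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                    # row-wise softmax weights
    return A @ H                                         # context vector per position
```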
Acknowledgement
- Grant: Development of Knowledge Evolutionary WiseQA Platform Technology for Human Knowledge Augmented Services
- Supported by: Institute for Information & Communications Technology Promotion (IITP)
References
- P. Rajpurkar et al., SQuAD: 100,000+ questions for machine comprehension of text, arXiv preprint arXiv:1606.05250, 2016.
- F. Hill et al., The goldilocks principle: reading children's books with explicit memory representations, arXiv preprint arXiv:1511.02301, 2015.
- T. Nguyen et al., MS MARCO: A human generated machine reading comprehension dataset, arXiv preprint arXiv:1611.09268, 2016.
- D. Chen et al., Reading Wikipedia to answer open-domain questions, arXiv preprint arXiv:1704.00051, 2017.
- D. Weissenborn, G. Wiese, and L. Seiffe, Making neural QA as simple as possible but not simpler, in Proc. 21st Conf. Comput. Nat. Lang. Learning (CoNLL 2017), Vancouver, Canada, 2017.
- W. Wang et al., Gated self-matching networks for reading comprehension and question answering, in Proc. 55th Annu. Meeting Assoc. Comput. Linguistics, Vancouver, Canada, July 2017, pp. 189-198.
- Y. Cui et al., Attention-over-attention neural networks for reading comprehension, arXiv preprint arXiv:1607.04423, 2016.
- M. Seo et al., Bidirectional attention flow for machine comprehension, arXiv preprint arXiv:1611.01603, 2016.
- S. Wang and J. Jiang, Machine comprehension using match-LSTM and answer pointer, arXiv preprint arXiv:1608.07905, 2016.
- O. Vinyals, M. Fortunato, and N. Jaitly, Pointer networks, in Adv. Neural Inform. Process. Syst., Montreal, Canada, 2015, pp. 2674-2682.
- D. Bahdanau et al., Neural machine translation by jointly learning to align and translate, Proc. ICLR '15, arXiv:1409.0473, 2015.
- K. Cho et al., Learning phrase representations using RNN encoder-decoder for statistical machine translation, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014.
- S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Comput. 9 (1997), no. 8, 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
- T. Lei and Y. Zhang, Training RNNs as fast as CNNs, arXiv preprint arXiv:1709.02755, 2017.
- C. Lee, J. Kim, and J. Kim, Korean dependency parsing using deep learning, in Proc. KIISE HCLT, 2014, pp. 87-91 (in Korean).
- Y. Kim, Convolutional neural networks for sentence classification, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014.
- D. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014.
- K. Lee et al., Learning recurrent span representations for extractive question answering, arXiv preprint arXiv:1611.01436, 2017.
- Z. Wang et al., Multi-perspective context matching for machine comprehension, arXiv preprint arXiv:1612.04211, 2016.
- Z. Chen et al., Smarnet: Teaching machines to read and comprehend like human, arXiv preprint arXiv:1710.02772, 2017.
- J. Pennington, R. Socher, and C. Manning, GloVe: Global vectors for word representation, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014, pp. 1532-1543.
- M. E. Peters et al., Deep contextualized word representations, in Proc. Int. Conf. Learning Representations, 2018. https://openreview.net/pdf?id=S1p31z-Ab