Sangam: A Confluence of Knowledge Streams

SYMBOLIC AND NEURAL APPROACHES TO NATURAL LANGUAGE INFERENCE


dc.contributor Moss, Lawrence
dc.creator Hu, Hai
dc.date 2021-07-13T06:47:44Z
dc.date 2021-06
dc.date.accessioned 2023-02-24T18:26:37Z
dc.date.available 2023-02-24T18:26:37Z
dc.identifier http://hdl.handle.net/2022/26642
dc.identifier.uri http://localhost:8080/xmlui/handle/CUHPOERS/260291
dc.description Thesis (Ph.D.) - Indiana University, Department of Linguistics, 2021
dc.description Natural Language Inference (NLI) is the task of predicting whether a hypothesis text is entailed by (or can be inferred from) a given premise. For example, given the premise that two dogs are chasing a cat, it follows that some animals are moving, but it does not follow that every animal is sleeping. Previous studies have proposed logic-based, symbolic models and neural network models to perform inference. However, in the symbolic tradition, relatively few systems are designed based on monotonicity and natural logic rules; in the neural network tradition, most work focuses exclusively on English. Thus, the first part of the dissertation asks how far a symbolic inference system can go relying only on monotonicity and natural logic. I first designed and implemented a system that automatically annotates monotonicity information on input sentences. I then built a system that uses the monotonicity annotation, in combination with hand-crafted natural logic rules, to perform inference. Experimental results on two NLI datasets show that my system performs competitively with other logic-based models, with the unique feature of generating inferences as augmented data for neural-network models.
dc.description The second part of the dissertation asks how to collect NLI data that are challenging for neural models, and examines the cross-lingual transfer ability of state-of-the-art multilingual neural models, focusing on Chinese. I collected the first large-scale NLI corpus for Chinese, using a procedure that improves on what has been done for English, along with four types of linguistically oriented probing datasets in Chinese. Results show the surprising transfer ability of multilingual models, but overall, even the best neural models still struggle on Chinese NLI, exposing the weaknesses of these models.
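The monotonicity-based inference the abstract describes can be illustrated with a minimal sketch. This is a hypothetical toy, not the dissertation's actual system: it assumes a hand-coded hypernym table and reduces natural logic to a single word-replacement step, where an upward-entailing (+) context licenses generalization and a downward-entailing (-) context licenses specialization.

```python
# Toy hypernym lattice, child -> more general parent. Assumed illustrative data.
HYPERNYMS = {"dog": "animal", "cat": "animal", "chase": "move"}

def more_general(a, b):
    """True if b equals a or is an ancestor (hypernym) of a."""
    while a is not None:
        if a == b:
            return True
        a = HYPERNYMS.get(a)
    return False

def entails(word, polarity, replacement):
    """One natural-logic edit step: does swapping `word` for `replacement`
    in a context of the given monotonicity polarity preserve truth?"""
    if polarity == "+":              # upward monotone: may generalize
        return more_general(word, replacement)
    if polarity == "-":              # downward monotone: may specialize
        return more_general(replacement, word)
    return word == replacement       # non-monotone: only identity is safe

# In "Two dogs are chasing a cat", "dogs" sits in an upward-entailing
# position, so generalizing to "animals" yields an entailed sentence.
print(entails("dog", "+", "animal"))   # True
print(entails("dog", "-", "animal"))   # False
```

The same polarity marking, applied over a full sentence by the annotation system, is what licenses inferences like "two dogs are chasing a cat" entailing "some animals are moving".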
dc.language en
dc.publisher [Bloomington, Ind.] : Indiana University
dc.rights https://creativecommons.org/licenses/by-nc/4.0/
dc.subject natural language inference
dc.subject symbolic reasoning
dc.subject neural modeling
dc.subject monotonicity
dc.subject natural language understanding
dc.title SYMBOLIC AND NEURAL APPROACHES TO NATURAL LANGUAGE INFERENCE
dc.type Doctoral Dissertation


Files in this item

dissertation_final_hai_hu.pdf (1.580Mb, application/pdf)


