Transformer-based models for answer extraction in text-based question/answering
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Davoudi, Heidar (Kourosh) | |
| dc.contributor.author | Ahmadi Najafabadi, Marzieh | |
| dc.date.accessioned | 2023-04-24T15:19:18Z | |
| dc.date.available | 2023-04-24T15:19:18Z | |
| dc.date.issued | 2023-04-01 | |
| dc.degree.discipline | Computer Science | |
| dc.degree.level | Master of Science (MSc) | |
| dc.description.abstract | The success of transformer-based language models has led to a surge of research across natural language processing tasks, among which extractive question-answering (answer span detection) has received considerable attention in recent years. However, to date, no comprehensive study has compared and examined the performance of different transformer-based language models on the question-answering (QA) task. Furthermore, while these models capture significant semantic and syntactic knowledge of natural language, their potential for improved QA performance through the incorporation of linguistic features remains unexplored. In this study, we compare the efficacy of multiple transformer-based models for QA, as well as their performance on particular question types. Moreover, we investigate whether augmenting these models with a set of linguistic features extracted from the question and context passage can enhance their QA performance. In particular, we examine several feature-augmented transformer-based architectures for QA to explore the impact of these linguistic features on different transformer-based language models. Furthermore, an ablation study is conducted to analyze the individual effect of each feature. Through extensive experiments on two question-answering datasets (i.e., SQuAD and NLQuAD), we show that the proposed framework can improve the performance of transformer-based models. | en |
| dc.description.sponsorship | University of Ontario Institute of Technology | en |
| dc.identifier.uri | https://hdl.handle.net/10155/1598 | |
| dc.language.iso | en | en |
| dc.subject | Question-answering | en |
| dc.subject | Answer span detection | en |
| dc.subject | Transformer-based models | en |
| dc.subject | Pre-trained models | en |
| dc.title | Transformer-based models for answer extraction in text-based question/answering | en |
| dc.type | Thesis | en |
| thesis.degree.discipline | Computer Science | |
| thesis.degree.grantor | University of Ontario Institute of Technology | |
| thesis.degree.name | Master of Science (MSc) | |
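
The abstract above concerns extractive question-answering (answer span detection), where a model locates the start and end of an answer inside a context passage. As a rough illustration of that task only, the following is a minimal sketch using a pre-trained, SQuAD-tuned checkpoint from the Hugging Face transformers library; the checkpoint name, question, and context are illustrative assumptions, and the sketch does not reproduce the feature-augmented architectures examined in the thesis.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Illustrative checkpoint already fine-tuned for extractive QA on SQuAD.
MODEL_NAME = "distilbert-base-uncased-distilled-squad"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

question = "Which datasets were used in the experiments?"
context = (
    "Extensive experiments were conducted on two question-answering "
    "datasets, SQuAD and NLQuAD, to evaluate the proposed framework."
)

# Encode the question and context as a single sequence pair.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The model emits one start score and one end score per token;
# the simplest decoding takes the argmax of each.
start_index = int(outputs.start_logits.argmax())
end_index = int(outputs.end_logits.argmax())

answer_tokens = inputs["input_ids"][0][start_index : end_index + 1]
print(tokenizer.decode(answer_tokens, skip_special_tokens=True))
```

Argmax decoding is the simplest span-selection strategy; evaluation on benchmarks such as SQuAD and NLQuAD typically also constrains the span (start before end, maximum length) before computing exact-match and F1 scores.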