Syntactic Learning from Ambiguous Evidence: Errors and End-States

Isaac Gould, 2015


In this thesis I explore the role of ambiguous evidence in first language acquisition by using a probabilistic learner for setting syntactic parameters. Ambiguous evidence, input that is compatible with multiple grammars or hypotheses, poses learnability and acquisition challenges because it underdetermines the correct analysis. However, a probabilistic learning model with competing hypotheses can address these challenges by learning from general tendencies in the shape of the input, thereby finding the most compatible set of hypotheses, that is, the grammar with the ‘best fit’ to the input. This enables the model to resolve the challenge of learning the grammar of a subset language: it can reach such a target end-state by learning from implicit negative evidence. Moreover, ambiguous evidence can provide insight into two phenomena characteristic of language acquisition: variability (both within speakers and across a population) and learning errors. Both phenomena can be accounted for under a model that is attempting to learn a grammar of best fit.
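The subset-language idea can be illustrated with a toy sketch, not the thesis's actual model: suppose each grammar assigns a uniform distribution over the strings it generates, so a superset grammar "wastes" probability mass on strings that never occur in the input. A corpus drawn only from the subset language then fits the subset grammar better, and this likelihood difference is the implicit negative evidence. The grammar names and example strings below are entirely hypothetical.

```python
from math import exp, log

# Two hypothetical competing grammars, identified with the string sets
# (languages) they generate. The superset grammar also generates "c".
GRAMMARS = {
    "subset":   {"a", "b"},
    "superset": {"a", "b", "c"},
}

def posterior(corpus):
    """Posterior over grammars given the corpus, with a uniform prior
    and a uniform distribution over each grammar's strings."""
    loglik = {}
    for name, language in GRAMMARS.items():
        # Each generated string gets probability 1/|language|;
        # a string outside the language has probability zero.
        loglik[name] = sum(
            log(1.0 / len(language)) if s in language else float("-inf")
            for s in corpus
        )
    z = sum(exp(v) for v in loglik.values())
    return {name: exp(v) / z for name, v in loglik.items()}

# The corpus contains only subset-language strings: "c" never appears.
post = posterior(["a", "b"] * 20)
```

Although both grammars parse every sentence in the corpus, the subset grammar assigns each sentence probability 1/2 rather than 1/3, so its posterior probability approaches 1 as the corpus grows; the learner converges on the subset grammar without ever seeing an ungrammatical example.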

Three case studies relating to word order and phrase structure are investigated with simulations of the model. First, I show how the model can account for embedded clause verb placement errors in child Swiss German by learning from ambiguous input. I then show how learning from ambiguous input allows the model to account for grammatical variability across speakers with regard to verb movement in Korean. Finally, I show that the model successfully learns the grammar of a subset language with the example of zero-derived causatives in English.