Abstract:
Phrasal verbs pose a significant challenge in natural language understanding due to their context-dependent meanings. This study investigates the impact of fine-tuning BERT on the task of context-aware phrasal verb disambiguation. Using a curated dataset of phrasal verbs with annotated meanings, we first evaluate a baseline model's performance on semantic similarity and text generation metrics. Subsequently, we fine-tune BERT on the dataset and re-assess its effectiveness. The results demonstrate consistent improvements across all metrics after fine-tuning: Cosine Similarity increased from 0.5889 to 0.6189, BLEU Score improved from 0.2570 to 0.3150, ROUGE-L rose from 0.4623 to 0.4901, Jaccard Similarity from 0.3484 to 0.3648, and METEOR from 0.3555 to 0.3607. These findings highlight that fine-tuning BERT enhances its ability to capture the nuanced meanings of phrasal verbs in context, which is critical for advancing semantic classification tasks. Future work will extend this approach to idiomatic expressions and broader linguistic ambiguity.
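To make one of the reported metrics concrete, the token-level Jaccard similarity between a reference gloss and a model output can be sketched as below. The whitespace tokenization and lowercasing here are assumptions for illustration, not necessarily the paper's exact preprocessing pipeline.

```python
def jaccard_similarity(reference: str, prediction: str) -> float:
    """Jaccard similarity between the token sets of two strings.

    Assumes simple whitespace tokenization with lowercasing; the
    paper's actual preprocessing may differ.
    """
    ref_tokens = set(reference.lower().split())
    pred_tokens = set(prediction.lower().split())
    if not ref_tokens and not pred_tokens:
        return 1.0  # two empty strings are trivially identical
    # |intersection| / |union| of the two token sets
    return len(ref_tokens & pred_tokens) / len(ref_tokens | pred_tokens)
```

For example, comparing the glosses "give up the plan" and "give up the idea" yields an overlap of three tokens out of five distinct tokens, i.e. a similarity of 0.6.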