ACL Vienna 2025
Eye-tracking tutorial

I attended the “Eye Tracking and NLP” tutorial in Vienna (Jul 27, 2025). If you work on cognitively‑informed models, evaluation beyond static labels, or human‑centered NLP, this one’s for you!

Why gaze? Eye movements reflect online processing (not just end products), letting us probe difficulty, attention, and strategies during reading. That’s gold for modeling and evaluation. (PubMed)

Data is maturing: Multilingual, multi‑lab efforts (e.g., MECO, MultiplEYE) plus tooling (e.g., pymovements) have made high‑quality datasets and pipelines more accessible. (meco-read.com, multipleye.eu, arXiv)

Models & evals: Gaze can improve certain NLP tasks and also evaluate systems with behavioral signals (e.g., readability, MT, summarization). But gains are often modest unless the modeling is careful or the data is task‑aligned.

Open debates: How well LLM surprisal predicts human reading times varies with model size, layers, and populations; adding recency biases can help fit human behavior. (See the minimal surprisal sketch below.) (ACL Anthology, dfki.de, ACL Anthology, ACL Anthology)

Eye‑tracking 101 (super fast) 🧠👁️

Fixations & saccades. Reading is a hop‑and‑pause routine: brief saccades (tens of ms) between ~200–250 ms fixations; perception occurs mostly during fixations, not saccades. The classic eye‑mind assumption: minimal lag between what’s fixated and what’s processed. (See the toy segmentation sketch below.) (andrewd.ces.clemson.edu, PubMed)

...
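
To make the hop‑and‑pause picture concrete, here’s a toy velocity‑threshold (I‑VT) segmentation of gaze samples into fixations and saccades. This is a minimal sketch, not the tutorial’s pipeline: the 30 deg/s threshold, the 1 kHz sampling rate, and the synthetic trace are illustrative assumptions of mine, and real tooling (e.g., pymovements) additionally handles calibration, blinks, and noise.

```python
# Toy I-VT segmentation: label each gaze sample fixation/saccade by velocity.
# Assumes positions in degrees of visual angle at a constant sampling rate.
import numpy as np

def ivt_segments(x, y, sampling_rate_hz=1000.0, velocity_threshold_deg_s=30.0):
    """Return (label, start_idx, end_idx) runs of 'fixation'/'saccade' samples."""
    dt = 1.0 / sampling_rate_hz
    # Instantaneous angular velocity between consecutive samples (deg/s).
    vel = np.hypot(np.diff(x), np.diff(y)) / dt
    vel = np.concatenate([[0.0], vel])  # pad so labels align with samples
    labels = np.where(vel < velocity_threshold_deg_s, "fixation", "saccade")
    # Collapse runs of identical labels into segments.
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i - 1))
            start = i
    return segments

# Synthetic trace: two ~200 ms "fixations" joined by a ~20 ms "saccade".
fix1 = np.zeros(200)
sacc = np.linspace(0.0, 5.0, 20)   # 5 deg sweep in ~20 ms -> ~250 deg/s
fix2 = np.full(200, 5.0)
x = np.concatenate([fix1, sacc, fix2])
y = np.zeros_like(x)
for label, s, e in ivt_segments(x, y):
    print(f"{label}: {e - s + 1:4d} samples")  # at 1 kHz, samples ≈ ms
```

The velocity threshold is the design knob here: lower it and slow drifts get called saccades; raise it and small corrective saccades get absorbed into fixations.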
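
And on the surprisal debate: the quantity itself is cheap to compute, which is part of why the open questions concern model size, layers, and populations rather than mechanics. Here’s a minimal sketch, assuming the HuggingFace transformers package with GPT‑2 as a stand‑in causal LM (my choice of model, not the tutorial’s); surprisal(w_t) = −log2 P(w_t | w_<t).

```python
# Minimal per-token surprisal with a causal LM (GPT-2 as an example).
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal in bits) for every token after the first."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)
    # Log-probability of each actual next token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = input_ids[0, 1:]
    nll = -log_probs[torch.arange(next_ids.size(0)), next_ids]
    surprisal_bits = nll / math.log(2)  # nats -> bits
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, surprisal_bits.tolist()))

for tok, s in token_surprisals("The old man the boats."):
    print(f"{tok!r}: {s:.2f} bits")
```

For actual reading‑time work you’d sum subword surprisals within each word before correlating with fixation durations, since tokenizers split words that readers fixate as units.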