Quantifying Anchoring Effects in Structured Evaluation: A Mixed-Effects Regression Analysis
Research Proposal • Behavioral Economics Coursework • May 2025
Abstract: How do evaluators make decisions when faced with structured but sequential information? This paper investigates how dynamic anchoring and trait-level salience shape candidate evaluations in high-stakes screening environments. I propose two behavioral models: a dynamic anchoring model, which captures how evaluators update reference points over time, and a trait salience model, which captures how attention weights shift across observable attributes. While prior work has focused on narrative or interview-based evaluations to highlight subjective biases in high-stakes evaluation, this study uses purely quantitative resumes to isolate cognitive distortions in numeric data.
Through an online experiment, participants evaluate sequences of fictional resumes under time pressure. Experimental treatments manipulate resume ordering, the presence of decoy candidates, and access to an ideal reference profile. The results will test whether biases persist across traits, notably through evolving reference points, and whether introducing a stable benchmark can mitigate distortions in perception. This project contributes to literatures on bounded rationality, decision-making under uncertainty, and behavioral design in evaluation contexts such as hiring, admissions, and peer review.
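One common way to formalize a dynamic anchoring model of the kind described above is as an exponentially weighted update, in which the evaluator's reference point drifts toward each newly observed candidate's quality signal. The sketch below is an illustrative assumption, not the proposal's actual specification: the persistence parameter `lam` and the starting anchor are hypothetical, and the real model may use a different updating rule.

```python
def update_anchor(anchor: float, observation: float, lam: float = 0.7) -> float:
    """One step of a hypothetical dynamic anchoring rule.

    The evaluator's reference point is a weighted average of the old
    anchor and the latest candidate's quality signal; lam controls how
    sticky the anchor is (lam=1 means it never moves).
    """
    return lam * anchor + (1 - lam) * observation

# Illustrative sequence: an evaluator starting from a neutral anchor of 50
# sees three strong candidates, and the reference point drifts upward.
anchor = 50.0
for quality in [80.0, 75.0, 90.0]:
    anchor = update_anchor(anchor, quality)
print(round(anchor, 2))  # → 71.66
```

Under this kind of rule, resume ordering matters because late candidates are judged against an anchor shaped by early ones, which is exactly the distortion the ordering treatment is designed to detect.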