Items

Tag: Media bias
Item
Enhancing media literacy: The effectiveness of (Human) annotations and bias visualizations on bias detection
Marking biased texts effectively increases media bias awareness, but whether this effect carries over to new topics and unmarked news remains unclear, and the role of AI-generated bias labels is untested. This study examines how news consumers learn to perceive media bias from human- and AI-generated labels and to identify biased language through highlighting, neutral rephrasing, and political-orientation cues. We conducted two experiments, each with a teaching phase that exposed participants to various bias-labeling conditions and a testing phase that evaluated their ability to classify biased sentences and to detect biased text in unlabeled news on new topics. We find that, compared to the control group, both human- and AI-generated sentential bias labels significantly improve bias classification (p < .001), though human labels are more effective (d = 0.42 vs. d = 0.23). Additionally, among all teaching interventions, participants best detect biased sentences when taught with biased-sentence or biased-phrase labels (p < .001), while politicized phrase labels reduce accuracy. The effectiveness of the different media literacy interventions is independent of political ideology, but conservative participants are generally less accurate (p = .011), suggesting an interaction between political inclinations and bias detection. Our research provides a novel experimental framework for assessing the generalizability of media bias awareness and offers practical implications for designing bias indicators in news-reading platforms and media literacy curricula.