Algorithmic selection is omnipresent in various domains of our online everyday lives: it ranks our search results, curates our social media news feeds, or recommends videos to watch and music to listen to. This widespread application of algorithmic selection on the internet can be associated with risks such as feeling surveilled (S), feeling exposed to distorted information (D), or feeling that one uses the internet too excessively (O). One way in which internet users can cope with such algorithmic risks is by applying self-help strategies such as adjusting their privacy settings (Sstrat), double-checking information (Dstrat), or deliberately ignoring automated recommendations (Ostrat). This article determines the association of the theoretically derived factors risk awareness (1), personal risk affectedness (2), and algorithm skills (3) with these self-help strategies.
The findings from structural equation modelling on survey data representative of the Swiss online population (N2018 = 1,202) show that personal affectedness by algorithmic risks, awareness of algorithmic risks, and algorithm skills are associated with the use of self-help strategies. These results indicate that, besides implementing statutory regulation, policy makers have the option to encourage internet users’ self-help by increasing their awareness of algorithmic risks, clarifying how such risks affect them personally, and promoting their algorithm skills.