Operant keypress tasks, set in a reinforcement-reward framework in which behavior is shaped by its consequences, show lawful relationships in human preference behavior (i.e., approach/avoidance) and have been analogized to "wanting". However, they take 20-40 min, in contrast to non-operant rating tasks, which can be as short as 3 min, run unsupervised, and are thus more readily applied to internet research. It is unknown whether non-operant rating tasks, in which each action has no consequence (analogous to "liking"), show similar lawful relationships. We studied non-operant picture-rating data from three independent population cohorts (N = 501, 506, and 4019 participants), all using the same 7-point Likert scale from negative to positive preference and the same categories of images from the International Affective Picture System. The picture ratings were used to compute location, dispersion, and pattern (entropy) variables that in turn produced value, limit, and trade-off functions similar to those reported for operant keypress tasks, all with individual R² > 0.80. For all three datasets, the individual functions were discrete in mathematical formulation. They were also recurrent, or consistent, across the cohorts and scaled between individual and group curves. Behavioral features such as risk aversion and other interpretable features of the graphs were likewise consistent across cohorts. Together, these observations argue for lawfulness in the modeling of the ratings. This picture-rating task demonstrates a simple, quick, and low-cost framework for quantitatively assessing human preference without forced-choice decisions, games of chance, or operant keypressing. The framework can be easily deployed on any digital device worldwide.
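The abstract does not give the exact formulas for the location, dispersion, and pattern (entropy) variables; a minimal sketch of common choices for each, assuming the mean as location, the standard deviation as dispersion, and Shannon entropy over the seven Likert bins as pattern (all illustrative assumptions, not the study's published definitions), might look like:

```python
import math
from collections import Counter

def rating_features(ratings):
    """Compute location (mean), dispersion (population standard deviation),
    and pattern (Shannon entropy, in bits, of the empirical distribution
    over rating bins) for one participant's picture ratings on a 7-point
    Likert scale (e.g., -3..+3).

    NOTE: these are common statistical choices shown for illustration;
    the study's exact variable definitions are not specified here.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / n
    counts = Counter(ratings)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return mean, var ** 0.5, entropy

# Example: ratings given by one participant to one image category
loc, disp, ent = rating_features([3, 2, 3, 1, 2, 3, 2])
```

Per-participant triples like these could then be fit against one another to produce the value, limit, and trade-off functions described above.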
Keywords: Approach; Aversion; Avoidance; Big data; Judgment; Liking; Preference; Relative preference theory; Reward.
© 2024. The Author(s).