Jason Dana


My research lies broadly in the area of judgment and decision making, with a particular focus on ethical decision making. Often using experimental economic methods, I have investigated topics such as how people avoid acting on their own generosity and how they justify dishonesty. A second, separate focus of my research is the development of accuracy benchmarks for estimating parameters in psychological experiments, which allow us to measure replicability and set standards for acceptable measurement error. Some working papers are below. Comments are always welcome.

"The Nature of Gender Discrimination Under Competition: Evidence from The Price is Right" with Pavel Atanasov pdf
Abstract: Gender discrimination has been demonstrated in a variety of field settings, but convincingly parsing its source - tastes vs. stereotypes - has proven notoriously difficult. We address this gap using evidence from a game on The Price Is Right television show in which contestants bid sequentially to get closest to the price of a prize without exceeding it. The last bidder's dominant strategy is to outbid another contestant by $1, leaving that contestant almost no chance to win. In over 5,000 games, last bidders of both genders use this "cutoff" strategy less often against same-gender opponents. We show that this result is consistent with gender-based preferences for whom to cut off rather than gender-based beliefs about who is best to cut off. Final bidders average $147 more in expected prizes when the best opponent to cut off is opposite-gendered, and as much as $522 in the penultimate round of an episode.
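For readers unfamiliar with the bidding game, a minimal Python sketch of the cutoff logic (the bids and prices below are hypothetical, not data from the paper):

    # Contestants' Row: the bid closest to the price without going over wins.
    def winner(bids, price):
        valid = {name: bid for name, bid in bids.items() if bid <= price}
        return max(valid, key=valid.get) if valid else None  # all overbid: rebid

    earlier_bids = {"A": 750, "B": 900, "C": 600}   # hypothetical earlier bids
    last_bid = earlier_bids["B"] + 1                # cut off contestant B

    print(winner({**earlier_bids, "Last": last_bid}, 1150))  # Last bidder wins
    print(winner({**earlier_bids, "Last": last_bid}, 900))   # B wins only at exactly $900

Bidding $1 over an opponent leaves that opponent a single winning price, which is why the choice of whom to cut off is so revealing about preferences.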

"Acceptable Measurement Error in Psychology" with Clintin Stober pdf
Abstract: We derive standards for acceptable measurement error when inferring treatment effects from sample means. The standards arise from the amount of measurement accuracy required for sample means to beat a naive benchmark estimator that randomizes the direction and magnitude of treatment effects. We show that at sample and effect sizes common to many areas of psychology, particularly in human subjects research, measurement error is too substantial to make meaningful claims about treatment effects. Alternatively, given some amount of measurement error, we provide minimum sample size recommendations for conducting meaningful experiments.
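The standards themselves come from the formal benchmark described above, but the minimum-sample-size logic can be illustrated generically. A sketch in Python with entirely hypothetical numbers: measurement error adds variance to the outcome, so reaching a given accuracy for a difference in sample means requires a larger n.

    import math

    def min_n(target_rmse, true_sd=1.0, error_sd=1.0):
        # RMSE of a difference in two sample means, with n observations per group:
        #   sqrt(2 * (true_sd**2 + error_sd**2) / n)
        # Smallest n that achieves the target accuracy:
        total_var = true_sd**2 + error_sd**2
        return math.ceil(2 * total_var / target_rmse**2)

    for err in (0.0, 0.5, 1.0):                           # increasing measurement error
        print(err, min_n(target_rmse=0.2, error_sd=err))  # 50, 63, 100

This is only the textbook variance calculation; the paper's recommendations are derived from the benchmark-beating criterion rather than an arbitrary RMSE target.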

"Paying People to Look at the Consequences of their Actions" with Daylian Cain pdf
Abstract: Prior research has suggested that people prefer to remain uncertain about the possible negative social consequences of their actions and that this uncertainty facilitates selfish behavior. Our participants played an economic game in which they were uncertain about how selfish actions would affect other players; we offered participants various incentives to "look" at the potential consequences of their actions, and this reduced selfishness. Contrary to the predictions of both "crowding out" and adverse selection, participants who were paid to look were more generous than participants who looked without payment. We also find that these payments can be cost-effective: small payments can lead to social welfare gains that are larger than the total cost of the subsidies. Our results suggest an efficient way of changing behavior: it may be cheaper to pay someone to look at information about their social footprint - thus activating their social preferences - than to directly monitor and reward prosocial behavior.
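A back-of-the-envelope version of the cost-effectiveness point, with entirely hypothetical payoffs and take-up rates (the paper's figures differ):

    # Suppose a $0.25 payment to look is offered to 100 players, and that it
    # induces 20 additional people to look who then forgo a $1 selfish gain
    # that would have cost their counterpart $5.
    n_players     = 100
    payment       = 0.25
    extra_lookers = 20
    subsidy_cost  = payment * n_players       # upper bound: everyone looks and is paid
    welfare_gain  = extra_lookers * (5 - 1)   # counterpart gains $5, player gives up $1

    print(subsidy_cost, welfare_gain)         # 25.0 vs 80: gains exceed the subsidy

The point is only that small per-person payments can be dominated by the induced welfare gains, which is what makes paying for attention cheaper than monitoring or rewarding prosocial behavior directly.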

"Is Profit Evil? Associations of Profit with Social Harm" with Amit Bhattacharjee and Jon Baron pdf
Abstract: In opposition to economic first principles, four studies show that people appear to view profit as necessarily socially harmful. Studies 1 and 2 find a strong negative correlation between profit and perceived social value across both real firms and entire industries. This relationship holds for both perceived profit and actual profit information for public firms. Study 3 confirms that this effect holds when profit motive is manipulated: otherwise identically described organizations are seen as providing less value and doing more harm when described as "for-profit" rather than nonprofit. Study 4 demonstrates that people hold a zero-sum conception of profit that neglects the disciplining effects of competition: people see harmful business practices as profitable, even after an intervention encouraging the consideration of long-term consequences under competition. This tendency is not significantly related to self-reported political ideology. Even in one of the most market-oriented cultures in the world's history, people doubt the ability of profit-seeking business to benefit society.

"Comparing the accuracy of experimental estimates to guessing: A new perspective on replication and the 'Crisis of Confidence' in psychology" with Clint Stober pdf
Abstract: We develop a general measure of estimation accuracy for fundamental research designs called v. The v measure compares the estimation accuracy of the ubiquitous Ordinary Least Squares estimator, which includes sample means as a special case, to a benchmark estimator that randomizes the direction of treatment effects. For sample and effect sizes common to experimental psychology, v suggests that OLS produces estimates that are insufficiently accurate for the type of hypotheses being tested. We demonstrate how v can be used to determine sample sizes to obtain minimum acceptable estimation accuracy.
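A rough Monte Carlo sketch of the intuition (my own simplification; v itself is defined formally in the paper): how often does the OLS estimate of a mean difference even get the direction of the effect right, which is the only edge it can have over a benchmark that randomizes direction?

    import numpy as np

    rng = np.random.default_rng(1)

    def correct_direction(n, true_d=0.2, sd=1.0, reps=20000):
        # OLS here is just the difference in two sample means, n per group.
        diff = (rng.normal(true_d, sd, (reps, n)).mean(axis=1)
                - rng.normal(0.0, sd, (reps, n)).mean(axis=1))
        # A direction-randomizing benchmark is right half the time by construction.
        return np.mean(np.sign(diff) == np.sign(true_d))

    for n in (10, 25, 100):
        print(n, correct_direction(n))   # roughly 0.67, 0.76, 0.92 at these settings

At a few dozen observations per group and a small effect, the estimated direction is right only modestly more often than a coin flip, which is the sense in which such estimates can be insufficiently accurate for the hypotheses being tested.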

Jason Dana [C.V.]
Visiting Assistant Professor
Organizational Behavior
School of Management
Yale University
jason.dana AT yale dot edu
135 Prospect Street
New Haven, CT 06511

Judgment and Decision Making at Yale
Behavioral Ethics at Penn


last update: 10/29/2013