Research
Moral circle expansion
Our moral circle refers to who we do and do not think of as worthy of moral concern. My research aims to understand what shapes these judgments. This includes factors about the person making the judgment, and factors about the entity being judged. I am particularly interested in how we ascribe moral concern to distant others, such as people who live far away or who are not yet born, non-human animals, and artificial entities. In this work I also examine how children and adults differ in their ascriptions of moral worth, finding that children appear to be much more willing to grant moral concern to distant others.
Unusually altruistic groups
Most people are kind and generous towards friends and family. But some people engage in acts of altruism towards distant others, such as non-directed kidney donation. My research aims to understand what is unique about those who engage in unusually altruistic acts (put another way, those who have an expansive moral circle). To date, I have conducted research with people who have taken the Giving What We Can pledge to donate at least 10% of their income to effective charities, and with children who choose to become vegetarian in meat-eating families. I am always interested in meeting and working with other altruistic groups or individuals, so please feel free to reach out to me!
Cultured meat and the natural-is-better bias
Despite the possible benefits, many people hold negative views about cultured (i.e., lab-grown) meat. One of the most pervasive views is that cultured meat is unnatural. I am interested in understanding attitudes towards cultured meat generally, as well as the specific links between the natural-is-better bias and attitudes towards cultured meat. In particular, I aim to understand the bounds of what we do and do not consider natural and the factors that may shape this (e.g., age, culture, personality). I am also interested in exploring how this has shifted across historical contexts.
Moral psychology x AI
Research in psychology typically aims to understand how we make moral judgments about and engage with AI systems. While this work is valuable, researchers in AI safety instead tend to focus on questions of global cooperation and value alignment. I believe that there are a number of areas in which psychological research can contribute to these questions. For example, how can we apply findings from climate change psychology to the development and regulation of safe and aligned AI? How can our knowledge of human values inform the approaches taken to AI alignment, including which values we prioritize? And how can our knowledge of biases in moral consideration (e.g., cognitive dissonance) help us to understand barriers to granting moral consideration towards future AIs?