Dataviz, Machine Learning, Python

Decoding Fairness

Part of my dissertation looked at how people balance self-interest with fairness for others. It may seem obvious that people pay attention to their own interests first, but there is good evidence that people are actually intuitive cooperators and default to considering how their choices affect other people.

I tested these two views using a modified version of the Ultimatum Game while I recorded electroencephalography (EEG). On each trial, participants saw $12 split between three people and were asked to accept or reject the offer. This setup let me independently manipulate fairness for the self and for another person, so I trained two families of support vector machines to classify distributions as "Fair" or "Unfair" from the perspective of the self or the other.
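The core of this kind of analysis is time-resolved decoding: train and cross-validate a separate classifier at each timepoint of the epoched EEG. The post doesn't specify the exact pipeline, so here is a minimal sketch using scikit-learn with synthetic data standing in for the real epochs (the trial counts, channel counts, and random labels are placeholders, not values from the study):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real EEG epochs: trials x channels x timepoints
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 64, 30
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)  # 0 = "Unfair", 1 = "Fair" (hypothetical labels)

# Linear SVM with feature scaling, the common default for EEG decoding
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# Fit and score one classifier per timepoint with 5-fold cross-validation,
# yielding a decoding-accuracy time course
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```

With real data the accuracy trace should rise above chance once the brain encodes the relevant distinction; with this random input it should just hover around 50%. Libraries like MNE-Python wrap this same loop (e.g. a sliding-estimator pattern), but the explicit loop makes the logic clear.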

It turns out that scalp EEG voltages can be used to predict Self-Fairness by 200 ms after the distribution appears on screen. However, Other-Fairness takes longer to appear. In other words, people seem to prioritize their own payoffs first.

Classifier accuracy over time. Self decoding occurred first (around 200 ms), then Other decoding.
Accuracy of the SVM classifiers trained on EEG data to classify distributions as fair or unfair for the self (red) or other (blue). Darker line segments indicate when the classifier was significantly above chance (50%).
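The post doesn't say which test was used to flag timepoints as significantly above chance (EEG papers often use cluster-based permutation tests); as a simple illustration only, a per-timepoint binomial test against 50% could look like this, with made-up accuracies and trial count:

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical decoding accuracies at a few timepoints, and a made-up trial count
n_trials = 120
accuracy = np.array([0.49, 0.52, 0.61, 0.65, 0.55])

# One-sided binomial test: is the number of correct classifications
# higher than expected under chance (p = 0.5)?
p_values = np.array([
    binomtest(round(a * n_trials), n_trials, p=0.5, alternative="greater").pvalue
    for a in accuracy
])
significant = p_values < 0.05  # timepoints to draw with the darker line
```

A real analysis would also correct for testing many timepoints (hence the popularity of cluster-based permutation methods), but the boolean mask above is exactly what drives the darker segments in a figure like this one.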

You can read the published Neuropsychologia paper here, or a more accessible piece I wrote for The Conversation. And of course here's the code for the figure.