Credit attribution
Problems
- A tension between the interests of individual scientists and the interests of science as a whole
- Quantifying reputation/achievement is hard, and doing it poorly harms both researchers and hiring committees
- Need to be careful that what gets optimized is the underlying constructs we're interested in, not the metrics themselves
How to have a fair attribution system?
How do we ensure that credit attribution is maximally fair and unbiased, and that the new tools of open research do not simply create additional ‘artificial’ methods of measuring contribution?
Contribution metrics
What would new ways of measuring contributions in research look like? How can we challenge existing reward systems that have a negative impact on science?
- Quantify contributions to different activities through percentages or badges (see the sketch below)
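As a rough illustration, here is a minimal Python sketch of what a machine-readable contribution record might look like: each author self-reports a percentage share of the paper's overall credit and tags the activities they contributed to. The field names and activity labels are invented for illustration (loosely echoing the CRediT taxonomy), not any journal's actual schema.

```python
# Hypothetical contribution record: authors self-apportion credit shares
# (which should sum to 100%) and tag the activities they contributed to.
# All field names and activity labels here are illustrative assumptions.

paper_credit = {
    "authors": [
        {"name": "A. Author", "share_pct": 60,
         "activities": ["conceptualization", "writing"]},
        {"name": "B. Author", "share_pct": 40,
         "activities": ["software", "data-curation"]},
    ]
}

def check_shares(record: dict) -> None:
    """Verify that self-apportioned credit shares sum to 100%."""
    total = sum(a["share_pct"] for a in record["authors"])
    assert total == 100, f"shares sum to {total}%, not 100%"

check_shares(paper_credit)
```

A journal could validate such a record at submission time and display it alongside the author list, the same way conflict-of-interest statements are shown today.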
Concrete actions
The publication step is the point where authors are especially likely to act. As editors and referees, we can encourage journals (or encourage our colleagues to encourage journals) to take these steps:
- Apply more pressure on authors to make their data and code available.
- Allow more types of citations, e.g., citing video lectures. (New citation types are weird, but citing the arXiv started off weird too.)
- Ask authors to declare "super citations", e.g., pick a small number of their citations as the key ideas they are building on.
- Allow formal categorization of citation types, e.g., citations you're disagreeing with, or citations you're taking code from (see the sketch after this list).
- Ask authors to describe their individual contributions, or even self-apportion different percentages of "credit" for the paper.
New journals that don't have long-standing, entrenched policies, such as the new overlay journal Quantum, are especially tractable places to apply these pressures.
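To make the citation-related ideas above concrete, here is a hypothetical Python sketch of typed citations with a "super citation" flag. The citation keys, the type names, and the cap of three super citations are all invented for illustration; no journal currently uses this format.

```python
# Hypothetical typed-citation metadata for one paper: each citation
# carries a type tag, and a few are flagged as "super citations"
# (the key ideas the work builds on). Keys and types are invented.

from collections import Counter

citations = [
    {"key": "doe2020", "type": "builds-on", "super": True},
    {"key": "roe2018", "type": "uses-code", "super": False},
    {"key": "lee2021", "type": "disagrees", "super": False},
    {"key": "kim2019", "type": "builds-on", "super": True},
]

# A journal could enforce simple rules at submission time, e.g. limit
# super citations to a handful and tally the declared citation types.
supers = [c["key"] for c in citations if c["super"]]
assert len(supers) <= 3, "pick only a few key ideas as super citations"

print("super citations:", supers)
print("by type:", Counter(c["type"] for c in citations))
```

Records like this would let later tools weight citations differently, so that taking code from a paper or building directly on its ideas counts for more than a passing mention.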