Prototyping in Public Services describes an approach that can be used to help develop new and innovative services by testing ideas early in the development cycle.

NESTA has produced a guide for policymakers, strategy leads, heads of service, commissioners and anyone else in a public service looking for new methodologies that can help them better meet the needs of their communities. It sits alongside the Prototyping Framework: A guide to prototyping new ideas, which provides examples of activities that can happen at different stages of a prototyping project.

The guide and toolkit are early outputs from our prototyping work and are based on work NESTA and its partners have been doing with several local authorities and third sector organisations. We will continue to learn about prototyping as an approach to developing public services through our practical programmes.

… four fundamental goals:

  • Transparency in experimental methodology, observation, and collection of data.
  • Public availability and reusability of scientific data.
  • Public accessibility and transparency of scientific communication.
  • Using web-based tools to facilitate scientific collaboration.

The idea I’ve been most involved with is the first one, since granting access to source code is really equivalent to publishing your methodology when the kind of science you do involves numerical experiments. I’m an extremist on this point, because without access to the source for the programs we use, we rely on faith in the coding abilities of other people to carry out our numerical experiments. In some extreme cases (i.e. when simulation codes or parameter files are proprietary or are hidden by their owners), numerical experimentation isn’t even science. A “secret” experimental design doesn’t give skeptics the ability to repeat (and hopefully verify) your experiment, and the same is true with numerical experiments. Science has to be “verifiable in practice” as well as “verifiable in principle”.

In general, we’re moving towards an era of greater transparency in all of these topics (methodology, data, communication, and collaboration). The problems we face in gaining widespread support for Open Science are really about incentives and sustainability. How can we design or modify the scientific reward systems to make these four activities the natural state of affairs for scientists? Right now, there are some clear disincentives to participating in these activities. Scientists are people, and we’re motivated by most of the same things as normal people:

  • Money, for ourselves, for our groups, and to support our science.
  • Reputation, which is usually (but not necessarily) measured by citations, h-indices (sketched after this list), download counts, placement of students, etc.
  • Sufficient time, space, and resources to think and do our research (which is, in many ways, the most powerful motivator).
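
Since the h-index comes up above, here is a minimal sketch of how it is computed (in Python; the function name and the sample citation counts are my own, purely for illustration): it is the largest h such that an author has h papers with at least h citations each.

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers with 10, 8, 5, 3 and 1 citations give an h-index of 3.
    print(h_index([10, 8, 5, 3, 1]))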

Right now, the incentive network that scientists work under seems to favor “closed” science. Scientific productivity is measured by the number of papers in traditional journals with high impact factors, and the importance of a scientist’s work is measured by citation count. Both of these measures help determine funding and promotions at most institutions, and doing open science is either neutral or damaging by these measures. Time spent cleaning up code for release, or setting up a microscopy image database, or writing a blog is time spent away from writing a proposal or paper. The “open” parts of doing science just aren’t part of the incentive structure.