The public domain, as we understand it, is the wealth of information that is free from the barriers to access or reuse usually associated with copyright protection, either because it is free from any copyright protection or because the right holders have decided to remove these barriers. It is the raw material from which new knowledge is derived and new cultural works are created.
After decades of measures that have drastically reduced the public domain, typically by extending the terms of protection, it is time to strongly reaffirm how much our societies and economies rely on a vibrant and ever expanding public domain. The role of the public domain, already crucial in the past, is even more important today, as the Internet and digital technologies enable us to access, use and redistribute culture with an ease and a power unforeseeable even a generation ago. The Public Domain Manifesto aims to remind citizens and policy-makers of a common wealth that, because it belongs to everyone, is often defended by no one. At a time when, for the first time in history, we have the tools to enable direct access to most of our shared culture and knowledge, it is important that policy-makers and citizens strengthen the legal concept that enables free and unrestricted access and reuse.
We invite you to read the Manifesto and sign it if you wish to show your support. We also invite you to share this site (http://publicdomainmanifesto.org) with your contacts and friends. The Public Domain Manifesto is also on Facebook. And remember to celebrate Public Domain Day every year on New Year’s Day.
[access the document in PDF]
Moving from a Web of documents to a Web of data (or of Linked Open Data) is an oft-cited goal in the sciences. The Web of data would allow us to link together disparate information from unrelated disciplines, run powerful queries, and get precise answers to complex, data-driven questions. It’s an undoubtedly desirable extension of the way that existing networks increase the value of documents and computers through connectivity - Metcalfe’s Law applied to more complex information and systems.
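To make the idea concrete, here is a minimal, hypothetical sketch - using Python and the rdflib library, neither of which the text itself names - of how two unrelated datasets that share a URI can be joined by a single query. All namespaces and facts below are invented for illustration.

```python
# A minimal sketch of a cross-dataset query over linked data, using
# Python's rdflib. The example.org namespace and the sample facts are
# hypothetical illustrations, not real datasets.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()

# "Climate" dataset: a measurement station and one of its readings.
g.add((EX.station42, RDF.type, EX.WeatherStation))
g.add((EX.station42, EX.locatedIn, EX.Reykjavik))
g.add((EX.station42, EX.meanTempC, Literal(4.1)))

# Unrelated "public health" dataset, linked only by the shared city URI.
g.add((EX.study7, RDF.type, EX.HealthStudy))
g.add((EX.study7, EX.conductedIn, EX.Reykjavik))
g.add((EX.study7, EX.fluRate, Literal(0.03)))

# Because both datasets point at the same URI for the city,
# a single SPARQL query can join them across disciplines.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?temp ?rate WHERE {
        ?station ex:locatedIn   ?city ; ex:meanTempC ?temp .
        ?study   ex:conductedIn ?city ; ex:fluRate   ?rate .
    }
""")
for temp, rate in results:
    print(f"mean temp {temp} C, flu rate {rate}")
```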
However, building the Web of data turns out to be a deeply complex endeavor. Data - here a catchall word covering databases and datasets, and generally meaning information gathered in the sciences through experimental work or environmental observation - require a much more robust and complete set of standards to achieve the same “web” capabilities we take for granted in commerce and culture.
Unlike documents, the ultimate intended reader of most data is a machine - search engines, analytic software, database back ends, and the like. There is simply too much data in production to place people on the front lines of analysis; when data scales easily into the petabytes, we just cannot keep up with existing systems.
There are three interlocking dimensions to interoperability in data: legal, technical, and semantic. By legal, we mean the contractual and intellectual property rights associated with the data; by technical, the standard systems (especially the computer languages) in which the data is published; and by semantic, the actual meaning of the data itself - what it describes, and how it relates to the broader world.
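As a rough illustration of how these three dimensions might surface in practice, the sketch below records each one as dataset metadata using the Dublin Core and DCAT vocabularies (both real standards); the dataset URI and its values are hypothetical, and rdflib is again an assumed tool rather than one named in the text.

```python
# A hypothetical sketch of recording all three interoperability
# dimensions as metadata on a single dataset, via rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DCTERMS = Namespace("http://purl.org/dc/terms/")
DCAT = Namespace("http://www.w3.org/ns/dcat#")
EX = Namespace("http://example.org/")

g = Graph()
dataset = EX.oceanTemperatures  # hypothetical dataset

g.add((dataset, RDF.type, DCAT.Dataset))
# Legal dimension: the rights attached to the data (here, CC0).
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/publicdomain/zero/1.0/")))
# Technical dimension: the standard format it is published in.
# (Indexed access avoids clashing with Python's built-in str.format.)
g.add((dataset, DCTERMS["format"], Literal("text/turtle")))
# Semantic dimension: the shared vocabulary the data conforms to.
g.add((dataset, DCTERMS.conformsTo, EX.oceanographyVocabulary))

print(g.serialize(format="turtle"))
```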
Each of these dimensions is complex on its own. Taken together, the three represent daunting complexity. The semantic layer alone requires an almost miraculous level of agreement on “what things mean,” and anyone who has witnessed argument among scientists, be they economists or physicists, knows that even apparently simple topics turn contentious over matters as basic as definitions. Consensus on the technical layer is somewhat easier - the Web and the Semantic Web “stack” of standard technologies have begun to take a leadership position in data networking - but it is still slow, difficult, and open to argument. One of the few opportunities we have lies in the legal layer, where we can look to a broad set of successes in legal interoperability achieved through a simple, flat standard: the public domain.
The public domain is a very simple concept: no rights are reserved to owners, and all rights are granted to users. It exists as a counterweight to copyright in the creative space, but in some countries - especially the United States - it is also the default status for data that is not considered “creative.”
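One way to see why such a flat standard matters is the sketch below: a hypothetical merge step needs only a set lookup to confirm legal interoperability when every input carries a public domain mark. The CC0 and Public Domain Mark URLs are real; everything else is invented for illustration.

```python
# Hypothetical sketch: gating a data merge on a flat legal standard.
# When every input dataset carries a public domain mark, the legal
# check collapses to a set lookup; with N bespoke licenses, every
# pairwise combination would need separate legal review.
PUBLIC_DOMAIN_MARKS = {
    "https://creativecommons.org/publicdomain/zero/1.0/",
    "https://creativecommons.org/publicdomain/mark/1.0/",
}

def can_merge(dataset_license_uris):
    """True only if every input dataset is marked public domain."""
    return all(uri in PUBLIC_DOMAIN_MARKS for uri in dataset_license_uris)

print(can_merge([
    "https://creativecommons.org/publicdomain/zero/1.0/",
    "https://creativecommons.org/publicdomain/mark/1.0/",
]))  # True: the inputs are legally interoperable by construction
```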
The public domain option currently underpins a wide variety of linked data that is already well on its way to achieving Web scale. From the International Virtual Observatory, whose members build an international data net on norms of “acknowledgment” rather than contracts of “attribution”, to the world of genomics, where entire genomes and related data are harmonized nightly across multiple countries, the public domain creates complete interoperability at the legal layer of the data network, and serves as a foundation for the next layer of technical interoperability.
Interestingly, we have yet to observe similar network effects emerging in cases where the underlying data is treated in a more conservative “intellectual property” context, using copyright licenses or database licenses inspired by copyright. Indeed, in the case of the international consortium mapping human genomic variation, a “click-through” license was found in practice to impede integration of that mapped variation with other public domain data, limiting the value of the map. The license was removed, the public domain option put in place, and the database was immediately integrated, technically, with the rest of the international web of gene data.
We have seen the public domain option work, again and again, across the scientific disciplines. Implementing the public domain as the interoperability standard for the legal dimension of the Web of data holds the greatest promise for scalability and for the long-term achievement of the network effect for data, as it permits the widest range of experimentation and development at the technical and semantic layers.