… As we move from the realm of “pure” software – that is, programs running on general-purpose computers producing essentially digital output (even if that output is converted into analogue formats like sounds, images or printouts) – to that of “applied” software, a new element appears: the device itself.
For example, in the case of pacemakers, having the software that drives the computational side of things is only part of the story: just as important is knowing what the software does in the real world, and that depends critically on the design of the hardware. Knowing that a particular subroutine controls a particular aspect of the pacemaker tells us little unless we also know how that subroutine’s output is implemented in the device.
This means that we need not only the source code for the programs that run the devices, but also details about the hardware – its design, its mechanical properties and so on. That takes us into the area of open hardware, and here things start to get tricky …
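To make the point concrete, here is a small hypothetical sketch (every register name, voltage and gain below is invented for illustration, not taken from any real pacemaker): the same piece of source code produces physically different behaviour depending on hardware parameters that only the device specification can supply.

```python
# Hypothetical illustration: the same software output means different
# things in the physical world depending on the hardware design.
# All names and values here are invented for the example.

def set_pulse(dac_value: int) -> int:
    """Pretend to write an 8-bit value to a pacing DAC register."""
    if not 0 <= dac_value <= 255:
        raise ValueError("value must fit in an 8-bit register")
    return dac_value

def amplitude_volts(dac_value: int, v_ref: float, gain: float) -> float:
    """Translate the register value into a physical output amplitude,
    using the reference voltage and amplifier gain from the hardware spec."""
    return (dac_value / 255) * v_ref * gain

# Identical source code, two hypothetical board designs:
v = set_pulse(128)
print(amplitude_volts(v, v_ref=2.5, gain=2.0))  # board A
print(amplitude_volts(v, v_ref=3.3, gain=1.0))  # board B: a different pulse
```

Reading `set_pulse` alone tells us nothing about the patient-facing effect; only with the hardware’s `v_ref` and `gain` – information that lives in the specification, not the code – can the output be interpreted.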
The problem with hardware specifications is that they are only really useful to those with the facilities to implement them – that is, hardware manufacturers. In fact, those best placed to explore the hardware are the original designers and engineers with their prototyping machines. So what is needed is some way for others to get involved in that design process right at the start, not after everything has been decided. Of course, there are technical areas that few have the competence to comment upon – but some do: there are bound to be designers and engineers outside the company who are able to make useful comments. And even non-technical people can comment on other aspects – for example the appearance of devices, or assumptions about how they will be used.
Companies already gather that kind of information through market research, but there’s a key difference here. Instead of the company paying a specialist market research organisation to go out and ask people what they think about a possible new product, this would entail opening up the entire design process to let anyone comment. Where the former depends on finding enough people who may or may not have interesting things to say, the latter is self-selecting: those who have opinions are given a way of expressing them.
This is not a new idea. It was formally dubbed “open innovation” by Henry Chesbrough a decade ago, notably in his book of the same name. It’s based on the simple but powerful idea that there are always more people outside a company than inside it who know about any given subject – it’s never possible to hire all of the world’s experts. And so it makes sense to open up the development process to tap into that pool of expertise that would otherwise be missed …
The project aims to give citizens a way to participate in the conversation about air quality. It consists of a sensing device that measures air quality in the immediate environment and an online community that shares this information in real time.
It is a community-developed, open source project that is driven by people who care about the air they breathe.
The real-time discussion will be happening in this open Google Group:
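The sense-and-share pattern the project describes can be sketched in a few lines. This is a minimal illustration only: the sensor values, field names and device id below are invented, and a real device would read actual hardware and post each reading to the community’s feed.

```python
import json
import time

def read_sensor() -> dict:
    """Stand-in for sampling a real air-quality sensor (e.g. NO2, CO).
    A real device would query its hardware here; these values are invented."""
    return {"no2_ppb": 21.4, "co_ppm": 0.6}

def make_reading(sensor_id: str) -> str:
    """Package one timestamped sample as JSON, ready to share with the
    online community in real time."""
    sample = read_sensor()
    sample["sensor"] = sensor_id
    sample["ts"] = int(time.time())
    return json.dumps(sample)

print(make_reading("device-001"))
```

The point of the design is that the device itself stays simple: it only samples and publishes, while aggregation and discussion happen in the open community around the shared data.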
… The Internet, on the other hand, was designed and deployed by small groups of researchers following the credo of one of its chief architects, David Clark: “rough consensus and running code.” Its early standards — uncomplicated, consensual — were stewarded by small organizations that resisted permission or authority. And they won: The Internet Protocol on which every connected device relies was a triumph of distributed innovation over centralized expertise.
The ethos of the Internet is that everyone should have the freedom to connect, to innovate, to program, without asking permission. No one can know the whole of the network, and by design it cannot be centrally controlled. This network was intended to be decentralized, its assets widely distributed. Today most innovation springs from small groups at its “edges.”
This technical strategy has led to the creation of a gigantic network of far-flung innovators who develop standards with one another and share the products of their work in the form of free and open-source software. The architecture of the Internet and its abundance of free software and components has driven down the cost of manufacturing, distribution and collaboration — of innovation. It used to cost millions of dollars to start a software company. Today, for little or no money, entrepreneurs are able to develop and release a “minimum viable product” and test it with real users on the Internet before they have to raise any money from investors.
I don’t think education is about centralized instruction anymore; rather, it is the process of establishing oneself as a node in a broad network of distributed creativity.
“A taxonomy for measuring the success of open source software projects”, by Amir Hossein Ghapanchi, Aybuke Aurum, and Graham Low. First Monday, Volume 16, Number 8, 1 August 2011.
Open source software (OSS) has been widely adopted by organizations as well as individual users and has changed the way software is developed, deployed and perceived. Research into OSS success is critical since it provides project leaders with insights into how to manage an OSS project in order to succeed. However, there is no universally agreed definition of “success” and researchers employ different dimensions (e.g., project activity and project performance) to refer to OSS success. By conducting a rigorous literature survey, this paper seeks to take a holistic view to explore various areas of OSS success that have been studied in prior research. Finally it provides a measurement taxonomy including six success areas for OSS projects. Implications for theory and practice are presented.
This definition has been prepared by the “Peer-to-peer Urbanism Task Force” consisting of Antonio Caperna, Michael Mehaffy, Geeta Mehta, Agatino Rizzo, Nikos A. Salingaros, Stefano Serafini, and Emanuele Strano (September 2010).
References for further info are here.
“P2P (PEER-TO-PEER) URBANISM is an innovative way of conceiving, constructing, and repairing the city that rests upon five basic principles.
1) P2P Urbanism defends the fundamental human right to choose the built environment in which to live, selecting from amongst diverse features those that best meet our needs.
2) All citizens must have access to information concerning their environment so that they can engage in the decision-making process. This can be actively supported by ICT (Information and Communication Technology).
3) The users themselves should participate on all levels in co-designing and in some cases building their city. They should be stakeholders in any changes that are being contemplated in their environment by governments or developers.
4) P2P Urbanism relies on available open-source knowledge, theories, technologies, and implemented practices for human-scale urban fabric that are free for anyone to use and review.
5) Residents have the right to implement evolutionary repositories of knowledge, skills, and practices, which give them increasingly sophisticated and well-adapted urban tools.”
CrisisCommons is a global network of volunteers who use creative problem solving and open technologies to help people and communities in times and places of crisis. We seek not only coders, programmers, geospatial and visualization ninjas but collaborative, smart and savvy folks who can lead teams, manage projects, search the internet, translate languages, apply intuitive and universal access interfaces. We embrace innovation and open systems. We believe an idea can change the world. As they say, it takes a village. Won’t you join our tribe?
Mark Gorton (founder of LimeWire) started TOPP in 1999. His goal was to promote alternatives to automobile dependency. While maintaining this focus, TOPP has become a kind of incubator for projects that support open participation in urban development. Their approach is rooted in the idea of open source, most commonly associated with free computer programs that can be shared, adapted, and further developed by anyone with the ability to contribute. While TOPP has much expertise in programming, they’ve also applied the open source model to urban planning and governance. With projects ranging from Portland’s TriMet transit system map to the closing of Times Square to traffic, TOPP has been using technology for public work in many creative ways…
Harvard Business School Finance Working Paper No. 10-038
In this paper we assess the economic viability of innovation by producers relative to two increasingly important alternative models: innovations by single user individuals or firms, and open collaborative innovation projects. We analyze the design costs and architectures and communication costs associated with each model. We conclude that innovation by individual users and also open collaborative innovation increasingly compete with – and may displace – producer innovation in many parts of the economy. We argue that a transition from producer innovation to open single user and open collaborative innovation is desirable in terms of social welfare, and so worthy of support by policymakers.
On the Creative Commons blog:
Their first policy recommendation should come as no surprise:
The roots of this apparent bias in favor of closed, producer-centered innovation are certainly understandable – the ascendant models of innovation we have discussed in this paper were less prevalent before the radical decline in design and communication costs brought about by computers and the Internet. But once the welfare-enhancing benefits of open single user innovation and open collaborative innovation are understood, policymakers can – and we think should – take steps to offset any existing biases. Examples of useful steps are easy to find.
First, as was mentioned earlier, intellectual property rights grants can be used as the basis for licenses that help keep innovation open as well as keep it closed (O’Mahony 2003). Policymakers can add support of “open licensing” infrastructures such as the Creative Commons license for writings, and the General Public License for open source software code, to the tasks of existing intellectual property offices. More generally, they should seek out and eliminate points of conflict between present intellectual property policies designed to support closed innovation, but that at the same time inadvertently interfere with open innovation.
Here’s a thought experiment: try to imagine what it would have been like to create Google before the era of the Internet and open standards. You would probably have had to pay millions of dollars to create the necessary software on a proprietary operating system. The effort would have required a huge team of people taking many years. Since Google is a search engine, it most likely would have been given to the phone company to design and run. If you were using X.25, the international networking standard (the Internet equivalent of its time), you would have been charged for each packet of information that you sent or received, in a network in which each network operator had a bilateral agreement with every other network operator. This total project probably would have taken a decade, cost a billion dollars, and not have worked very well.
In fact, the actual cost of building and launching the first Google server was probably only thousands of dollars using standard PC components, mostly open-source software as the base, and connecting to the Stanford University network, which immediately made the service available, at no additional cost, to everyone else on the Internet.
Science, Open source, Free software, Intellectual property, Open access, DIY, Biology