I also never found out what happened to the creative commons thread on here, which was one of the most viewed and replied-to threads on cross-x.com, but I found the original shell in the dusty corners of the internet cache and thought I'd repost that post here. Ossdebate.org is also no longer active.

 

http://cedadebate.org/pipermail/mailman/2005-December/061022.html

 

____________________________________________________________

gordon mitchell writes (in his book 'strategic deception', xvi-xvii):

Perhaps the most strange and idiosyncratic aspect of the contemporary intercollegiate debate community is that, by and large, it keeps to itself. Contrary to the populist tradition of debate as the quintessential genre of public discourse, contemporary intercollegiate debate is an insular and specialized academic activity. The research products generated by thousands of debaters nation-wide are generally put toward a singular end: winning tournament competitions. Sometimes this insularity appears absurd to those who stumble across a slice of the debate community for the first time. In the summer of 1990, Madison Laird (then captain of the Loyola University debate squad) was assigned the task of entertaining Earth Day organizer Bill Keepin during Keepin's visit to the Loyola campus in Los Angeles, California. After Keepin delivered a speech on nuclear power to the student body, Laird led him on a campus tour that ended up in the debate squad room, where yards and yards of argument briefs were stowed away in filing drawers. When Keepin asked to see the files containing research on nuclear power, Laird pulled open one file drawer stuffed to the gills with high-quality research. Keepin was stunned, asking incredulously "how long have you folks kept this stuff locked up?!" In a small way, this vignette illustrates the folly associated with the intercollegiate debate community's insular nature. Indeed, it would not be surprising to find countless other Bill Keepins out there who could make tremendous use of the research and knowledge generated out of intercollegiate policy debate competition.
To reach them, debaters need only to realize that they can make vital contributions to public arguments swirling beyond the rarefied confines of debate tournament sites.

in the spirit of the above, i'd ask those concerned to consider the following in response to stefan bauschard's reservations regarding 'open-sourcing' debate (which he elaborated in this post, among others: http://www.ndtceda.com/archives/200512/0080.html):

at the n.d.t., many teams chose to post their first constructive speeches on an accessible website -- that's the internet disclosure which stefan has worked hard to achieve. yet some debaters chose not to do so, although they likely read the blocks of their opponents prior to the round. (i even recall stefan and others stopping just short of calling such free-riders 'cheaters'.) this raises the question: how does this community intend to enforce this norm? i'd suggest that the short-term answer is not top-down punishment from tourney directors, but debaters themselves taking ballots away from free-riders, fair and square.

everyone knows there are dominant players who benefit immensely from the status quo: teams which can afford to hire extra staff, students who can afford to go to pricey institutes, companies which can afford to sue you if you share their evidence. despite the lip service paid to the educational mission of debate, until this competitive incentive changes, nothing will magically 'level the playing field'. so how do participants alter competitive incentive? again, by winning ballots. blatantly non-topical cases, for example, are liabilities. if/when the 'open source / creative commons' position wins more ballots, it will more likely compel debaters to put their briefs online. quite simply, the 'solvency mechanism' - at least for the immediate future - is winning the position itself.

with this in mind i offer the following first draft of a shell for your consideration.
print it out, take some notes on it, prod at the points you find naive and weak, re-write the tags, add something to the ongoing discussion (http://www.debatecooperative.net/forums/showthread.php?t=189, http://cross-x.com/vb/showthread.php?t=39492).

why waste your time, you ask? well... who wants to get their asses handed to them by losing to something so 'silly'?

.kevin.sanchez at gmail.com

http://ossdebate.org/index.php?title=Creative_Commons

CONTENTION ONE: DIGITAL PEER REVIEW ENHANCES THE QUALITY OF BRIEFS AND FILES, FOSTERING MORE EFFICIENT NETWORKS FOR ARGUMENT DEVELOPMENT.

In the status quo, the typical debate position is seen only by the team running it, their coaching staff, and the judges and opponents who happen to hear it in-round - plus, any modification is usually done in secret. Harvard's OpenLaw project, on the other hand, offers specific, empirical proof that an online process can help debaters craft higher quality arguments and disseminate their work to concerned citizens elsewhere. Open source, in addition to making better software, takes a political stand against proprietary control over knowledge by openly inviting others to find and fix bugs as well as advancing public licenses which prevent subsequent cooption. This round is now a test of the value of this experiment.

New Scientist. July 2002. (Graham Lawton, Features Editor. 'The Great Giveaway'. Available here: http://fossforum.tacticaltech.org/node/114.)

What started as a technical debate over the best way to debug computer programs is developing into a political battle over the ownership of knowledge and how it is used, between those who put their faith in the free circulation of ideas and those who prefer to designate them "intellectual property". No one knows what the outcome will be.
But in a world of growing opposition to corporate power, restrictive intellectual property rights and globalisation, open source is emerging as a possible alternative, a potentially potent means of fighting back. And you're helping to test its value right now.

The open source movement originated in 1984 when computer scientist Richard Stallman quit his job at MIT and set up the Free Software Foundation. His aim was to create high-quality software that was freely available to everybody. Stallman's beef was with commercial companies that smother their software with patents and copyrights and keep the source code--the original program, written in a computer language such as C++--a closely guarded secret. Stallman saw this as damaging. It generated poor-quality, bug-ridden software. And worse, it choked off the free flow of ideas. Stallman fretted that if computer scientists could no longer learn from one another's code, the art of programming would stagnate (New Scientist, 12 December 1998, p 42).

Stallman's move resonated round the computer science community and now there are thousands of similar projects. The star of the movement is Linux, an operating system created by Finnish student Linus Torvalds in the early 1990s and installed on around 18 million computers worldwide.

What sets open source software apart from commercial software is the fact that it's free, in both the political and the economic sense. If you want to use a commercial product such as Windows XP or Mac OS X you have to pay a fee and agree to abide by a licence that stops you from modifying or sharing the software. But if you want to run Linux or another open source package, you can do so without paying a penny--although several companies will sell you the software bundled with support services. You can also modify the software in any way you choose, copy it and share it without restrictions. This freedom acts as an open invitation--some say challenge--to its users to make improvements.
As a result, thousands of volunteers are constantly working on Linux, adding new features and wrinkling out bugs. Their contributions are reviewed by a panel and the best ones are added to Linux. For programmers, the kudos of a successful contribution is its own reward. The result is a stable, powerful system that adapts rapidly to technological change. Linux is so successful that even IBM installs it on the computers it sells.

To maintain this benign state of affairs, open source software is covered by a special legal instrument called the General Public License. Instead of restricting how the software can be used, as a standard software license does, the GPL--often known as a "copyleft"--grants as much freedom as possible (see http://www.fsf.org/licenses/gpl.html). Software released under the GPL (or a similar copyleft licence) can be copied, modified and distributed by anyone, as long as they, too, release it under a copyleft. That restriction is crucial, because it prevents the material from being co-opted into later proprietary products. It also makes open source software different from programs that are merely distributed free of charge. In FSF's words, the GPL "makes it free and guarantees it remains free".

Open source has proved a very successful way of writing software. But it has also come to embody a political stand--one that values freedom of expression, mistrusts corporate power, and is uncomfortable with private ownership of knowledge. It's "a broadly libertarian view of the proper relationship between individuals and institutions", according to open source guru Eric Raymond.

But it's not just software companies that lock knowledge away and release it only to those prepared to pay. Every time you buy a CD, a book, a copy of New Scientist, even a can of Coca-Cola, you're forking out for access to someone else's intellectual property.
Your money buys you the right to listen to, read or consume the contents, but not to rework them, or make copies and redistribute them. No surprise, then, that people within the open source movement have asked whether their methods would work on other products. As yet no one's sure--but plenty of people are trying it. {...}

Another experiment that's proved its worth is the OpenLaw project at the Berkman Center for Internet and Society at Harvard Law School. Berkman lawyers specialise in cyberlaw--hacking, copyright, encryption and so on--and the centre has strong ties with the EFF and the open source software community. In 1998 faculty member Lawrence Lessig, now at Stanford Law School, was asked by online publisher Eldritch Press to mount a legal challenge to US copyright law. Eldritch takes books whose copyright has expired and publishes them on the Web, but new legislation to extend copyright from 50 to 70 years after the author's death was cutting off its supply of new material. Lessig invited law students at Harvard and elsewhere to help craft legal arguments challenging the new law on an online forum, which evolved into OpenLaw.

Normal law firms write arguments the way commercial software companies write code. Lawyers discuss a case behind closed doors, and although their final product is released in court, the discussions or "source code" that produced it remain secret. In contrast, OpenLaw crafts its arguments in public and releases them under a copyleft. "We deliberately used free software as a model," says Wendy Seltzer, who took over OpenLaw when Lessig moved to Stanford. Around 50 legal scholars now work on Eldritch's case, and OpenLaw has taken other cases, too.

"The gains are much the same as for software," Seltzer says. "Hundreds of people scrutinise the 'code' for bugs, and make suggestions how to fix it. And people will take underdeveloped parts of the argument, work on them, then patch them in."
Armed with arguments crafted in this way, OpenLaw has taken Eldritch's case--deemed unwinnable at the outset--right through the system and is now seeking a hearing in the Supreme Court.

There are drawbacks, though. The arguments are in the public domain right from the start, so OpenLaw can't spring a surprise in court. For the same reason, it can't take on cases where confidentiality is important. But where there's a strong public interest element, open sourcing has big advantages. Citizens' rights groups, for example, have taken parts of OpenLaw's legal arguments and used them elsewhere. "People use them on letters to Congress, or put them on flyers," Seltzer says.

In contrast to a tightfisted approach, the open source developmental model promises rapid improvement. The OpenLaw project proves this model can work for debate - producing better briefs, sharing a mountain of information, and bolstering the depth and breadth of argument. As this position starts winning ballots, it'll snowball to widespread adoption until participants become intrinsically motivated to contribute solid work.

Linus Torvalds. Creator of Linux. & David Diamond. Freelance contributor to the New York Times and Business Week. November/December 2001. ('Why Open Source Makes Sense'. Educause Review. p71-2.)

In its purest form, the open source model allows anyone to participate in a project's development or commercial exploitation. Linux is obviously the most successful example. What started out in my messy Helsinki bedroom has grown to become the largest collaborative project in the history of the world. It began as an ideology shared by software developers who believed that computer source code should be shared freely, with the General Public License--the anticopyright--as the movement's powerful tool. It evolved to become a method for the continuous development of the best technology.
And it evolved further to gain widespread market acceptance, as seen in the snowballing adoption of Linux as an operating system for Web servers, and in its unexpectedly generous IPOs. What was inspired by ideology has proved itself as technology and is working in the marketplace.

Now open source is expanding beyond the technical and business domains. At Harvard University Law School, professors Larry Lessig (who is now at Stanford) and Charles Nesson have brought the open source model to law. They started the Open Law Project, which relies on volunteer lawyers and law students posting opinions and research to the project's Web site to help develop arguments and briefs challenging the United States Copyright Extension Act. The theory is that the strongest arguments will be developed when the largest number of legal minds are working on a project, and as a mountain of information is generated through postings and repostings. The site nicely sums up the tradeoff from the traditional approach: "What we lose in secrecy, we expect to regain in depth of sources and breadth of argument." (Put in another context: With a million eyes, all software bugs will vanish.)

It's a wrinkle on how academic research has been conducted for years, but one that makes sense on a number of fronts. Think of how this approach could speed up the development of cures for disease, for example. Or how, with the best minds on the task, international diplomacy could be strengthened. As the world becomes smaller, as the pace of life and business intensifies, and as the technology and information become available, people realize the tightfisted approach is becoming increasingly outmoded.

The theory behind open source is simple. In the case of an operating system, the source code--the programming instructions underlying the system--is free. Anyone can improve it, change it, and exploit it. But those improvements, changes, and exploitations have to be made freely available. Think Zen.
The project belongs to no one and to everyone. When a project is opened up, there is rapid and continual improvement. With teams of contributors working in parallel, the results can happen far more speedily and successfully than if the work were being conducted behind closed doors. That's what we experienced with Linux. Imagine: Instead of a tiny cloistered development team working in secret, you have a monster on your side. Potentially millions of the brightest minds are contributing to a project, and are supported by a peer-review process that has no, er, peer.

The first time people hear about the open source approach, it sounds ludicrous. That's why it has taken years for the message of its virtues to sink in. Ideology isn't what has sold the open source model. It started gaining attention when it was obvious that open source was the best method of developing and improving the highest quality technology. And now it is winning in the marketplace, an accomplishment that has brought open source its greatest acceptance. Companies were able to be created around numerous value-added services, or to use open source as a way of making a technology popular. When the money rolls in, people get convinced.

One of the least understood pieces of the open source puzzle is how so many good programmers would deign to work for absolutely no money. A word about motivation is in order. In a society where survival is more or less assured, money is not the greatest of motivators. It's been well established that folks do their best work when they are driven by a passion. When they are having fun. This is as true for playwrights and sculptors and entrepreneurs as it is for software engineers. The open source model gives people the opportunity to live their passion. To have fun and to work with the world's best programmers, not the few who happen to be employed by their company. Open source developers strive to earn the esteem of their peers.
That's got to be highly motivating.

Academic citations conclusively demonstrate that publishing online increases readership - debate should join the numerous disciplines that've switched to open access.

Eric von Hippel. Head of the Innovation and Entrepreneurship Group in the Sloan School of Management at the Massachusetts Institute of Technology. 2005. (Democratizing Innovation. p88-9. Available online here: http://web.mit.edu/evhippel/www/books.htm.)

In the case of academic publications, we see evidence that free revealing does increase reuse--a matter of great importance to academics. A citation is an indicator that information contained in an article has been reused: the article has been read by the citing author and found useful enough to draw to readers' attention. Recent empirical studies are finding that articles to which readers have open access--articles available for free download from an author's website, for example--are cited significantly more often than are equivalent articles that are available only from libraries or from publishers' fee-based websites. Antelman (2004) finds an increase in citations ranging from 45 percent in philosophy to 91 percent in mathematics. She notes that "scholars in diverse disciplines are adopting open-access practices at a surprisingly high rate and are being rewarded for it, as reflected in [citations]."

Debate scholars, like legal scholars, are prisoners of obsolete data structures. At your camp or squad, as in mine, there are coaches whose primary metric of success is the quantity of evidence cut per week. This leads to poorly-cut files filled with blippy cards, which a year later everyone has forgotten about. To remedy this, network publication creates a way to continuously revise and update your work. It also offers unprecedented opportunities for collaboration. This means fewer tubs full of redundant information and higher quality scholarship.

Eben Moglen. Professor of Law & Legal History at Columbia Law School.
5 January 1995. ('The Virtual Scholar and Network Liberation'. http://emoglen.law.columbia.edu/my_pubs/nospeech.html.)

The organization of information determines what kinds of learning are practicable given limited time and resources. In addition, the prevailing systems of information organization give rise to the social customs that define what kinds of scholarly activity are appropriate and useful. Until the beginning of the digital revolution, "data structures" meant primarily the physical organization of written information. How data were preserved affected what could be learned. For the authors of the book we call Bracton, for example--working in the middle of the 13th century--information about the laws and customs of England was contained--in dilute sequential form--in the mass of the plea rolls, to which they had preferential access. Scholarship, in that context, meant epitomizing the plea rolls, to communicate to others in compressed form how their contents did and did not reflect the more familiar conceptual categories of the Romanized European law.

To a significant extent, our legal scholarship has remained fixed within this model of converting sequentially-stored dilute information into useful epitomes conforming to the intellectual prepossessions of the era. Littleton, Coke, Blackstone and Story--as I labor to make my students understand in my seminar on the intellectual history of the treatise tradition--all attempted to articulate the loose bones of the English law into a skeleton recognizable given the fashions of the time. Though the forms changed significantly with the eras, each of these types of scholarship was aimed at overcoming the same fundamental constraint. In modern jargon, the material of the law is produced and stored sequentially; the primary goal of legal scholarship has been to access that material associatively, by linking temporally displaced segments in topical relations.
The scholar, however awkward it may sound, has been a specialized device for the performance of a sort and merge operation, either using internal memory or sorting externally, using whatever equivalent his generation offered for the three-by-five card.

If the information-theoretic significance of scholarship did not much change between the time of Bracton and our contemporaries, the primary problem in the intellectual organization of the law has been to get the scholar to the raw data to be sorted. In the beginning, as with those of us who must still make annual journeys to the English Public Record Office, the solution was to move the scholar around.

Since the European adoption of movable-type printing at the end of the 15th century, however, the technical infrastructure of scholarship has largely depended on the hope that the distribution of books could replace the peregrinations of scholars. Scholarship became, as much as possible, the consultation of static volumes of printed information, or the rendering of unprinted information suitable for reprocessing by the printing press. The emphasis was still upon making associative links between previously compiled sources of more dilute information.

Along with the process for consultation of sources, scholarship has consisted also of the process for consultation of other scholars. This meant either personal travel or the exchange of written correspondence until the development of technologies for voice transmission at the turn of the twentieth century. As we all know, however, the telephone has been more of a barrier to scholarship than an assistance, and only the development of the answering machine, I think, has prevented the telephone from extirpating scholarship altogether.

So, let us now consider what has happened to the media of scholarly communication. In principle, the infrastructural problems that have beset scholarship for one thousand years can now be eliminated.
Already digital media directly replacing older analog media are coming into existence. Email is replacing the point-to-point media such as snail mail and telephone calls. Broadcast media--including primitive list servers and the more sophisticated structure of Usenet news--are beginning to serve some of the purposes previously served by scholarly pilgrimage, including organizational meetings, collaborative inquiry, exchange of notes and queries, and the like. Unfortunately, the poor design and low quality of commercial software threatens the vitiation of these new media, a point to which I return below.

In addition to new media of personal communication, the network has begun to resolve a few other problems of data organization. The linking of library catalogs has made traditional bibliographic research a trivial task. The fulltext retrieval services, though inadequate in many important respects, have at least rendered the basic sources of most legal scholarship accessible from anywhere in the world where a pair of copper wires is connected to a telephone switching office. Experiments with more extensive digitization of library collections, such as Columbia Law School's Project Janus, may within another generation make possible the global frictionless consultation of the entire existing body of our legal culture. Here the primary impediment is mindless adherence to the antiquated conception of "intellectual property," to whose well-deserved destruction I shall return in a few minutes.

But these new media are not just inadequately implemented in the existing technological and legal context. While they substantially reduce the friction in scholarly communication, avoiding the need to move people to data, they are not designed to solve the other primary problem that has beset the scholarship of the past.
Even given email, netnews, automated catalogs and the virtual library--and assuming away the ridiculous limitations on use posed by rules protecting the non-productive middlemen called publishers--the scholar is still engaged in using carbon-based intelligence to make static links among existing sources, thus predetermining the obsolescence of her enterprise in the face of future developments. We are still the prisoners of outmoded data structures, and it is time to reconsider the essence of what we do. As Albert Einstein said, our experience in the 20th century is that everything has changed except the nature of men's minds. Fortunately, what we do has become so altogether foolish that it should not be difficult to change. {...}

But can the network provide an appropriate alternative for the resuscitation of legal scholarship? The answer is most certainly yes. The first step is the elimination of publication as presently understood. Placing on the network a version of my work in a portable page-description language (such as PostScript) allows anyone caring to read my scholarship, whether online or in a permanent form, to receive it with no loss in production values over the present system of physical reproduction. The digital broadcast media like netnews and the listservs, or their more usable successors, will then replace the existing finding aids.

But we will do more by network publication than saving the costs of law review publication and reversing the noxious effect of the middlemen on the culture of scholarship. Network publication will for the first time directly confront the static quality of all prior scholarly data structures. Placing my work in the net means that I can continuously revise and expand it.
In addition, our cumbersome citation mechanisms can be replaced by direct active links to other materials on the net, so that the footnote--which is surely the bane of legal scholarship--can be replaced by proliferated cross-linkages of the kind primitively modeled in the current world by the citation links of the commercial fulltext systems, and slightly more sophisticatedly by the existing webform hypertext formats, such as the World Wide Web. Such links can be created by machine control as well as human intervention, so that case citations, legislative updates, and other purely mechanical incorporations can occur without my having to do more than make occasional editorial foray to prune back the accretion of new links.

The webform systems also model for us, in a fairly simple way, the unprecedented opportunities for collaborative work that the network has created. Within the next generation we shall see the successors of the webform hypertext systems facilitating collaborative projects in the humanities on a scale previously only dreamed of. The conception of the History Workshop or the Sixieme Section will be revitalized, for example, along with kindred conceptions in many disciplines.

For the common lawyers, too, limitations in place for centuries will suddenly give way. The low quality of the common law's encyclopedic sources {such as debate handbooks!!!}, largely the consolidated output of headnote writers working like Grub Street hacks for the booksellers, should be replaced by a far richer literature, achieving the breadth of scale of the Romanist tradition without its limited conceptual categories. Primary sources, commentary, counter-commentary and scholarly debate should all be joined in a single dynamic web, collaboratively edited. Our contributions to this web will be much less bulky than our existing screeds, reflecting the higher priority given to the making of links over the self-assertive announcement of one's own brilliant conceptualizations.
But the result will be finally to concentrate the activity of scholars where the need has always been: on the human mind's unparalleled capacity to connect apparently disparate materials. This is what carbon-based intelligence is for; the rest, may I say, is silicon.

Debate continues to operate on an industrial model of closed workshops with separate assembly lines. Internet technology has paved the way for a new broadly applicable informational model of peer-to-peer networks dedicated to shared resources and reciprocal collaboration. This serves as a revolutionary alternative - an ant colony of creative debaters working better together.

Wired. November 2003. (Thomas Goetz, Editor. 'Open Source Everywhere'. Available here: http://www.wired.com/wired/archive/1...source_pr.html.)

Cholera is one of those 19th-century ills that, like consumption or gout, at first seems almost quaint, a malady from an age when people suffered from maladies. But in the developing world, the disease is still widespread and can be gruesomely lethal. When cholera strikes an unprepared community, people get violently sick immediately. On day two, severe dehydration sets in. By day seven, half of a village might be dead.

Since cholera kills by driving fluids from the body, the treatment is to pump liquid back in, as fast as possible. The one proven technology, an intravenous saline drip, has a few drawbacks. An easy-to-use, computer-regulated IV can cost $2,000 - far too expensive to deploy against a large outbreak. Other systems cost as little as 35 cents, but they're too complicated for unskilled caregivers. The result: People die unnecessarily.

"It's a health problem, but it's also a design problem," says Timothy Prestero, a onetime Peace Corps volunteer who cofounded a group called Design That Matters.
Leading a team of MIT engineering students, Prestero, who has master's degrees in mechanical and oceanographic engineering, focused on the drip chamber and pinch valve controlling the saline flow rate. But the team needed more medical expertise. So Prestero turned to ThinkCycle, a Web-based industrial-design project that brings together engineers, designers, academics, and professionals from a variety of disciplines. Soon, some physicians and engineers were pitching in - vetting designs and recommending new paths. Within a few months, Prestero's team had turned the suggestions into an ingenious solution. Taking inspiration from a tool called a rotameter used in chemical engineering, the group crafted a new IV system that's intuitive to use, even for untrained workers. Remarkably, it costs about $1.25 to manufacture, making it ideal for mass deployment. Prestero is now in talks with a medical devices company; the new IV could be in the field a year from now.

ThinkCycle's collaborative approach is modeled on a method that for more than a decade has been closely associated with software development: open source. It's called that because the collaboration is open to all and the source code is freely shared. Open source harnesses the distributive powers of the Internet, parcels the work out to thousands, and uses their piecework to build a better whole - putting informal networks of volunteer coders in direct competition with big corporations. It works like an ant colony, where the collective intelligence of the network supersedes any single contributor.

Open source, of course, is the magic behind Linux, the operating system that is transforming the software industry. Linux commands a growing share of the server market worldwide and even has Microsoft CEO Steve Ballmer warning of its "competitive challenge for us and for our entire industry." And open source software transcends Linux.
Altogether, more than 65,000 collaborative software projects click along at Sourceforge.net, a clearinghouse for the open source community. The success of Linux alone has stunned the business world.

But software is just the beginning. Open source has spread to other disciplines, from the hard sciences to the liberal arts. Biologists have embraced open source methods in genomics and informatics, building massive databases to genetically sequence E. coli, yeast, and other workhorses of lab research. NASA has adopted open source principles as part of its Mars mission, calling on volunteer "clickworkers" to identify millions of craters and help draw a map of the Red Planet. There is open source publishing: With Bruce Perens, who helped define open source software in the '90s, Prentice Hall is publishing a series of computer books open to any use, modification, or redistribution, with readers' improvements considered for succeeding editions. There are library efforts like Project Gutenberg, which has already digitized more than 6,000 books, with hundreds of volunteers typing in, page by page, classics from Shakespeare to Stendhal; at the same time, a related project, Distributed Proofreading, deploys legions of copy editors to make sure the Gutenberg texts are correct. There are open source projects in law and religion. There's even an open source cookbook.

In 2003, the method is proving to be as broadly effective - and, yes, as revolutionary - a means of production as the assembly line was a century ago. ...

If the ideas behind it are so familiar and simple, why has open source only now become such a powerful force? Two reasons: the rise of the Internet and the excesses of intellectual property. The Internet is open source's great enabler, the communications tool that makes massive decentralized projects possible.
Intellectual property, on the other hand, is open source's nemesis: a legal regime that has become so stifling and restrictive that thousands of free-thinking programmers, scientists, designers, engineers, and scholars are desperate to find new ways to create.

We are at a convergent moment, when a philosophy, a strategy, and a technology have aligned to unleash great innovation. Open source is powerful because it's an alternative to the status quo, another way to produce things or solve problems. And in many cases, it's a better way. Better because current methods are not fast enough, not ambitious enough, or don't take advantage of our collective creative potential.

Open source has flourished in software because programming, for all the romance of guerrilla geeks and hacker ethics, is a fairly precise discipline; you're only as good as your code. It's relatively easy to run an open source software project as a meritocracy, a level playing field that encourages participation. But those virtues aren't exclusive to software. Coders, it could be argued, got to open source first only because they were closest to the tool that made it a feasible means of production: the Internet.

The Internet excels at facilitating the exchange of large chunks of information, fast. From distributed computation projects such as SETI@home to file-swapping systems like Grokster and Kazaa, many efforts have exploited the Internet's knack for networking. Open source does those one better: It's not only peer-to-peer sharing - it's P2P production. With open source, you've got the first real industrial model that stems from the technology itself, rather than simply incorporating it.

"There's a reason we love barn raising scenes in movies. They make us feel great. We think, 'Wow! That would be amazing!'" says Yochai Benkler, a law professor at Yale studying the economic impact of open source. "But it doesn't have to be just a romanticized notion of how to live. Now technology allows it.
Technology can unleash tremendous human creativity and tremendous productivity. This is basically barn raising through a decentralized communication network." ...

"Open source can build around the blockages of the industrial producers of the 20th century," says Yale's Benkler. "It can provide a potential source of knowledge materials from which we can build the culture and economy of the 21st century."

If that sounds melodramatic, consider how far things have come in the past decade. Torvalds' hobbyists have become an army. Britannica's woes are Wikipedia's gains. In genetics and biotech, open source promises a sure path to breakthroughs. These early efforts are mere trial runs for what open source might do out in the world at large. The real test, the real potential, lies not in the margins. It lies in making something new, in finding a better way. Open source isn't just about better software. It's about better everything.

_The Internet Engineering Task Force (IETF) proves that open documentation standards are a catalyst for significant progress, while restricting access only adds to flaws and fragmentation. Voluntarily enforced standards help to keep everyone on the same page.

Scott Bradner. Data Network Designer at Harvard University. 1999. ('The Internet Engineering Task Force'. Opensources: Voices from the Open Source Revolution. Pages 47, 51-2.)

_For something that does not exist, the Internet Engineering Task Force (IETF) has had quite an impact. Apart from TCP/IP itself, all of the basic technology of the Internet was developed or has been refined in the IETF. IETF working groups created the routing, management, and transport standards without which the Internet would not exist.
IETF working groups have defined the security standards that will help secure the Internet, the quality of service standards that will make the Internet a more predictable environment, and the standard for the next generation of the Internet protocol itself.

These standards have been phenomenally successful. The Internet is growing faster than any single technology in history, far faster than the railroad, electric light, telephone, or television, and it is only getting started. All of this has been accomplished with voluntary standards. No government requires the use of IETF standards. Competing standards, some mandated by governments around the world, have come and gone, and the IETF standards flourish. But not all IETF standards succeed. It is only the standards that meet specific real-world requirements and do well that become true standards in fact as well as in name.

The IETF and its standards have succeeded for the same sorts of reasons that the Open Source community is taking off. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. In fact, the IETF's open document process is a case study in the potential of the Open Source movement. {...}

Open Standards, Open Documents, and Open Source

It is quite clear that one of the major reasons that the IETF standards have been as successful as they have been is the IETF's open documentation and standards development policies. The IETF is one of the very few major standards organizations that make all of their documents openly available, as well as all of its mailing lists and meetings. In many of the traditional standards organizations, and even in some of the newer Internet-related groups, access to documents and meetings is restricted to members or only available by paying a fee.
Sometimes this is because the organizations raise some of the funds to support themselves through the sale of their standards. In other cases it is because the organization has fee-based memberships, and one of the reasons for becoming a member is to be able to participate in the standards development process and to get access to the standards as they are being developed.

Restricting participation in the standards development process often results in standards that do not do as good a job of meeting the needs of the user or vendor communities as they might, or are more complex than the operator community can reasonably support. Restricting access to work-in-progress documents makes it harder for implementors to understand what the genesis and rationale is for specific features in the standard, and this can lead to flawed implementations. Restricting access to the final standards inhibits the ability of students or developers from small startups to understand, and thus make use of, the standards.

The IETF supported the concept of open sources long before the Open Source movement was formed. Up until recently, it was the normal case that "reference implementations" of IETF technologies were done as part of the multiple implementations requirement for advancement on the standards track. This has never been a formal part of the IETF process, but it was generally a very useful by-product. Unfortunately this has slowed down somewhat in this age of more complex standards and higher economic implications for standards. The practice has never stopped, but it would be very good if the Open Source movement were to reinvigorate this unofficial part of the IETF standards process.

It may not be immediately apparent, but the availability of open standards processes and documentation is vital to the Open Source movement.
Without a clear agreement on what is being worked on, normally articulated in standards documents, it is quite easy for distributed development projects, such as the Open Sources movement, to become fragmented and to flounder. There is an intrinsic partnership between open standards processes, open documentation, and open sources. This partnership produced the Internet and will produce additional wonders in the future._
CONTENTION TWO: FAIRER RESOURCE DISTRIBUTION MITIGATES INEQUALITY, BUILDING AN EDUCATIONAL COMMONS FOR DEMOCRATIC EMPOWERMENT.

In debate today, only insiders can afford expensive evidentiary resources - such as Lexis codes, institute files, and card-cutting assistant coaches. Open Source projects, on the contrary, initiate a communal development style based on providing access to everyone.

Eric von Hippel. Head of the Innovation and Entrepreneurship Group in the Sloan School of Management at the Massachusetts Institute of Technology. & Georg von Krogh. Director of the Institute of Management at the University of St. Gallen. March-April 2003. ('Open Source Software and the "Private-Collective" Innovation Model'. Organization Science. Volume 14; Number 2. Pages 210-1. Also available here: http://opensource.mit.edu/papers/hippelkrogh.pdf.)

_Software can be termed open source independent of how or by whom it has been developed: The term denotes only the type of license under which it is made available. However, the fact that open source software is freely accessible to all has created some typical open source software development practices that differ greatly from commercial software development models - and that look very much like the "hacker culture" behaviors described earlier.

Because commercial software vendors typically wish to sell the code they develop, they sharply restrict access to the source code of their software products to firm employees and contractors. The consequence of this restriction is that only insiders have the information required to modify and improve that proprietary code further (see Meyer and Lopez 1995, also Young et al. 1996, Conner and Prahalad 1996). In sharp contrast, all are offered free access to the source code of open source software. This means that anyone with the proper programming skills and motivations can use and modify any open source software written by anyone.
In early hacker days, this freedom to learn and use and modify software was exercised by informal sharing and codevelopment of code - often by the physical sharing and exchange of computer tapes and disks upon which the code was recorded. In current Internet days, rapid technological advances in computer hardware and software and networking technologies have made it much easier to create and sustain a communal development style at ever-larger scales. Also, implementing new projects is becoming progressively easier as effective project design becomes better understood, and as prepackaged infrastructural support for such projects becomes available on the Web.

_No traditional case can prevent the proprietary control of ideas from widening the digital divide and excluding thousands of idea-rich students from policy debate. Now imagine if open-sourcing debate catches on - the backfiles of big budget schools and handbook companies will be open to the world at no cost, thereby empowering currently marginalized folks to achieve greater possibilities for intellectual growth.

Ganesh Prasad. Software Design Specialist. 29 May 2001. ('Open Source-onomics: Examining some pseudo-economic arguments about Open Source'. FreeOS.com: The Resource Center for Free Operating Systems. Available here: http://www.freeos.com/articles/4087.)

_To play in a market, you need to have money. That automatically excludes all the people who can't pay. It's a shame that in a world of over 6 billion people, about half are just bystanders watching the global marketplace in action. There are brains ticking away in that half-world of market outcasts that could contribute to making the world better in a myriad little ways that we fortunate few don't bother to think about.
There are problems to be solved, living standards to be raised, yes, value to be created, and the "market" isn't doing it fast enough.

God, Government, Market and Community

There are millions who have been waiting for generations for their lot to improve. Religion has promised them a better afterlife, but no God has seen fit to improve their present one. In a world where socialism has been humiliatingly defeated, governments seem ashamed to spend money on development. Everyone now seems to believe that governments must be self-effacingly small. The market is now the politically correct way to solve all problems. But the market, as we have seen, doesn't recognize the existence of those who have nothing to offer as suppliers and nothing to pay as consumers. They are invisible people.

Therefore it falls to the miserable to improve their lot themselves. Given the tools, they can raise themselves out of their situation. They will then enter the market, which will wholeheartedly welcome them (though it hadn't the foresight to help them enter it in the first place).

Where will such tools come from? In a world where intellectual property has such vociferous defenders that people must be forced to pay for software, information technology widens the gap between the haves and the have-nots, a phenomenon known as the digital divide. If producers of software deserve to be paid, then that means hundreds of thousands of people will never have access to that software. That's a fair market, but a lousy community.

Open Source is doing what God, government and market have failed to do. It is putting powerful technology within the reach of cash-poor but idea-rich people. Analysts could quibble about whether that is creating or merely releasing value, but we could do with a bit of either. And yes, that is revolutionary.

Conclusion

Is it possible to make money off Open Source? In the light of all that we have discussed, this now seems a rather petty and inconsequential question to ask.
There is great wealth that will be created through Open Source in the coming months and years, and very little of that will have anything to do with money. A lot of it will have to do with people being empowered to help themselves and raise their living standards. No saint, statesman or scholar has ever done this for them, and certainly no merchant. If this increase in the overall size of the economic pie results in proportionately more wealth for all, then that's the grand answer to our petty question.

Economics is all about human achievement. It wasn't aliens from outer space who raised us from our caves to where we are today. It was the way we organized ourselves to create our wealth, rather like the donkey with a carrot dangling before it that pulls a cart a great distance. Open Source gives means to human aspiration. It breaks the artificial mercantilist limits of yesterday's software market and unleashes potentially limitless growth.

_Treating information as a proprietary finite resource holds back efforts to increase minority participation in debate; it reinforces inequalities between well-off and not-so-well-off skools by creating dependency on costly knowledge-manufacturers. Building an Open Source debate community, on the other hand, amplifies the voices of everyone.

Danny Yee. Board member of Electronic Frontiers Australia. December 1999. ('Development, Ethical Trading and Free Software'. First Monday. Volume 4; Number 12. Available here: http://www.firstmonday.org/issues/issue4_12/yee/.)

_"This is the context for intellectual property rights enforcement. This world market in knowledge is a major and profoundly anti-democratic new stage of capitalist development. The transformation of knowledge into property necessarily implies secrecy: common knowledge is no longer private. In this new and chilling stage, communication itself violates property rights.
The WTO is transforming what was previously a universal resource of the human race - its collectively, historically and freely-developed knowledge of itself and nature - into a private and marketable force of production." - Allan Freeman, Fixing up the world? GATT and the World Trade Organisation

A good deal of the world's primary resources are located in the poorer countries of the world's "South", even if their exploitation is often in the hands of external corporations. Systems for controlling the distribution of information, on the other hand, are (like possession of capital) overwhelmingly centralised in the rich "North". This should be of great concern to organisations {like debate!} such as Oxfam International members which take a long-term perspective in their attempts to reduce the inequitable distribution of resources. {...}

Proprietary software increases the dependence of individuals, organisations, and communities on external forces - typically large corporations with poor track records on acting in the public interest. There are dependencies for support, installation and problem fixing, sometimes in critical systems. There are dependencies for upgrades and compatibility. There are dependencies when modification or extended functionality is required. And there are ongoing financial dependencies if licensing is recurrent.

Political dependencies can result from the use of proprietary software, too. For example, an Irish ISP under attack for hosting the top level East Timor domain .tp was helped by hackers and community activists in setting up a secure Linux installation.
Given that this attack was probably carried out with the connivance of elements of the Indonesian government, it is hard to imagine a commercial vendor with a significant market presence in Indonesia being so forthcoming with support.

Nearly exact parallels to this exist in agriculture, where the patenting of seed varieties and genome sequences and the creation of non-seeding varieties are used to impose long-term dependencies on farmers.

An Analogy: Baby-milk Powder

The effects of baby-milk powder on poor infants (which has sparked a Nestle campaign/boycott) provide an analogy to the effects of proprietary software. Sending information in Microsoft Word format to correspondents in Eritrea is analogous to Nestle advertising baby milk powder to Indian mothers. It encourages the recipients to go down a path which is not in their best interests, and from which it is not easy for them to recover. The apparent benefits (the doctor recommended it; we will be able to read the documents sent to us) may be considerable and the initial costs involved (to stop breast-feeding and switch to milk powder; to start using Microsoft Office) may be subsidised, hidden, or zero (with "piracy"), but the long-term effects are to make the recipients dependent on expensive recurrent inputs, and to burden them with ultimately very high costs.

Moreover, because documents can be easily copied and because there are strong pressures to conform to group/majority standards in document formats, pushing individuals towards proprietary software and document formats can snowball to affect entire communities, not just the individuals initially involved.

Proprietary software not only creates new dependencies: it actively hinders self-help, mutual aid, and community development. Users cannot freely share software with others in the community, or with other communities. The possibilities for building local support and maintenance systems are limited. Modification of software to fit local needs is not
possible, leaving communities with software designed to meet the needs of wealthy Northern users and companies, which may not be appropriate for them.

An Example: Language Support

Language support provides a good example of the advantages of free software in allowing people to adapt products to their own ends and take control of their lives. Operating systems and word processing software support only a limited range of languages. Iceland, in order to help preserve its language, wants Icelandic support added to Microsoft Windows - and is even willing to pay for it. But without access to the source code - and the right to modify it - they are totally dependent on Microsoft's cooperation. See, for example, an article in the Seattle Times and an article by Martin Vermeer which argues that lack of software localisation is a threat to cultural diversity.

Whatever the outcome of this particular case, it must be noted that Iceland is hardly a poor or uninfluential nation. There is absolutely no hope of Windows being modified to support Aymara or Lardil or other indigenous languages. The spread of such proprietary software will continue to contribute to their marginalisation.

In contrast, the source code to the GNU/Linux operating system is available and can be freely modified, so groups are able to add support for their languages. See, as an example, the KDE Internationalization Page - KDE is a desktop for GNU/Linux. Access to source code also allows experiments like the Omega Typesetting System, a modification of the TeX typesetting system "designed for printing all of the world's languages, modern or ancient, common or rare". This sort of extension or modification is simply not possible with proprietary word-processing packages.

Sustainable development should favour unlimited resources over finite ones. But while software appears to be a renewable resource, its control by profit-making corporations, as intellectual property, effectively turns it into a finite resource.
{...}Free software both encourages learning and experimentation and in turn benefits from it. Free software is widespread in educational institutions, since access to the source code makes free software an ideal tool for teaching; indeed much free software began as learning exercises.

Due to low start-up costs and rapid change, software development and the information economy more generally offer a possible way for the South to build high value industries, leapfrogging older technologies and even modes of production. The flourishing Indian software industry provides an obvious example. But if these industries are built on proprietary products and protocols owned by multinational corporations, then this will only reinforce one-sided dependencies. Free software has obvious advantages here.

Free software lends itself to collaborative, community-based development at all scales from cottage industry to world-wide efforts involving the collaboration of thousands of people. Internet access potentially offers the poor the ability to communicate directly with the rest of the world, to directly present their own ideas and perspectives. Combined with the free software development model, it allows them to participate in creating and moulding the technologies and systems that will determine their future.

_Creating an innovative commons serves the core pedagogical mission of debate. Online textbooks prove that scholars from across the globe can collaborate to review and update informational resources, free of charge. This empowers educators and provides fresh learning opportunities for students. Since the more skools participate, the better, use your ballot to encourage active contribution.

Gary Hepburn. Assistant Professor at Acadia University's School of Education. August 2004. ('Seeking an educational commons: The promise of open source development models'. First Monday.
http://www.firstmonday.org/issues/issue9_8/hepburn/.)

_Most of us have at least a passing familiarity with the concept of a commons. According to David Bollier (2003), the term refers to "a wide array of creations of nature and society that we inherit freely, share and hold in trust for future generations." Well-known examples of commons that exist or have existed include grazing land, the Internet, fresh water supplies, and roadways. Lawrence Lessig (2001) pushes the concept of a commons further in his book, The Future of Ideas, as he describes the role of an innovative commons in society: "They create the opportunity for individuals to draw upon resources without connections, permission, or access granted by others. They are environments that commit themselves to being open. Individuals and corporations draw upon the value created by this openness. They transform that value into other value, which they then consume privately." [1]

The fact that society has always used the value of that which we hold in common to build greater value allows us to see an important reason why maintaining common resources is good for all. Even private enterprises benefit from the fact that we hold some resources in common. To appreciate this point, all we need to do is consider the value of roadways to individual and commercial activities. Recognizing the importance of common resources is not anti-private or anti-commercial. Providing some common resources and seeking a reasonable balance between that which is privately owned and that which is held in common benefits society.

Public institutions, such as schools, can be thought of as a type of cultural commons (Bollier, 2001, 2002; Reid, 2003). Societies around the world recognize the importance of providing education for all and have made substantial investments to do so. Thought of as a commons, schools ideally ought to be able to provide the resources needed to support optimal learning experiences for students.
Our societal investment in education is an attempt to enable this, but we often encounter limitations, as providing education is complicated and costly. In reality, schools have trouble living up to the ideal of an educational commons. Clearly, schools do not meet some of the criteria Lessig described above for an innovative commons to exist. There are many cases in which schools are not able "to draw upon resources without connections, permission, or access granted by others" [2].

Assuming we want to establish an educational commons that supports innovation, we need to reconsider some of the conditions under which education is conducted. Exploring the concept of an educational commons can bring about a fresh perspective, revealing current blind spots as well as future strategies that may lead us closer to an educational commons. Recent technological developments and, in particular, the Internet have provided some ways in which we can draw upon common resources to aid us in our educational activities. Before I explore these developments further, I will briefly discuss the principal threat to our ability to realize an educational commons. {...}

There are many other types of open source projects emerging in addition to those aimed at software development (Stalder and Hirsh, 2002) that can benefit schools. Internet-based collaborative technologies are being used to develop online, text-based materials that are intended for educational purposes. Such projects allow subject experts from around the world to work together to produce materials that are freely available to download, modify, print and distribute. Like software projects, these content development projects are noted for their rigorous review process and ability to be quickly updated as the need arises.

Many examples of text-based, content development projects are emerging. There are initiatives underway to develop online textbooks that can be used in subject areas commonly taught in schools.
Wikibooks is a project "dedicated to developing and disseminating free, open content textbooks and other classroom texts." It currently hosts over 50 textbooks in varying stages of development. A similar project that is at earlier stages of development is the Open Textbook Project. It has the goal of developing "openly copyrighted (copylefted) textbooks using the free software development model." In addition to textbooks, an encyclopedia development project has proven very successful. Wikipedia has recently surpassed the Internet traffic received by the online version of Encyclopaedia Britannica.

Schools, in particular, can benefit from these projects as they get the chance to obtain high quality text-based resources, free of cost or usage restrictions. Unlike open source software projects that may prove technically challenging to educators who wish to participate in development, textbook and encyclopedia projects are closely aligned with the expertise of educators. Once educators become aware of these projects as users and contributors, a resource of immense value will be available to schools to be used as they see fit. Open source models can become a revolutionary source of innovation and opportunity for schools.

Returning to the notion of schools as a commons, open source development has the potential to place resources in the hands of educators and students that can be used in ways that best support educational processes. One of the main advantages of using the products of open source development is that schools are able to avoid market enclosure. Commercial products are no longer an obligatory passage point (Callon, 1986; Latour, 1987) in obtaining many resources that are required in education. By eliminating the expense and constraints that accompany commercial products, educators and students gain greater control over the ways in which education is conducted. Open source products can be used by anyone, at anytime, in most any way they choose.
The money that is no longer required for commercial products that have been replaced by open source products can be used to support other areas of need within the school.

Interestingly, an important advantage of schools using open source resources appears to be a reversal of one of the problems that has confronted traditional commons. One of the fundamental problems with most commons is overuse of the resources. Indeed, this concern is the basis of Hardin's (1968) well-known essay, "The Tragedy of the Commons." As more consumers of the resources provided by a particular commons take advantage of it, the resource can become depleted. In order to preserve the resource in a traditional commons, some sort of management strategy needs to be put in place. In contrast to traditional commons, open source projects can actually benefit from increased numbers of users. Software and Web sites are not depleted by those who copy or view the resources. Indeed, users can become co-developers as they provide feedback, suggestions, and improvements (Raymond, 1998). As Raymond (2000) points out, "widespread use of open-source software tends to increase its value ... In this inverse commons, the grass grows taller when it's grazed upon."

As schools begin to use open source products they will move closer to the ideal of a commons, while solving many problems that have confronted them in the past. As more schools move in this direction, the value and quality of the resources are likely to increase rather than be depleted. There are, however, several challenges that must be considered in order to begin taking advantage of open source products in a productive way.

Beginning to use open source products requires educators to revisit some of their basic assumptions about the types of resources we use in schools and from where those resources should come.
I am assuming that few educators would object to the concept of an educational commons, but many may have some anxiety about giving up many of the commercial products with which they have become comfortable. Commercial products are often useful and of high quality, but using them in cases where open source alternatives exist tends to lead to many of the problems I have been discussing in this article. Knowing this, educators need to become familiar with open source resources and explore their appropriateness for teaching and learning. If the resources are found to be appropriate, they should be used in place of commercial resources. In the case of software, for example, I would challenge educators to explain why OpenOffice could not replace the commercial office suites that are currently used on most school computers. Unless there is an excellent reason, the open source software should be used due to its overall suitability, low cost, and better alignment with educational values.

The sort of mindset that would move education toward greater use of open source resources is not currently in place. Most educators are not outraged by corporate intrusion into the educational commons. We have a long history of such intrusions, although they seem to have intensified in recent times. Educators have become resigned to the necessity of some corporate involvement in education. From this perspective, it may appear more extreme to consider making use of open source resources than to continue using commercial ones. The ideal of an educational commons may serve to highlight what is being lost as we hand more control over the educational enterprise to corporate interests. Becoming involved with open source resources offers more than just a way to cut costs: it contributes to returning control of education to educators.
The new mindset that will take education in the direction of leveraging open source development to support a commons will come about partly as a result of educating educators and partly as a direction in educational policy.

A second challenge in implementing open source resources is getting educators to take on roles in open source development processes. To have high-quality resources that meet educational needs, it is important that educators be willing to participate in the development of various products. It is not uncommon for educators to give feedback to producers of commercial products, particularly when opinions are solicited, but they must be more proactive about participating in open source projects. These projects do not typically have the resources to solicit extensive feedback and contributions. Educators must understand the nature of open source development and seek ways to become involved. The development of software and other types of educational resources requires a wide variety of contributions and competencies. Becoming an active contributor to projects will help ensure that a broad array of educationally appropriate resources is produced. The ultimate beneficiaries of such involvement will be students and schools.

The vision of an educational commons characterized by easily available resources that are flexible, affordable, and high quality is an appealing one. Further, reducing corporate intrusion into education at the resource level is desirable. By providing the medium that enables collaborative, open source projects to thrive, the Internet is emerging as a key technological innovation that will allow schools to overcome some significant challenges. Already, resources are available that can be used in schools immediately. Others are under active development and will soon be ready for mainstream use. Perhaps most exciting are those that have not been developed yet.
As educators learn about open source development models and reconsider some long-held assumptions about how educational resources are produced, they can leverage open source processes to take control of meeting educational needs. In addition to producing substitutes for commercial resources, educators are likely to begin producing resources that are new and innovative. Education can quickly move toward the ideal of a commons and, perhaps more importantly, embrace the ideal of fostering a truly innovative commons.

_An information commons fosters democratic empowerment for six reasons: it produces better policies, draws on localized decision-making, values diversity, democratizes resources, creates social trust, and reinvigorates markets.

David Bollier. Cofounder of Public Knowledge. Spring 2004. ('Why We Must Talk about the Information Commons'. Law Library Journal. p275: 36-42. www.aallnet.org/products/2004-17.pdf.)

_As a concept, the commons has much to commend to any democratic assessment of our nation's media and information infrastructure, because it emphasizes values that market discourse largely ignores. Just as economic analyses tend to focus on efficiency, productivity, and profitability (among other economic and market indices), students of the commons tend to focus on a range of social, civic, and humanistic concerns. These include:

Openness and feedback. As scholars of common-pool resources have shown, people living under a successful commons regime tend to know what is going on. When there is open feedback and a sharing of ideas, the community is more likely to discover flaws, debate different options, and choose the best policies. Such transparency lies at the root of science, the democratic process (hence the First Amendment), and free software and open source software development.

Shared decisionmaking. A commons is flexible yet hardy precisely because it draws intelligence from everyone in a bottom-up flow.
This means that rules are smarter because they reflect knowledge about highly specific, local realities. By contrast, centralized power tends to have less democratic accountability and to be less responsive to conditions that are local and particular.

Diversity within the commons. Diversity combined with openness can yield phenomenal creativity and innovation. This is the story of the United States (E pluribus unum), the Internet, the free software movement, and the evolution of species. The greater the diversity in a democratic polity, cyberspace, a programming community, or a gene pool, the more likely it is that better, more adaptive innovations will materialize and prevail.

Social equity within the commons. While a commons need not be a system of strict egalitarianism, it is predisposed to honor a rough social equity and legal equality among its members. A key goal of commons management is to democratize social benefits that could otherwise be obtained only through private purchase. The free market, of course, has little interest in social equity.

Sociability in the commons. In gift economies, such as an online community or a professional discipline, transactions take on a more personal, social dimension. This can be tremendously powerful in creating certain kinds of wealth (e.g., the Linux operating system, genealogical databases) while fostering social connections among people.

Having sketched the contrasting field of vision that a commons analysis provides, it bears emphasizing that the commons is not necessarily hostile to the market. We need both. The point is that there must be an appropriate equilibrium between the two. They must be separated by a semi-permeable barrier that allows both to retain their essential integrity while invigorating each other.

_In debate, there is little space for citizen deliberation, and extremist rhetoric abounds. Yet blogs prove that the web can serve as an efficient, multimedia tool for promoting public discourse.
Not only will writing arguments online qualify as an essential skill for future decision-makers, but debaters today can use the net to distill their own ideas through internet peer review and to link to a broader range of sources, as well as to criticize politicians and raise awareness. Open-source software is the best example of this collaborative approach, helping 21st-century ecologies for education grow. Every new website offers a tiny chance to increase learning and improve democracy, realistically outweighing the most gigantic of impacts that occur only on the flow.

Lawrence Lessig. Law Professor at Stanford Law School. 2004. (Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. p40-5. Also available here: http://libreria.sourceforge.net/libr...CHAPTER02.html.)

_When two planes crashed into the World Trade Center, another into the Pentagon, and a fourth into a Pennsylvania field, all media around the world shifted to this news. Every moment of just about every day for that week, and for weeks after, television in particular, and media generally, retold the story of the events we had just witnessed. The telling was a retelling, because we had seen the events being described. The genius of this awful act of terrorism was that the delayed second attack was perfectly timed to assure that the whole world would be watching.

These retellings had an increasingly familiar feel. There was music scored for the intermissions, and fancy graphics that flashed across the screen. There was a formula to interviews. There was "balance," and seriousness. This was news choreographed in the way we have increasingly come to expect it, "news as entertainment," even if the entertainment is tragedy.

But in addition to this produced news about the "tragedy of September 11," those of us tied to the Internet came to see a very different production as well. The Internet was filled with accounts of the same events.
Yet these Internet accounts had a very different flavor. Some people constructed photo pages that captured images from around the world and presented them as slide shows with text. Some offered open letters. There were sound recordings. There was anger and frustration. There were attempts to provide context. There was, in short, an extraordinary worldwide barn raising, in the sense Mike Godwin uses the term in his book Cyber Rights, around a news event that had captured the attention of the world. There was ABC and CBS, but there was also the Internet.

I don't mean simply to praise the Internet - though I do think the people who supported this form of speech should be praised. I mean instead to point to a significance in this form of speech. For like Kodak, the Internet enables people to capture images. And as in a movie by a student on the "Just Think!" bus, the visual images could be mixed with sound or text.

But unlike any technology for simply capturing images, the Internet allows these creations to be shared with an extraordinary number of people, practically instantaneously. This is something new in our tradition - not just that culture can be captured mechanically, and obviously not just that events are commented upon critically, but that this mix of captured images, sound, and commentary can be widely spread practically instantaneously.

September 11 was not an aberration. It was a beginning. Around the same time, a form of communication that has grown dramatically was just beginning to come into public consciousness: the Web-log, or blog. The blog is a kind of public diary, and within some cultures, such as in Japan, it functions very much like a diary. In those cultures, it records private facts in a public way - it's a kind of electronic Jerry Springer, available anywhere in the world.

But in the United States, blogs have taken on a very different character. There are some who use the space simply to talk about their private life.
But there are many who use the space to engage in public discourse. Discussing matters of public import, criticizing others who are mistaken in their views, criticizing politicians about the decisions they make, offering solutions to problems we all see: blogs create the sense of a virtual public meeting, but one in which we don't all hope to be there at the same time and in which conversations are not necessarily linked. The best of the blog entries are relatively short; they point directly to words used by others, criticizing or adding to them. They are arguably the most important form of unchoreographed public discourse that we have.

That's a strong statement. Yet it says as much about our democracy as it does about blogs. This is the part of America that is most difficult for those of us who love America to accept: our democracy has atrophied. Of course we have elections, and most of the time the courts allow those elections to count. A relatively small number of people vote in those elections. The cycle of these elections has become totally professionalized and routinized. Most of us think this is democracy.

But democracy has never been just about elections. Democracy means rule by the people, but rule means something more than mere elections. In our tradition, it also means control through reasoned discourse. This was the idea that captured the imagination of Alexis de Tocqueville, the nineteenth-century French lawyer who wrote the most important account of early "Democracy in America." It wasn't popular elections that fascinated him - it was the jury, an institution that gave ordinary people the right to choose life or death for other citizens. And most fascinating for him was that the jury didn't just vote about the outcome they would impose. They deliberated.
Members argued about the "right" result; they tried to persuade each other of the "right" result, and in criminal cases at least, they had to agree upon a unanimous result for the process to come to an end. [15]

Yet even this institution flags in American life today. And in its place, there is no systematic effort to enable citizen deliberation. Some are pushing to create just such an institution. [16] And in some towns in New England, something close to deliberation remains. But for most of us for most of the time, there is no time or place for "democratic deliberation" to occur.

More bizarrely, there is generally not even permission for it to occur. We, the most powerful democracy in the world, have developed a strong norm against talking about politics. It's fine to talk about politics with people you agree with. But it is rude to argue about politics with people you disagree with. Political discourse becomes isolated, and isolated discourse becomes more extreme. [17] We say what our friends want to hear, and hear very little beyond what our friends say.

Enter the blog. The blog's very architecture solves one part of this problem. People post when they want to post, and people read when they want to read. The most difficult time is synchronous time. Technologies that enable asynchronous communication, such as e-mail, increase the opportunity for communication. Blogs allow for public discourse without the public ever needing to gather in a single public place.

But beyond architecture, blogs have also solved the problem of norms. There's no norm (yet) in blog space not to talk about politics. Indeed, the space is filled with political speech, on both the right and the left. Some of the most popular sites are conservative or libertarian, but there are many of all political stripes. And even blogs that are not political cover political issues when the occasion merits.

The significance of these blogs is tiny now, though not so tiny.
The name Howard Dean may well have faded from the 2004 presidential race but for blogs. Yet even if the number of readers is small, the reading is having an effect.

One direct effect is on stories that had a different life cycle in the mainstream media. The Trent Lott affair is an example. When Lott "misspoke" at a party for Senator Strom Thurmond, essentially praising Thurmond's segregationist policies, he calculated correctly that this story would disappear from the mainstream press within forty-eight hours. It did. But he didn't calculate its life cycle in blog space. The bloggers kept researching the story. Over time, more and more instances of the same "misspeaking" emerged. Finally, the story broke back into the mainstream press. In the end, Lott was forced to resign as Senate majority leader. [18]

This different cycle is possible because the same commercial pressures don't exist with blogs as with other ventures. Television and newspapers are commercial entities. They must work to keep attention. If they lose readers, they lose revenue. Like sharks, they must move on.

But bloggers don't have a similar constraint. They can obsess, they can focus, they can get serious. If a particular blogger writes a particularly interesting story, more and more people link to that story. And as the number of links to a particular story increases, it rises in the ranks of stories. People read what is popular; what is popular has been selected by a very democratic process of peer-generated rankings.

There's a second way, as well, in which blogs have a cycle different from the mainstream press. As Dave Winer, one of the fathers of this movement and a software author for many decades, told me, another difference is the absence of a financial "conflict of interest." "I think you have to take the conflict of interest" out of journalism, Winer told me.
"An amateur journalist simply doesn't have a conflict of interest, or the conflict of interest is so easily disclosed that you know you can sort of get it out of the way."These conflicts become more important as media becomes more concentrated (more on this below). A concentrated media can hide more from the public than an unconcentrated media can - as CNN admitted it did after the Iraq war because it was afraid of the consequences to its own employees. [19] It also needs to sustain a more coherent account. (In the middle of the Iraq war, I read a post on the Internet from someone who was at that time listening to a satellitle uplink with a reporter in Iraq. The New York headquarters was telling the reporter over and over that her account of the war was too bleak: She needed to offer a more optimistic story. When she told New York that wasn't warranted, they told her that they were writing "the story.")Blog space gives amateurs a way to enter the debate - "amateur" not in the sense of inexperienced, but in the sense of an Olympic athlete, meaning not paid by anyone to give their reports. It allows for a much broader range of input into a story, as reporting on the Columbia disaster revealed, when hundreds from across the southwest United States turned to the Internet to retell what they had seen. [20] And it drives readers to read across the range of accounts and "triangulate," as Winer puts it, the truth. Blogs, Winer says, are "communicating directly with our constituency, and the middle man is out of it" - with all the benefits, and costs, that might entail.Winer is optimistic about the future of journalism infected with blogs. "It's going to become an essential skill," Winer predicts, for public figures and increasingly for private figures as well. It's not clear that "journalism" is happy about this - some journalists have been told to curtail their blogging. [21] But it is clear that we are still in transtition. 
"A lot of what we are doing now is warm-up exercises," Winer told me. There is a lot that must mature before this space has its mature effect. And as the inclusion of content in this space is the least infringing use of the Internet (meaning infringing on copyright), Winer said, we will be the last thing that gets shut down."This speech affects democracy. Winer thinks that happens because "you don't have to work for somebody who controls, [for] a gate-keeper." That is true. But it affects democracy in another way as well. As more and more citizens express what they think, and defend it in writing, that will change the way people understand public issues. It is easy to be wrong and misguided in your head. It is harder when the product of your mind can be criticized by others. Of course, it is a rare human who admits that he has been persuaded that he is wrong. But it is even rarer for a human to ignore when he has been proven wrong. The writing of ideas, arguments, and criticism improves democracy. Today there are probably a couple million blogs where such writing happens. When there are ten million, there will be something extraordinary to report.John Seely Brown is the chief scientist of the Xerox Corporation. His work, as his Web site describes it, is "human learning and ... the creation of knowledge ecologies for creating ... innovation."Brown thus looks at these technologies of digital creativity a bit differently from the perspectives I've sketched so far. I'm sure he would be exicted about any technology that might improve democracy. But his real excitement comes from how these technologies affect learning.As Brown believes, we learn by tinkering. When "a lot of us grew up," he explains, that tinkering was done "on motorcycle engines, lawn-mower engines, automobiles, radios, and so on." But digital technologies enable a different kind of tinkering - with abstract ideas though in a concrete form. The kids of Just Think! 
not only think about how a commercial portrays a politician; using digital technology, they can take the commercial apart and manipulate it, tinker with it to see how it does what it does. Digital technologies launch a kind of bricolage, or "free collage," as Brown calls it. Many get to add to or transform the tinkering of many others.

The best large-scale example of this kind of tinkering so far is free software or open-source software (FS/OSS). FS/OSS is software whose source code is shared. Anyone can download the technology that makes a FS/OSS program run. And anyone eager to learn how a particular bit of FS/OSS technology works can tinker with the code.

This opportunity creates a "completely new kind of learning platform," as Brown describes. "As soon as you start doing that, you ... unleash a free collage on the community, so that other people can start looking at your code, tinkering with it, trying it out, seeing if they can improve it." Each effort is a kind of apprenticeship. "Open source becomes a major apprenticeship platform."

In this process, "the concrete things you tinker with are abstract. They are code." Kids are "shifting to the ability to tinker in the abstract, and this tinkering is no longer an isolated activity that you're doing in your garage. You are tinkering with a community platform. ... You are tinkering with other people's stuff. The more you tinker the more you improve." The more you improve, the more you learn.

This same thing happens with content, too. And it happens in the same collaborative way when that content is part of the Web. As Brown puts it, "the Web [is] the first medium that truly honors multiple forms of intelligence." Earlier technologies, such as the typewriter or word processors, helped amplify text. But the Web amplifies much more than text. "The Web ... says if you are musical, if you are artistic, if you are visual, if you are interested in film ... [then] there is a lot you can start to do on this medium.
[It] can now amplify and honor these multiple forms of intelligence."

Brown is talking about what Elizabeth Daley, Stephanie Barish, and Just Think! teach: that this tinkering with culture teaches as well as creates. It develops talents differently, and it builds a different kind of recognition.

Yet the freedom to tinker with these objects is not guaranteed. Indeed, as we'll see through the course of this book, that freedom is increasingly highly contested. While there's no doubt that your father had the right to tinker with the car engine, there's great doubt that your child will have the right to tinker with the images she finds all around. The law and, increasingly, technology interfere with a freedom that technology, and curiosity, would otherwise ensure.

These restrictions have become the focus of researchers and scholars. Professor Ed Felten of Princeton (whom we'll see more of in chapter 10) has developed a powerful argument in favor of the "right to tinker" as it applies to computer science and to knowledge in general. [22] But Brown's concern is earlier, or younger, or more fundamental. It is about the learning that kids can do, or can't do, because of the law.

"This is where education in the twenty-first century is going," Brown explains. We need to "understand how kids who grow up digital think and want to learn."

Yet, as Brown continued, and as the balance of this book will evince, "we are building a legal system that completely suppresses the natural tendencies of today's digital kids. ... We're building an architecture that unleashes 60 percent of the brain [and] a legal system that closes down that part of the brain."

We're building a technology that takes the magic of Kodak, mixes moving images and sound, and adds a space for commentary and an opportunity to spread that creativity everywhere.
But we're building the law to close down that technology."

"No way to run a culture," as Brewster Kahle, whom we'll meet in chapter 9, quipped to me in a rare moment of despondence.

_CONTENTION THREE: CREATIVE COMMONS PUBLIC LICENSING CONSTITUTES AN ETHICAL IMPERATIVE, ENDORSING A FREER DEBATE CULTURE FOR BOTH COOPERATIVE INNOVATION AND COMPETITIVE EXHILARATION.

Our opponents' briefs are covered under traditional copyright and are not available on the web. The example of Linux software proves that Internet access to everyone's work and protection under public licenses are two indispensable components of distributed networks. Formal incentive systems, such as ballots, must discourage hoarding and persistently remind debaters to share; otherwise collaborative projects fail. Your ballot sets down the rules of the road.

Jae Yun Moon. Doctoral candidate in Information Systems at New York University. & Lee Sproull. Stern School Professor of Business at NYU. 2000. ('Essence of Distributed Work: The Case of the Linux Kernel'. First Monday. Volume 5; Number 11. http://www.firstmonday.org/issues/is...oon/index.html.)

_Others have written about lessons from Linux for commercial software development projects (e.g., Raymond, 1999). Here we consider how factors important in the Linux case might apply more generally to distributed work in and across organizations (also see Markus, Manville and Agres, 2000). It might seem odd to derive lessons for formal organizations from a self-organizing volunteer activity. After all, the employment contract should ensure that people will fulfill their role obligations and act in the best interest of the organization. Yet, particularly in distributed work, employees must go beyond the letter of their job description to exhibit qualities found in the Linux developers: initiative, persistence, activism. We suggest that the enabling conditions for Linux (the Internet and open source) usefully support these qualities.
We then consider how factors emphasized in each of the three versions of the Linux story (great man and task structure, incentives for contributors, and communities of practice) can facilitate organizational distributed work.

Clearly, easy access to the Internet or its equivalent is a necessary precondition for the kind of distributed work represented by Linux. Developers used the Internet both for easy access to work products (to upload and download files) and for easy communication with other developers (to ask and answer questions, have discussions, and share community lore). Both capabilities are surely important. And they are simple. It is noteworthy that, despite the technical prowess of Linux developers, they relied upon only the simplest and oldest of Internet tools: file transfer, e-mail distribution lists, and Usenet discussion groups. Even with today's wider variety of more sophisticated Web-based tools, Linux developers continue to rely on these tools for coordinating their efforts. These tools are simple; they are available worldwide; they are reliable.

The organizational equivalent of copyleft is a second precondition for the kind of distributed work represented by Linux. Both the formal and informal reward and incentive systems must reward sharing and discourage hoarding (see Constant, Kiesler and Sproull, 1996, and Orlikowski, 1992, for discussions of incentives for information sharing in organizations). Moreover, work products should be transparently accessible so that anyone can use and build upon good features and anyone can find and fix problems. We do not underestimate the difficulty of creating the equivalent of copyleft for organizational work products. Failing to do so, however, can hobble distributed work. {...}

Finally, Linux developers were members of, and supported by, vigorous electronic communities of practice. Creating and sustaining such communities can contribute importantly to distributed work.
Electronic communities require both (simple) computer tools and social tools. We discussed computer tools under enabling conditions, above. The social tools include differentiated roles and norms. It is not enough to enable electronic communication among people working on a distributed project. In a project of any size, people must understand and take on differentiated electronic roles. These roles, with their corresponding obligations and responsibilities, should be explicitly designated and understood by all. Indeed, one category of community norms is the expectations associated with role behaviors. More generally, norms are the "rules of the road" for the particular electronic community. Because distributed projects cannot rely upon the tacit reinforcements that occur in face-to-face communications, persistent explicit reminders of norms are necessary in the electronic context (see Sproull and Patterson, 2000 for more on this topic).

_Before 1989, in order to copyright a work, you had to comply with formalities such as registering with the Copyright Office and displaying the circle-C notice (©). Today, however, unless you specifically designate that a work resides in the public domain, everything from a grocery list to a debate file is automatically copyrighted. 'All rights reserved' protections are applied to all the work you do in debate, with or without your consent. And 'fair use' / 'public domain' provisions are increasingly limited and vulnerable to reappropriation. The Creative Commons copyright offers the most reasonable, practical alternative - 'share and share alike' - which ensures the freedom to innovate with the work of others. The machine-readable tags allow debaters to search specifically for debate-related content, while the human-readable descriptions are a gesture of solidarity to the movement.

Lawrence Lessig. Law Professor at Stanford Law School. 2004. (Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. p282-6.
Also available here: http://libreria.sourceforge.net/libr...ure/USNOW.html.)

_The Creative Commons is a nonprofit corporation established in Massachusetts, but with its home at Stanford University. Its aim is to build a layer of reasonable copyright on top of the extremes that now reign. It does this by making it easy for people to build upon other people's work, by making it simple for creators to express the freedom for others to take and build upon their work. Simple tags, tied to human-readable descriptions, tied to bullet-proof licenses, make this possible.

Simple - which means without a middleman, or without a lawyer. By developing a free set of licenses that people can attach to their content, Creative Commons aims to mark a range of content that can easily, and reliably, be built upon. These tags are then linked to machine-readable versions of the license that enable computers automatically to identify content that can easily be shared. These three expressions together - a legal license, a human-readable description, and machine-readable tags - constitute a Creative Commons license. A Creative Commons license constitutes a grant of freedom to anyone who accesses the license, and more importantly, an expression of the ideal that the person associated with the license believes in something different from the "All" or "No" extremes. Content is marked with the CC mark, which does not mean that copyright is waived, but that certain freedoms are given.

These freedoms are beyond the freedoms promised by fair use. Their precise contours depend upon the choices the creator makes. The creator can choose a license that permits any use, so long as attribution is given. She can choose a license that permits only noncommercial use. She can choose a license that permits any use so long as the same freedoms are given to other uses ("share and share alike"). Or any use so long as no derivative use is made. Or any use at all within developing nations.
Or any sampling use, so long as full copies are not made. Or lastly, any educational use.

These choices thus establish a range of freedoms beyond the default of copyright law. They also enable freedoms that go beyond traditional fair use. And most importantly, they express these freedoms in a way that subsequent users can use and rely upon without the need to hire a lawyer. Creative Commons thus aims to build a layer of content, governed by a layer of reasonable copyright law, that others can build upon. Voluntary choice of individuals and creators will make this content available. And that content will in turn enable us to rebuild a public domain.

This is just one project among many within the Creative Commons. And of course, Creative Commons is not the only organization pursuing such freedoms. But the point that distinguishes the Creative Commons from many is that we are not interested only in talking about a public domain or in getting legislators to help build a public domain. Our aim is to build a movement of consumers and producers of content ("content conducers," as attorney Mia Garlick calls them) who help build the public domain and, by their work, demonstrate the importance of the public domain to other creativity.

The aim is not to fight the "All Rights Reserved" sorts. The aim is to complement them. The problems that the law creates for us as a culture are produced by insane and unintended consequences of laws written centuries ago, applied to a technology that only Jefferson could have imagined. The rules may well have made sense against a background of technologies from centuries ago, but they do not make sense against the background of digital technologies. New rules, with different freedoms, expressed in ways so that humans without lawyers can use them, are needed. Creative Commons gives people a way effectively to begin to build those rules.

Why would creators participate in giving up total control? Some participate to better spread their content.
Cory Doctorow, for example, is a science fiction author. His first novel, Down and Out in the Magic Kingdom, was released on-line and for free, under a Creative Commons license, on the same day that it went on sale in bookstores.

Why would a publisher ever agree to this? I suspect his publisher reasoned like this: There are two groups of people out there: (1) those who will buy Cory's book whether or not it's on the Internet, and (2) those who may never hear of Cory's book, if it isn't made available for free on the Internet. Some part of (1) will download Cory's book instead of buying it. Call them bad-(1)s. Some part of (2) will download Cory's book, like it, and then decide to buy it. Call them (2)-goods. If there are more (2)-goods than bad-(1)s, the strategy of releasing Cory's book free on-line will probably increase sales of Cory's book.

Indeed, the experience of his publisher clearly supports that conclusion. The book's first printing was exhausted months before the publisher had expected. This first novel of a science fiction author was a total success.

The idea that free content might increase the value of nonfree content was confirmed by the experience of another author. Peter Wayner, who wrote a book about the free software movement titled Free for All, made an electronic version of his book free on-line under a Creative Commons license after the book went out of print. He then monitored used book store prices for the book. As predicted, as the number of downloads increased, the used book price for his book increased, as well.

These are examples of using the Commons to better spread proprietary content. I believe that is a wonderful and common use of the Commons. There are others who use Creative Commons licenses for other reasons. Many who use the "sampling license" do so because anything else would be hypocritical.
The sampling license says that others are free, for commercial or noncommercial purposes, to sample content from the licensed work; they are just not free to make full copies of the licensed work available to others. This is consistent with their own art: they, too, sample from others. Because the legal costs of sampling are so high (Walter Leaphart, manager of the rap group Public Enemy, which was born sampling the music of others, has stated that he does not "allow" Public Enemy to sample anymore, because the legal costs are so high [2]), these artists release into the creative environment content that others can build upon, so that their form of creativity might grow.

Finally, there are many who mark their content with a Creative Commons license just because they want to express to others the importance of balance in this debate. If you just go along with the system as it is, you are effectively saying you believe in the "All Rights Reserved" model. Good for you, but many do not. Many believe that however appropriate that rule is for Hollywood and freaks, it is not an appropriate description of how most creators view the rights associated with their content. The Creative Commons license expresses this notion of "Some Rights Reserved," and gives many the chance to say it to others.

In the first six months of the Creative Commons experiment, over 1 million objects were licensed with these free-culture licenses. The next step is partnerships with middleware content providers to help them build into their technologies simple ways for users to mark their content with Creative Commons freedoms. Then the next step is to watch and celebrate creators who build content based upon content set free.

These are first steps to rebuilding a public domain. They are not mere arguments; they are action. Building a public domain is the first step to showing people how important that domain is to creativity and innovation.
Creative Commons relies upon voluntary steps to achieve this rebuilding. They will lead to a world in which more than voluntary steps are possible.

Creative Commons is just one example of voluntary efforts by individuals and creators to change the mix of rights that now govern the creative field. The project does not compete with copyright; it complements it. Its aim is not to defeat the rights of authors, but to make it easier for authors and creators to exercise their rights more flexibly and cheaply. That difference, we believe, will enable creativity to spread more easily.

Living up to democratic ideals is about more than just protecting one's right to free speech: it's about actively creating free culture and new spaces for mutual understanding. This is a civic duty vital to the success of our republic; now it's time to get to work.

Cass Sunstein. Professor of Jurisprudence at the University of Chicago Law School and Department of Political Science. 2001. (Republic.com. Afterword. Page 212.)

At this point in our history, most industrialized nations are blessed to have little reason to fear tyranny; and in many areas, such nations need more markets, and freer ones, too. But in the domain of communications, the current danger is that amidst all the celebration of freedom of choice, we will lose sight of the requirements of a system of self-government. From the standpoint of democracy, the Internet is far more good than bad. In most ways, things are better, not worse. Nostalgia and pessimism are truly senseless. But it is not senseless to suggest that in thinking about new communications technologies, we should keep democratic ideals in view. The notion of "consumer sovereignty," suitable though it is for market contexts, should not be the only basis on which we evaluate a system of communications. If we emphasize democratic considerations as well, we will have a series of novel inquiries about the social role of the Internet.
We should be getting to work.

One final note. The democratic ideal comes with its own internal morality. That morality calls for certain kinds of legal rights and institutions: strong rights of freedom of speech, the right to vote, an independent judiciary, checks and balances, protection of property rights. But democracy's internal morality also calls for a certain kind of culture, one in which people do not live in gated communities, or cocoon themselves, or regard their fellow citizens as enemies in some kind of holy war. Of course people are free, within broad limits, to say and do what they want. Gates and cocoons and enmities are not against the law. But if democracies are to work well, they will create spaces that increase the likelihood that citizens will actually see and hear one another, and have some chance to achieve a measure of mutual understanding. If we are to keep it, a twenty-first-century republic would do well to keep this point in plain view.

You're welcome to access this position online at www.ossdebate.org. We appreciate any and all bug reports, new feature suggestions, patches, and other feedback there.
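A note on the "machine-readable tags" the Lessig excerpt describes: in practice, Creative Commons expresses them as ordinary HTML links carrying `rel="license"` and pointing at the license URL, so that crawlers and search engines can identify shareable content automatically. Here is a minimal sketch of how a tool could scan a page (say, a debate file published as HTML) for such tags; the sample page fragment and the function name `find_licenses` are illustrative, not taken from the post.

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect the href targets of <a rel="license"> links --
    the convention Creative Commons uses for machine-readable tags."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("rel") == "license":
            self.licenses.append(attrs.get("href"))

def find_licenses(html):
    """Return the license URLs marked in an HTML fragment."""
    finder = LicenseFinder()
    finder.feed(html)
    return finder.licenses

# Illustrative fragment marking a file with an Attribution-ShareAlike tag:
page = '''<p>This debate file is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">
Creative Commons Attribution-ShareAlike license</a>.</p>'''

print(find_licenses(page))
# -> ['http://creativecommons.org/licenses/by-sa/4.0/']
```

This is exactly the "three expressions" split the excerpt mentions: the human reads the visible link text, the machine reads the `rel="license"` attribute, and the URL resolves to the full legal license.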

____________________________________________________________________________
