
CONTENTION ONE: DIGITAL PEER REVIEW ENHANCES THE QUALITY OF BRIEFS AND FILES, FOSTERING MORE EFFICIENT NETWORKS FOR ARGUMENT DEVELOPMENT.

 

 

In the status quo, the typical debate position is seen only by the team running it, its coaching staff, and the judges and opponents who happen to hear it in-round - and any modification is usually done in secret. Harvard's OpenLaw project, on the other hand, offers specific, empirical proof that an open online process can help debaters craft higher quality arguments and disseminate their work to concerned citizens elsewhere. Open source, in addition to making better software, takes a political stand against proprietary control over knowledge: it openly invites others to find and fix bugs, and it advances public licenses that prevent subsequent co-option. This round is now a test of the value of this experiment.

 

New Scientist. July 2002. (Graham Lawton, Features Editor. 'The Great Giveaway'. Available here: http://fossforum.tacticaltech.org/node/114.)

 

What started as a technical debate over the best way to debug computer programs is developing into a political battle over the ownership of knowledge and how it is used, between those who put their faith in the free circulation of ideas and those who prefer to designate them "intellectual property". No one knows what the outcome will be. But in a world of growing opposition to corporate power, restrictive intellectual property rights and globalisation, open source is emerging as a possible alternative, a potentially potent means of fighting back. And you're helping to test its value right now.

 

The open source movement originated in 1984 when computer scientist Richard Stallman quit his job at MIT and set up the Free Software Foundation. His aim was to create high-quality software that was freely available to everybody. Stallman's beef was with commercial companies that smother their software with patents and copyrights and keep the source code--the original program, written in a computer language such as C++--a closely guarded secret. Stallman saw this as damaging. It generated poor-quality, bug-ridden software. And worse, it choked off the free flow of ideas. Stallman fretted that if computer scientists could no longer learn from one another's code, the art of programming would stagnate (New Scientist, 12 December 1998, p42).

 

Stallman's move resonated round the computer science community and now there are thousands of similar projects. The star of the movement is Linux, an operating system created by Finnish student Linus Torvalds in the early 1990s and installed on around 18 million computers worldwide.

 

What sets open source software apart from commercial software is the fact that it's free, in both the political and the economic sense. If you want to use a commercial product such as Windows XP or Mac OS X you have to pay a fee and agree to abide by a license that stops you from modifying or sharing the software. But if you want to run Linux or another open source package, you can do so without paying a penny--although several companies will sell you the software bundled with support services. You can also modify the software in any way you choose, copy it and share it without restrictions. This freedom acts as an open invitation--some say challenge--to its users to make improvements. As a result, thousands of volunteers are constantly working on Linux, adding new features and winkling out bugs. Their contributions are reviewed by a panel and the best ones are added to Linux. For programmers, the kudos of a successful contribution is its own reward. The result is a stable, powerful system that adapts rapidly to technological change. Linux is so successful that even IBM installs it on the computers it sells.

 

To maintain this benign state of affairs, open source software is covered by a special legal instrument called the General Public License. Instead of restricting how the software can be used, as a standard software license does, the GPL--often known as a "copyleft"--grants as much freedom as possible (see http://www.fsf.org/licenses/gpl.html). Software released under the GPL (or a similar copyleft license) can be copied, modified and distributed by anyone, as long as they, too, release it under a copyleft. That restriction is crucial, because it prevents the material from being co-opted into later proprietary products. It also makes open source software different from programs that are merely distributed free of charge. In FSF's words, the GPL "makes it free and guarantees it remains free".
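To see how the share-alike condition works in practice, here is a minimal sketch (our illustration, not part of the card and not the actual license text): distribution of a derivative is permitted only if the derivative itself stays copyleft.

```python
# Minimal sketch of the GPL's share-alike invariant (illustrative only;
# the license names are placeholders, not a legal compatibility table).

COPYLEFT_LICENSES = {"GPL", "copyleft"}

def may_distribute(original_license: str, derivative_license: str) -> bool:
    """A copylefted original may be redistributed or modified only
    if the derivative work also carries a copyleft license."""
    if original_license in COPYLEFT_LICENSES:
        return derivative_license in COPYLEFT_LICENSES
    return True  # non-copyleft originals impose no such condition

print(may_distribute("GPL", "GPL"))          # True: freedom preserved
print(may_distribute("GPL", "proprietary"))  # False: co-option blocked
```

This is exactly the property the card describes: the restriction propagates, so the material can never be folded into a later proprietary product.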

 

Open source has proved a very successful way of writing software. But it has also come to embody a political stand--one that values freedom of expression, mistrusts corporate power, and is uncomfortable with private ownership of knowledge. It's "a broadly libertarian view of the proper relationship between individuals and institutions", according to open source guru Eric Raymond.

 

But it's not just software companies that lock knowledge away and release it only to those prepared to pay. Every time you buy a CD, a book, a copy of New Scientist, even a can of Coca-Cola, you're forking out for access to someone else's intellectual property. Your money buys you the right to listen to, read or consume the contents, but not to rework them, or make copies and redistribute them. No surprise, then, that people within the open source movement have asked whether their methods would work on other products. As yet no one's sure--but plenty of people are trying it. {...}

 

Another experiment that's proved its worth is the OpenLaw project at the Berkman Center for Internet and Society at Harvard Law School. Berkman lawyers specialise in cyberlaw--hacking, copyright, encryption and so on--and the centre has strong ties with the EFF and the open source software community. In 1998 faculty member Lawrence Lessig, now at Stanford Law School, was asked by online publisher Eldritch Press to mount a legal challenge to US copyright law. Eldritch takes books whose copyright has expired and publishes them on the Web, but new legislation to extend copyright from 50 to 70 years after the author's death was cutting off its supply of new material. Lessig invited law students at Harvard and elsewhere to help craft legal arguments challenging the new law on an online forum, which evolved into OpenLaw.

 

Normal law firms write arguments the way commercial software companies write code. Lawyers discuss a case behind closed doors, and although their final product is released in court, the discussions or "source code" that produced it remain secret. In contrast, OpenLaw crafts its arguments in public and releases them under a copyleft. "We deliberately used free software as a model," says Wendy Selzer, who took over OpenLaw when Lessig moved to Stanford. Around 50 legal scholars now work on Eldritch's case, and OpenLaw has taken other cases, too.

 

"The gains are much the same as for software," Selzer says. "Hundreds of people scrutinise the 'code' for bugs, and make suggestions how to fix it. And people will take underdeveloped parts of the argument, work on them, then patch them in." Armed with arguments crafted in this way, OpenLaw has taken Eldritch's case--deemed unwinnable at the outset--right through the system and is now seeking a hearing in the Supreme Court.

 

There are drawbacks, though. The arguments are in the public domain right from the start, so OpenLaw can't spring a surprise in court. For the same reason, it can't take on cases where confidentiality is important. But where there's a strong public interest element, open sourcing has big advantages. Citizens' rights groups, for example, have taken parts of OpenLaw's legal arguments and used them elsewhere. "People use them on letters to Congress, or put them on flyers," Selzer says.

 

 

In contrast to a tightfisted approach, the Open Source development model promises rapid improvement. The OpenLaw project proves this model can work for debate - producing better briefs, sharing a mountain of information, and bolstering the depth and breadth of argument. As this position starts winning ballots, adoption will snowball until participants are intrinsically motivated to contribute solid work.

 

Linus Torvalds. Creator of Linux. & David Diamond. Freelance contributor to the New York Times and Business Week. November/December 2001. ('Why Open Source Makes Sense'. Educause Review. p71-2.)

 

In its purest form, the open source model allows anyone to participate in a project's development or commercial exploitation. Linux is obviously the most successful example. What started out in my messy Helsinki bedroom has grown to become the largest collaborative project in the history of the world. It began as an ideology shared by software developers who believed that computer source code should be shared freely, with the General Public License--the anticopyright--as the movement's powerful tool. It evolved to become a method for the continuous development of the best technology. And it evolved further to gain widespread market acceptance, as seen in the snowballing adoption of Linux as an operating system for Web servers, and in its unexpectedly generous IPOs.

 

What was inspired by ideology has proved itself as technology and is working in the marketplace. Now open source is expanding beyond the technical and business domains. At Harvard University Law School, professors Larry Lessig (who is now at Stanford) and Charles Nelson have brought the open source model to law. They started the Open Law Project, which relies on volunteer lawyers and law students posting opinions and research to the project's Web site to help develop arguments and briefs challenging the United States Copyright Extension Act. The theory is that the strongest arguments will be developed when the largest number of legal minds are working on a project, and as a mountain of information is generated through postings and repostings. The site nicely sums up the tradeoff from the traditional approach: "What we lose in secrecy, we expect to regain in depth of sources and breadth of argument." (Put in another context: With a million eyes, all software bugs will vanish.)
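The card's parenthetical is a version of what open source advocates call Linus's Law, and the intuition can be made concrete with a back-of-the-envelope model (our own illustration, not from the card): if each reviewer independently catches a given flaw with probability p, the chance the flaw survives n reviewers decays exponentially.

```python
# Back-of-the-envelope model of "with a million eyes, all software bugs
# will vanish": assume each of n independent reviewers spots a given
# flaw with probability p; the flaw survives with probability (1-p)^n.

def survival_probability(p: float, n: int) -> float:
    """Probability a flaw escapes all n independent reviewers."""
    return (1.0 - p) ** n

print(survival_probability(0.05, 1))    # 0.95   -- one reader: the flaw likely survives
print(survival_probability(0.05, 100))  # ~0.006 -- a hundred readers: almost surely caught
```

The independence assumption is generous, but the direction of the effect is what matters for the argument: breadth of review, not individual brilliance, does the work.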

 

It's a wrinkle on how academic research has been conducted for years, but one that makes sense on a number of fronts. Think of how this approach could speed up the development of cures for disease, for example. Or how, with the best minds on the task, international diplomacy could be strengthened. As the world becomes smaller, as the pace of life and business intensifies, and as the technology and information become available, people realize the tightfisted approach is becoming increasingly outmoded.

 

The theory behind open source is simple. In the case of an operating system, the source code--the programming instructions underlying the system--is free. Anyone can improve it, change it, and exploit it. But those improvements, changes, and exploitations have to be made freely available. Think Zen. The project belongs to no one and to everyone. When a project is opened up, there is rapid and continual improvement. With teams of contributors working in parallel the results can happen far more speedily and successfully than if the work were being conducted behind closed doors.

 

That's what we experienced with Linux. Imagine: Instead of a tiny cloistered development team working in secret, you have a monster on your side. Potentially millions of the brightest minds are contributing to a project, and are supported by a peer-review process that has no, er, peer.

 

The first time people hear about the open source approach, it sounds ludicrous. That's why it has taken years for the message of its virtues to sink in. Ideology isn't what has sold the open source model. It started gaining attention when it was obvious that open source was the best method of developing and improving the highest quality technology. And now it is winning in the marketplace, an accomplishment that has brought open source its greatest acceptance. Companies were able to be created around numerous value-added services, or to use open source as a way of making a technology popular. When the money rolls in, people get convinced.

 

One of the least understood pieces of the open source puzzle is how so many good programmers would deign to work for absolutely no money. A word about motivation is in order. In a society where survival is more or less assured, money is not the greatest of motivators. It's been well established that folks do their best work when they are driven by a passion. When they are having fun. This is as true for playwrights and sculptors and entrepreneurs as it is for software engineers. The open source model gives people the opportunity to live their passion. To have fun and to work with the world's best programmers, not the few who happen to be employed by their company. Open source developers strive to earn the esteem of their peers. That's got to be highly motivating.

 

 

Academic citation studies conclusively demonstrate that publishing online increases readership - debate should join the numerous disciplines that have switched to open access.

 

Eric von Hippel. Head of the Innovation and Entrepreneurship Group in the Sloan School of Management at the Massachusetts Institute of Technology. 2005. (Democratizing Innovation. p88-9. http://web.mit.edu/evhippel/www/books.htm.)

 

In the case of academic publications, we see evidence that free revealing does increase reuse—a matter of great importance to academics. A citation is an indicator that information contained in an article has been reused: the article has been read by the citing author and found useful enough to draw to readers' attention. Recent empirical studies are finding that articles to which readers have open access—articles available for free download from an author's website, for example—are cited significantly more often than are equivalent articles that are available only from libraries or from publishers' fee-based websites. Antelman (2004) finds an increase in citations ranging from 45 percent in philosophy to 91 percent in mathematics.

She notes that "scholars in diverse disciplines are adopting open-access practices at a surprisingly high rate and are being rewarded for it, as reflected in [citations]."

 

 

Debate scholars, like legal scholars, are prisoners of obsolete data structures. At your camp or squad, as at mine, there are coaches whose primary metric of success is the quantity of evidence cut per week. This leads to poorly-cut files filled with blippy cards that everyone has forgotten a year later. To remedy this, network publication creates a way to continuously revise and update your work. It also offers unprecedented opportunities for collaboration. This means fewer tubs full of redundant information and higher quality scholarship.

 

Eben Moglen. Professor of Law & Legal History at Columbia Law School. 5 January 1995. ('The Virtual Scholar and Network Liberation'. http://emoglen.law.columbia.edu/my_pubs/nospeech.html.)

 

The organization of information determines what kinds of learning are practicable given limited time and resources. In addition, the prevailing systems of information organization give rise to the social customs that define what kinds of scholarly activity are appropriate and useful. Until the beginning of the digital revolution, "data structures" meant primarily the physical organization of written information. How data were preserved affected what could be learned. For the authors of the book we call Bracton, for example--working in the middle of the 13th century--information about the laws and customs of England was contained--in dilute sequential form--in the mass of the plea rolls, to which they had preferential access.

 

Scholarship, in that context, meant epitomizing the plea rolls, to communicate to others in compressed form how their contents did and did not reflect the more familiar conceptual categories of the Romanized European law.

 

To a significant extent, our legal scholarship has remained fixed within this model of converting sequentially-stored dilute information into useful epitomes conforming to the intellectual prepossessions of the era. Littleton, Coke, Blackstone and Story--as I labor to make my students understand in my seminar on the intellectual history of the treatise tradition--all attempted to articulate the loose bones of the English law into a skeleton recognizable given the fashions of the time. Though the forms changed significantly with the eras, each of these types of scholarship was aimed at overcoming the same fundamental constraint. In modern jargon, the material of the law is produced and stored sequentially; the primary goal of legal scholarship has been to access that material associatively, by linking temporally displaced segments in topical relations. The scholar, however awkward it may sound, has been a specialized device for the performance of a sort and merge operation, either using internal memory or sorting externally, using whatever equivalent his generation offered for the three-by-five card.

 

If the information-theoretic significance of scholarship did not much change between the time of Bracton and our contemporaries, the primary problem in the intellectual organization of the law has been to get the scholar to the raw data to be sorted. In the beginning, as with those of us who must still make annual journeys to the English Public Record Office, the solution was to move the scholar around.

 

Since the European adoption of movable-type printing at the end of the 15th century, however, the technical infrastructure of scholarship has largely depended on the hope that the distribution of books could replace the peregrinations of scholars. Scholarship became, as much as possible, the consultation of static volumes of printed information, or the rendering of unprinted information suitable for reprocessing by the printing press. The emphasis was still upon making associative links between previously compiled sources of more dilute information.

 

Along with the process for consultation of sources, scholarship has consisted also of the process for consultation of other scholars. This meant either personal travel or the exchange of written correspondence until the development of technologies for voice transmission at the turn of the twentieth century. As we all know, however, the telephone has been more of a barrier to scholarship than an assistance, and only the development of the answering machine, I think, has prevented the telephone from extirpating scholarship altogether.

 

So, let us now consider what has happened to the media of scholarly communication. In principle, the infrastructural problems that have beset scholarship for one thousand years can now be eliminated. Already digital media directly replacing older analog media are coming into existence. Email is replacing the point-to-point media such as snail mail and telephone calls. Broadcast media--including primitive list servers and the more sophisticated structure of Usenet news--are beginning to serve some of the purposes previously served by scholarly pilgrimage, including organizational meetings, collaborative inquiry, exchange of notes and queries, and the like. Unfortunately, the poor design and low quality of commercial software threatens the vitiation of these new media, a point to which I return below.

 

In addition to new media of personal communication, the network has begun to resolve a few other problems of data organization. The linking of library catalogs has made traditional bibliographic research a trivial task. The fulltext retrieval services, though inadequate in many important respects, have at least rendered the basic sources of most legal scholarship accessible from anywhere in the world where a pair of copper wires is connected to a telephone switching office. Experiments with more extensive digitization of library collections, such as Columbia Law School's Project Janus, may within another generation make possible the global frictionless consultation of the entire existing body of our legal culture. Here the primary impediment is mindless adherence to the antiquated conception of "intellectual property," to whose well-deserved destruction I shall return in a few minutes.

 

But these new media are not just inadequately implemented in the existing technological and legal context. While they substantially reduce the friction in scholarly communication, avoiding the need to move people to data, they are not designed to solve the other primary problem that has beset the scholarship of the past. Even given email, netnews, automated catalogs and the virtual library--and assuming away the ridiculous limitations on use posed by rules protecting the non-productive middlemen called publishers--the scholar is still engaged in using carbon-based intelligence to make static links among existing sources, thus predetermining the obsolescence of her enterprise in the face of future developments. We are still the prisoners of outmoded data structures, and it is time to reconsider the essence of what we do. As Albert Einstein said, our experience in the 20th century is that everything has changed except the nature of men's minds. Fortunately, what we do has become so altogether foolish that it should not be difficult to change. {...}

 

But can the network provide an appropriate alternative for the resuscitation of legal scholarship? The answer is most certainly yes. The first step is the elimination of publication as presently understood. Placing on the network a version of my work in a portable page-description language (such as PostScript) allows anyone caring to read my scholarship, whether online or in a permanent form, to receive it with no loss in production values over the present system of physical reproduction. The digital broadcast media like netnews and the listservs, or their more usable successors, will then replace the existing finding aids.

 

But we will do more by network publication than saving the costs of law review publication and reversing the noxious effect of the middlemen on the culture of scholarship. Network publication will for the first time directly confront the static quality of all prior scholarly data structures. Placing my work in the net means that I can continuously revise and expand it. In addition, our cumbersome citation mechanisms can be replaced by direct active links to other materials on the net, so that the footnote--which is surely the bane of legal scholarship--can be replaced by proliferated cross-linkages of the kind primitively modeled in the current world by the citation links of the commercial fulltext systems, and slightly more sophisticatedly by the existing webform hypertext formats, such as the World Wide Web. Such links can be created by machine control as well as human intervention, so that case citations, legislative updates, and other purely mechanical incorporations can occur without my having to do more than make occasional editorial foray to prune back the accretion of new links.
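Moglen's point about links "created by machine control" is easy to prototype: a short script can recognize citation patterns in a text and rewrite them as active links. A minimal sketch, assuming a hypothetical resolver URL and covering only simple "volume U.S. page" citations:

```python
import re

# Minimal sketch of machine-made citation links: find simple
# "410 U.S. 113"-style case citations and wrap them as hyperlinks.
# The resolver URL below is hypothetical; a real system would point
# at an actual fulltext service.

CITATION = re.compile(r"\b(\d{1,4})\s+U\.S\.\s+(\d{1,4})\b")

def link_citations(text: str) -> str:
    """Replace recognized citations with links to a (hypothetical) resolver."""
    def to_link(match: re.Match) -> str:
        volume, page = match.group(1), match.group(2)
        url = f"https://caselaw.example.org/us/{volume}/{page}"  # hypothetical
        return f'<a href="{url}">{match.group(0)}</a>'
    return CITATION.sub(to_link, text)

print(link_citations("See Roe v. Wade, 410 U.S. 113 (1973)."))
```

Case citations and legislative updates could then be refreshed mechanically, leaving the scholar only the "occasional editorial foray" the card describes.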

 

The webform systems also model for us, in a fairly simple way, the unprecedented opportunities for collaborative work that the network has created. Within the next generation we shall see the successors of the webform hypertext systems facilitating collaborative projects in the humanities on a scale previously only dreamed of. The conception of the History Workshop or the Sixieme Section will be revitalized, for example, along with kindred conceptions in many disciplines.

 

For the common lawyers, too, limitations in place for centuries will suddenly give way. The low quality of the common law's encyclopedic sources, largely the consolidated output of headnote writers working like Grub Street hacks for the booksellers, should be replaced by a far richer literature, achieving the breadth of scale of the Romanist tradition without its limited conceptual categories. Primary sources, commentary, counter-commentary and scholarly debate should all be joined in a single dynamic web, collaboratively edited. Our contributions to this web will be much less bulky than our existing screeds, reflecting the higher priority given to the making of links over the self-assertive announcement of one's own brilliant conceptualizations. But the result will be finally to concentrate the activity of scholars where the need has always been: on the human mind's unparalleled capacity to connect apparently disparate materials. This is what carbon-based intelligence is for; the rest, may I say, is silicon.

 

 

Debate continues to operate on an industrial model of closed workshops with separate assembly lines. Internet technology has paved the way for a new broadly applicable informational model of peer-to-peer networks dedicated to shared resources and reciprocal collaboration. This serves as a revolutionary alternative - an ant colony of creative debaters working better together.

 

Wired. November 2003. (Thomas Goetz, Editor. 'Open Source Everywhere'. http://www.wired.com/wired/archive/11.11/opensource_pr.html.)

 

Cholera is one of those 19th-century ills that, like consumption or gout, at first seems almost quaint, a malady from an age when people suffered from maladies. But in the developing world, the disease is still widespread and can be gruesomely lethal. When cholera strikes an unprepared community, people get violently sick immediately. On day two, severe dehydration sets in. By day seven, half of a village might be dead.

 

Since cholera kills by driving fluids from the body, the treatment is to pump liquid back in, as fast as possible. The one proven technology, an intravenous saline drip, has a few drawbacks. An easy-to-use, computer-regulated IV can cost $2,000 - far too expensive to deploy against a large outbreak. Other systems cost as little as 35 cents, but they're too complicated for unskilled caregivers. The result: People die unnecessarily.

 

"It's a health problem, but it's also a design problem," says Timothy Prestero, a onetime Peace Corps volunteer who cofounded a group called Design That Matters. Leading a team of MIT engineering students, Prestero, who has master's degrees in mechanical and oceanographic engineering, focused on the drip chamber and pinch valve controlling the saline flow rate.

 

But the team needed more medical expertise. So Prestero turned to ThinkCycle, a Web-based industrial-design project that brings together engineers, designers, academics, and professionals from a variety of disciplines. Soon, some physicians and engineers were pitching in - vetting designs and recommending new paths. Within a few months, Prestero's team had turned the suggestions into an ingenious solution. Taking inspiration from a tool called a rotameter used in chemical engineering, the group crafted a new IV system that's intuitive to use, even for untrained workers. Remarkably, it costs about $1.25 to manufacture, making it ideal for mass deployment. Prestero is now in talks with a medical devices company; the new IV could be in the field a year from now.

 

ThinkCycle's collaborative approach is modeled on a method that for more than a decade has been closely associated with software development: open source. It's called that because the collaboration is open to all and the source code is freely shared. Open source harnesses the distributive powers of the Internet, parcels the work out to thousands, and uses their piecework to build a better whole - putting informal networks of volunteer coders in direct competition with big corporations. It works like an ant colony, where the collective intelligence of the network supersedes any single contributor.

 

Open source, of course, is the magic behind Linux, the operating system that is transforming the software industry. Linux commands a growing share of the server market worldwide and even has Microsoft CEO Steve Ballmer warning of its "competitive challenge for us and for our entire industry." And open source software transcends Linux. Altogether, more than 65,000 collaborative software projects click along at Sourceforge.net, a clearinghouse for the open source community. The success of Linux alone has stunned the business world.

 

But software is just the beginning. Open source has spread to other disciplines, from the hard sciences to the liberal arts. Biologists have embraced open source methods in genomics and informatics, building massive databases to genetically sequence E. coli, yeast, and other workhorses of lab research. NASA has adopted open source principles as part of its Mars mission, calling on volunteer "clickworkers" to identify millions of craters and help draw a map of the Red Planet. There is open source publishing: With Bruce Perens, who helped define open source software in the '90s, Prentice Hall is publishing a series of computer books open to any use, modification, or redistribution, with readers' improvements considered for succeeding editions. There are library efforts like Project Gutenberg, which has already digitized more than 6,000 books, with hundreds of volunteers typing in, page by page, classics from Shakespeare to Stendhal; at the same time, a related project, Distributed Proofreading, deploys legions of copy editors to make sure the Gutenberg texts are correct. There are open source projects in law and religion. There's even an open source cookbook.

 

In 2003, the method is proving to be as broadly effective - and, yes, as revolutionary - a means of production as the assembly line was a century ago. {...}

 

If the ideas behind it are so familiar and simple, why has open source only now become such a powerful force? Two reasons: the rise of the Internet and the excesses of intellectual property. The Internet is open source's great enabler, the communications tool that makes massive decentralized projects possible. Intellectual property, on the other hand, is open source's nemesis: a legal regime that has become so stifling and restrictive that thousands of free-thinking programmers, scientists, designers, engineers, and scholars are desperate to find new ways to create.

 

We are at a convergent moment, when a philosophy, a strategy, and a technology have aligned to unleash great innovation. Open source is powerful because it's an alternative to the status quo, another way to produce things or solve problems. And in many cases, it's a better way. Better because current methods are not fast enough, not ambitious enough, or don't take advantage of our collective creative potential.

 

Open source has flourished in software because programming, for all the romance of guerrilla geeks and hacker ethics, is a fairly precise discipline; you're only as good as your code. It's relatively easy to run an open source software project as a meritocracy, a level playing field that encourages participation. But those virtues aren't exclusive to software. Coders, it could be argued, got to open source first only because they were closest to the tool that made it a feasible means of production: the Internet.

 

The Internet excels at facilitating the exchange of large chunks of information, fast. From distributed computation projects such as SETI@home to file-swapping systems like Grokster and Kazaa, many efforts have exploited the Internet's knack for networking. Open source does those one better: It's not only peer-to-peer sharing - it's P2P production. With open source, you've got the first real industrial model that stems from the technology itself, rather than simply incorporating it.

 

"There's a reason we love barn raising scenes in movies. They make us feel great. We think, 'Wow! That would be amazing!'" says Yochai Benkler, a law professor at Yale studying the economic impact of open source. "But it doesn't have to be just a romanticized notion of how to live. Now technology allows it. Technology can unleash tremendous human creativity and tremendous productivity. This is basically barn raising through a decentralized communication network." {...}

 

"Open source can build around the blockages of the industrial producers of the 20th century," says Yale's Benkler. "It can provide a potential source of knowledge materials from which we can build the culture and economy of the 21st century."

 

If that sounds melodramatic, consider how far things have come in the past decade. Torvalds' hobbyists have become an army. Britannica's woes are Wikipedia's gains. In genetics and biotech, open source promises a sure path to breakthroughs. These early efforts are mere trial runs for what open source might do out in the world at large. The real test, the real potential, lies not in the margins. It lies in making something new, in finding a better way. Open source isn't just about better software. It's about better everything.

 

 

The Internet Engineering Task Force (IETF) proves that open documentation standards are a catalyst for significant progress, while restricting access only adds to flaws and fragmentation. Voluntary standards help keep everyone on the same page.

 

Scott Bradner. Data Network Designer at Harvard University. 1999. ('The Internet Engineering Task Force'. Open Sources: Voices from the Open Source Revolution. Pages 47, 51-2.)

 

For something that does not exist, the Internet Engineering Task Force (IETF) has had quite an impact. Apart from TCP/IP itself, all of the basic technology of the Internet was developed or has been refined in the IETF. IETF working groups created the routing, management, and transport standards without which the Internet would not exist. IETF working groups have defined the security standards that will help secure the Internet, the quality of service standards that will make the Internet a more predictable environment, and the standard for the next generation of the Internet protocol itself.

 

These standards have been phenomenally successful. The Internet is growing faster than any single technology in history, far faster than the railroad, electric light, telephone, or television, and it is only getting started. All of this has been accomplished with voluntary standards. No government requires the use of IETF standards. Competing standards, some mandated by governments around the world, have come and gone and the IETF standards flourish. But not all IETF standards succeed. It is only the standards that meet specific real-world requirements and do well that become true standards in fact as well as in name.

 

The IETF and its standards have succeeded for the same sorts of reasons that the Open Source community is taking off. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. In fact the IETF's open document process is a case study in the potential of the Open Source movement. {...}

 

It is quite clear that one of the major reasons that the IETF standards have been as successful as they have been is the IETF's open documentation and standards development policies. The IETF is one of the very few major standards organizations that make all of their documents openly available, as well as all of its mailing lists and meetings. In many of the traditional standards organizations, and even in some of the newer Internet-related groups, access to documents and meetings is restricted to members or only available by paying a fee. Sometimes this is because the organizations raise some of the funds to support themselves through the sale of their standards. In other cases it is because the organization has fee-based memberships, and one of the reasons for becoming a member is to be able to participate in the standards development process and to get access to the standards as they are being developed.

 

Restricting participation in the standards development process often results in standards that do not do as good a job of meeting the needs of the user or vendor communities as they might, or are more complex than the operator community can reasonably support. Restricting access to work-in-progress documents makes it harder for implementers to understand what the genesis and rationale are for specific features in the standard, and this can lead to flawed implementations. Restricting access to the final standards inhibits the ability of students or developers from small startups to understand, and thus make use of, the standards.

 

The IETF supported the concept of open sources long before the Open Source movement was formed. Up until recently, it was the normal case that "reference implementations" of IETF technologies were done as part of the multiple implementations requirement for advancement on the standards track. This has never been a formal part of the IETF process, but it was generally a very useful by-product. Unfortunately this has slowed down somewhat in this age of more complex standards and higher economic implications for standards. The practice has never stopped, but it would be very good if the Open Source movement were to reinvigorate this unofficial part of the IETF standards process.

 

It may not be immediately apparent, but the availability of open standards processes and documentation is vital to the Open Source movement. Without a clear agreement on what is being worked on, normally articulated in standards documents, it is quite easy for distributed development projects, such as the Open Sources movement, to become fragmented and to flounder. There is an intrinsic partnership between open standards processes, open documentation, and open sources. This partnership produced the Internet and will produce additional wonders in the future.

 

 

 

CONTENTION TWO: FAIRER RESOURCE DISTRIBUTION MITIGATES INEQUALITY, BUILDING AN EDUCATIONAL COMMONS FOR DEMOCRATIC EMPOWERMENT.

 

 

In debate today, only insiders can afford expensive evidentiary resources - such as Lexis codes, institute files, and card-cutting assistant coaches. Open Source projects, by contrast, initiate a communal development style based on providing access to everyone.

 

Eric von Hippel. Head of the Innovation and Entrepreneurship Group in the Sloan School of Management at the Massachusetts Institute of Technology. & Georg von Krogh. Director of the Institute of Management at the University of St. Gallen. March-April 2003. ('Open Source Software and the "Private-Collective" Innovation Model'. Organization Science. Volume 14; Number 2. Pages 210-1. http://opensource.mit.edu/papers/hippelkrogh.pdf.)

 

Software can be termed open source independent of how or by whom it has been developed: The term denotes only the type of license under which it is made available. However, the fact that open source software is freely accessible to all has created some typical open source software development practices that differ greatly from commercial software development models—and that look very much like the "hacker culture" behaviors described earlier.

 

Because commercial software vendors typically wish to sell the code they develop, they sharply restrict access to the source code of their software products to firm employees and contractors. The consequence of this restriction is that only insiders have the information required to modify and improve that proprietary code further (see Meyer and Lopez 1995, also Young et al. 1996, Conner and Prahalad 1996). In sharp contrast, all are offered free access to the source code of open source software. This means that anyone with the proper programming skills and motivations can use and modify any open source software written by anyone. In early hacker days, this freedom to learn and use and modify software was exercised by informal sharing and codevelopment of code—often by the physical sharing and exchange of computer tapes and disks upon which the code was recorded. In current Internet days, rapid technological advances in computer hardware and software and networking technologies have made it much easier to create and sustain a communal development style at ever-larger scales. Also, implementing new projects is becoming progressively easier as effective project design becomes better understood, and as prepackaged infrastructural support for such projects becomes available on the Web.

 

 

No traditional case can prevent the proprietary control of ideas from widening the digital divide and excluding thousands of idea-rich students from policy debate. Now imagine if open-sourcing debate catches on - the back-files of big-budget schools and handbook companies will be open to the world at no cost, empowering currently marginalized students with greater possibilities for intellectual growth.

 

Ganesh Prasad. Software Design Specialist. 29 May 2001. ('Open Source-onomics: Examining some pseudo-economic arguments about Open Source'. FreeOS.com: The Resource Center for Free Operating Systems. http://www.freeos.com/articles/4087.)

 

To play in a market, you need to have money. That automatically excludes all the people who can't pay. It's a shame that in a world of over 6 billion people, about half are just bystanders watching the global marketplace in action. There are brains ticking away in that half-world of market outcasts that could contribute to making the world better in a myriad little ways that we fortunate few don't bother to think about. There are problems to be solved, living standards to be raised, yes, value to be created, and the "market" isn't doing it fast enough.

 

There are millions who have been waiting for generations for their lot to improve. Religion has promised them a better afterlife, but no God has seen fit to improve their present one. In a world where socialism has been humiliatingly defeated, governments seem ashamed to spend money on development. Everyone now seems to believe that governments must be self-effacingly small. The market is now the politically correct way to solve all problems. But the market, as we have seen, doesn't recognize the existence of those who have nothing to offer as suppliers and nothing to pay as consumers. They are invisible people.

 

Therefore it falls to the miserable to improve their lot themselves. Given the tools, they can raise themselves out of their situation. They will then enter the market, which will wholeheartedly welcome them (though it hadn't the foresight to help them enter it in the first place).

 

Where will such tools come from? In a world where intellectual property has such vociferous defenders that people must be forced to pay for software, information technology widens the gap between the haves and the have-nots, a phenomenon known as the digital divide. If producers of software deserve to be paid, then that means hundreds of thousands of people will never have access to that software. That's a fair market, but a lousy community.

 

Open Source is doing what God, government and market have failed to do. It is putting powerful technology within the reach of cash-poor but idea-rich people. Analysts could quibble about whether that is creating or merely releasing value, but we could do with a bit of either. And yes, that is revolutionary.

 

Is it possible to make money off Open Source? In the light of all that we have discussed, this now seems a rather petty and inconsequential question to ask. There is great wealth that will be created through Open Source in the coming months and years, and very little of that will have anything to do with money. A lot of it will have to do with people being empowered to help themselves and raise their living standards. No saint, statesman or scholar has ever done this for them, and certainly no merchant. If this increase in the overall size of the economic pie results in proportionately more wealth for all, then that's the grand answer to our petty question.

 

Economics is all about human achievement. It wasn't aliens from outer space who raised us from our caves to where we are today. It was the way we organized ourselves to create our wealth, rather like the donkey with a carrot dangling before it that pulls a cart a great distance. Open Source gives means to human aspiration. It breaks the artificial mercantilist limits of yesterday's software market and unleashes potentially limitless growth.

 

 

Treating information as a proprietary finite resource holds back efforts to increase minority participation in debate; it reinforces inequalities between well-off and not-so-well-off skools by creating dependency on costly knowledge-manufacturers. Building an Open Source debate community, on the other hand, amplifies the voices of everyone.

 

Danny Yee. Board member of Electronic Frontiers Australia. December 1999. ('Development, Ethical Trading and Free Software'. First Monday. Volume 4; Number 12. http://www.firstmonday.org/issues/issue4_12/yee/.)

 

"This is the context for intellectual property rights enforcement. This world market in knowledge is a major and profoundly anti-democratic new stage of capitalist development. The transformation of knowledge into property necessarily implies secrecy: common knowledge is no longer private. In this new and chilling stage, communication itself violates property rights. The WTO is transforming what was previously a universal resource of the human race - its collectively, historically and freely-developed knowledge of itself and nature - into a private and marketable force of production." - Allan Freeman, Fixing up the world? GATT and the World Trade Organisation

 

A good deal of the world's primary resources are located in the poorer countries of the world's "South", even if their exploitation is often in the hands of external corporations. Systems for controlling the distribution of information, on the other hand, are (like possession of capital) overwhelmingly centralised in the rich "North". This should be of great concern to organisations such as Oxfam International members which take a long-term perspective in their attempts to reduce the inequitable distribution of resources. {...}

 

Proprietary software increases the dependence of individuals, organisations, and communities on external forces - typically large corporations with poor track records on acting in the public interest. There are dependencies for support, installation and problem fixing, sometimes in critical systems. There are dependencies for upgrades and compatibility. There are dependencies when modification or extended functionality is required. And there are ongoing financial dependencies if licensing is recurrent.

 

Political dependencies can result from the use of proprietary software, too. For example, an Irish ISP under attack for hosting the top level East Timor domain .tp was helped by hackers and community activists in setting up a secure Linux installation. Given that this attack was probably carried out with the connivance of elements of the Indonesian government, it is hard to imagine a commercial vendor with a significant market presence in Indonesia being so forthcoming with support.

 

Nearly exact parallels to this exist in agriculture, where the patenting of seed varieties and genome sequences and the creation of non-seeding varieties are used to impose long-term dependencies on farmers.

 

An Analogy: Baby-milk Powder: The effects of baby-milk powder on poor infants (which has sparked a Nestle campaign/boycott) provide an analogy to the effects of proprietary software.

 

Sending information in Microsoft Word format to correspondents in Eritrea is analogous to Nestle advertising baby milk powder to Indian mothers. It encourages the recipients to go down a path which is not in their best interests, and from which it is not easy for them to recover. The apparent benefits (the doctor recommended it; we will be able to read the documents sent to us) may be considerable and the initial costs involved (to stop breast-feeding and switch to milk powder; to start using Microsoft Office) may be subsidised, hidden, or zero (with "piracy"), but the long-term effects are to make the recipients dependent on expensive recurrent inputs, and to burden them with ultimately very high costs.

 

Moreover, because documents can be easily copied and because there are strong pressures to conform to group/majority standards in document formats, pushing individuals towards proprietary software and document formats can snowball to affect entire communities, not just the individuals initially involved.

 

Proprietary software not only creates new dependencies: it actively hinders self-help, mutual aid, and community development.

 

Users cannot freely share software with others in the community, or with other communities.

 

The possibilities for building local support and maintenance systems are limited.

 

Modification of software to fit local needs is not possible, leaving communities with software designed to meet the needs of wealthy Northern users and companies, which may not be appropriate for them.

 

An Example: Language Support: Language support provides a good example of the advantages of free software in allowing people to adapt products to their own ends and take control of their lives. Operating systems and word processing software support only a limited range of languages. Iceland, in order to help preserve its language, wants Icelandic support added to Microsoft Windows - and is even willing to pay for it. But without access to the source code - and the right to modify it - they are totally dependent on Microsoft's cooperation. See, for example, an article in the Seattle Times and an article by Martin Vermeer which argues that lack of software localisation is a threat to cultural diversity.

 

Whatever the outcome of this particular case, it must be noted that Iceland is hardly a poor or uninfluential nation. There is absolutely no hope of Windows being modified to support Aymara or Lardil or other indigenous languages. The spread of such proprietary software will continue to contribute to their marginalisation.

 

In contrast, the source code to the GNU/Linux operating system is available and can be freely modified, so groups are able to add support for their languages. See, as an example, the KDE Internationalization Page - KDE is a desktop for GNU/Linux. Access to source code also allows experiments like the Omega Typesetting System, a modification of the TeX typesetting system "designed for printing all of the world's languages, modern or ancient, common or rare". This sort of extension or modification is simply not possible with proprietary word-processing packages.

 

Sustainable development should favour unlimited resources over finite ones. But while software appears to be a renewable resource, its control by profit-making corporations, as intellectual property, effectively turns it into a finite resource. {...}

 

Free software both encourages learning and experimentation and in turn benefits from it. Free software is widespread in educational institutions, since access to the source code makes free software an ideal tool for teaching; indeed much free software began as learning exercises.

 

Due to low start-up costs and rapid change, software development and the information economy more generally offer a possible way for the South to build high value industries, leapfrogging older technologies and even modes of production. The flourishing Indian software industry provides an obvious example. But if these industries are built on proprietary products and protocols owned by multinational corporations, then this will only reinforce one-sided dependencies. Free software has obvious advantages here.

 

Free software lends itself to collaborative, community-based development at all scales from cottage industry to world-wide efforts involving the collaboration of thousands of people. Internet access potentially offers the poor the ability to communicate directly with the rest of the world, to directly present their own ideas and perspectives. Combined with the free software development model, it allows them to participate in creating and molding the technologies and systems that will determine their future.

 

 

Creating an innovative commons serves the core pedagogical mission of debate. Online textbooks prove that scholars from across the globe can collaborate to review and update informational resources, free of charge. This empowers educators and provides fresh learning opportunities for students. Since the more skools participate the better, use your ballot both to encourage active contribution and to set a precedent for an educational commons.

 

Gary Hepburn. Assistant Professor at Acadia University's School of Education. August 2004. ('Seeking an educational commons: The promise of open source development models'. First Monday. http://www.firstmonday.org/issues/issue9_8/hepburn/.)

 

Most of us have at least a passing familiarity with the concept of a commons. According to David Bollier (2003), the term refers to "a wide array of creations of nature and society that we inherit freely, share and hold in trust for future generations." Well–known examples of commons that exist or have existed include grazing land, the Internet, fresh water supplies, and roadways. Lawrence Lessig (2001) pushes the concept of a commons further in his book, The Future of Ideas, as he describes the role of an innovative commons in society:

 

"They create the opportunity for individuals to draw upon resources without connections, permission, or access granted by others. They are environments that commit themselves to being open. Individuals and corporations draw upon the value created by this openness. They transform that value into other value, which they then consume privately."[1]

 

The fact that society has always used the value of that which we hold in common to build greater value allows us to see an important reason why maintaining common resources is good for all. Even private enterprises benefit from the fact that we hold some resources in common. To appreciate this point, all we need to do is consider the value of roadways to individual and commercial activities. Recognizing the importance of common resources is not anti–private or anti–commercial. Providing some common resources and seeking a reasonable balance between that which is privately owned and that which is held in common benefits society.

 

Public institutions, such as schools, can be thought of as a type of cultural commons (Bollier, 2001, 2002; Reid, 2003). Societies around the world recognize the importance of providing education for all and have made substantial investments to do so. Thought of as a commons, schools ideally ought to be able to provide the resources needed to support optimal learning experiences for students. Our societal investment in education is an attempt to enable this, but we often encounter limitations as providing education is complicated and costly. In reality, schools have trouble living up to the ideal of an educational commons. Clearly, schools do not meet some of the criteria Lessig described above for an innovative commons to exist. There are many cases in which schools are not able "to draw upon resources without connections, permission, or access granted by others" [2].

 

Assuming we want to establish an educational commons that supports innovation, we need to reconsider some of the conditions under which education is conducted. Exploring the concept of an educational commons can bring about a fresh perspective, revealing current blind spots as well as future strategies that may lead us closer to an educational commons. Recent technological developments and, in particular, the Internet have provided some ways in which we can draw upon common resources to aid us in our educational activities. Before I explore these developments further, I will briefly discuss the principal threat to our ability to realize an educational commons. {...}

 

There are many other types of open source projects emerging in addition to those aimed at software development (Stalder and Hirsh, 2002) that can benefit schools. Internet–based collaborative technologies are being used to develop online, text–based materials that are intended for educational purposes. Such projects allow subject experts from around the world to work together to produce materials that are freely available to download, modify, print and distribute. Like software projects, these content development projects are noted for their rigorous review process and ability to be quickly updated as the need arises.

 

Many examples of text–based, content development projects are emerging. There are initiatives underway to develop online textbooks that can be used in subject areas commonly taught in schools. Wikibooks is a project "dedicated to developing and disseminating free, open content textbooks and other classroom texts." It currently hosts over 50 textbooks in varying stages of development. A similar project that is at earlier stages of development is the Open Textbook Project. It has the goal of developing "openly copyrighted (copylefted) textbooks using the free software development model." In addition to textbooks, an encyclopedia development project has proven very successful. Wikipedia has recently surpassed the Internet traffic received by the online version of Encyclopaedia Britannica.

 

Schools, in particular, can benefit from these projects as they get the chance to obtain high quality text–based resources, free of cost or usage restrictions. Unlike open source software projects that may prove technically challenging to educators who wish to participate in development, textbook and encyclopedia projects are closely aligned with the expertise of educators. Once educators become aware of these projects as users and contributors, a resource of immense value will be available to schools to be used as they see fit. Open source models can become a revolutionary source of innovation and opportunity for schools.

 

Returning to the notion of schools as a commons, open source development has the potential to place resources in the hands of educators and students that can be used in ways that best support educational processes. One of the main advantages of using the products of open source development is that schools are able to avoid market enclosure. Commercial products are no longer an obligatory passage point (Callon, 1986; Latour, 1987) in obtaining many resources that are required in education. By eliminating the expense and constraints that accompany commercial products, educators and students gain greater control over the ways in which education is conducted. Open source products can be used by anyone, at any time, in most any way they choose. The money that is no longer required for commercial products that have been replaced by open source products can be used to support other areas of need within the school.

 

Interestingly, an important advantage of schools using open source resources appears to be a reversal of one of the problems that has confronted traditional commons. One of the fundamental problems with most commons is overuse of the resources. Indeed, this concern is the basis of Hardin's (1968) well–known essay, "The Tragedy of the Commons." As more consumers of the resources provided by a particular commons take advantage of it, the resource can become depleted. In order to preserve the resource in a traditional commons, some sort of management strategy needs to be put in place. In contrast to traditional commons, open source projects can actually benefit from increased numbers of users. Software and Web sites are not depleted by those who copy or view the resources. Indeed, users can become co–developers as they provide feedback, suggestions, and improvements (Raymond, 1998). As Raymond (2000) points out, "widespread use of open-source software tends to increase its value ... In this inverse commons, the grass grows taller when it's grazed upon."

 

As schools begin to use open source products they will move closer to the ideal of a commons, while solving many problems that have confronted them in the past. As more schools move in this direction, the value and quality of the resources are likely to increase rather than be depleted. There are, however, several challenges that must be considered in order to begin taking advantage of open source products in a productive way.

 

Beginning to use open source products requires educators to revisit some of their basic assumptions about the types of resources we use in schools and from where those resources should come. I am assuming that few educators would object to the concept of an educational commons, but many may have some anxiety about giving up many of the commercial products with which they have become comfortable. Commercial products are often useful and of high quality, but using them in cases where open source alternatives exist tends to lead to many of the problems I have been discussing in this article. Knowing this, educators need to become familiar with open source resources and explore their appropriateness for teaching and learning. If the resources are found to be appropriate, they should be used in place of commercial resources. In the case of software, for example, I would challenge educators to explain why OpenOffice could not replace the commercial office suites that are currently used on most school computers. Unless there is an excellent reason, the open source software should be used due to its overall suitability, low cost, and better alignment with educational values.

 

The sort of mindset that would move education toward greater use of open source resources is not currently in place. Most educators are not outraged by the corporate intrusion in the educational commons. We have a long history of such intrusions, although they seem to have intensified in recent times. Educators have become resigned to the necessity of some corporate involvement in education. From this perspective, it may appear more extreme to consider making use of open source resources than to continue using commercial ones. The ideal of an educational commons may serve to highlight that which is being lost as we hand more control over the educational enterprise to corporate interests. Becoming involved with open source resources offers more than just a way to cut costs: it contributes to returning the control of education back to the educators. The new mindset that will take education in the direction of leveraging open source development to support a commons is one that will come about partly as a result of educating educators and partly as an educational policy direction.

 

A second challenge in implementing open source resources lies in educators taking on roles in open source development processes. To have high quality resources that meet educational needs, it is important that educators be willing to participate in the development of various products. It is not uncommon for educators to give feedback to producers of commercial products, particularly when opinions are solicited, but they must be more proactive about participating in open source projects. These projects do not typically have resources to solicit extensive feedback and contributions. Educators must understand the nature of open source development and seek ways to become involved. The development of software and other types of educational resources requires a wide variety of contributions and competencies. Becoming an active contributor to projects will help ensure that a broad array of educationally appropriate resources is produced. The ultimate beneficiaries of such involvement will be students and schools.

 

The vision of an educational commons characterized by easily available resources that are flexible, affordable, and high quality is an appealing one. Further, reducing corporate intrusion into education at the resource level is desirable. By providing the medium that enables collaborative, open source projects to thrive, the Internet is emerging as a key technological innovation that will allow schools to overcome some significant challenges. Already, resources are available that can be used in schools immediately. Others are under active development and will soon be ready for mainstream use. Perhaps most exciting are those that have not been developed yet. As educators learn about open source development models and re–consider some long held assumptions about how educational resources are produced, they can leverage open source processes to take control of meeting educational needs. In addition to producing substitutes for commercial resources, educators are likely to begin producing resources that are new and innovative. Education can quickly move toward the ideal of a commons and, perhaps more importantly, embrace the ideal of fostering a true innovative commons.

 

 

An information commons fosters democratic empowerment for 6 reasons: it produces better policies, draws on localized decision-making, values diversity, democratizes resources, creates social trust, and re-invigorates markets.

 

David Bollier. Cofounder of Public Knowledge. Spring 2004. ('Why We Must Talk about the Information Commons'. Law Library Journal. p275: 36-42. http://www.aallnet.org/products/2004-17.pdf.)

 

As a concept, the commons has much to commend to any democratic assessment of our nation's media and information infrastructure because it emphasizes values that market discourse largely ignores. Just as economic analyses tend to focus on efficiency, productivity, and profitability (among other economic and market indices), students of the commons tend to focus on a range of social, civic, and humanistic concerns. These include:

 

Openness and feedback. As scholars of common-pool resources have shown, people living under a successful commons regime tend to know what is going on. When there is open feedback and a sharing of ideas, the community is more likely to discover flaws, debate different options, and choose the best policies. Such transparency lies at the root of science, the democratic process (hence the First Amendment), and free software and open source software development.

 

Shared decisionmaking. A commons is flexible yet hardy precisely because it draws intelligence from everyone in a bottom-up flow. This means that rules are smarter because they reflect knowledge about highly specific, local realities. By contrast, centralized power tends to have less democratic accountability and to be less responsive to conditions that are local and particular.

 

Diversity within the commons. Diversity combined with openness can yield phenomenal creativity and innovation. This is the story of the United States (E pluribus unum), the Internet, the free software movement, and the evolution of species. The greater the diversity in a democratic polity, cyberspace, a programming community, or a gene pool, the more likely it is that better, more adaptive innovations will materialize and prevail.

 

Social equity within the commons. While a commons need not be a system of strict egalitarianism, it is predisposed to honor a rough social equity and legal equality among its members. A key goal of commons management is to democratize social benefits that can otherwise be obtained only through private purchase. The free market, of course, has little interest in social equity.

 

Sociability in the commons. In gift economies, such as an online community or a professional discipline, transactions take on a more personal, social dimension. This can be tremendously powerful in creating certain kinds of wealth (e.g., the Linux operating system, genealogical databases) while fostering social connections among people.

 

Having sketched the contrasting field of vision that a commons analysis provides, it bears emphasizing that the commons is not necessarily hostile to the market. We need both. The point is that there must be an appropriate equilibrium between the two. They must be separated by a semi-permeable barrier that allows both to retain their essential integrity while invigorating each other.

 

 

In debate, there's little space for citizen deliberation, and extremist rhetoric abounds. Yet blogs prove that the web can serve as an efficient, multimedia tool for promoting public discourse. Not only will writing arguments online qualify as an essential skill for future decision-makers, but debaters today can use the net to distill their own ideas through internet peer review and link to a broader range of sources, as well as criticize politicians and raise awareness. Open-source software is the best example of this collaborative approach, helping 21st-century ecologies for education grow. Every new website offers a tiny chance to increase learning and improve democracy, realistically outweighing the most gigantic of impacts which only occur on the flow.

 

Lawrence Lessig. Law Professor at Stanford Law School. 2004. (Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. p40-5. http://libreria.sourceforge.net/library/Free_Culture/CHAPTER02.html.)

 

When two planes crashed into the World Trade Center, another into the Pentagon, and a fourth into a Pennsylvania field, all media around the world shifted to this news. Every moment of just about every day for that week, and for weeks after, television in particular, and media generally, retold the story of the events we had just witnessed. The telling was a retelling, because we had seen the events that we described. The genius of this awful act of terrorism was that the delayed second attack was perfectly timed to assure that the whole world would be watching.

 

These retellings had an increasingly familiar feel. There was music scored for the intermissions, and fancy graphics that flashed across the screen. There was a formula to interviews. There was "balance," and seriousness. This was news choreographed in the way we have increasingly come to expect it, "news as entertainment," even if the entertainment is tragedy.

 

But in addition to this produced news about the "tragedy of September 11," those of us tied to the Internet came to see a very different production as well. The Internet was filled with accounts of the same events. Yet these Internet accounts had a very different flavor. Some people constructed photo pages that captured images from around the world and presented them as slide shows with text. Some offered open letters. There were sound recordings. There was anger and frustration. There were attempts to provide context. There was, in short, an extraordinary worldwide barn raising, in the sense Mike Godwin uses the term in his book Cyber Rights, around a news event that had captured the attention of the world. There was ABC and CBS, but there was also the Internet.

 

I don't mean simply to praise the Internet - though I do think the people who supported this form of speech should be praised. I mean instead to point to a significance in this form of speech. For like Kodak, the Internet enables people to capture images. And like in a movie by a student on the "Just Think!" bus, the visual images could be mixed with sound or text.

 

But unlike any technology for simply capturing images, the Internet allows these creations to be shared with an extraordinary number of people, practically instantaneously. This is something new in our tradition - not just that culture can be captured mechanically, and obviously not just that events are commented upon critically, but that this mix of captured images, sound, and commentary can be widely spread practically instantaneously.

 

September 11 was not an aberration. It was a beginning. Around the same time, a form of communication that has grown dramatically was just beginning to come into public consciousness: the Web-log, or blog. The blog is a kind of public diary, and within some cultures, such as in Japan, it functions very much like a diary. In those cultures, it records private facts in a public way - it's a kind of electronic Jerry Springer, available anywhere in the world.

 

But in the United States, blogs have taken on a very different character. There are some who use the space simply to talk about their private life. But there are many who use the space to engage in public discourse. Discussing matters of public import, criticizing others who are mistaken in their views, criticizing politicians about the decisions they make, offering solutions to problems we all see: blogs create the sense of a virtual public meeting, but one in which we don't all hope to be there at the same time and in which conversations are not necessarily linked. The best of the blog entries are relatively short; they point directly to words used by others, criticizing with or adding to them. They are arguably the most important form of unchoreographed public discourse that we have.

 

That's a strong statement. Yet it says as much about our democracy as it does about blogs. This is the part of America that is most difficult for those of us who love America to accept: Our democracy has atrophied. Of course we have elections, and most of the time the courts allow those elections to count. A relatively small number of people vote in those elections. The cycle of these elections has become totally professionalized and routinized. Most of us think this is democracy.

 

But democracy has never just been about elections. Democracy means rule by the people, but rule means something more than mere elections. In our tradition, it also means control through reasoned discourse. This was the idea that captured the imagination of Alexis de Tocqueville, the nineteenth-century French lawyer who wrote the most important account of early "Democracy in America." It wasn't popular elections that fascinated him - it was the jury, an institution that gave ordinary people the right to choose life or death for other citizens. And most fascinating for him was that the jury didn't just vote about the outcome they would impose. They deliberated. Members argued about the "right" result; they tried to persuade each other of the "right" result, and in criminal cases at least, they had to agree upon a unanimous result for the process to come to an end. [15]

 

Yet even this institution flags in American life today. And in its place, there is no systematic effort to enable citizen deliberation. Some are pushing to create just such an institution. [16] And in some towns in New England, something close to deliberation remains. But for most of us for most of the time, there is no time or place for "democratic deliberation" to occur.

 

More bizarrely, there is generally not even permission for it to occur. We, the most powerful democracy in the world, have developed a strong norm against talking about politics. It's fine to talk about politics with people you agree with. But it is rude to argue about politics with people you disagree with. Political discourse becomes isolated, and isolated discourse becomes more extreme. [17] We say what our friends want to hear, and hear very little beyond what our friends say.

 

Enter the blog. The blog's very architecture solves one part of this problem. People post when they want to post, and people read when they want to read. The most difficult time is synchronous time. Technologies that enable asynchronous communication, such as e-mail, increase the opportunity for communication. Blogs allow for public discourse without the public ever needing to gather in a single public place.

 

But beyond architecture, blogs also have solved the problem of norms. There's no norm (yet) in blog space not to talk about politics. Indeed, the space is filled with political speech, on both the right and the left. Some of the most popular sites are conservative or libertarian, but there are many of all political stripes. And even blogs that are not political cover political issues when the occasion merits.

 

The significance of these blogs is tiny now, though not so tiny. The name Howard Dean may well have faded from the 2004 presidential race but for blogs. Yet even if the number of readers is small, the reading is having an effect.

 

One direct effect is on stories that had a different life cycle in the mainstream media. The Trent Lott affair is an example. When Lott "misspoke" at a party for Senator Strom Thurmond, essentially praising Thurmond's segregationist policies, he calculated correctly that this story would disappear from the mainstream press within forty-eight hours. It did. But he didn't calculate its life cycle in blog space. The bloggers kept researching the story. Over time, more and more instances of the same "misspeaking" emerged. Finally, the story broke back into the mainstream press. In the end, Lott was forced to resign as Senate majority leader. [18]

 

This different cycle is possible because the same commercial pressures don't exist with blogs as with other ventures. Television and newspapers are commercial entities. They must work to keep attention. If they lose readers, they lose revenue. Like sharks, they must move on.

 

But bloggers don't have a similar constraint. They can obsess, they can focus, they can get serious. If a particular blogger writes a particularly interesting story, more and more people link to that story. And as the number of links to a particular story increases, it rises in the ranks of stories. People read what is popular; what is popular has been selected by a very democratic process of peer-generated rankings.
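
[A minimal sketch of the "democratic process of peer-generated rankings" Lessig describes - our illustration, not Lessig's, with invented blog and story names: a story's rank is simply how many other blogs link to it.]

from collections import Counter

# Hypothetical map of each blog to the stories it links to.
links = {
    "blog_a": ["lott-remarks", "dean-campaign"],
    "blog_b": ["lott-remarks"],
    "blog_c": ["lott-remarks", "columbia-accounts"],
}

# Count inbound links per story; more links means a higher rank.
ranking = Counter(story for targets in links.values() for story in targets)

for story, inbound in ranking.most_common():
    print(story, inbound)
# lott-remarks 3
# dean-campaign 1
# columbia-accounts 1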

 

There's a second way, as well, in which blogs have a different cycle from the mainstream press. As Dave Winer, one of the fathers of this movement and a software author for many decades, told me, another difference is the absence of a financial "conflict of interest." "I think you have to take the conflict of interest" out of journalism, Winer told me. "An amateur journalist simply doesn't have a conflict of interest, or the conflict of interest is so easily disclosed that you know you can sort of get it out of the way."

 

These conflicts become more important as media becomes more concentrated (more on this below). A concentrated media can hide more from the public than an unconcentrated media can - as CNN admitted it did after the Iraq war because it was afraid of the consequences to its own employees. [19] It also needs to sustain a more coherent account. (In the middle of the Iraq war, I read a post on the Internet from someone who was at that time listening to a satellite uplink with a reporter in Iraq. The New York headquarters was telling the reporter over and over that her account of the war was too bleak: She needed to offer a more optimistic story. When she told New York that wasn't warranted, they told her that they were writing "the story.")

 

Blog space gives amateurs a way to enter the debate - "amateur" not in the sense of inexperienced, but in the sense of an Olympic athlete, meaning not paid by anyone to give their reports. It allows for a much broader range of input into a story, as reporting on the Columbia disaster revealed, when hundreds from across the southwest United States turned to the Internet to retell what they had seen. [20] And it drives readers to read across the range of accounts and "triangulate," as Winer puts it, the truth. Blogs, Winer says, are "communicating directly with our constituency, and the middle man is out of it" - with all the benefits, and costs, that might entail.

 

Winer is optimistic about the future of journalism infected with blogs. "It's going to become an essential skill," Winer predicts, for public figures and increasingly for private figures as well. It's not clear that "journalism" is happy about this - some journalists have been told to curtail their blogging. [21] But it is clear that we are still in transition. "A lot of what we are doing now is warm-up exercises," Winer told me. There is a lot that must mature before this space has its mature effect. And as the inclusion of content in this space is the least infringing use of the Internet (meaning infringing on copyright), Winer said, "we will be the last thing that gets shut down."

 

This speech affects democracy. Winer thinks that happens because "you don't have to work for somebody who controls, [for] a gate-keeper." That is true. But it affects democracy in another way as well. As more and more citizens express what they think, and defend it in writing, that will change the way people understand public issues. It is easy to be wrong and misguided in your head. It is harder when the product of your mind can be criticized by others. Of course, it is a rare human who admits that he has been persuaded that he is wrong. But it is even rarer for a human to ignore when he has been proven wrong. The writing of ideas, arguments, and criticism improves democracy. Today there are probably a couple million blogs where such writing happens. When there are ten million, there will be something extraordinary to report.

 

John Seely Brown is the chief scientist of the Xerox Corporation. His work, as his Web site describes it, is "human learning and ... the creation of knowledge ecologies for creating ... innovation."

 

Brown thus looks at these technologies of digital creativity a bit differently from the perspectives I've sketched so far. I'm sure he would be excited about any technology that might improve democracy. But his real excitement comes from how these technologies affect learning.

 

As Brown believes, we learn by tinkering. When "a lot of us grew up," he explains, that tinkering was done "on motorcycle engines, lawn-mower engines, automobiles, radios, and so on." But digital technologies enable a different kind of tinkering - with abstract ideas though in a concrete form. The kids of Just Think! not only think about how a commercial portrays a politician; using digital technology, they can take the commercial apart and manipulate it, tinker with it to see how it does what it does. Digital technologies launch a kind of bricolage, or "free collage," as Brown calls it. Many get to add to or transform the tinkering of many others.

 

The best large-scale example of this kind of tinkering so far is free software or open-source software (FS/OSS). FS/OSS is software whose source code is shared. Anyone can download the technology that makes a FS/OSS program run. And anyone eager to learn how a particular bit of FS/OSS technology works can tinker with the code.

 

This opportunity creates a "completely new kind of learning platform," as Brown describes. "As soon as you start doing that, you ... unleash a free collage on the community, so that other people can start looking at your code, tinkering with it, trying it out, seeing if they can improve it." Each effort is a kind of apprenticeship. "Open source becomes a major apprenticeship platform."

 

In this process, "the concrete things you tinker with are abstract. They are code." Kids are "shifting to the ability to tinker in the abstract, and this tinkering is no longer an isolated activity that you're doing in your garage. You are tinkering with a community platform. ... You are tinkering with other people's stuff. The more you tinker the more you improve." The more you improve, the more you learn.

 

This same thing happens with content, too. And it happens in the same collaborative way when that content is part of the Web. As Brown puts it, "the Web [is] the first medium that truly honors multiple forms of intelligence." Earlier technologies, such as the typewriter or word processors, helped amplify text. But the Web amplifies much more than text. "The Web ... says if you are musical, if you are artistic, if you are visual, if you are interested in film ... [then] there is a lot you can start to do on this medium. [It] can now amplify and honor these multiple forms of intelligence."

 

Brown is talking about what Elizabeth Daley, Stephanie Barish, and Just Think! teach: that this tinkering with culture teaches as well as creates. It develops talents differently, and it builds a different kind of recognition.

 

Yet the freedom to tinker with these objects is not guaranteed. Indeed, as we'll see through the course of this book, that freedom is increasingly highly contested. While there's no doubt that your father had the right to tinker with the car engine, there's great doubt that your child will have the right to tinker with the images she finds all around. The law and, increasingly, technology interfere with a freedom that technology, and curiosity, would otherwise ensure.

 

These restrictions have become the focus of researchers and scholars. Professor Ed Felten of Princeton (whom we'll see more of in chapter 10) has developed a powerful argument in favor of the "right to tinker" as it applies to computer science and to knowledge in general. [22] But Brown's concern is earlier, or younger, or more fundamental. It is about the learning that kids can do, or can't do, because of the law.

 

"This is where education in the twenty-first century is going," Brown explains. We need to "understand how kids who grow up digital think and want to learn."

 

"Yet, as Brown continued, and as the balance of this book will evince, "we are building a legal system that completely suppresses the natural tendencies of today's digital kids. ... We're building an architecture that unleashes 60 percent of the brain [and] a legal system that closes down that part of the brain."

 

We're building a technology that takes the magic of Kodak, mixes moving images and sound, and adds a space for commentary and an opportunity to spread that creativity everywhere. But we're building the law to close down that technology.

 

"No way to run a culture," as Brewster Kahle, whom we'll meet in chapter 9, quipped to me in a rare moment of despondence.

 

 

 

CONTENTION THREE: CREATIVE COMMONS PUBLIC LICENSING CONSTITUTES AN ETHICAL IMPERATIVE, ENDORSING A FREER DEBATE CULTURE FOR BOTH COOPERATIVE INNOVATION AND COMPETITIVE EXHILARATION.

 

 

Our opponents' briefs are covered under traditional copyright and are not available on the web. The example of Linux software proves that Internet access to everyone's work and protection under public licenses are two indispensable components of distributed networks. Formal incentive systems, such as ballots, must discourage hoarding and persistently remind debaters to share; otherwise, collaborative projects fail. Your ballot sets down the rules of the road.

 

Jae Yun Moon. Doctoral candidate in Information Systems at New York University. & Lee Sproull. Stern School Professor of Business at NYU. 2000. ('Essence of Distributed Work: The Case of the Linux Kernel'. First Monday. Volume 5; Number 11. http://www.firstmonday.org/issues/issue5_11/moon/index.html.)

 

Others have written about lessons from Linux for commercial software development projects (e.g., Raymond, 1999). Here we consider how factors important in the Linux case might apply more generally to distributed work in and across organizations (also see Markus, Manville and Agres, 2000). It might seem odd to derive lessons for formal organizations from a self-organizing volunteer activity. After all, the employment contract should ensure that people will fulfill their role obligations and act in the best interest of the organization. Yet, particularly in distributed work, employees must go beyond the letter of their job description to exhibit qualities found in the Linux developers: initiative, persistence, activism. We suggest that the enabling conditions for Linux (the Internet and open source) usefully support these qualities. We then consider how factors emphasized in each of the three versions of the Linux story (great man and task structure, incentives for contributors, and communities of practice) can facilitate organizational distributed work.

 

Clearly easy access to the Internet or its equivalent is a necessary precondition for the kind of distributed work represented by Linux. Developers used the Internet both for easy access to work products (to upload and download files) and for easy communication with other developers (to ask and answer questions, have discussions, and share community lore). Both capabilities are surely important. And they are simple. It is noteworthy that, despite the technical prowess of Linux developers, they relied upon only the simplest and oldest of Internet tools: file transfer, e-mail distribution lists, and Usenet discussion groups. Even with today's wider variety of more sophisticated Web-based tools, Linux developers continue to rely on these tools for coordinating their efforts. These tools are simple; they are available worldwide; they are reliable.

 

The organizational equivalent of copyleft is a second precondition for the kind of distributed work represented by Linux. Both the formal and informal reward and incentive systems must reward sharing and discourage hoarding (See Constant, Kiesler and Sproull, 1996, and Orlikowski, 1992, for discussions of incentives for information sharing in organizations). Moreover work products should be transparently accessible so that anyone can use and build upon good features and anyone can find and fix problems. We do not underestimate the difficulty of creating the equivalent of copyleft for organizational work products. Failing to do so, however, can hobble distributed work. {...}

 

Finally, Linux developers were members of and supported by vigorous electronic communities of practice. Creating and sustaining such communities can importantly contribute to distributed work. Electronic communities require both (simple) computer tools and social tools. We discussed computer tools under enabling conditions, above. The social tools include differentiated roles and norms. It is not enough to enable electronic communication among people working on a distributed project. In a project of any size people must understand and take on differentiated electronic roles. These roles, with their corresponding obligations and responsibilities, should be explicitly designated and understood by all. Indeed, one category of community norms is the expectations associated with role behaviors. More generally, norms are the "rules of the road" for the particular electronic community. Because distributed projects cannot rely upon the tacit reinforcements that occur in face-to-face communications, persistent explicit reminders of norms are necessary in the electronic context (See Sproull and Patterson, 2000 for more on this topic).

 

 

Before 1989, in order to copyright a work, you had to register with the copyright office and display the circle-C symbol (©). Today, however, unless you specifically designate that a work resides in the public domain, everything from a grocery list to a debate file is automatically copyrighted. 'All rights reserved' protections are applied to all the work you do in debate, with or without your consent. 'Fair use' and 'public domain' are increasingly limited and vulnerable to reappropriation. The Creative Commons copyright offers the most reasonable and practical alternative - 'share and share alike' - which ensures the freedom to innovate with the work of others. The computer-readable tags allow debaters to search for debate-related content specifically, while the human-readable tags are a gesture of solidarity to the movement.

 

Lawrence Lessig. Law Professor at Stanford Law School. 2004. (Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. p282-6. http://libreria.sourceforge.net/library/Free_Culture/USNOW.html.)

 

The Creative Commons is a nonprofit corporation established in Massachusetts, but with its home at Stanford University. Its aim is to build a layer of reasonable copyright on top of the extremes that now reign. It does this by making it easy for people to build upon other people's work, by making it simple for creators to express the freedom for others to take and build upon their work. Simple tags, tied to human-readable descriptions, tied to bullet-proof licenses, make this possible.

 

Simple—which means without a middleman, or without a lawyer. By developing a free set of licenses that people can attach to their content, Creative Commons aims to mark a range of content that can easily, and reliably, be built upon. These tags are then linked to machine-readable versions of the license that enable computers automatically to identify content that can easily be shared. These three expressions together—a legal license, a human-readable description, and machine-readable tags—constitute a Creative Commons license. A Creative Commons license constitutes a grant of freedom to anyone who accesses the license, and more importantly, an expression of the ideal that the person associated with the license believes in something different than the "All" or "No" extremes. Content is marked with the CC mark, which does not mean that copyright is waived, but that certain freedoms are given.
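
[To make the machine-readable layer concrete: CC marks licensed pages with ordinary markup - a link to the license carrying rel="license" - that software can detect. Below is a minimal sketch of how a crawler might find such tags; it is our illustration, not CC's official tooling, and the sample page is invented.]

from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect the href of every <a rel="license"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("rel") == "license":
            self.licenses.append(attrs.get("href"))

# An invented sample page, marked up the way CC suggests.
sample_page = '''
<p>This debate file is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by-sa/2.5/">
Creative Commons Attribution-ShareAlike 2.5 License</a>.</p>
'''

finder = LicenseFinder()
finder.feed(sample_page)
print(finder.licenses)
# ['http://creativecommons.org/licenses/by-sa/2.5/']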

 

These freedoms are beyond the freedoms promised by fair use. Their precise contours depend upon the choices the creator makes. The creator can choose a license that permits any use, so long as attribution is given. She can choose a license that permits only noncommercial use. She can choose a license that permits any use so long as the same freedoms are given to other uses ("share and share alike"). Or any use so long as no derivative use is made. Or any use at all within developing nations. Or any sampling use, so long as full copies are not made. Or lastly, any educational use.

 

These choices thus establish a range of freedoms beyond the default of copyright law. They also enable freedoms that go beyond traditional fair use. And most importantly, they express these freedoms in a way that subsequent users can use and rely upon without the need to hire a lawyer. Creative Commons thus aims to build a layer of content, governed by a layer of reasonable copyright law, that others can build upon. Voluntary choice of individuals and creators will make this content available. And that content will in turn enable us to rebuild a public domain.

 

This is just one project among many within the Creative Commons. And of course, Creative Commons is not the only organization pursuing such freedoms. But the point that distinguishes the Creative Commons from many is that we are not interested only in talking about a public domain or in getting legislators to help build a public domain. Our aim is to build a movement of consumers and producers of content ("content conducers," as attorney Mia Garlick calls them) who help build the public domain and, by their work, demonstrate the importance of the public domain to other creativity.

 

The aim is not to fight the "All Rights Reserved" sorts. The aim is to complement them. The problems that the law creates for us as a culture are produced by insane and unintended consequences of laws written centuries ago, applied to a technology that only Jefferson could have imagined. The rules may well have made sense against a background of technologies from centuries ago, but they do not make sense against the background of digital technologies. New rules—with different freedoms, expressed in ways so that humans without lawyers can use them—are needed. Creative Commons gives people a way effectively to begin to build those rules.

 

Why would creators participate in giving up total control? Some participate to better spread their content. Cory Doctorow, for example, is a science fiction author. His first novel, Down and Out in the Magic Kingdom, was released on-line and for free, under a Creative Commons license, on the same day that it went on sale in bookstores.

 

Why would a publisher ever agree to this? I suspect his publisher reasoned like this: There are two groups of people out there: (1) those who will buy Cory's book whether or not it's on the Internet, and (2) those who may never hear of Cory's book, if it isn't made available for free on the Internet. Some part of (1) will download Cory's book instead of buying it. Call them bad-(1)s. Some part of (2) will download Cory's book, like it, and then decide to buy it. Call them (2)-goods. If there are more (2)-goods than bad-(1)s, the strategy of releasing Cory's book free on-line will probably increase sales of Cory's book.
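
[The publisher's bet reduces to simple arithmetic: releasing the book free online pays off whenever the (2)-goods outnumber the bad-(1)s. A toy sketch, with all figures invented for illustration:]

def net_sales_change(bad_ones, two_goods):
    """Free release pays off when new buyers ("(2)-goods")
    outnumber lost sales ("bad-(1)s")."""
    return two_goods - bad_ones

# Invented figures: 300 would-be buyers download instead of buying,
# while 1,000 readers discover the book for free and then buy it.
print(net_sales_change(bad_ones=300, two_goods=1000))  # 700 net extra sales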

 

Indeed, the experience of his publisher clearly supports that conclusion. The book's first printing was exhausted months before the publisher had expected. This first novel of a science fiction author was a total success.

 

The idea that free content might increase the value of nonfree content was confirmed by the experience of another author. Peter Wayner, who wrote a book about the free software movement titled Free for All, made an electronic version of his book free on-line under a Creative Commons license after the book went out of print. He then monitored used book store prices for the book. As predicted, as the number of downloads increased, the used book price for his book increased, as well.

 

These are examples of using the Commons to better spread proprietary content. I believe that is a wonderful and common use of the Commons. There are others who use Creative Commons licenses for other reasons. Many who use the "sampling license" do so because anything else would be hypocritical. The sampling license says that others are free, for commercial or noncommercial purposes, to sample content from the licensed work; they are just not free to make full copies of the licensed work available to others. This is consistent with their own art—they, too, sample from others. Because the legal costs of sampling are so high (Walter Leaphart, manager of the rap group Public Enemy, which was born sampling the music of others, has stated that he does not "allow" Public Enemy to sample anymore, because the legal costs are so high [2]), these artists release into the creative environment content that others can build upon, so that their form of creativity might grow.

 

Finally, there are many who mark their content with a Creative Commons license just because they want to express to others the importance of balance in this debate. If you just go along with the system as it is, you are effectively saying you believe in the "All Rights Reserved" model. Good for you, but many do not. Many believe that however appropriate that rule is for Hollywood and freaks, it is not an appropriate description of how most creators view the rights associated with their content. The Creative Commons license expresses this notion of "Some Rights Reserved," and gives many the chance to say it to others.

 

In the first six months of the Creative Commons experiment, over 1 million objects were licensed with these free-culture licenses. The next step is partnerships with middleware content providers to help them build into their technologies simple ways for users to mark their content with Creative Commons freedoms. Then the next step is to watch and celebrate creators who build content based upon content set free.

 

These are first steps to rebuilding a public domain. They are not mere arguments; they are action. Building a public domain is the first step to showing people how important that domain is to creativity and innovation. Creative Commons relies upon voluntary steps to achieve this rebuilding. They will lead to a world in which more than voluntary steps are possible.

 

Creative Commons is just one example of voluntary efforts by individuals and creators to change the mix of rights that now govern the creative field. The project does not compete with copyright; it complements it. Its aim is not to defeat the rights of authors, but to make it easier for authors and creators to exercise their rights more flexibly and cheaply. That difference, we believe, will enable creativity to spread more easily.

 

 

Living up to democratic ideals is about more than just protecting one's right to free speech - it's about actively creating free culture and new spaces for mutual understanding. This is an ethical imperative vital to the success of our republic; now it's time to get to work.

 

Cass Sunstein. Professor of Jurisprudence at the University of Chicago Law School and Department of Political Science. 2001. (Republic.com. Afterword. Page 212.)

 

At this point in our history, most industrialized nations are blessed to have little reason to fear tyranny; and in many areas, such nations need more markets, and freer ones, too. But in the domain of communications, the current danger is that amidst all the celebration of freedom of choice, we will lose sight of the requirements of a system of self-government. From the standpoint of democracy, the Internet is far more good than bad. In most ways, things are better, not worse. Nostalgia and pessimism are truly senseless. But it is not senseless to suggest that in thinking about new communications technologies, we should keep democratic ideals in view. The notion of "consumer sovereignty," suitable though it is for market contexts, should not be the only basis on which we evaluate a system of communications. If we emphasize democratic considerations as well, we will have a series of novel inquiries about the social role of the Internet. We should be getting to work.

 

One final note. The democratic ideal comes with its own internal morality. That morality calls for certain kinds of legal rights and institutions: strong rights of freedom of speech, the right to vote, an independent judiciary, checks and balances, protection of property rights. But democracy's internal morality also calls for a certain kind of culture, one in which people do not live in gated communities, or cocoon themselves, or regard their fellow citizens as enemies in some kind of holy war. Of course people are free, within broad limits, to say and do what they want. Gates and cocoons and enmities are not against the law. But if democracies are to work well, they will create spaces that increase the likelihood that citizens will actually see and hear one another, and have some chance to achieve a measure of mutual understanding. If we are to keep it, a twenty-first-century republic would do well to keep this point in plain view.

 

 

 

_____

You're welcome to access this position online here:

http://stuartgeiger.com/ossdebate/index.php?title=Creative_Commons

We appreciate any and all bug reports, spelling corrections, new feature suggestions, patches, and other feedback there.



*golf clap*

 

Okay, open-source styled research efforts are good for debate. So... does that make one team or another win the round or something?

 

Or am I missing something here?


Very much in the same boat as Tomak; why is this a voting issue? Why completely ignore the discussion of the affirmative to run this kritik?

 

and, on the "Alt"; it would probably be something like "everyone post their shit online" - why not just ask the affirmative to do that before the round? Or, if they offer to post their stuff on the wiki, do you concede?

 

If I lost to this "kritik" I'd be pissed enough about it to spit on the project and not contribute. Conversely, I'd much rather get a W and post my stuff online (something I may have been willing to do before but just hadn't given it much thought). If your goal is truly to create a community change, then does the aff enacting that change (think of it like a perm) mean the neg's arguments go away?

 

I don't see how this is viable, unless you want the debate to really start at your completely new 2NC.


because, presumably, you haven't put your positions online under a creative commons license. (if you have, then yes, a new 2n.c. would be required.)

 

why completely ignore the affirmative discussion to run topicality? ... this is just like losing on topicality: you can't win by promising to run a topical case in future rounds. (and counter-perm: you lose and you comply.)

 

i'm delighted you both seem to agree that "open-source styled research efforts are good for debate": moon & sproull studied distributed networks like linux and concluded that access to everyone's work under public licenses is a necessary component AND that formal incentive systems (e.g., ballots) are needed too, in order to discourage free-riding. 'otherwise collaborative projects fail', says the shell.

 

why are there typically free-riders who don't contribute to pre-tournament case disclosure, for instance? ...because they know there are no real consequences for not contributing.

 

it's all well and fine to talk a good game about 'community change', but without altering incentives, it's just that - talk. losing the round is the enforcement, just like losing on topicality enforces the topic area.

 

ignorance of the law is no excuse - and neither is 'not having given it much thought'.

 

so yes, it's a viable voting issue. and you can consider your knowledge of this position as having been asked 'before the round'.


What's the difference between a case list and open source besides publishing the full context of the card?


Okay, so the argument is that if the other team refuses to publicly license their copyrighted work, they should be penalized with the ballot. They have the right at any time to get out of the link by simply stating in writing essentially what's in my sig. The fact that they refuse is a voting issue because the educational quality of debate is at stake.

 

That about right?

 

By the way, you forgot to explicitly state whether the text of the kritik is publicly licensed, and under which license. That could lure quite the turn in the 2AC methinks. ;)


well what's above is more of a sketch really, not what i'd expect a team to run in an actual round [for one thing, it's lengthy; for another, most of the cards are a bit dated, so i imagine there's superior evidence out there by now] -- but as a matter of fact, the main page of the website does clearly include a creative commons share-alike license: http://stuartgeiger.com/ossdebate/index.php?title=Main_Page / http://creativecommons.org/licenses/by-sa/2.5/ ...so you might have to try a different 2a.c. turn, tomak. :)

 

your restatement of the position appeared dismissive, but let me take on one potential counter-argument implied in the following phrase: "...if the other team refuses to publicly license their copyrighted work...".

 

i'm not sure what you meant by "copyrighted work" here: are you referring to the teams'/squads' copyright on their own debate work (which actually exists automatically, believe it or not*) or are you referring to the authors' copyrighted work from which the team is reading?

 

if it's the latter, and you're implying that this position is somehow illegal, i'd remind you of the existence of debate handbooks, which excerpt/reorganize copyrighted works and are sold for a profit. they'd be illegal long before an entirely educational use for non-commercial purposes would, no?

 

"penalized with the ballot" -- would you characterize topicality in the same way? and if not, why is one community standard considered a penalty and the other is not, when you have equal time in-round to dispute the legitimacy or desirability of either? (and can't every loss be seen basically as a penalty?)

 

"educational quality" is a nice catch-all, but don't forget that there are two main 'advantage areas': improving argument quality and remedying inequality in debate.

 

...and yes, you can get out of the link by not linking - i.e., putting all your debate work online protected under a creative commons public license. not having done so results in a loss. that's about right.

_

 

to your question, rhizome: a case-list generally only includes the 1a.c. and 1n.c. shells, and maybe some 2ac blocks. the strong version of this says that nothing can be read in a debate round that isn't put online first. it's like a pre-trial discovery process with no exemptions and no surprises. also, not contributing to a case-list seldom carries any repercussions for those who freeload, whereas not contributing here costs you the ballot.

_

 

*from the shell: "Before 1989, in order to copyright a work, you had to register with the copyright office and display the circle-C symbol (©). Today, however, unless you specifically designate that a work resides in the public domain, everything from a grocery list to a debate file is automatically copyrighted."

...and yes, you can get out of the link by not linking - i.e., putting all your debate work online protected under a creative commons public license. not having done so results in a loss. that's about right.

 

If a team responded by saying that they already publish their case online, how would you verify that in-round? Would you print out a copy of all case wikis the morning of each tournament? What if they said that it was published on another site (e.g. personal or squad website on their school's server)? Or would you not challenge them, even if you thought they were lying?


they'd give you a u.r.l., and there are laptops, phones, and school computers for verification purposes. if they weren't telling the truth, it'd be on par with fabricating evidence - essentially, lying about citations. that deserves a loss and possible disqualification pending the decision of the tournament director.

 

once enough people start contributing, however, it's reasonable to think all of this internet disclosure/publication business would go on prior to the date of the tournament. websites such as this or others like it could be used for coordination purposes, perhaps under the auspices of the host school itself. and those who didn't contribute would be known and held to account, like those who, word got around, were running some blatantly non-topical case.

 

 

...and not just tags and cites, rhizome - full text.


What makes FULL TEXT uniquely better than tags and cites? "Not everyone has access to lexis" is a shitty question given that the entire premise of the article is based on the idea of free computer information...

 

Outside of the generic "cap good/copyright good" arguments this just seems like a bad "disclosure good" argument...

they'd give you a u.r.l., and there are laptops, phones, and school computers for verification purposes. if they weren't telling the truth, it'd be on par with fabricating evidence - essentially, lying about citations. that deserves a loss and possible disqualification pending the decision of the tournament director.
Of the common stop-the-round-to-check-cites-online rules that I'm familiar with, none would provide for verifying a "meta" issue, such as whether the case is published online. They are narrowly tailored to check for fabricated evidence, not to see where valid evidence is posted. Plus, even if you were allowed to do so once or twice, most tab rooms would cut you off if you started demanding verification every round because you'd slow the tournament down significantly.

 

 

...and not just tags and cites, rhizome - full text.

So, in almost all cases, you are demanding that your opponents commit copyright infringement in order to not lose? Unless they own the full text, get permission from someone who does, or tailor their use to the Fair Use exception, then you are asking them to break the law to win a debate round...


"So, in almost all cases, you are demanding that your opponents commit copyright infringement in order to not lose? Unless they own the full text, get permission from someone who does, or tailor their use to the Fair Use exception, then you are asking them to break the law to win a debate round."

 

you're confusing the full text of the card with the full text of the published work. a card is, generally speaking, a quotation: one to which you add an original tag-line and then organize into an original argument chain. as i explained to tomak, handbook companies would be said to violate 'fair use' long before this position, which is an entirely non-commercial, educational use. ...unless you think handbooks commit copyright infringement and are illegal?

 

"Of the common stop-the-round-to-check-cites-online rules that I'm familiar with, none would provide for verifying a "meta" issue, such as whether the case is published online. They are narrowly tailored to check for fabricated evidence, not to see where valid evidence is posted. Plus, even if you were allowed to do so once or twice, most tab rooms would cut you off if you started demanding verification every round because you'd slow the tournament down significantly."

 

i sincerely doubt you'd run into teams, round-after-round, who'd give you a phony u.r.l. and lie about having all their stuff online. the capacity to verify checks abuse, since the consequences for any such team would be dire. this isn't "a 'meta' issue": your opponents will have claimed that a citation exists which does not in fact exist, or that it includes things it does not in fact include. the website in question serves as that citation, even under the 'narrowly tailored' policy you stipulate.
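
to make that concrete: the whole check could take seconds on whatever machine is handy. a minimal sketch in python - the u.r.l. and card text are invented for illustration, and it assumes the device can get online:

# rough sketch of an in-round check: fetch the posted case and
# confirm it actually contains the card being challenged.
# the url and card text below are hypothetical.
import urllib.request

def case_is_posted(url, card_text):
    """return True if the page at `url` loads and contains `card_text`."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            page = response.read().decode("utf-8", errors="replace")
    except OSError:
        return False  # a dead link counts as a failed verification
    return card_text in page

print(case_is_posted("http://example.org/our-1ac.html",
                     "open publication levels the playing field"))

if the page loads and the challenged card is there, the citation checks out; if not, you have your challenge.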

 

"What makes FULL TEXT uniquely better than tags and cites?"

 

(again, keep in mind we're talking about the full text of the actual position read in-round, not the full text of whatever article or book.)

 

two answers here, broadly equating to the two contentions: first, what we're looking to do is improve argument quality, so a reader (not necessarily a participant in debate, by the way) should see the tag-line and the full text of what's being read in-round in order to correct mistakes, ask questions, make suggestions, and so forth. now, of course, one can look up the entire article on one's own time, for context. but someone in the relevant field who happens across a debate argument - someone in a uniquely qualified position to comment on it - wouldn't know what to make of the typical case-list; their potentially valuable commentary would be impoverished by a series of bare taglines and citations.

 

second, you've already said it: not everyone has access to full-text services like lexis. this position levels the playing field. the team or squad which now lives off topicality and critiques will be in a better position to focus on case, and the card-cutting assistant coaches at better-funded schools will be working for everyone.

 

"'Not everyone has access to lexis' is a shitty question given that the entire premise on the article is based off the idea of free computer information."

 

i'm not sure i'm understanding you, but let me clarify. obviously, not all the information available on services like lexisnexis is available on the internet right now. that's one level. now, once someone accesses that information and organizes it into a debate argument, i take the result to be new work. that's another level. that new work is also typically not accessible on the internet right now. so simply because the position is "based off the idea of free computer information" doesn't mean that everyone already - magically - has access to those two levels of information - and it's the second level we're really concerned with here. ...what am i missing?

 

"Outside of the generic 'cap good/copyright good' arguments this just seems like a bad 'disclosure good' argument."

 

it's a proposed change to the activity's standard operating procedure. and the communistic angle isn't as generic as it seems; in their book 'multitude', hardt and negri discuss (cc) specifically, under the heading of proposals "to eliminate destructive forms of political and economic control":

 

Some of the most innovative and powerful reform projects, in fact, involve the creation of alternatives to the current system of copyright. The most developed of these is the Creative Commons project, which allows artists and writers a means to share their work freely with others and still maintain some control over the use of the work. When a person registers a work with Creative Commons, including texts, images, audio, and video productions, he or she forgoes the legal protections of copyright that prevent reproduction... This notion of the common is the basis for a postliberal and postsocialist political project.
Edited by Lazzarone


Yeah, my problem with this is that either:

 

1) No access to computers argument (stupid, but it might work with some judges) - you exclude people and force them to lose if they don't have access to a computer; this creates a sub-class of people who will always lose, etc. etc.

2) No way to prove - phones aren't allowed in round (here it'll get ya kicked out), laptops don't have internet where I am, and there's really no other way to check, and it doesn't make sense - when would you check?

 

I mean, it might work as a framework arg, but you still run into those two problems.

 

If there are decent answers to those arguments, then this might work.

"you're confusing the full text of the card with the full text of the published work. a card is, generally speaking, a quotation: one to which you add an original tag-line and then organize into an original argument chain."
I didn't confuse the terms. First, while cards are indeed often cut from books, treatises, theses, and other long works, they are also frequently cut from news articles, opinion pieces, and other short works. It was not at all uncommon for a fresh card in our tub to be a full news article printed from the internet or clipped from a magazine/newspaper with selective highlighting. For short wire pieces we occasionally found useful, we'd read the entire article. A general rule of thumb in copyright law is that your ability to successfully claim Fair Use decreases the greater the percentage of the work you copy. So, while copying 500 words from a 500-page book might be okay, copying 500 words from a 1000-word article might not be.

 

Second, depending on the content being copied, even a low percentage of the original work might not be Fair Useable if the part copied is significant in content. (See Harper & Row, Publishers, Inc. v. Nation Enterprises, 471 U.S. 539 (1985), where the Supreme Court held that excerpting 300-400 words from Gerald Ford's memoirs (a book many, many times that length) was not Fair Use.)

 

"as i explained to tomak, handbook companies would be said to violate 'fair use' long before this position, which is an entirely non-commercial, educational use. ...unless you think handbooks commit copyright infringement and are illegal?"

First, from time to time, handbook companies probably do copy more than Fair Use allows, but they have three things in their favor. (A) They can boost their Fair Use credibility over that of general publishers because their market is educational users (academic debaters) only, and bona fide educational uses are generally given more Fair Use protection than those who disseminate copies to the general public. But you are asking teams to publish their copies online for everyone to read, for free. Since doing so is not required by the rules of debate, nor is it actually "part" of the debate (merely pre-debate prep), the students' argument for Fair Use would be weaker.

 

(B) It's a lot harder for the copyright owners to discover that a handbook publisher is copying too much because the content of the handbooks is only released to paying customers. Since the copyright owners are not usually in the target market for debate handbooks, it is unlikely that they will come across one to even know that a suit would be possible (and then they'd have to find a recent enough handbook to still be within the statute of limitations for a copyright suit, etc.). But you are asking teams to publish their copies freely online where any copyright owner with access to Google can find their material being copied, even if they have zero prior knowledge of competitive debate. So it's entirely possible that a student and a handbook could copy the exact same infringing content, but the handbook is far less likely to be caught than the student under your system.

 

(C) Handbook companies make lots of revenue from sales of their books, so they have substantial incentive to fight any copyright claim against them tooth-and-nail. This is not only a deterrent to a copyright owner who knows the case may be close, but it's also another way handbook companies differ from schools. Schools (and most students, personally) don't have the time, money, or inclination to fight a lawsuit (even when their chances are good). So if a student publishes their case on their school webspace, and the school is hit with a copyright lawsuit, even if their case is good, most schools (particularly public ones) will remove the offending content (and possibly punish the student) rather than fight. Most students who host on their own webspace and most Wiki owners will do the same.

 

So handbook companies are far less likely to be hit with a copyright infringement lawsuit, far more likely to fight such a lawsuit, and more likely to prevail in court than a similarly situated student/team under your system.

 

Furthermore, even if you are right that handbook companies and students are on par in this scenario, how can a student (who is not a lawyer and has no specialized copyright knowledge) know that the cards they post are not infringing? You're basically demanding that teams run their cases past a copyright lawyer before posting them, or take a potentially costly risk that they adequately Fair Used the content.

 

"i sincerely doubt you'd run into teams, round-after-round, who'd give you a phony u.r.l. and lie about having all their stuff online."

If your strategy caught on, I would expect this behavior to become common as well.

"the capacity to verify checks abuse, since the consequences for any such team would be dire."
Well duh. But the question is "can you verify in-round at all?" If the verification doesn't happen until after the judge hands in the ballot, then the consequences would be nil.

 

"this isn't "a 'meta' issue": your opponents will have claimed that a citation exists which does not in fact exist,"
But it's not a citation. The case is not evidence; it merely contains evidence. My case doesn't need to be cited because I read it aloud to you. It's the citations within the case (the evidence that I use to support my argument) that most mid-round cite-verification rules I know of pertain to. The arguments themselves can't be cited; so if I say that my argument is posted somewhere, I know many judges and tournament directors who would not let you stop the round to check that.

 

Sure, you could check after the round and think less of a lying team, but there would be no ballot-based consequences for them.

Edited by Fox On Socks


I misunderstood how much you demand of the other team. You're not just asking for a CC licensing blip somewhere on the bottom of the page of their 1AC. You require them to digitize (all?) their files, upload them to a server, apply machine-readable tags indicating that the content carries a free (and copyleft?) license, and do so in a way that other debaters will know how to access it - all well before the round starts.
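
To be fair, the tagging itself is the mechanical part. A minimal sketch of that one step, assuming the rel="license" convention Creative Commons uses for marking pages (the file name here is made up):

# Sketch of the tagging step: append a machine-readable Creative
# Commons notice to an HTML case file before it's uploaded.
# The file name is hypothetical.
CC_NOTICE = (
    '<a rel="license" href="http://creativecommons.org/licenses/by-sa/2.5/">'
    'This work is licensed under a Creative Commons '
    'Attribution-ShareAlike 2.5 License</a>'
)

def stamp_case(path):
    """Append the CC notice to an HTML case file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write("\n" + CC_NOTICE + "\n")

stamp_case("our-1ac.html")

It's everything surrounding that step - digitizing, hosting, publicizing - that's the real burden.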

 

Well, that gets you the link. But I think you're going to have a really hard time showing that the punishment fits the crime. I guess it's not any harder than any other utopian K of the format "wouldn't it be awesome if every debater... therefore you lose because you never thought of it."


The punishment mentality is something that fascinates me about debate. It seems to be assumed here, for example, that you can't make similar arguments to individual debaters outside debate rounds because "It's the right thing to do" and "Not doing so perpetuates injustice" are claims that don't motivate debaters to actually do that much. Rather, you have to present the argument in a round with the threat of a loss for debaters to actually consider and act on moral claims. This is something that I would worry about quite a bit more than whether or not someone has posted the full text of their NATO counterplan file online.


skirtsteak: "1) No access to computers argument (stupid, but it might work with some judges) - you exclude people and force them to lose if they don't have access to a computer; this creates a sub-class of people who will always lose, etc. etc."

 

even if you go to some of the best libraries, you don't have access to all the publications that lexisnexis does on a daily, instantaneous basis. and you know what? they have computers at most libraries. ergo, the open debate initiative can only be adding to the net amount of access that even the lesser-funded schools/squads/teams have. that makes this the very opposite of exclusion - an enforced inclusion, which includes all those formerly excluded from pricey evidentiary resources.

 

"2) No way to prove - phones aren't allowed in round (here it'll get ya kicked out), laptops don't have internet where I am, and there's really no other way to check, and it doesn't make sense - when would you check?"

 

it's a safe bet that almost all teams you hit won't have all their files online under a creative commons license, and thus won't claim to, making this a moot point. but you can ask before the round: if they say no, then you run the argument; if they decline to answer, then you run the argument; if they say yes and give you the u.r.l. where it's all stored, you'd have to have some means to get online; if it doesn't check out, you challenge; if it does check out, then you greet a fellow comrade. the image of debaters as ultra-competitive jerks is often belied by how many immediately non-competitive practices are widespread, like disclosing your affirmative before the round.

_

 

 

birdwing7: "Do all arguments have to be posted before the round, or only the 1AC?"

 

all.

_

 

 

fox on socks: "A general rule of thumb in copyright law is that your ability to successfully claim Fair Use decreases the greater the percentage of the work you copy."

 

true, but 'fair use' is decided on a case-by-case basis and weighs multiple factors. i find your self-certainty as to open debate's illegality ("you're basically asking people to break the law") bombastically unwarranted. plus, once you frame your argument in terms of the 'likelihood of getting caught', you've already admitted that violations of the letter of the law are of diminished importance and are likely undecidable beforehand. so all i have to show is that there are sound reasons for thinking students and schools will not get into any legal trouble. i do not, strictly speaking, have to show that this position doesn't break the law, since (we should agree) that's gray.

 

but second, let's say we did keep to the strict interpretation of 'fair use' that you propose. then even a coach's photocopying an article and distributing it to their team could be considered criminal, and handbook companies - even debate institutes - would be considered deeply engaged in illegal activity. i think most participants are likely to dismiss such a view as laughable. in fact, many (socially accepted) uses of the lexisnexis service in this activity involve pirated codes and are blatantly illegal.

 

so, third, back to the getting 'caught' (or sued) part - which we agree is the only thing that should concern us here: i'll mention several mitigating factors.

 

as with handbooks, this is also a bona fide educational use, and for no commercial purpose. i doubt there'll be much of any spillover to 'the general public' as such. what we're hoping for is spillover to professional, academic sources, who already have access to the journal articles and resources we're talking about, but who can offer salient criticism of the way debaters use them to write their arguments. so it's highly doubtful we'll even be on the copyright holders' radar, and if there is a blip, we'll look like an educational exemption. i can't foresee much of their potential market not purchasing a specific book or periodical on account of its being freely available to academic debaters, so the actual effect on the sale of any copyrighted work would be negligible. and since most cards are usually edited excerpts, there's no direct market substitute for the original. you'd still have to track down the entire work for context purposes.

 

additionally, handbook companies may have unwittingly laid the groundwork for a precedent. one might consider debate work to constitute 'criticism' as well as the creation of a wholly original work all its own. i emailed lawrence lessig about this issue, and although he said there's no easy way to be sure, debate work could reasonably be argued to produce its own copyrighted work ('the card with the tagline'). this also flips the decision-calculus around: not using (cc) means you're using traditional copyright, whether you agree with that or not, since

 

"In some countries (including the United States of America), the mere creation of a work establishes copyright over it, and there is no legal requirement to register or declare copyright ownership".

 

so while i'm not "demanding that teams run their cases past a copyright lawyer before posting them", finding out a little bit more about copyright law couldn't hurt, since even if you disagree with this position, the question of what to do with (already automatically copyrighted) debate materials still confronts the community.

 

the harper & row decision you cite from 1985 is entirely inapplicable, since it had little to do with substantiality, even though the nation magazine poorly argued that it did. in that case the nation tried to 'scoop' time magazine by publishing *unpublished material*:

 

Two to three weeks before the Time article's scheduled release, an unidentified person secretly brought a copy of the Ford manuscript to Victor Navasky, editor of The Nation, a political commentary magazine. Mr. Navasky knew that his possession of the manuscript was not authorized and that the manuscript must be returned quickly to his "source" to avoid discovery. 557 F. Supp. 1067, 1069 (SDNY 1983). He hastily put together what he believed was "a real hot news story" composed of quotes, paraphrases, and facts drawn exclusively from the manuscript. Ibid. Mr. Navasky attempted no independent commentary, research or criticism, in part because of the need for speed if he was to "make news" by "publish[ing] in advance of publication of the Ford book. {...}

 

The Nation effectively arrogated to itself the right of first publication[.]

 

since debaters aren't in the business of publishing unpublished material (as such cards wouldn't even have cites), this case doesn't factor into the matter at hand one way or the other. (sandy berger's son never tried to read his father's journal articles as evidence before they were published - to his credit. :) ) and copying an entire work doesn't preclude a finding of fair use, though it may weigh against such a finding.

 

a better case to cite would've been the 2000 l.a. times v. free republic decision in federal district court, but even there i'd argue: (1) our use is transformative (tag-lines and constructing chains of reasoning), (2) we're entirely educational, not even minimally commercial, (3) in all but a small minority of cases, we're not "wholesale copying" (in the phrasing of the decision), and (4) again, we have virtually zero effect on sales.

 

couple more nitpicks: handbooks are purchasable by the general public; or at least i'm not aware of any rule that states they can only be sold to debaters. also, handbook companies are businesses, whereas going after students in a big way is likely to garner bad press. and in point of fact, this last sentence of yours - "Since doing [(cc)] is not required by the rules of debate, nor is it actually 'part' of the debate (merely pre-debate prep), the students' argument for Fair Use would be weaker." - actually straight-turns itself. not only are handbooks "pre-debate prep", but they actually violate the explicit rules of pre-debate prep as laid down by organizations like the american forensic association. check out their code of standards here: http://www.ndtceda.com/pipermail/edebate/2005-April/061585.html - so, their position is the weaker one legally, despite their capacity to employ lawyers.

 

"Furthermore, even if you are right, that handbook companies and students are on-par in this scenario, how can a student (who is not a lawyer and has no specialized copyright knowledge) know that the cards they post are not infringing?"

 

perhaps there is a need for an overarching debate organization to provide debaters and coaches and institute directors with a set of guidelines, but i think the sound practical approach to take now, whether we're talking about loading up tubs full of photocopies or uploading files to the internet, is that we'll cross that bridge when we come to it, since there's good reason to think we never will. if we ever did, it's likely that file-hosting services could resolve this, since you can essentially store any file you desire on sites such as mediafire with little risk of being traced or targeted. then debaters would just distribute the links in semi-private forums like this one, which shouldn't attract any more attention than the 'books and articles' section here (which, i'll remind you, distributes entire works verbatim, not debate arguments). moreover, who do the copyright holders sue? the person who runs the internet forum isn't responsible for what may be on the other side of a link someone posts. and just because a profile is named judith butler doesn't mean they can actually sue judith butler. :) :)

 

and there's another turn worth mentioning here: should worst come to worst, it'd force debaters to actually write arguments - that is, to use the ideas and concepts contained in the literature instead of relying on quotations. this is a rebuttal from prudence, and it may also improve argument quality. if the status quo is illegal, this position opens up an alternative to piracy - to subject texts to criticism, to use them minimally, to write your arguments yourself, to restore 'evidence' to its traditional meaning of facts and expert opinions rather than their verbatim presentation, and thus to add new scholarly works to the existing literature with every argument manufactured in debate. in the legal sense, this would mean debate arguments would be more 'transformative' than 'derivative'.

 

"If the verification doesn't happen until after the judge hands in the ballot, then the consequences would be nil."

 

although i'm morally certain this would be an unlikely, rare scenario, it'd be a similar process to an evidence challenge: the ballot couldn't be signed until verification takes place, and/or if the team was found to be lying, then the ballot would be voided by the tournament director.

 

"My case doesn't need to be cited because I read it aloud to you."

 

one of the requirements of this position is that your case becomes a citation - that it's stored on a publicly accessible website. von hippel 05 from the shell: "scholars in diverse disciplines are adopting open-access practices at a surprisingly high rate and are being rewarded for it, as reflected in [citations]".

_

 

 

tomak, you're right that the research burden required to be competitive in debate is demanding to say the least, but this lessens, not exacerbates, that burden by eliminating redundant work and compelling debaters to share. once a card is online, there's little reason to cut it again (assuming the first debater did their job right). that's the opposite of "utopian" - it's eminently practical, since many of the data structures debaters use are as obsolete as the 3x5 card. this would save time and effort and afford debaters increased opportunity to focus on constructing superior arguments instead of just having the superior prerequisites for constructing arguments.

_

 

 

maxpow, why is it assumed that this position uniquely exhibits a "punishment mentality"? the hypothetical debaters in question will have lost a debate round; it's not going on their permanent record; they're not getting fined or going to jail; at worst they fail to advance as far as they could have at a tournament that's entirely voluntary. a top-down mandate would be more of a punishment, leaving no room for discussion, and not treating debaters as capable of making up their own minds. we should be glad there are standards - staying on topic, refraining from racist and sexist speech - that the activity preserves directly through in-round deliberations. if we're to consider the judge as some kind of instrument of disciplinary power, then they are that in every round regardless of the substance of the argument. and although there may be a grain of truth in such a critique of debate, i think the more popular view is more commonsensical: arguing for a position isn't 'threatening' you with anything, and failing to pick up the ballot isn't a 'punishment'. do you have the same take with regards to topicality or procedural abuse voters - that they should be settled by talking outside the round rather than in-round?

Edited by Lazzarone


I don't see how all arguments can be posted in advance. How do you make an argument that is specific to what's going on in a round? Or are you saying only evidence needs to be posted in advance? If the latter, does it have to be posted in a particular file? What if I want to use in 2NR a card posted in a different file? Why do cards need to be posted, but their application not? Following the analogies in the kritik (full disclosure: I hate analogies), I can't see why posting evidence is more important than posting the application of the evidence to a particular argument. In academia, if that is the analogy you are using, the application might sometimes be more important than the evidence. The same is true in round, isn't it? Sometimes the evidence is outcome determinative, but often it is the way the evidence is applied.

 

If you truly have to post everything, how do you know in advance what the 2NR in a given round will be?

 

I'm probably misunderstanding the position.

Edited by birdwing7

"maxpow, why is it assumed that this position uniquely exhibits a 'punishment mentality'? the hypothetical debaters in question will have lost a debate round; it's not going on their permanent record; they're not getting fined or going to jail; at worst they fail to advance as far as they could have at a tournament that's entirely voluntary. a top-down mandate would be more of a punishment, leaving no room for discussion, and not treating debaters as capable of making up their own minds. we should be glad there are standards - staying on topic, refraining from racist and sexist speech - that the activity preserves directly through in-round deliberations. if we're to consider the judge as some kind of instrument of disciplinary power, then they are that in every round regardless of the substance of the argument. and although there may be a grain of truth in such a critique of debate, i think the more popular view is more commonsensical: arguing for a position isn't 'threatening' you with anything, and failing to pick up the ballot isn't a 'punishment'. do you have the same take with regards to topicality or procedural abuse voters - that they should be settled by talking outside the round rather than in-round?"

As I understand it, this position says that a judge should give a team a loss if that team does not post their material under a creative commons license. That's a punishment (or disincentive) for that team, not a reward, and I don't think the voluntary nature of debate changes the fact that it is a punishment. It's not a particularly awful one and people will survive it without too many scars. Also, I didn't say that it uniquely exhibits a punishment mentality, nor did I say that it's necessarily unjustified to punish teams for doing something bad (sexist language) or failing to do something good (making decent arguments). That's part of the nature of competitive debate and I think that teams should be prepared for it.

 

What I did say is that the necessity for the punishment mentality fascinates me. What fascinates me about it is that moral discourse and claims often lack force outside of a scenario where debaters can be punished or rewarded. When attempting to convince debaters to take these claims seriously outside rounds fails, that's when the necessity for making claims about fairness and justice inside rounds arises. What I said was worrying is not the in-round claims that attempt to make claims about fairness and justice relevant, but the failure of out-of-round claims to have similar sorts of impacts. And I think that is worrisome, and perhaps more seriously worrisome than a failure to comply with the recommendations of this particular position in-round (which is not to say that its recommendations are awful and should not be followed). I don't think that this is at odds with your position, either, especially since, as you claim, "formal incentives systems, such as ballots, must discourage hoarding and persistently remind debaters to share, otherwise collaborative projects fail. Your ballot sets down the rules of the road."

 

I just think it's interesting, and I wasn't particularly aiming for a Foucauldian problematization of the judge as a node of disciplinary power or anything.


my bad, birdwing, when you said arguments, i thought you meant evidence, but yes, we're talking about any quotation you intend to cite in-round as a card and the brief synopsis of that quotation typically accompanying it known as a tag, the entire 1a.c. case, all 1n.c. shells, all 2a.c. blocks, all 2n.c./2n.r. blocks, and everything in the tubs. of course, spontaneous analytical argumentation, or applying a card in a new way, are gravy, and one of the beneficial outcomes of the position is ostensibly the increased incentivization of that particular skill set.

 

and yes, maxpow, that's interesting to me too. the round is a site of struggle.
