Cyberspace

Google versus the Cyber-Socialists


Siva Vaidhyanathan, The Googlization of Everything (And Why We Should Worry) (University of California Press, 2011), 280 pages, $43.95. 


On August 19, 2004, the NASDAQ exchange rang the bell for the birth of a company that was to change the way the world obtained information. Appropriately for a new infant, its exchange code was GOOG. Its initial public offering was at US$85 a share; the price at close that day was $100.34. Two years later, Google ranked at 353 in Fortune’s top 500 companies; in 2010 it stood at 101. With a market capitalisation now around US$180 billion, it will undoubtedly be in the top 100 when the Fortune 2011 list is published. It is now third only to Apple and Microsoft as a technology company and is the world’s best-known brand. All thanks to a Persian mathematician working in Baghdad 1200 years ago, whose name, Al-Khwarizmi, rendered into medieval Latin as Algorismus, originally denoted the decimal number system and gives us the modern term algorithm, a step-by-step procedure for calculation. Google’s algorithms work at lightning speed in its powerful computers, trawling its vast index of the web for terms that match the query and presenting the results in ranked order. The company published an explanation of the system as its patented “PigeonRank” as a spoof on April Fool’s Day 2002 (www.google.com/technology/pigeonrank.html), but it is nevertheless a fairly accurate metaphor of how it works.
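The core ranking idea that “PigeonRank” lampoons, PageRank, can be sketched in a few lines: each page’s importance is the sum of small “votes” from the pages that link to it, weighted by the voters’ own importance, iterated until the scores settle. The sketch below is purely illustrative; the tiny web graph, function name and parameters are invented for demonstration, and Google’s production system is vastly more elaborate.

```python
# A minimal sketch of the PageRank idea (illustrative only; not Google's code).
# Pages are nodes; a link is a "vote" whose weight depends on the voter's own rank.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:          # each link passes on a share of rank
                    new_rank[target] += share
            else:                                # dangling page: spread rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Invented three-page "web": C receives the most votes, so it ranks highest.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

Repeating the vote-counting until it converges is what lets a link from an already important page count for more than a link from an obscure one.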

The search engine we all know and use was the beginning, and still is the heart and profit mainspring of the enterprise. But by my last count, Google has some 160 products, of which the best known are: Gmail, Google Earth, Google Maps, Google News and Android, the operating system which now powers more smartphones than any other. Then there is Orkut, a social networking site popular in India and Brazil; Google Transliteration, which will translate your words into many languages; Google Body, which peels back layers to reveal the organs; Google Scholar; Google Translate; Google Labs; Google Art Project and—Google Books.

In this, the seventh year of its life, the seventh round of its title fight to retain internet championship of the world, Google has just taken a jab and a left hook to its social networking aspirations and a very heavy blow to the solar plexus of its ambitions to dominate all the world’s knowledge.

The first shock came last September when the company was obliged to agree to a class action settlement in California arising from its attempts to launch its new social networking idea, Buzz. Plaintiffs alleged that Gmail users had been enrolled automatically in Buzz, exposing their e-mail contacts without sufficient consent. Google quickly modified Gmail, and denied causing any harm. The case in the San Jose Division of the US District Court was finalised only in March this year, with Google agreeing to pay US$8.5 million to fund internet privacy policy and education. Because it would be administratively infeasible to distribute this amount to 37 million Gmail users, Chief Judge James Ware utilised the legal mechanism of “cy-près”, now common in the USA, which allows money to be used to promote the interests of class members. This produced its own reaction, with critics such as the Technology Liberation Front pointing out that many of the organisations were already beneficiaries of Google’s contributions. They questioned whether payouts to privacy regulatory advocates would advance the interests of all consumers.

The second punch landed when the Federal Trade Commission also concluded Google had used deceptive tactics and violated its own privacy statements when it launched Buzz. In March, the FTC obtained a consent agreement from the company prohibiting it from future privacy misrepresentations and committing it to independent privacy audits for the next twenty years. The agreement requires Google to obtain users’ “express affirmative consent” before making any changes to how and with whom it shares previously collected information. The consent order was not a finding of actual violation of the law, and constituted no admission, but if confirmed in the final order, every future violation could result in a civil penalty of US$16,000. Should that be calculated per user, it could amount to millions of dollars.

Lawyers are now divided on whether this FTC settlement could be applied in future to every Google service or product that undergoes some “change, addition or enhancement” if it results from the sharing of certain information. The Senior Attorney for the FTC’s Division of Privacy and Identity Protection, Katie Ratte, was quoted as saying that although the order had Buzz in mind, “We just don’t want to see information collected under one set of rules shared in a new way.” This could put a serious crimp in Google’s social network plans.

The big problem for Google is Facebook. Not yet floated as a public company, its shares have been trading excitedly on the secondary market, giving it a theoretical worth of US$80 billion. It has more than 600 million subscribers and is raking in advertising at the rate of $4 billion a year. With one in every four page views in the United States on Facebook, it has proved the appeal of, and the money in, social networking. Late last year, its “Project Titan” delivered Facebook Messages, touted as a “Gmail killer”. Two months ago it extended its reach to commercial brand promotion in Britain, and also introduced Facebook Chat for live voice calls. Larry Page, one of Google’s founders, who recently returned as CEO, has declared that his company’s future is “social”. He has linked company bonuses to success in “integrating relationships, sharing identity across [Google’s] products”. Competition is warming up. By the end of this year, Facebook is likely to have more than 500 shareholders, compelling it to disclose its financial results under US Securities and Exchange Commission rules. This may result in a more realistic assessment of its worth by the market, but Google is taking no chances. On the same day the FTC consent decree was handed down, Google launched +1, its version of Facebook’s “Like” button.

On March 22, Google was sent to the canvas and took the compulsory eight count when Judge Denny Chin, sitting in the US District Court for the Southern District of New York, rejected the Google Book Settlement, which had been reached with the publishing industry and authors in October 2008 after years of argument. His decision should not have surprised the publishing world. The settlement had been opposed by the US Department of Justice, the governments of France and Germany, hundreds of authors and publishers and all Google’s competitors. Google had announced it planned to scan all known books in the world (129,864,880) by the end of the decade. It had signed up many of the world’s great libraries in the USA and Europe. Using special cameras scanning one thousand pages an hour, it had already digitised 14 million volumes. In anticipation of approval of the Settlement, it had planned the launch of Google Editions, its own e-book store, to compete with Amazon, Barnes & Noble, Apple and other electronic booksellers.

There had been three main objections to the Settlement: Google would become gatekeeper to the world’s literary heritage; it would put the onus on authors to assert their copyright, and it would acquire rights to the “Orphans”—millions of books still in copyright but whose authors could not be traced. Google had claimed that scanning books to index them came within the accepted definition of “fair use”, the principle established in 1976 when the photocopier seemed to threaten publishing. The Settlement shook up the cosy oligopoly of the publishing world anew, and forced it to confront the digital era.

Emotions ran high. The Open Content Alliance, which had its own plan to scan and digitise important library books, described the Google Book Agreement as “a grand and dastardly scheme to construct an organisation to control/monetize the Orphans”. Early in 2008, the Shawn Bentley Orphan Works Act had attempted to legislate in the USA for fair use of Orphans. It would have allowed anybody to digitise Orphan works without fear of being sued for copyright infringement as long as they could prove that they had tried to find the rights holder. This would have given every company interested in scanning books the same legal protection as in Google’s agreement. The Act passed the Senate but met considerable industry opposition and ran aground a few months later when the Google Book Settlement was announced.

The Settlement would have legitimised Google’s flying start at a cost of a mere US$125 million: $45 million to compensate authors whose works had already been scanned; $34.5 million to set up a special registry to manage rights holders’ payments and find owners of Orphans; and $45.5 million to pay the plaintiffs’ lawyers! The rights holders of out-of-print books still in copyright were to be paid $60 per full book or $5 to $15 for partial works, but faced a cut-off date to claim or opt out. Authors would get 63 per cent of all advertising and e-sales revenues from their work. There was to be a series of stepped “price bins” from US$2.99 to $29.99: authors could set their own price, or the book would be assigned to a bin by a Google algorithm. Google would also provide terminals in contributing libraries, giving access to indexed catalogues of their entire collections in digital form, to which the normal “fair use” restrictions would apply.

Judge Chin recognised the value of digitising the world’s books, but decided the deal gave Google an unfair advantage. Copyright was of most concern to him, but he also raised international issues and questions of privacy: 

While the creation of a universal digital library would benefit many, the agreement would go too far. Indeed, it would give Google a significant advantage over competitors, rewarding it for engaging in wholesale copying of copyright works without permission … It would give Google a de facto monopoly over unclaimed works. 

But he observed that many of the concerns raised by the government and other objectors would be ameliorated if the agreement were converted from an “opt out” to an “opt in” settlement, and he urged the parties to revise it. Judge Chin set June 1 for a “status” meeting of the parties to consider his decision, but it may yet end up being tested in the Supreme Court.

These two attempts by Google were by no means the first time Silicon Valley had taken its brainwaves contentiously to market, believing it was a cheaper and easier way to test them, even if they were challenged and it had to back down. It was what Kashmir Hill, technology commentator in Forbes magazine, called the “Apology approach to innovation”. Sergey Brin, one of Google’s founders, justified Google Books’ initiative thus: “The famous library of Alexandria burned three times, in 48 BC, AD 273 and AD 680, as did the Library of Congress in 1851 … I hope such destruction never happens again, but history would suggest otherwise.” He was rebuked by Pamela Samuelson, Professor of Law at the University of California, Berkeley, and a recognised pioneer in digital copyright law: “Libraries everywhere are terrified that Google will engage in price-gouging when setting prices for institutional subscriptions.” Twelve months earlier, she had denounced the settlement as “not really the settlement of a dispute over whether scanning books to index them was fair use, but a massive restructuring of the book industry’s future without meaningful government oversight”. It was Professor Samuelson who first recognised how libraries could be hooked—it would be a huge advantage to have their entire corpus online, but Google was relying on their subscriptions to pay for its scanning. Harvard University Library, one of the foundation libraries in the scheme, said it would withdraw from Google Books if agreement could not be reached.

In Sydney, Michael Fraser, Director of the Communications Law Centre at the University of Technology, conceded that Google Books had attempted to side-step real dysfunctions in access to copyright books online, and thought there were the makings of a deal there. However, although Google might be benign now, as a corporation with profit as its raison d’être it could not be trusted in the role of gatekeeper to the collected books of mankind, he said. “Whoever controls the flow of information controls society,” he wrote, a remark leading inevitably to the unworkable suggestion that “the creative industries should work with government to develop a registry of content held in creators’ own databases”. As one observer commented drily about a similar suggestion in the USA: “He’d trust the guys with badges and guns, but not a salesman!” 

With this ferment in the worlds of publishing and cyberspace, a new book, The Googlization of Everything (And Why We Should Worry) by Siva Vaidhyanathan, Professor of Media Studies at the University of Virginia, burst on the scene. He has been described by the Library Journal as a “mover and shaker” in the library field, no doubt because in one of its featured articles he lauded librarians for being “on the front line of copyright battles”. He runs a blog with the same name as the book; its subtitle, “How one company is disrupting commerce, culture and community”, forewarns readers not to expect an even-handed, much less an objective review of Google’s enterprises.

Google is a company Siva loves to hate. For the last five years he has built his academic career and his ubiquitous presence in print and on radio and television on criticising it. In 2006 he invented a new discipline, Critical Information Studies, defined in academic gobbledegook as “an emerging transdisciplinary field concerned broadly with the politics of information in contemporary connected societies”. His first publication, Copyrights and Copywrongs: The Rise of Intellectual Property and How It Threatens Creativity was a campaign speech for the freedom to use, revise, criticise and manipulate other people’s work. As exemplified by this and his new book, the word politics in his definition should be underlined.

Vaidhyanathan opens Googlization by declaring Google omniscient, omnipotent and omnipresent, ruling like Caesar. “Google is not evil, but neither is it morally good,” he writes; 

Google does not make us smarter, nor does it make us dumber … But it’s dangerous because of our increasing, uncritical faith in and dependence on it, and because of the way it fractures and disrupts almost every market or activity it enters …

Increasingly, Google is the lens through which we view the world. Google refracts, more than reflects, what we think is true and important … Its biases are built into its algorithms. And those biases affect how we value things, perceive things, and navigate the worlds of culture and ideas … If Google is the lens through which we see the world, we all might be cursed to wander the earth, blinded by ambition. 

Vaidhyanathan is too subtle to make The Googlization of Everything a diatribe; rather, it is the condescension of the patronising academic laid bare. If he gives little credence to the intelligence of the public, he shows even less understanding of the workings of the marketplace. Google has no monopoly on search; as has been shown, it lags badly in social networking, and has competitors in almost every other field it has entered. Many of them are uneconomic, and designedly so. As recent news reports have shown, the digital marketplace is aflame with legal actions as the major players claim and cross-claim to protect their ideas, their patents and their turf. Google has its fair share of these. In his bleak view of the world, what Vaidhyanathan seems unable to stomach is that technological innovation and creativity should earn a profit. “Where once Google specialised in delivering information to satiate curiosity, now it does so to facilitate consumption,” he says. The cure to this awful commercialism of information? “We owe it to ourselves to invest in and support an environment that will enable experimentation and the emergence of new institutions and voices that can foster republican values and global democratic culture.” He means the government.

Vaidhyanathan’s curiously complacent views about China’s supervision of the internet and social policy would shock more expert observers, but when he leaves the internet and tackles the conservation and accessibility of the world’s heritage of literature and knowledge he makes a little more sense. He summarises the argument put by the 400 objectors to the Google Book Settlement: “Institutions such as libraries, states and universities tend to last for centuries, commercial firms rarely make it through one century … We should not trust Google to be custodian of our precious cultural and scientific resources.” This is at the heart of our dilemma: how to preserve and store what we have collected in our civilisation? And in our transition from an analogue to a digital age, how to balance access and reproducibility against intellectual property rights and the incentive to create?

Tim Wu, Professor of Law at Columbia University, describes this as the story of “twin tragedies”. Both positions, he says, have difficulty in demonstrating empirically, as opposed to anecdotally, that either overprotection or piracy has stilled the engines of creativity. So, any putative change in copyright protection can both be defended as a necessary creative incentive, and attacked as an unnecessary control. In an article in the Michigan Law Review as long ago as 2004, he argued that the main challenges for twenty-first-century copyright are not challenges of authorship policy, but new and harder problems for copyright communications policy. Today Wu is best known for his development of Net Neutrality theory; we will return to his ideas later. 

It’s probably still regarded as infra dig to read the last page of a novel first. But The Googlization of Everything is one book where the reader should start at the end. There will be found Vaidhyanathan’s explicit political views, which despite tactical caveats, colour his analysis. His prescription to cure what he had earlier termed “blind faith in technology and market fundamentalism” is a Human Knowledge Project, inspired by the Human Genome Project—“a grand global project, funded by a group of concerned governments and facilitated by the best national libraries, to plan and execute a fifty-year project to connect everybody to everything”. It should be committed to building a global system that could erase the disparities in knowledge that currently exist between a child growing up in a poor village in South Africa and another growing up in a wealthy city in Canada, he wrote. This involved “removing impediments such as overly protective and anti-competitive intellectual-property powers; a set of global policies explicitly designed to serve the underserved, closing the digital divide that privileges the wealthy and better educated.” He wants to link the libraries of the world in a global network, because “this is where poor people seek knowledge and opportunity via the internet”.

Swept up in the hubris of designing his United Nations Ministry of Information and Truth, he does not consider the implications of liquidating copyright regimes around the world; the mechanisms by which they could be eliminated are left unexplained. The people who might explain it to him are the members of the worldwide Access to Knowledge (A2K) movement, with whom he seems to share core beliefs. These people, who might generously be described as the intellectual arm of the hooligan organisation that disrupts G20 and WTO conferences, think the solution is simple: abolish patents, copyright and trademarks.

We are fortunate that the aims, objectives and strategy of A2K have been set out publicly in a new book, Access to Knowledge in the Age of Intellectual Property. It’s a collection of thirty-three essays by some of the leading lights in the movement, including Roberto Verzola, Peter Drahos, Lawrence Liang and Yochai Benkler. Siva Vaidhyanathan has reviewed it approvingly, describing A2K as a powerful and emerging social movement, and the work a handbook for activism. Amy Kapczynski, an Assistant Professor of Law at the University of California at Berkeley, and one of the editors, wrote the first contribution: “A Conceptual Genealogy”. She characterises intellectual property as a “seductive mirage” and its laws “the despotic dominion”—appropriating the term William Blackstone used 240 years ago to define material property in his Commentaries on the Laws of England. But the movement has reached even further back in history for a metaphor for its philosophical underpinnings. It compares the laws protecting intellectual property with England’s Inclosure Acts which replaced the common-field system of tillage with one of unrestricted private use, often for sheep pasture. From this, A2K has constructed concepts of “public domain” and “commons”, analogous to the village common, justifying the right of popular access to the creative work of others.

The idea seems to have originated in an initially influential article, “The Tragedy of the Commons”, by zoologist and ecologist Garrett Hardin in 1968. Theorising about the English commons system, Hardin argued that multiple individuals, acting independently and out of rational self-interest, would ultimately deplete a shared resource. The message fed into the Club of Rome’s 1972 report The Limits to Growth. But Hardin ignored the evidence of what happened in real life: self-regulation by the communities involved. Anthropologist Dr G.N. Appell observed that “The Tragedy of the Commons” 

had been embraced as a sacred text by scholars and professionals in the practice of designing futures for others, and imposing their own economic and environmental rationality on other social systems of which they have incomplete understanding and knowledge.  

At least there is consistency within A2K—Access to Knowledge is available for free download under a Creative Commons licence.

Interestingly, it is one of Google’s free and innovative by-products of its collection of information and its digitisation of books that enables us to track the rise of this movement. In November last year, Google released Ngram Viewer, which has been welcomed as a useful research tool by linguists, librarians and PhD students.

Entering a phrase in the search box yields a line graph showing the frequency of its use in selected books over a selected period of years. When I put “intellectual property” into the box and selected “American English” and the years 1800 to 2008, the blue graph line remained flat until the early 1980s, when it rose almost vertically, and continued to rise, albeit a little more slowly, after 2000. Selecting “British English” gave a slightly flatter curve until 2000, when it too climbed steeply, indicating the movement took a little time to cross the Atlantic. We can surmise that the start-point is consistent with the backlash of the libertarian Left against Ayn Rand’s provocative views on laissez-faire capitalism and individual rights. Atlas Shrugged, her monster novel published in 1957, had hypothesised the collapse of civilisation if the profit motive were eliminated. Ten years later, in Capitalism: The Unknown Ideal, she wrote a chapter arguing that patents and copyrights were quintessentially property rights, not privileges granted by governments.
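What Ngram Viewer plots for each year is, in essence, the number of occurrences of a phrase divided by the total number of word sequences of the same length in the books published that year. A toy sketch of that calculation, with an invented two-“year” corpus (the function name and sample texts are mine, not Google’s):

```python
# A toy illustration of the relative frequency Ngram Viewer computes:
# occurrences of a phrase in a year, divided by the total number of
# same-length word sequences ("n-grams") published that year.
# The corpus below is invented for demonstration.

def ngram_frequency(corpus_by_year, phrase):
    """corpus_by_year: dict of year -> list of texts. Returns year -> frequency."""
    target = tuple(phrase.lower().split())
    n = len(target)
    result = {}
    for year, texts in corpus_by_year.items():
        matches = total = 0
        for text in texts:
            tokens = text.lower().split()
            ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
            total += len(ngrams)
            matches += sum(1 for g in ngrams if g == target)
        result[year] = matches / total if total else 0.0
    return result

corpus = {
    1975: ["the law of property is old",
           "property rights and the common law"],
    1995: ["intellectual property is everywhere",
           "debates over intellectual property dominate the law reviews"],
}
freqs = ngram_frequency(corpus, "intellectual property")
# The phrase is absent in the 1975 sample and present in the 1995 one,
# mirroring the curve described above in miniature.
```

The real service does the same arithmetic over the millions of volumes Google has scanned, which is why the curves track the fortunes of a phrase in print rather than on the web.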

Access to Knowledge trumpets A2K’s first great campaign—against the patents of the pharmaceutical industry, in a demand that it give access to copies of AIDS medicines. The AIDS activist group Act-Up-Paris created a logo with the familiar © symbol over-written by COPY=RIGHT. Now the guns are turned on publishing, to free the entire body of human information and artistic creation from legal restriction. But as Amy Kapczynski shows in her introduction, the movement is not above putting its own spin on events. She claims as “one of the most salient moments of success for A2K” the WTO’s Doha Agreement, which she said “declared that intellectual property rights do not trump public health”. In fact, Article 3 of the 2001 Doha Ministerial Statement said: “We recognise that intellectual property protection is important for the development of new medicines. We also recognise the concerns about its effects on prices.” The agreement went on to support the patent system by reaffirming the TRIPS (Trade-Related Aspects of Intellectual Property Rights) Agreement. Doha protected patents by establishing the right to grant compulsory licences for the manufacture of generic versions of expensive drugs. Some unintended consequences of this decision have been revealed recently by reports of a black market trade in cheaply produced generic antiretroviral AIDS drugs in South Africa, mixed with marijuana for the enhanced enjoyment of its smokers.

A2K’s attack on intellectual property rights depends on a mix of quasi-Marxist theory and perverted economics. Here’s an example: 

The legitimation narrative of intellectual property today is a thaumatrope—two different images on a card or disk, recto and verso, that when spun on an axis give the appearance of a single, unified image. One image is derived from the field of information economics, but omits the skepticism about intellectual property present in that field. The other screen is derived from the theories of the Chicago School of economics about the superiority of private-property rights in material resources, but suppresses the many significant differences between the economics of land and the economics of information …

Information inherently is not consumable. Because the marginal cost of information is zero, the ideal price of information in a competitive market is also zero. 

What then is the future of the library, and of the modern global digital offspring which Google so brazenly attempted to launch? The idea of a universal library had been floated before, but only in fiction. In his 1941 short story “The Library of Babel”, Jorge Luis Borges imagined a library containing every possible book, and so, somewhere, all the information in the world, but scattered among endless volumes of random words, letters and symbols. Without a code, everything was unreadable. Borges’s parable was that unlimited information does not yield wisdom. English fantasy writer Terry Pratchett, recently a guest speaker at the Sydney Opera House, parodied libraries, knowledge and wisdom in the Library of Unseen University in his Discworld series: 

Books contain knowledge, and knowledge equals power, which according to the laws of physics can be converted to energy and matter, so the Library contains an extremely large mass that can distort time and space. That is the natural philosophy mumbo-jumbo explanation on the dangers of the Library. 

Google Books’ collection might not produce wisdom, but its catalogues did provide indexes—the code to make sense of it all. Would its 130-million-volume mass distort time and space? 

Professor Wu, introduced earlier, has written a book that might suggest the way forward. In The Master Switch, he reviews the chaos that has followed every major technological innovation: the telegraph, the telephone, film, radio, television, the internet. Each has followed what he terms “The Cycle”. Information technologies passed from someone’s hobby to somebody’s industry. The phase of revolutionary novelty and youthful utopianism always promised to change our lives, but never did change the nature of our existence. These brave new technologies evolved into industrial giants, which in time would succumb to the challenge of new invention and competition, re-starting the cycle. The danger, he believes, is vertical integration; the risk of domination and monopoly power could come either from the private sector or from government. It is a principle Australians well understand as the separation of powers (now similarly being effected in Telstra). Wu’s net neutrality theory envisages a structure akin to the electricity grid, which, as he says, doesn’t care whether you plug in a toaster or a computer. So he proposes an enforced distance between those who develop information and those who own the network structure. He believes the intrusion of the law is justified only to combat harmful behaviour, and in networks, “to prevent behaviour that may be narrowly beneficial for the carrier, but has negative spillovers for the economy and the nation”.

How this might be applied to the global library concept, or how its intrinsic problems might be solved, has yet to be worked out. Microsoft abandoned its attempt to digitise the world’s books after two years, having copied 750,000 volumes and 80 million journal articles; if Google doesn’t do the job, who else has the resources? Why not treat Google as the wholesaler, and let everyone else buy access to the resource? Would the Extended Collective Licensing (ECL) system used successfully in the Nordic countries, for a much smaller corpus of books, be appropriate? How can the cost of clearing rights on a book-by-book basis be met? Or is a revised Shawn Bentley Act the answer?

Whatever happens, authors’ ability to rely on copyright protection of their work is under serious attack. As we ponder whether the threat is really from Google, from A2K or from big government, it would be well for all to remember Hayek’s Fatal Conceit: 

To act on the belief that we possess the knowledge and the power which enable us to shape the processes of society entirely to our liking, knowledge which in fact we do not possess, is likely to make us do much harm. 

The last word must go to the American blogger who, tired of the debate, posted: “I love Google … but I assume they are well aware of that.” 

 

Geoffrey Luck wrote “The Delusions of the Cyber-Utopians” for the April issue.
