Deputy Director (Government Affairs), Privacy International
Tutorial Fellow, Department of Information Systems, London School of Economics
On March 6, 2000, participants in the Freedom of Expression Group met at the Global Knowledge II Action Summit in Kuala Lumpur to discuss the regulatory, access, and privacy issues relating to freedom of expression. The three panels, one on each issue, attempted to draw links between the concepts; before I begin my own contribution to GKII, I would like to summarise the event, focusing particularly on regulation and privacy.
The links between the three issues within the context of freedom of expression are not always clear, and not always without conflict. In the first panel, on regulatory issues, the notion of regulation was not only discussed but called into question in three ways. First, regulation as an act requires a motive, and within communities it requires the analysis of norms. Second, according to the panelists, regulation within jurisdictions has two facets: regulation of structure (architectural, technical, and physical media) and regulation of content (what to do with a jurisdiction if you actually have control over it, which is itself a challenging concept on the Internet). Third, the panel developed an illustration of content regulation within the jurisdiction of Australia. With four identified motives -- 'having to do something', 'protecting children', normalising the Internet with traditional media regulations, and creating regulatory certainty -- the Australian government passed legislation in July 1999 to regulate Internet content. Australia both saw an opportunity within its jurisdiction to act and felt compelled as a government to act, though neither condition is a necessary nor a sufficient criterion for regulation. With a complaints-based ratings and filtering scheme and a severe regime of penalties for non-compliance, the Australian government demonstrated how governments can act towards content regulation. Australia decided on its jurisdiction and, with its decided motives, acted. Whether such action was responsible is a context-relevant question; more interesting is whether content regulation on the Internet has implications for the technology, for other policies and freedoms, or, recursively, for itself.
The second panel, on access to telecommunications, outlined further context-based situations within selected countries and regions in Asia and Africa. To promote diversity of language and culture on the Internet, communities beyond the standard English-speaking world must be connected; but markets and economies must first be established. Connecting computers to communities that do not even have dial-tone access is not a simple task, and it requires much more than technology: it requires education as well.
The final panel concentrated on the privacy aspects of free expression but evolved into an overview of privacy on the Internet. With technological challenges and opportunities, and regulatory attempts and arbitrage, the landscape, both technological and regulatory, was drawn by the various speakers. The line between privacy and free speech is not clear, nor are the two concepts always complementary. In fact, as one panelist argued, there has been a dilution of individual privacy due to the predominance and power of the free speech debate. Another panelist argued that a related divide, that between public and private, was lessening. To reconstruct privacy, three methods were offered: privacy-enhancing technologies, legal remedies, and regulatory remedies such as the EU Data Protection Directive of 1995, which enforces Fair Information Practices. Since the threats to privacy on the Internet are technological, legal, and regulation-based (particularly in the case of free speech and expression), perhaps the opportunities for reconstruction are equally likely to be found in these three areas.
The participants' meeting was by no means conclusive: nothing further was developed on how to regulate, when to regulate, or whether jurisdictional lines can be drawn on a map of the Internet for freedom of expression. Definitions of privacy were similarly inconclusive, yet opportunities were identified within the same challenges that face regulation. And access remains an imperative, but interests and policies on expression could distract from the primary goal of increasing knowledge, followed by the provision of technology, information, and communications.
As the landscape is not necessarily clear, I find it within our interests to look more closely, to focus our lenses on the actors, both institutional and technological. I will argue that efforts to regulate content and access will create an infrastructure of surveillance and, in tandem, dismiss any and all attempts at equality. The infrastructure appears resistant to regulation, making regulation difficult to apply; moreover, once applied, technology allows for the circumvention of content regulation and of the infrastructure of surveillance. If regulation is a challenge to apply, and then impossible to enforce equally and judiciously, then its very nature may be called into question. On the Internet, regulation with only partial effectiveness creates vast inequalities and takes us further from the promised Open Society -- not necessarily towards the 'information rich/information poor' divide, but towards something more appropriately termed: those who can access information, and those who are prevented, based solely on technological circumvention. Cans/cannots, empowered/regulated, able/prevented.
Perhaps it is because the Internet generally originated in the US, or because of the early prevalence of US-hosted web sites; regardless of the reason, the US has exported its First Amendment.
Congress shall make no law...abridging the freedom of speech, or of the press, or of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Because of the openness of the Internet -- the ability to send and receive packets from any server in the world -- this preservation of the freedom of speech of the individual US citizen can now apply to all individuals on the Internet. Likewise, the Universal Declaration of Human Rights supports free expression in Article 19:
Everyone has the right to freedom of opinion and expression: this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
and the European Convention on Human Rights, article 10, paragraph 1 declares:
Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.
States can still require licensing of traditional media under the ECHR, however. Is the Internet a traditional medium, or does it require a different approach, a different exclusion principle? The creators of the ECHR, the UDHR, and most definitely the First Amendment did not consider the Internet; its very nature makes the principles of free speech apply readily, but the exceptions apply less judiciously.
Attempts to regulate content on the Internet are tightly intertwined with attempts to regulate access. Regulating access is almost a necessary precondition for content regulation because of the very nature of the Internet, though as I will discuss in the last section of this paper, it is by no means sufficient. There have been some startling attempts at regulating access, documented in a 1999 report from Reporters Sans Frontières1. Burma manages complete censorship through a state monopoly on access to the Internet; moreover, a 1996 law obliges all owners of computers to declare them to the government, and failure to comply may lead to 15 years in prison. Iran censors the Internet, through filters2, as it censors other media on issues relating to sexuality, religion, criticism, mentions of Israel, the US, and so on. In Saudi Arabia, all traffic for the various private Internet Service Providers (ISPs) goes through the servers of the Science and Technology Centre, which is equipped with filters banning access to sites that provide "information contrary to Islamic values". As China embraces and adopts Internet technologies, it is also refusing many of their empowering possibilities: users are closely monitored and are supposed to register with the authorities; as the tenth anniversary of the Tiananmen massacre (4 June 1999) drew near, the Chinese authorities ordered the closure of 300 cybercafes in Shanghai on the pretext that they did not have the necessary authorisation. These are only selected governments that, in selected contexts and with motives of their own, have censored the Internet by limiting access and filtering content. How can I argue, as these regulations are being enforced, that the US First Amendment applies generally?
The case is even more difficult as more open and democratic states are enforcing similar regulations. As mentioned above, Australia enacted content regulation, enforced by the Australian Broadcasting Authority. As its deputy chairman states,
"Whereas in the United States the US Constitution First Amendment allows the free speech lobby to dominate discussion about self-regulation, other countries with healthy democratic systems and vibrant processes of open expression are able to seek a more appropriate balance between the right to free expression and the right of communities to nurture national and local cultures and to protect children from harmful content."3
In the inevitably painful attempt to find a balance, Australia now rates Australian Internet content, may issue take-down notices for offensive material, and obliges ISPs to recommend that users apply filtering technologies to content hosted outside Australia. Australia thus uses the same means as less democratic countries: filtering technology and the regulation of access providers.
The US is no exception to attempts at regulating content or access. When the American Civil Liberties Union took Attorney General Janet Reno to court regarding the Communications Decency Act (CDA), the judges agreed with the ACLU that the CDA was too onerous. In their decision, the judges of the District Court for the Eastern District of Pennsylvania stated:
There is no effective way to determine the identity or the age of a user who is accessing material through e-mail, mail exploders, newsgroups or chat rooms. An e-mail address provides no authoritative information about the addressee, who may use an e-mail "alias" or an anonymous remailer. There is also no universal or reliable listing of e-mail addresses and corresponding names or telephone numbers, and any such listing would be or rapidly become incomplete. For these reasons, there is no reliable way in many instances for a sender to know if the e-mail recipient is an adult or a minor. The difficulty of e-mail age verification is compounded for mail exploders such as listservs, which automatically send information to all e-mail addresses on a sender's list. Government expert [...] agreed that no current technology could give a speaker assurance that only adults were listed in a particular mail exploder's mailing list.4
The link here is key: in order to limit access to 'indecent information', or access to interactive environments, to those who have reached the age of majority, some type of identification procedure is required. To rid the Internet of adult content, for example, may be possible (however unlikely), but it is not legally desirable, because such speech is protected and rights of access are permitted under existing law, regardless of how one may wish to question this on moral grounds. If, however, you wish to restrict access to such information, an infrastructure of identity would have to be created in which, to gain access, you would be required to release your personal details (name, age, credit card number). Regulating access to content in this way necessarily reduces privacy, and requires the creation of an infrastructure of surveillance.
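The trade-off just described can be made concrete with a small sketch (the names, the log, and the toy age rule are all hypothetical, not any deployed system): a gate cannot verify age without first collecting identity, so a record of who requested what accumulates as a by-product of the access check itself.

```python
# Toy age-verification gate. The point is not the age arithmetic but the
# side effect: identity must be disclosed and logged BEFORE any access
# decision is made, creating a surveillance record either way.

access_log = []  # the infrastructure of surveillance, built as a by-product

def request_restricted_page(name: str, birth_year: int, url: str) -> bool:
    """Grant access only to adults; record identity regardless of outcome."""
    access_log.append((name, url))   # identity/URL pair is stored first
    age = 2000 - birth_year          # written from the paper's vantage point
    return age >= 18

allowed = request_restricted_page("A. Reader", 1975,
                                  "http://example.org/restricted")
print(allowed)      # True: access granted
print(access_log)   # but the identity/URL record now exists either way
```

Whether or not access is granted, the log entry exists and can be demanded by an authority; the surveillance is not an abuse of the mechanism but its precondition.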
In 1996 the judges found that no such infrastructure existed. Such an infrastructure could be created, but at this point we may have to question the morally laden concept of priorities: what comes first, privacy or free speech? I find this question poorly formed and not worthy of consideration: it is a question forced upon us with power interests already embedded within it. The two rights are not contradictory: Articles 8 and 10 of the ECHR are mutually compatible -- the rights of privacy and free expression need not cancel each other out. The point of this section, however, was to discuss the inverse proposition: that if we wish to regulate access to content, restricting privacy necessarily follows.
Countries around the world are convinced that connecting their communities to the Internet will bring great rewards, but they are unwilling to face the risks. Each country is considering bringing the Internet into schools, libraries, public centres, and so on. However, in creating social inclusion, governments are embedding their own interests into the infrastructure being deployed. That is, access in schools and libraries is discussed in terms of social inclusion and benefits to students and citizens, but on the condition that indecent content is filtered.
Filtering and rating are by no means currently viable techniques, however. I will discuss rating and filtering through two reports, one on each.
Rating schemes, whether imposed by states or voluntarily adopted by ISPs or individuals, create unrecoverable access problems. The ACLU's Fahrenheit 451.2 report5 discusses why state-prompted rating schemes are problematic:
Add to the above scenarios the concept of privacy: if individuals must present their identity to prove that they are legally allowed to view such content, particularly on medical-related issues or in conversation, this contravenes accepted principles of access to information and the right to privacy. If I wish to attend a Restricted movie, I have to show ID. If I surf the Internet searching for AIDS-related information, or files referring to the war in Chechnya, then I will be required to show my identification card every step of the way.
Technical problems arise with filtering as well. A 1997 study from the Electronic Privacy Information Center6 investigates the effectiveness of filtering technology. The study found that the vast majority of sites blocked by filtering technologies were by no means 'indecent'. In one case, a comparison of two search engines, one filtered and the other not, points out the limits of filtered search engines and questions the requirement to place them in libraries and schools. The researchers first used AltaVista to search for the following terms, then used the filtered search engine, and compared the results:
In fact, one of the eight documents produced by the filtered search engine turned out to be a parody of a Dr. Seuss story using details from the murder of Nicole Brown Simpson. Other searches affected by the filter-enabled engine included "San Diego Zoo" (99.6 per cent of results blocked), "Mozart" (99.9 per cent blocked), and "Astronomy" (99.9 per cent blocked). The technology is not mature, so perhaps government regulation can create more research opportunities for such a market of technologies? This is exactly the intent of regulating content through the provision of filtering technologies: to spur the growth of filtering research and development. But we are dealing with these issues today, and we are closing the doors of access to information today, which will irrevocably change the nature of our understanding of the Internet.
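The overblocking EPIC documented can be illustrated with a toy sketch (a hypothetical banned-word list, not the actual filter studied): a filter that matches banned substrings anywhere in a page's text blocks innocuous pages whose words merely contain the banned string.

```python
# Minimal caricature of naive substring-based filtering: any page whose
# text contains a banned string is excluded, regardless of context.
# The banned-word list here is illustrative, not any vendor's real list.

BANNED_WORDS = {"sex", "breast", "xxx"}

def is_blocked(page_text: str) -> bool:
    """Return True if any banned word appears as a substring of the text."""
    text = page_text.lower()
    return any(word in text for word in BANNED_WORDS)

pages = [
    "Sussex county tourism guide",          # blocked: 'Sussex' contains 'sex'
    "Breast cancer screening information",  # blocked: health page, banned word
    "San Diego Zoo: animal exhibits",       # allowed by this toy filter
]

for page in pages:
    print(page, "->", "BLOCKED" if is_blocked(page) else "allowed")
```

The false positives are not bugs to be patched away: they follow from the design decision to judge a page by the strings it contains rather than by what it says.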
More recently, Peacefire, an organisation that investigates technological attempts at content regulation, released a report7 in March 2000 showing that filtering and privacy are intertwined in a very dark way. It investigated a censorware program, I-Gear, now owned by Symantec. Of the 437,000 sites blocked by I-Gear, the researchers looked at the first 50 working URLs blocked under the Sex/Acts category and, as in the EPIC report, found that 76% of the pages were blocked erroneously. The privacy dimension then rears its head: the program, when installed, registers itself by sending your personal name and your computer's ID number to the company, and uses this information to download further filters (updates).
I find this similar in kind to an event that occurred in Singapore. In 1995, the Singaporean government scanned the accounts of users on a service provider looking for pornographic files: 80,000 files were scanned, and only 5 'indecent' graphic files were found.8 The privacy of the accounts on this business service provider was eroded in the insatiable quest for indecent material. Governments are setting the priorities for us: censorship is more important than privacy.
Moreover, law enforcement agencies around the world are trying to enrol ISPs to maintain records on their clients in order to investigate crimes. As Louis Freeh, Director of the US Federal Bureau of Investigation, testified:
We would encourage the Internet provider industry to maintain subscriber and call information for a fixed period of time; they now discard it very briefly, unlike the telephone companies. Those are records which are very critical in identifying and even tracing some of the [child pornography-type] cases and leads. That would be a very helpful thing and we certainly hope that it could be done, even on a voluntary basis. Caller ID, retaining caller ID by the Internet service providers would be another hopefully voluntary measure that would help us, and we are in discussions with the providers to see if we can receive that kind of assistance.9
If you combine the power to request access to records with the ability to rate and filter sites, the result is the law enforcement dream: companies may be co-opted to provide governments with information about who has been accessing what kind of information. If CNN.com becomes an 'undesirable' news provider (as is the case in China), ISPs can log all users who access CNN.com and notify the authorities. Controlling access to information is thus bound up with ascertaining the identities of those who access controversial information. Techniques and technologies are being developed to further this power of governments; meanwhile, regulation is creating a market for faulty filtering technologies which, although they seem more benign than the threat of government surveillance, pose an equally appalling threat to the integrity and variety of the Internet.
In the previous sections, I developed some relationships between regulation, government, and technology. In the first section, the relationship between regulation of access and privacy was developed through technology: if it is to function adequately, restricting access requires some identification mechanism. In the second section, I discussed the technologies and techniques of content control: if these are to function adequately, they must be widespread, and even then the techniques are weak. Regulation thus creates opportunities for such techniques to be implemented despite their immature state, while also prompting further research and development of filtering technologies. The synthesis of these two points is ominous: once techniques of filtering and rating become widespread, the next requirement will be to know who is accessing the politically unacceptable speech, the state-denied religious speech, or the supposedly indecent material on the site of the National Association for the Advancement of Colored People. There is then a link between the freedom of access to expression and the freedom to express: what happens if you do not rate your content? What happens if your content must be rated, and is found objectionable by a third-party agency? If the agency requires you to remove the information, it must know your contact and relevant personal details in order to charge you with non-compliance. Therefore, both access regulations and content/expression regulations have implications for privacy.
However, there is a missing portion of this issue that requires investigation: the technology itself. In the first section, regulation was argued to prompt the development of an infrastructure of surveillance. In the second section, the technology that sustains the regulation was investigated, and then its combination with the infrastructure of surveillance was considered. This third section investigates another perspective: how technology can actually circumvent regulation, and the effects of this.
The nature of technology is inescapable. As Professor Roger Clarke commented10 on the Australian policy on content regulation:
What is appalling about this statement, the government's policy, and the legislation that was passed by the Opposition-controlled Senate as well as the Government-controlled House, is that it is framed in blithe ignorance of the nature of the technology and hence of the behaviour that it pretends to regulate. This results in no advantages to the intended beneficiaries, and is to the serious detriment of all involved.
The Internet is a distributed network designed to be relatively secure in the sense of availability: if portions of the Internet fail, other portions will not suffer. If one pipe to information is closed, others can be used. Failing to acknowledge this is failing to understand the nature of the Internet whilst passing judgements and legislation on it.
While the Australian Act was being implemented, people who understood the nature of the Internet were working on using the very same technology of the Internet to evade or circumvent the regulations. The recommendations by 2600 Australia11 included:
The above-mentioned technologies and methods are neither rare nor difficult to implement.
The technology and techniques therefore allow any individual to circumvent the law. A well-known proxy-like service that provides anonymity is the Anonymizer (available at http://www.anonymizer.com): surfing the web through the Anonymizer prevents the sites you visit from ascertaining your identifying traces. The problem, however, is that your ISP can know that you are using a service like the Anonymizer, and it will still maintain logs of the sites that you visit; so a government can still find out what you are viewing, and whether it is indecent or controversial (particularly in the cases mentioned above where governments own the ISPs or monitor traffic through them).
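A toy model (illustrative only, not the Anonymizer's actual implementation) shows why a URL-rewriting proxy hides you from the destination site but not from your ISP: the target URL travels inside the unencrypted request to the proxy, so anyone carrying that request can still read it.

```python
# Toy model of a URL-rewriting web proxy. The rewriting scheme below is a
# made-up illustration of the general pattern, not the real service's URLs.

def anonymizer_url(target: str) -> str:
    """Rewrite a target URL as a request to the proxy, target embedded."""
    return "http://www.anonymizer.com/" + target

request = anonymizer_url("http://www.cnn.com/")

# The destination site sees a connection from the proxy, not from you.
what_site_sees = "proxy address only"

# But the ISP carries the request in cleartext: both the proxy address
# and the embedded target are visible in its logs.
what_isp_sees = request

print(what_site_sees)
print(what_isp_sees)   # the embedded 'cnn.com' is still readable
```

This is exactly the gap described above: anonymity towards the site, but full visibility for whoever controls or monitors the access provider.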
Another technology that circumvents access restrictions is the Freedom Network. This application and its associated network provide you with secured (encrypted) connections to the Internet, so that your ISP cannot see where you are going; your ISP knows only that you are using the Freedom Network. Meanwhile, the Freedom Network also provides you with pseudonymity when you view web sites: these sites cannot trace their logs back to your computer; they do not need to know your identity, and Freedom ensures that they do not. Yes, such technologies will pose challenges to the FBI and similar institutions in their intelligence procedures, but at the same time they will give those within Burma the ability to communicate with the outside world, circumventing the Burmese regulations in the same way that they circumvent the Australian ones. Neither the Burmese nor the Australians need be concerned with the infrastructure of surveillance once they are using Freedom.
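The principle behind such networks can be sketched with a toy layered-encryption model (the 'encrypt' below is a labelled wrapper, not real cryptography, and the hop names are invented): the client wraps its request once per relay, each relay can remove only its own layer, so the ISP sees only an opaque blob addressed to the first hop while only the last hop sees the destination.

```python
# Toy layered routing, in the spirit of mix networks such as Freedom.
# encrypt/decrypt are stand-ins that only label and check layers.

def encrypt(key: str, payload) -> tuple:
    return ("locked-by-" + key, payload)

def decrypt(key: str, box: tuple):
    label, payload = box
    assert label == "locked-by-" + key, "this hop cannot open this layer"
    return payload

hops = ["hop1", "hop2", "hop3"]       # hypothetical relay names
message = "GET http://www.cnn.com/"

# The client wraps the request once per hop, innermost layer for the exit.
box = message
for key in reversed(hops):
    box = encrypt(key, box)

# The ISP observes only the outermost layer, addressed to the first hop:
isp_view = box[0]
print(isp_view)   # 'locked-by-hop1' -- the destination is hidden

# Each relay peels exactly one layer; only the exit recovers the request.
for key in hops:
    box = decrypt(key, box)
print(box)        # the original request, known only at the exit hop
```

No single observer holds both ends: the ISP links you to hop1 but not to the site, and the site links the request to hop3 but not to you.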
There is also a technique for creating speech that cannot be removed by government take-down notices. The idea is based on the work of Ross Anderson of Cambridge University, entitled the Eternity Service12. The concept follows from the Gutenberg press and Liebling's revenge: freedom of the press means freedom only for those who have a press, and anyone with access to the Eternity Service can have a press. Anderson looks back at history, particularly to Tyndale, who translated the New Testament into English and was executed as a result. Tyndale's execution came too late, however: 50,000 copies of the English-language New Testament were already in print, and could not be taken down. Governments may demand that servers be shut down, which is far easier than burning whole print runs of books, so Anderson proposes an international network of servers that function anonymously, with copies of files held on servers around the world. If a government tries to suppress an individual's speech by removing it from the Internet, the file will still exist on other servers throughout the world (and will in fact replicate to others) that the government cannot reach, because of jurisdictional issues and because knowledge of the servers' existence and locations is technically limited. Any government attempt at controlling such expression would fail.
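A minimal sketch of the replication principle (hypothetical data structures and server names, not Anderson's actual protocol): once a document is copied to servers in several jurisdictions, a take-down at any one server leaves the others untouched, and retrieval succeeds as long as any copy survives.

```python
# Toy replicated document store. In the real design the servers would be
# anonymous and copies would re-replicate; here we show only the core
# property: removing one copy does not remove the document.

servers = {name: {} for name in ["sv-a", "sv-b", "sv-c", "sv-d"]}

def publish(doc_id: str, content: str):
    """Copy the document to every server in the network."""
    for store in servers.values():
        store[doc_id] = content

def takedown(server: str, doc_id: str):
    """One jurisdiction forces one server to delete its copy."""
    servers[server].pop(doc_id, None)

def retrieve(doc_id: str):
    """Fetch the document from any server that still holds it."""
    for store in servers.values():
        if doc_id in store:
            return store[doc_id]
    return None

publish("tract-1", "the suppressed text")
takedown("sv-a", "tract-1")   # a court order in one jurisdiction
takedown("sv-b", "tract-1")   # and another
print(retrieve("tract-1"))    # still available from the surviving servers
```

Suppression would require simultaneous legal reach into every jurisdiction holding a copy, which is precisely what the distributed design is built to deny.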
What is most relevant about these technologies is that they circumvent regulation, and that they are relatively accessible to any knowledgeable user of the Internet. Yet governments continue to attempt regulation without understanding the nature of the Internet and its associated techniques. The conflict that arises is that no matter how much governments try to regulate freedom of expression, and no matter how much they attempt to create an infrastructure of surveillance, these regulations can be circumvented through readily available technology. Therefore, government regulations will apply only to those individual citizens who lack the skills or the knowledge of these services and how they work; and so government regulations are in fact deepening a divide within the Information Society that can run deeper than the concepts of 'information rich' and 'information poor'. Rather, this divide between 'knowledge rich' and 'knowledge poor' is intimately linked with 'access rich' and 'access poor'. Moreover, the link to privacy is that the knowledge-rich, who use these technologies to gain knowledge beyond the constraints of regulation, may be the same individuals who have privacy, while the unskilled and access-poor citizen will have no privacy while accessing content -- content that is already restricted in ambiguous ways. Governments, in their attempts to regulate access and content, are therefore creating divides, while their regulations are also exercises in futility.
Some are concerned that the Internet will create havens for criminals. This was the old concern in the days of national cryptography policies; governments were forced to relinquish their surveillance and intelligence-gathering privileges because the technology allowed for individual privacy. One of the reasons this occurred was that governments were enveloped in the idea of the Internet as a progressive tool that all citizens should have. Now governments are regulating content on the Internet because they believe it is their duty to do so, and because the Internet can give the individual access to information and content that governments were traditionally able to control.
Bringing their traditional media regulations to the Internet appears an easy task, until one understands the nature of the Internet. Is it a broadcast medium? Is it a common carrier? The old definitions need not apply. More important, though, is that traditional regulations and laws applied to the Internet have devastating implications for privacy: the traditionally simple task of ascertaining age in the real world does not apply as simply on the Internet. The ability of governments to control the flow of information may be challenged, but a government's ability to perform surveillance on your Internet traffic is unparalleled by any real-world analogy.
In pursuing content and access regulation, governments are shaping a specific infrastructure and specific forms of technology: identity-verifying infrastructures and filtering technologies. The first is costly and undeveloped, and detracts from privacy. The second is undeveloped, dangerously poor, and reduces the power of the Internet as a means of knowledge-gathering.
Subsequently, technology is also shaping this debate. If regulations are created that do not apply equally and judiciously, then they are neither egalitarian nor just, and they promote neither equality nor justice. If technology enables those with knowledge and capability to circumvent regulations, then the regulations apply only to the unskilled. Yet government efforts in the Information Society are performed specifically for the cause of creating knowledge: the very essence of this conference is knowledge -- its creation, development, and application. Applying regulation to technology not only interferes with markets, but creates divides between those with knowledge and capability and those who must submit. Such a divide is congruent with the information and communication rich/poor divides, and will create an Internet culture within nation-states that is not open, that is far from egalitarian. This is not the open society on open networks; this is the gated community, lacking knowledge, lacking information, and lacking privacy. You want a global village, and you get the panopticon for idiots.
Privacy and Free Expression are not necessarily mutually exclusive; however Privacy and restraints on free expression are necessarily conflicting.
Gus Hosein lectures in Information Systems at the London School of Economics and Political Science. He also works for Privacy International, is an Advisory Council Member of the Foundation for Information Policy Research, and is Policy Counsel for Zero-Knowledge Systems.
1 Reporters Sans Frontières (1999), The Twenty Enemies of the Internet, released August 9, 1999.
2 The report continues that medical students are denied access to web pages that deal with anatomy, for instance.
3 Australian Broadcasting Authority (1999), Broadcasting, co-regulation and the public good, speech by Gareth Granger, NR 101/1999, 29 October 1999.
4 United States District Court for the Eastern District of Pennsylvania (1996), Adjudications on Motions for Preliminary Injunction.
5 ACLU, Fahrenheit 451.2: Is Cyberspace Burning? How Rating and Blocking Proposals May Torch Free Speech on the Internet.
6 Electronic Privacy Information Center (1997), Faulty Filters: How Content Filters Block Access to Kid-Friendly Information on the Internet, February 1997.
7 Peacefire (2000), Analysis of first 50 URLs blocked by I-Gear in the .edu domain, March 1, 2000.
8 Ang, Peng Hwa and Berlinda Nadarajan, Censorship and the Internet: A Singaporean Perspective, Communications of the ACM, Vol.39, No.6, June 1996.
9 Freeh, L., Hearing of the Commerce, Justice, State and the Judiciary Committee -- Subject: FY '99 Appropriations for proposal to Prevent Child Exploitation on the Internet. 1998, Federal Bureau of Investigation: Washington DC.
10 Professor Roger Clarke, Letter to Politech Mailing List: ABA Demonstrates Its Ignorance to the World, Tue, 2 Nov 1999 08:37:28 +1100.
11 Dogcow, Evading the Broadcasting Services Amendment (Online Services) Act 1999, a report by 2600 Australia.
12 Ross Anderson, The Eternity Service, available at http://www.cl.cam.ac.uk/users/rja14/eternity/eternity.html.