[This week’s Communications column for the Vanuatu Independent.]
This week, the Australian government moved closer to implementing its controversial Internet Content Filter. The ICF represents the Rudd government’s latest attempt to curtail access to illegal or ‘unwanted’ online materials by requiring that all Australian Internet providers implement this filtering system. News sources report that the government has released the technical specification of its pilot implementation.
I’ve written before about the technical, ethical and legal problems surrounding this plan. I maintain that the system is ineffective and inappropriate, foisting a law enforcement role on the nation’s ISPs, and threatening free speech without providing sufficient protection from the very content it seeks to block.
With Internet deregulation on the horizon in Vanuatu, it seems timely to take a look at some of the basic issues underlying the debate.
At the core of the debate over access to Internet content is the issue of privacy. The question of privacy has a few particular wrinkles here in Vanuatu, where family-centric village life still dominates our culture.
There are two fundamental approaches to privacy in the online world. The first takes an individualistic, contractual approach. It states that all information pertaining to you is yours and yours alone, though you may choose to negotiate away some of your personal information in exchange for a given service. In this light, privacy is a desirable, valuable commodity; it’s up to the individual to ensure that they don’t give away too much of it.
I like to call this the American approach, because of its strong emphasis on personal liberty and responsibility.
The second perspective on privacy contends that the cat is already out of the bag. We live in a global community where information about us is available to anyone who chooses to look. If we accept that point, then the only things left to do are to make sure that nobody gets a monopoly on access to information and that everyone’s information is equally accessible. So if a nosy government wants to know everything about us, that’s fine, as long as we get to know everything the government knows. Proponents of this approach claim that this creates a culture of civility, because anyone who pokes his nose into others’ business will soon find his deepest personal secrets exposed as well. What’s good for the goose is good for the gander.
I call this the Japanese approach, because such a regime relies on mutual respect, restraint and conformity to function properly.
In the American approach, individuals must carefully guard their own personal information. But what happens if they don’t?
Suppose a man joins an online dating service, even though he’s already married. Let’s say he later runs for political office. If it comes out that the candidate propositioned women in a discussion forum, well, too bad for him. He disclosed the information; now he has to live with the consequences.
On the other side of the issue, if our candidate contracts an STD, then goes online to order drugs to treat the condition, how should we treat his actions? Does his Internet Provider have a right to know this? How about the government? The American approach says no.
In this context, a content-filtering programme creates huge worries for individuals. In order to filter out the ‘unwanted’ material, a content filter needs to look at every URL you type in. It would be bad enough if the government were looking at this information, but in this case, the people with their eyes on the data would be private Internet Service Providers.
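To make the point concrete, here is a minimal sketch of the kind of URL check a filtering system performs. The hostnames and the blocklist are invented for illustration, and real systems are far more elaborate, but the essential privacy problem is visible even in a toy version: to decide what to block, the filter must examine every address a user requests.

```python
# Toy blocklist filter -- an illustrative sketch, not any ISP's real system.
# The blocklist contents and hostnames below are hypothetical.
from urllib.parse import urlparse

BLOCKLIST = {"blocked.example"}  # hypothetical set of banned hostnames

def is_blocked(url: str) -> bool:
    """Return True if the URL's hostname appears on the blocklist.

    Note the side effect of the design: answering this question at all
    means the filter sees *every* URL, innocent or not -- which is
    precisely where the privacy concern arises.
    """
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST

print(is_blocked("http://blocked.example/page"))   # True
print(is_blocked("http://innocent.example/page"))  # False
```

The blocking decision itself is trivial; the troubling part is the stream of requests the filter must observe in order to make it, and who gets to watch that stream.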
There is nothing stopping a low-level employee from watching this data simply out of prurient interest. In fact, this kind of abuse happens almost every time comprehensive surveillance is conducted. In a famous example, low-level staffers in the US National Security Agency would regularly listen in on romantic conversations between soldiers serving in Iraq and their wives at home. The practice became so common that some even created ‘Greatest Hits’ compilations of their favourites and shared them with other staffers.
The American approach contends that anyone who abuses a person’s privacy is liable to civil or even criminal prosecution. But how do we police something like this? In essence, you are responsible for protecting your own data, but you should have powerful legal tools available if someone betrays your trust.
In order to act though, we have to know someone is spying on us. More often than not, all we have is someone’s promise that they aren’t.
At the other end of the privacy continuum, the Japanese approach to privacy states that personal information is only valuable to the extent that others are willing to respect it. It’s more a cultural approach than a legalistic one.
Let’s take the same example we used above, where a candidate tries to set up an adulterous liaison in an online forum. Everyone can see the information, but before they talk about it, they consider whether the site counts as a public or a private space.
In an essay titled ‘Privacy and Paper Walls’, I wrote:
“In the past, most Japanese houses were made of wood and featured sliding doors made mostly of paper. They were useless, of course, for blocking noise or preventing willful intrusion, but they were extremely effective at establishing a distinction between public and private space. A couple in a crowded household might have a furious argument, for example, but if the fusuma, or sliding door, is closed, then as far as anyone in the adjoining room is concerned, the quarrel hasn’t happened.
“It’s hard to imagine how one could possibly ignore something so obvious, but consider the social transaction involved: If you agree to ignore what happens on the other side of the door, I will agree to do the same. Now consider the number of potentially embarrassing noises that could emanate between these spaces, and you’ll begin to appreciate just how useful such an agreement would be.”
Put in terms closer to home for many in Vanuatu: We should consider carefully the beam in our own eye before commenting on the mote in our brother’s eye. Who else uses that dating forum? How comfortable would it be for all of us if public attention were drawn to that site? If society collectively decides that the site should be subject to scrutiny, so be it. But it’s equally possible that people might choose to leave such information alone – no matter how much they might personally disapprove of it – because the public cost would be too high.
A dating site might not be a perfect example of a site people would prefer to consider a private space, but let’s go back to content filtering services:
If people found out that staffers were regularly watching which sites they accessed, the vast majority would disapprove, because even innocent information can prove dangerous. A woman who’d miscarried several times would not want anyone but her closest confidants to know that she was pregnant again, not because it’s wrong, but because discussing it would be too painful. Likewise, a devout Christian experiencing a crisis of faith would not necessarily want it widely known. It’s perfectly normal that we should face such moments in our lives, but it’s not something most of us choose to make public.
The Japanese approach, therefore, relies on a social contract in which everyone respects everyone else’s secrets in order that their own remain protected.
It’s pretty easy to see how knotted and difficult privacy becomes in an online world whose very basis is cooperative information sharing. Whether we think that people should take care to protect their individual privacy, or that privacy should be protected through mutual discretion and respect, it’s clear that attempting to regulate online behaviour inevitably creates complicated, difficult and often troubling problems for everyone.
Whether we use online systems or administer them, or both, we all benefit from a minimalist, agnostic approach that avoids prying wherever possible.