On Privacy

Slashdot recently reported the release of a document analysing privacy issues in a number of major browsers. One of the findings was that the Flash plugin was terribly insecure on all platforms and browsers. One of the commenters had this to say:

“Privacy issues aside, I’ve never had any trouble with Flash.”

To which I replied:

I like your logic: Aside from a single tile, the space shuttle Columbia’s last mission went flawlessly.

Seriously, though: you’ve underlined the single greatest problem in computer security today – what we don’t see can hurt us. I’ve written about this at greater length elsewhere, but to put it simply, privacy is the battleground of our decade.

The struggle to come to terms with privacy will manifest itself in the legal, moral and ethical arenas, but it arises now because of technology and the cavalier approach that the vast majority of people take to it.

The ramifications of our ability to transmit, access and synthesise vast amounts of data using technology are consistently underestimated by people because of the simple fact that, as far as they’re concerned, they are sitting in the relative privacy of their own room with nothing but the computer screen as an intermediary.

On the consumer side of things, this creates what Schneier calls a Market for Lemons in which the substance of the product becomes less valuable than its appearance. As long as we have the illusion of security, we don’t worry about the lack of real protection.

On the institutional side, we see countless petty abuses of people’s privacy. There is nothing stopping a low-level employee from trawling through the data such systems collect simply out of prurient interest. In fact, this kind of abuse happens almost every time comprehensive surveillance is conducted. In a famous example, low-level staffers in the US National Security Agency would regularly listen in on romantic conversations between soldiers serving in Iraq and their wives at home. The practice became so common that some even created ‘Greatest Hits’ compilations of their favourites and shared them with other staffers.

They would never have done so[*] had the people in question been in the room, but because the experience is intermediated by an impersonal computer screen, which can inflict no retribution on them, their worst instincts get the better of them.

When discussing software in the 21st Century, we cannot ever treat privacy as just one incidental aspect of a greater system. Privacy defines the system. Starting an argument by throwing privacy aside in the first subordinate clause lends little weight to everything that follows.


[*] On consideration, that’s not strictly true. History shows that surveillance societies are perfectly practicable even without significant automation. The East German Stasi are but one example. The critical factor in such cases is of course that the state sanctioned, encouraged, even required this behaviour of its citizens. So let me modulate my statement to say:

They would never have taken this unsanctioned action had they had any sense that they were being subjected to similar – or any – scrutiny.

No Circus

I am tempted to channel the spirit of Juvenal and state that, what with all the slack we gave them, the least our leaders could have done was put on a circus or two. Instead, we get a shadow play about bogeymen being chased by armed men with more enthusiasm than training.

[Originally published in the Vanuatu Daily Post’s Weekender Edition.]

“The People who once upon a time handed out military command, high civil office, legions – everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

The Roman poet Juvenal wrote these lines in his Satires a little over a hundred years after the birth of Christ. He accuses the people of Rome – at the time the most powerful empire in the world – of losing sight of their civic responsibilities, giving everything up in exchange for gifts of grain and public entertainments.

People are always quick to draw parallels between the modern USA and ancient Rome in its decline. But we can draw a more direct lesson from Juvenal’s tirade: whether through lack of concern or naïveté, our own choices have led us to the apparent security crisis we face today.

At least the Romans got free food and entertainment out of the bargain. Here in Vanuatu, we don’t even get that. We relinquish our societal responsibilities to others, and receive only danger in exchange.


Trust Works All Ways

Over the weekend, I’ve been thinking about last week’s disclosure concerning Debian’s OpenSSL package, which in effect stated that all keys and certificates generated by this compromised code have been trivially crackable since late 2006.

There’s a pretty good subjective analysis of the nature of the error on Ben Laurie’s blog (thanks, Rich), and of course the Debian crew itself has done a fairly good job of writing up the issue.

The scope of this vulnerability is pretty wide, and the ease with which a weak key can be compromised is significant. Ubuntu packaged up a weak key detector script containing an 8MB data block which, I’m told, included every single possible key value that the Debian OpenSSL package could conceivably create.
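That so small a data block could contain every possible key makes sense given the widely reported cause of the bug: the patch left the process ID as effectively the only seed for the random number generator. A toy sketch of the arithmetic (illustrative Python, not the actual OpenSSL code):

    # Toy illustration: if the only entropy fed to the PRNG is the
    # process ID, there are at most ~32,768 seeds on a typical Linux
    # system of the era, so every possible key can be precomputed.
    import random

    MAX_PID = 32768

    def weak_key(pid: int, nbytes: int = 16) -> bytes:
        rng = random.Random(pid)  # stand-in for the crippled PRNG
        return bytes(rng.randrange(256) for _ in range(nbytes))

    # An attacker (or a detector script) enumerates the whole 'keyspace':
    blocklist = {weak_key(pid) for pid in range(1, MAX_PID + 1)}
    print(len(blocklist))  # tens of thousands of keys, not 2**128

An 8MB block of key data suddenly looks roomy.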

The question that kept cropping up for me is this: the one-line code change apparently went unnoticed for well over a year. Why did crackers and script kiddies never find and exploit it? Numerous exploits on Microsoft Windows have required far more scrutiny and creativity than this one would have. Given the rewards involved for 0-day exploits, especially in creating platforms for cross-site scripting attacks, why did nobody bother to exploit this?

My hypothesis – sorry, my speculation – is this: people at every stage of the production process and everywhere else in the system trusted that the others were doing their jobs competently. This includes crackers and others with a vested interest in compromising the code. I should exclude from this list those who might have had a reasonable motivation to exploit the vulnerability stealthily and leave no traces. If, however, even they didn’t notice the danger presented by this tiny but fundamental change in the code base, then my point only becomes stronger.

The change itself was small, but not really obscure.  It was located, after all, in the function that feeds random data into the encryption process. As Ben Laurie states in his blog, if any of the OpenSSL members had actually looked at the final patch, they would almost certainly have noticed immediately that it was non-optimal.

In all this time, apparently, nobody using Debian’s OpenSSL package has actually (or adequately) tested to see whether the Debian flavour of OpenSSL was as strong as it was supposed to be.  That level of trust is nothing short of astounding. If in fact malware authors were guilty of investing the same trust in the software, then I’d venture to state that there’s a fundamental lesson to be learned here about human nature, and learning that lesson benefits the attacker far more than the defender:

Probe the most trusted processes first, because if you find vulnerabilities, they will yield the greatest results for the least effort.

P.S. Offhand, there’s one circumstance that I think could undermine the credibility of this speculation, and that’s if there’s any link between this report of an attack that compromised no fewer than 10,000 servers and the recent discovery of the Debian OpenSSL vulnerability.

Stop Bad Errors

I recently upgraded to Ubuntu 8.04, which comes with the most recent beta of Firefox 3.0. The new version of Firefox has a number of interesting features, not the least of which is a set of measures to reduce drive-by infection of PCs.

If they wander from the beaten path, people now see a big red sign warning them about so-called ‘Attack Sites’ – websites that are reported to have used various means to infect visiting systems with malicious software:

The graphic is fairly well done but, interestingly, there’s no obvious way to override the warning and visit the site anyway. Not that one would want to, but it does raise the bar for circumventing this anti-rube device, while also raising questions about who gets to decide what’s bad and what’s good.

The ‘Get Me Out Of Here!’ button smacks of Flickr-style smarminess, sending (in my humble opinion) the wrong kind of message. Either be the police constable or be my buddy, but don’t try to be both. That’s just patronising.

I followed the second button to see how the situation would be explained to the curious. I was brought to a page providing a less-than-illuminating statement that the site in question had been reported to be infected by so-called ‘badware’.

The StopBadWare.org service tracks websites whose content has been compromised, deliberately or not, and provides data about these sites to the public in order to protect Internet users from drive-by infection. With sponsorship from Google, Lenovo, Sun, PayPal, VeriSign and others, the service is obviously viewed in the corporate community as a necessary and responsible answer to the issue of malware infection.

At the time of this writing, the Stop Badware databases listed over a quarter of a million websites as infected.

The report page itself was less than a stellar example of information presentation, especially about a security-related topic. In the top left corner is a colour-coded circle with three states:

Safe – No StopBadware partners are reporting badware behavior on this site.
Caution – One or more StopBadware partners are reporting badware behavior on this site.
Badware – StopBadware testing has found badware behavior on this site.

So the difference between red and yellow here is not one of degree, it’s based on who reported it. Not only is this useless as a threat measurement, it sends the wrong message to people using the service, implying that there’s a distinction to be made between what Stop Badware finds out for themselves and what their partners find. By treating the sources differently, they’re inadvertently creating a distinction between gospel and rumour, implying that some sources are less reliable than others.

The report page for the domain in question is populated using the GET method, meaning that you can plug any domain name right into the address bar (if you know the URL components) and get a report on it. Unfortunately, it never occurred to the good people at Stop Badware that some might want to use this capability to check the status of an arbitrary domain. (Amusingly, this method also circumvents the captcha on the ‘official’ report page.)
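To illustrate the principle (the host and parameter name below are invented for this sketch; the real URL components are not reproduced here), a report lookup is nothing more than a single unauthenticated request:

    # Hypothetical sketch of querying a GET-populated report page.
    # The URL and parameter name are made up; only the principle --
    # lookups via a plain GET, with no captcha in the way -- is real.
    import urllib.request

    def badware_report(domain: str) -> bytes:
        url = 'https://reports.example.org/lookup?domain=' + domain
        with urllib.request.urlopen(url) as response:
            return response.read()

    print(badware_report('example.com')[:200])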

When I checked the status of my own domain, I was informed that, in effect, I’d recently stopped beating my wife:

Google has removed the warning from this site.

It’s interesting when you’re faced with a sentence in which nearly every word is wrong. Google has removed the site? Where am I? Isn’t this Stop Badware? Removed the warning from this site? There never was one. And even if there had been a warning at one point in time, people don’t need to be told that. This message is a bit like saying, ‘So-and-so is a great guy! He doesn’t drink at all any more.’

I applaud the Stop Badware service and the concept, and I look forward to the day when someone actually does a bit of usability research for them.

P.S. Could we please do something about the term ‘badware’? It’s almost sickeningly patronising. Some might argue that terms like ‘virus’, ‘trojan’ and ‘malware’ are too arcane, but I say we should just pick one and stick with it, regardless of how accurate it actually is.

People know and (ab)use the term ‘virus’, so why don’t we get the geek-stick out of our lexical butt and just use it? It’s a virus. You’ve got a virus. Who cares what it is or how you got it. You got a virus and now your computer needs to be treated before you can use it safely again. Now, how hard was that?

Gooooolag

UPDATE: How wrong could I be about the severity of this threat? Very wrong, apparently. I haven’t confirmed it yet, but it’s hard to imagine how this week’s mass server hack could have happened without tools like the one described below. I’ll write more about this in this week’s column….


Heh, cute:

Cult of the Dead Cow Announces Goolag Vulnerability Search Engine.

Once you get past the Chinese porn silliness, there’s a real story here:

Google’s effectiveness as a search engine also makes it an effective… well, search engine. Common website weaknesses are exposed by search engines such as Google, and anyone can access them by using specially crafted queries that take advantage of Google’s advanced searching capabilities. As the cDc press release indicates, there are approximately 1500 such searches published and readily accessible on the Internet. And now the cDc has built a(n a)cutely satirical web front end and is offering a downloadable desktop search application for Windows, giving script kiddies the world over something else to do with their time.
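For the uninitiated, the searches in question look something like the following. The operator syntax is Google’s own; the examples are representative of the genre rather than drawn from the cDc’s list:

    intitle:"index of" "parent directory"    (exposed directory listings)
    filetype:sql "INSERT INTO"               (leaked database dumps)
    inurl:admin intitle:login                (forgotten administration pages)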

What effect has this had on website security? It’s difficult to tell. The principle of using Google as a scanning tool has been common knowledge since at least 2006, but according to Zone-H, who record large numbers of website defacements every year, the only significant increase in website attacks since then was the result of an online gang war between various Russian criminal factions, back in 2006. Ignoring that anomalous rise in activity, the rate of attack actually fell slightly in 2007 compared to recent years, relative to the number of active websites.

Zone-H’s latest report proves only that the percentage of insecurely configured websites scales roughly linearly with the number of available websites, and that the choice of technology has almost no bearing on the likelihood of a successful attack. Indeed, most exploits are simple attacks on inherent weaknesses: guessing admin passwords or copying them when they’re sent in cleartext, misconfigured shares, and unsafe, unpatched applications. Attacks requiring any amount of individual effort are not common at all. Man-in-the-middle attacks rated only fifth place in the list of common exploits, representing just 12% of the total. But researchers have elsewhere noted that cross-site scripting attacks are on the rise, and are being used mostly by spammers to increase the size of their botnets.

The lesson here is fairly obvious: Making simple mistakes is the easiest way to expose yourself to attack. And search tools like Goolag make finding those mistakes remarkably easy. You won’t be targeted so much as stumbled across. Given the recent rise in the number of websites being used to inject malicious software into people’s computers, spammers and other online criminals appear to have a strong incentive to use even the less popular websites to ply their trade.

Your choice of technology won’t save you, either. Most popular web servers are fairly secure these days and though not all server operating systems are created equal, the big ones have improved markedly. But the same cannot be said of the applications and frameworks that run on them. The old adage that ease of use is universal still applies. When you make things easy for yourself and your users, you are liable to make things easy for other, less welcome guests as well.

The lesson for the average website owner: Do the simple things well. Don’t waste your time trying to imagine how some intrepid cyber-ninja is going to magically fly across your digital alligator moat. Just make sure your systems are well-chosen and properly patched, pay attention to access control and treat authentication seriously. Statistically, at least, this will drop your chances of being Pwned to nearly nil, or close enough as makes no never mind.

#@)(!*^ing Encryption

A few words about the title: The first seven letters are written using a very simple code, or cypher. Each of the letters in the original word is replaced by the non-alphabetical character to which it is closest on a US keyboard. The process of hiding a message by substituting other letters, numbers or symbols is known as encryption. When the code is reversed, the title reads ‘Explaining Encryption’.

But it also looks like swearing, doesn’t it? In fact, the use of characters like this to denote swearing is a simple (dare we say crude?) kind of encryption. A child too innocent to know such words derives no meaning from the random collection of characters. Someone well versed in the ways of the world, though, can add up the number of characters and quickly deduce what was intended.
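For the curious, the entire cipher fits in a few lines of code. This toy sketch uses the title’s own letter-to-symbol mapping:

    # Substitution table used by the title: each letter of 'Explain'
    # maps to the nearest non-alphabetic character on a US keyboard.
    TABLE = {'e': '#', 'x': '@', 'p': ')', 'l': '(', 'a': '!', 'i': '*', 'n': '^'}
    DECODE = {v: k for k, v in TABLE.items()}

    def encrypt(text: str) -> str:
        return ''.join(TABLE.get(c.lower(), c) for c in text)

    def decrypt(text: str) -> str:
        return ''.join(DECODE.get(c, c) for c in text)

    print(encrypt('Explain') + 'ing Encryption')  # #@)(!*^ing Encryption
    print(decrypt('#@)(!*^') + 'ing')             # explaining

Reversing a cipher this simple takes no more effort than applying it, which is rather the point of the paragraphs above.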

On and off over the last two months, we’ve been looking at various aspects of online security. This week, we’re going to consider what steps we can take to make the information we send over the Internet secure from prying eyes.

We’ll also consider why it is that no one uses these measures, and why most of us won’t any time soon.


Idea: Personal Navajo

Instead of exposing the painful ritual of public/private key exchange, software developers should instead be using metaphors of human trust and service.

A ‘translator’ service,  for example. The user ‘invents’ an imaginary language, then decides who among her friends is allowed to speak it with her. She then instructs her ‘translator’ (e.g. her own personal Navajo) to convey messages between herself and her friend’s translator.

(Only the personal Navajos actually need to speak this ‘language’ of course. As far as the two correspondents are concerned, the only change is that they’re sending the message via the ‘translator’ rather than directly, but even that is a wafer-thin bit of functionality once the channel is established and the communications process automated.)
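To make the metaphor concrete, here is a toy sketch of what a ‘translator’ might look like, assuming PyNaCl for the underlying key handling. The class and method names are invented for illustration:

    # Toy 'personal Navajo': all key material stays hidden inside the
    # translator. Correspondents only ever exchange introductions.
    # Requires PyNaCl (pip install pynacl).
    from nacl.public import Box, PrivateKey

    class Translator:
        def __init__(self):
            self._key = PrivateKey.generate()   # the 'invented language'

        def introduce(self):
            return self._key.public_key         # safe to hand to friends

        def convey(self, message: bytes, friend_intro) -> bytes:
            return Box(self._key, friend_intro).encrypt(message)

        def interpret(self, sealed: bytes, friend_intro) -> bytes:
            return Box(self._key, friend_intro).decrypt(sealed)

    # Alice and Bob each have a translator; neither ever sees a 'key'.
    alice, bob = Translator(), Translator()
    sealed = alice.convey(b'meet at noon', bob.introduce())
    print(bob.interpret(sealed, alice.introduce()))  # b'meet at noon'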

Quick encryption, well understood, and easy to implement. Most importantly, you don’t have to explain encryption, public and private keys,  or any other security gobbledygook to someone who really doesn’t want – and shouldn’t need – to hear it.

Update: Of course, the greatest weakness to this idea is if Microsoft were to create an implementation of this and name it Bob.

The Coconut Wireless

Last week’s column introduced a broad but important topic concerning current trends in technology. Over the next few weeks, we’ll take some time to look in more detail at the issues of privacy and access to information. What are the current trends? How are they going to affect us here in Vanuatu? What can we do to mitigate the worst effects and maximise the best of them?

Before we go into detail, though, it’s important to establish a bit of context. We’ve already described how people often make the wrong assumptions about the level of privacy they enjoy when using computers and the Internet. But let’s look at this issue in more practical terms.

Everyone in Vanuatu knows what ‘Coconut Wireless’ means. It refers to the lively rumours that spread via word of mouth concerning anything – or anyone – of interest to people as they idle away their spare time. In small doses, it’s generally unreliable, but when information is amalgamated from numerous sources, an assiduous listener can gather a good deal of interesting (sometimes deliciously scurrilous) and surprisingly accurate information.

Ghost in the Machine

In the most recent RISKS mailing list digest, Peter Neumann includes a brief article by Adi Shamir describing a method of exploiting minor faults in math logic to break encryption keys in a particular class of processor.

Titled Microprocessor Bugs Can Be Security Disasters, the article makes an interesting argument. In fairly concise terms, Shamir outlines an approach that quickly circumvents much of the hard work of breaking private keys, no matter how strong they are. He uses the RSA key encryption method in his example – probably out of humility, since he is the ‘S’ in RSA. With even my limited knowledge of mathematics, I was able to follow the broad strokes of the approach.

Put most simply, if you know there is a math flaw in a particular kind of processor, then you can exploit that by injecting ‘poisoned’ values into the key decryption process. By watching what happens to that known value, you can infer enough about the key itself that you can, with a little more math, quickly break the private key.
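Shamir’s note doesn’t spell out a recipe, but the best-known member of this family – the Boneh-DeMillo-Lipton fault attack on RSA-CRT – shows how devastating a single arithmetic error can be. A toy sketch with tiny primes; here the fault is injected directly rather than triggered by a poisoned multiplication:

    # Toy RSA-CRT fault attack: one wrong arithmetic result during
    # signing leaks a prime factor of the modulus. Python 3.8+.
    from math import gcd

    p, q = 1000003, 1000033          # the 'secret' primes
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    m = 42                           # message to sign

    # Correct CRT halves of the signature:
    s_p = pow(m, d % (p - 1), p)
    s_q = pow(m, d % (q - 1), q)

    s_p = (s_p + 1) % p              # the 'flawed multiplier' strikes

    # Recombine the halves (Garner's formula):
    h = (pow(q, -1, p) * (s_p - s_q)) % p
    s_faulty = s_q + q * h

    # The attacker knows only n, e, m and the faulty signature:
    # it is correct mod q but wrong mod p, so the gcd reveals q.
    print(gcd(pow(s_faulty, e, n) - m, n) == q)  # True

One bad multiplication, and the modulus is factored.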

And of course, once you’ve got someone’s private key, you can see anything that it’s been used to encrypt.

This is in some ways a new twist on a very old kind of attack. Code breakers have always exploited mechanical weaknesses in encryption and communications technology. During the Second World War, code breakers in the UK learned to identify Morse code transmissions by the radio operator’s ‘hand’ – the particular rhythm and cadence each operator used. This sometimes gave them more information than the contents of the communications themselves. Flaws in the Enigma coding machines allowed the Allies to break the device some time before Alan Turing got his ‘Bombe’ machines working efficiently:

One mode of attack on the Enigma relied on the fact that the reflector (a patented feature of the Enigma machines) guaranteed that no letter could be enciphered as itself, so an A could not be sent as an A. Another technique counted on common German phrases, such as “Heil Hitler” or “please respond,” which were likely to occur in a given plaintext; a successful guess as to a plaintext was known at Bletchley as a crib. With a probable plaintext fragment and the knowledge that no letter could be enciphered as itself, a corresponding ciphertext fragment could often be identified. This provided a clue to message keys.
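That self-encipherment flaw is easy to exploit in code: slide the crib along the intercepted text and discard every alignment in which any letter matches itself. A toy sketch (the intercept below is made up):

    # Crib-dragging against Enigma's known flaw: no letter ever
    # enciphers to itself, so any alignment with a match is impossible.
    def possible_positions(ciphertext: str, crib: str):
        for i in range(len(ciphertext) - len(crib) + 1):
            window = ciphertext[i:i + len(crib)]
            if all(c != p for c, p in zip(window, crib)):
                yield i   # no self-match: still a candidate alignment

    intercept = 'QFZWRWIVTYRESXBFOGKUHQBAISE'
    print(list(possible_positions(intercept, 'HEILHITLER')))

Every alignment eliminated this way narrowed the search for the day’s message keys.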

These days, computing processors and encryption are used in almost every aspect of our lives. The risks presented by this new class of attack are outlined in fairly plain English by Shamir:

How easy is it to verify that such a single multiplication bug does not exist in a modern microprocessor, when its exact design is kept as a trade secret? There are 2^128 pairs of inputs in a 64×64 bit multiplier, so we cannot try them all in an exhaustive search. Even if we assume that Intel had learned its lesson and meticulously verified the correctness of its multipliers, there are many smaller manufacturers of microprocessors who may be less careful with their design. In addition, the problem is not limited to microprocessors: Many cellular telephones are running RSA or elliptic curve computations on signal processors made by TI and others, FPGA or ASIC devices can embed in their design flawed multipliers from popular libraries of standard cell designs, and many security programs use optimized “bignum packages” written by others without being able to fully verify their correctness. As we have demonstrated in this note, even a single (innocent or intentional) bug in any one of these multipliers can lead to a huge security disaster, which can be secretly exploited in an essentially undetectable way by a sophisticated intelligence organization.

I’m surprised that I haven’t seen much concern voiced about this class of attacks. Maybe I just hang out with an insufficiently paranoid crowd….

Black Smoke and Storm Clouds

Every weekday morning, in every street in Port Vila, we see a steady stream of people walking into town. On the road beside them, innumerable buses and cars drive by, belching black smoke into their faces. Just as regularly, we see complaints in the local media about this smoke. But nothing ever gets done about it.

Police and inspection officials don’t enforce the laws, and the drivers don’t make any real effort to clean up their act. Everybody knows they should. Everybody knows that this pollution causes health problems. Even the simplest metrics – the dirt it leaves on our clothing, on our skin and under our nails – make it impossible to deny that there’s a problem. And yet we do nothing.

Why? The answer is simple….

Read more “Black Smoke and Storm Clouds”