CRYPTO-GRAM

March 15, 2003

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at . To subscribe, visit or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2003 by Counterpane Internet Security, Inc.

** *** ***** ******* *********** *************

In this issue:
     Practical Cryptography
     Crypto-Gram Reprints
     SSL Flaw
     The Doghouse: Random Cryptography Companies
     News
     Counterpane News
     Security Notes from All Over: Woodland Ants
     SSL Patent Infringement
     Comments from Readers

** *** ***** ******* *********** *************

Practical Cryptography

Niels Ferguson and I have written a new book. Called "Practical Cryptography," it's a practical guide to using cryptography in security design.

If "Applied Cryptography" has any flaw -- aside from being seven years old -- it's that it was too broad. I tried to cover all of cryptography as it stood at the time, and while that makes for an interesting book, it was too much. It can be daunting for an engineer to look through the book and try to choose which algorithms and protocols he needs for his applications. And most of the esoteric algorithms and protocols just aren't relevant to the engineer. At the time, I was enamored of and excited by the promise of what cryptography could do, and I wanted to share all those possibilities. I know a lot of people appreciated that, but I'm sure it caused a lot of confusion as well.

In "Practical Cryptography," we took a single problem and discussed it deeply. The most common problem cryptography solves is what I call a secure channel: Alice and Bob want to communicate securely over some insecure communications line, so they need to establish a secure channel on top of that insecure line.

This book is about cryptography as it is used in real-world systems -- about cryptography as an engineering discipline rather than cryptography as a mathematical science. Building real-world cryptographic systems is vastly different from the abstract world of most books on cryptography, which discuss a pure mathematical ideal that magically solves your security problems. Designers and implementers live in a very different world, where nothing is perfect and where experience shows that most cryptographic systems are broken due to problems that have nothing to do with mathematics. This book is about how to apply cryptographic functions in a real-world setting in such a way that you actually get a secure system.

This is the book we wish we'd had more than a decade ago when we started our cryptographic careers. It collects our combined experiences on how to design cryptographic systems the right way. In some ways, this book is a sequel to "Applied Cryptography," but it focuses on very practical problems and on how to build a secure system rather than just design a cryptographic protocol.

Note: This book is not my book on general security that I mentioned in this newsletter in November 2002. That book will be published in September by Copernicus Books.

Web site for the book (including the Table of Contents and the Preface):

Order the hardcover from Amazon here:

And the paperback from Amazon here:

** *** ***** ******* *********** *************

Crypto-Gram Reprints

Crypto-Gram is currently in its sixth year of publication. Back issues cover a variety of security-related topics, and can all be found on .
These are a selection of articles that appeared in this calendar month in other years.

SNMP vulnerabilities:

Bernstein's Factoring Breakthrough?

Richard Clarke on 9/11's Lessons

Security patch treadmill:

Insurance and the future of network security:

The "death" of IDSs:

802.11 security:

Software complexity and security:

Why the worst cryptography is in systems that pass initial cryptanalysis:

** *** ***** ******* *********** *************

SSL Flaw

Last month, a flaw in the SSL protocol made the news. Although first reported as a flaw in the protocol itself, it is actually an implementation flaw. While technically interesting, the flaw doesn't affect most people's SSL security. And even if it did compromise SSL security, it wouldn't really matter.

The attack is one of a general class of side-channel attacks. The attacker can use timing variations in certain implementations of SSL to gain information about encrypted data. In some circumstances, the attacker can use that information to decrypt the data and recover the SSL password, which can then be used to compromise the entire SSL secure channel.

This is a real attack, and a good scientific result, but it's not applicable to most SSL users. For the attack to work, the SSL software needs to use a block cipher (preferably with a 64-bit block, like triple-DES) in CBC mode. The vast majority of SSL implementations default to RC4, which is not susceptible to the attack. The attack is also a man-in-the-middle attack, meaning that the attacker must be able to insert himself into the SSL connection between the client and the server; an attacker who is passively eavesdropping on the connection cannot mount it. And finally, the attacker needs the SSL connection to have some special characteristics so that he can form a certain sequence of messages; in most normal browsing, this just isn't going to happen.

All of this points to the attack being primarily of theoretical interest, which doesn't mean that vendors shouldn't fix their implementations. Users don't have to rush to download patches, though.

In a Reuters article on the topic, I was quoted as saying that "Nobody bothers eavesdropping on the communications while it is in transit." This isn't a misquote (grammar mistake and all). Even if SSL were irrevocably broken, it wouldn't affect Internet security very much. There are two reasons. One, SSL is almost never used in a secure manner. And two, SSL doesn't solve an important security problem.

SSL establishes a secure channel between a client and a server. In order for you, the SSL client, to ensure that the channel is secure, you need to authenticate the server. You can do this by looking at the SSL certificate (your browser allows you to do this, and a short code sketch of the idea appears below) and making sure that the server you have established a secure channel with is the one you want to talk to. My guess is that approximately no one ever does this. I certainly never do it. This means that you are using SSL to establish a secure channel with a random person. Imagine you are sitting in a lightless room with a stranger. You know that your conversation cannot be eavesdropped on. What secrets are you going to tell the stranger? Nothing, because you have no idea who he is. SSL is kind of like that.

SSL is supposed to solve the security problem of transferring sensitive information between browsers and Web servers. Mostly, I see it used to protect credit card transactions; people are concerned about hackers stealing their credit card numbers as they move through the network.
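As an aside, before getting to why little of this matters in practice: here is a minimal sketch of the two checks just described, seeing which cipher suite a connection actually negotiated (a CBC-mode block cipher is the kind the timing attack needs; RC4 is not affected), and looking at whose certificate is on the other end of the channel. It is written in present-day Python using the standard-library ssl module; the host name is a placeholder, and this is an illustration of the idea rather than a recommended tool.

     # Sketch: inspect the negotiated cipher and the server certificate of a
     # TLS connection.  The host name below is a placeholder, not a real target.
     import socket
     import ssl

     HOST = "www.example.com"   # placeholder; substitute the site you care about
     PORT = 443

     # The default context verifies the certificate chain and the host name.
     context = ssl.create_default_context()

     with socket.create_connection((HOST, PORT)) as raw_sock:
         with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
             # Which cipher did we actually get?  A CBC-mode block cipher
             # (e.g., 3DES-CBC) is the kind the timing attack needs; a stream
             # cipher such as RC4 is not affected.
             name, protocol, bits = tls_sock.cipher()
             print("Cipher suite:", name, "| protocol:", protocol, "| key bits:", bits)

             # Whose certificate is this?  Checking the subject is the "look
             # at the certificate" step that almost nobody performs.
             cert = tls_sock.getpeercert()
             print("Certificate subject:", cert.get("subject"))
             print("Certificate issuer: ", cert.get("issuer"))

If the negotiated suite names RC4, the timing attack described above does not apply; and an unexpected certificate subject is exactly the lightless-room situation: a secure channel to someone you never identified.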
By now it should be obvious that hackers don't steal credit card numbers one by one across the network; they steal them in bulk -- by the thousands or even millions -- by breaking into poorly protected networks. Many smaller e-commerce sites don't use SSL to protect their credit card transactions, and even there this kind of attack simply doesn't happen.

I admit that my Reuters quote is a bit of an overstatement. SSL is used to protect personal information between customers and online banks or brokerage houses, employees and employers, patients and insurance companies, etc., but by and large SSL is for show. The real risks to personal data are the large databases at the endpoints, not the communications between them. I wouldn't discard SSL as irrelevant, but neither would I worry very much if it could be attacked. Security is only as strong as the weakest link, and SSL is nowhere close to being the weakest link.

The research paper:

Reuters article: or

Slashdot discussion: or

** *** ***** ******* *********** *************

The Doghouse: Random Cryptography Companies

I am continually amazed by how many of these there are. Thanks to everyone who sends me these Doghouse nominations.

Vadium Technology. They have a one-time pad. Need I say more?

PMC Ciphers. The theory description is so filled with pseudo-cryptography that it's funny to read. Hypotheses are presented as conclusions. Current research is misstated or ignored. The first link is a technical paper with four references, three of them written before 1975. Who needs thirty years of cryptographic research when you have polymorphic cipher theory?

hierocryptX Technologies. The long PDF explaining their "polymorphic cipher theory" is, unfortunately, no longer available. The Web site is filled with extraordinary but unsubstantiated claims regarding their cipher. What is it about polymorphic cipher theory? Is it just that the name sounds so cool?

PureNoise. "Uses 128 rounds of a ridiculously strong 3072 bit paranoid encryption that far exceeds even military standards!" Wow.

jSoft Studios. Their FastEncrypt product has "three different forms of encryption: RAVE (Random Adaptive Variable Encryption), variable bit-length, and irreversible encoding which provides virtually unbreakable security." There's no more technical information than that; it's proprietary, of course.

But the Association for Cryptographic Variety probably thinks that all this snake oil is a good thing:

Note: I don't think the above site is a hoax, but it's so close to the edge that it might be.

** *** ***** ******* *********** *************

News

For you to decide if a Web page is authentic, you not only have to trust the page author, you need to trust the browser as well:

Another paper on the spread of the SQL Slammer worm:

Good seven-page summary of the 289-page HIPAA regulations:

Steganography program hides data in executable code:

At the request of Citibank, the High Court in London has imposed an injunction on Ross Anderson and other Cambridge University security experts who claim to have uncovered serious failings in the system banks use to secure ATM PIN codes.

Scary stuff on the U.S. Justice Department's new proposal to criminalize encryption:

Hacker falls prey to social engineering attack. Kind of a cute story.

The SmartGate facial recognition trial at Sydney Airport has suffered an embarrassing setback: two Japanese visitors fooled the system simply by swapping passports.

You can play "automatic face recognition" at home.
Here are two pictures of the recently arrested Khalid Shaikh Mohammed. Are they the same person?

More cell phone hacking:

Computer pioneer and computer security don Roger Needham has died. He will be missed.

Two enterprising Japanese criminals used keyboard sniffers to collect bank account information and passwords, and used that information to steal $136,000. or

I debated the National Strategy to Secure Cyberspace with John Thompson, the CEO of Symantec, in the San Jose Mercury News. My essay shouldn't be news to any Crypto-Gram readers:

Thompson's essay:

** *** ***** ******* *********** *************

Counterpane News

Schneier is giving the keynote address at the 2003 Computers, Freedom, and Privacy conference in New York: April 2 at 8:30 AM.

Schneier is giving a keynote address at the IBM Almaden Institute Symposium on Privacy in San Jose. He will be speaking on "Privacy and Technology": April 10 at 9:00 AM. Registration is by invitation.

Schneier is speaking at the RSA Conference in San Francisco. He is moderating the Cryptographers' Panel on Monday, April 14 (5:00 - 6:00). He is speaking on "Security Proxies and Agenda" on Wednesday, April 16, at 9:00 AM, and on "How to Think About Security" on Thursday, April 17, at 10:00 AM.

** *** ***** ******* *********** *************

Security Notes from All Over: Woodland Ants

The woodland ant (Pheidole dentata) survives in areas dominated by fire ant colonies, even though fire ant colonies tend to be up to one hundred times larger and make lousy neighbors. The woodland ant's trick is counterattack: it keeps a permanent population -- about 10 percent -- of warrior-caste ants who do nothing but patrol with the worker ants. Whenever one of these ants sees a fire ant, it rushes it, picks up some of the fire ant's smell, and then runs home, laying a scent trail. It immediately alerts the nest, and any other workers and warriors it encounters along the way. A huge force of woodland ants arrives shortly, kills the offending fire ant, and then spreads out to search for others. The warriors will keep circling the area for hours, looking for and killing any other fire ants they find. By throwing the maximum possible effort into this counterattack, the woodland ants ensure that no fire ant gets back to its own colony with accurate information about the woodland ants' whereabouts.

** *** ***** ******* *********** *************

SSL Patent Infringement

Leon Stambler claims that at least two of the U.S. patents he owns cover the SSL protocol. He's collected millions from companies by threatening to sue them. VeriSign and RSA decided to fight the patents in court. And they won.

This case has been going on for over a year, and many people have asked me whether or not the patents are valid. But honestly, I didn't (and still don't) have the strength to read the actual patents. It's eight patents -- hundreds and hundreds of pages of dense legalese. There are over a hundred claims, some of which are so general they can apply to any authentication protocol. It took a phalanx of legal experts to figure this one out, and I am pleased to say that I was not retained -- and have no intention of being retained -- by any of them.

But the whole thing looked fishy to me. Stambler first filed his patent application in 1992, but some of the patents weren't issued until 1998 and 1999. The SSL protocol was developed in 1994 (a patent for it was granted in 1997). This is what's called a "submarine patent," made possible by a property of the U.S.
patent system called "continuation." For any technology that you've patented, you always keep one or two patents "open"; i.e., you keep them in the patent process. When you file a patent, you have to file a disclosure document that describes the technology. The disclosure document cannot be changed later, but the patent claims can. And it is quite easy to delay the patent-issuing process by procedural means. So you delay a few continuation patents (extensions of your basic patent).

Now suppose you see something significantly different from the thing you patented, but related (like the SSL protocol). You then try to rewrite the claims of the continuation patent to cover the new thing. Getting the new claims approved is a negotiation process between you and the patent office, and the patent office doesn't necessarily know what you're trying to sneak by, so you have a good chance of getting the new claims approved. You then let the continuation patent be issued, with the claims that directly cover the new thing, and sue.

Patent examination isn't secret; something called the patent "wrapper" contains all of the drafts, correspondence, and other paperwork related to the patent before it was issued. Anyone can get a patent wrapper for any issued patent from the patent office, although you have to pay a hefty photocopying charge for what could be thousands of pieces of paper. And that's where the lawyers come in.

Stambler isn't stupid. He accepted $400,000, plus some ongoing royalties, from Certicom. I'm sure Certicom looked at the patent and said: "This can't be valid." But Certicom's lawyers said: "Look. It'll cost you $400,000 for us to read the patent, read the wrapper, and engage in litigation. And the outcome of litigation is never without doubt." Certicom isn't stupid, either. They reasonably decided that paying was cheaper than fighting. Openwave paid the same amount, and First Data supposedly paid $4 million! (I honestly don't believe that number.)

I'm pleased that VeriSign and RSA fought, and thrilled that they won, but the game illustrates a serious problem with the current patent system: it falls to a divide-and-conquer attack. Let's say that successfully fighting a patent costs $5 million. (I'm making these numbers up, but that's not an unreasonable cost for patent litigation.) The patent owner approaches ten companies and offers to license the patent for $1M each. Since fighting costs $5M and the companies are rational, they pay up. But if the ten companies banded together and successfully fought the patent, they would split the $5M and pay only $500K apiece, saving each of them $500K.

By fighting, RSA and VeriSign did charity work for the industry, in addition to serving their own self-interest. If they succeed in nullifying these patents, they're doing everyone an enormous favor. (Who knows what other key-negotiation protocols Stambler will litigate against next. Stambler has already claimed that his patents cover Microsoft PPTP, PCT, FIPS 196, SET, and Authenticode, and he's probably looking at the various Digital Rights Management protocols.) But it's because of this flawed U.S. patent system that they had to do so in the first place.

News article:

Stambler's patents (you can get copies at ):
5,974,148, 5,936,541, 5,793,302, 5,646,998, 5,555,303, 5,524,073, 5,267,314, RE31,302

Stambler loses the case:

There's "More to Come," though:

** *** ***** ******* *********** *************

Comments from Readers

From: "Venables, Phil"
Subject: Locks and Full Disclosure

We need to add the color of two further dimensions to this problem.
1) Time. In the medium-to-long run, full disclosure is an imperative. But in the short term, at the immediate discovery of a vulnerability (note: not an exploit), a vendor should be given an appropriate amount of time to acknowledge the problem and to develop, test, and make available a fix.

2) Detail. At some point in the cycle from discovery to full disclosure, there should be some intermediate disclosure that enables people to prepare to move quickly on remediating the vulnerability; e.g., expect an IE JVM patch in 4 weeks, nature of impact could be X, Y, Z. This could give people time to spin up remediation resources and make tactical risk decisions about mitigating controls, but it would not give a broad community access to the detail that would hasten exploits before the exposure was reduced. Although I do admit that tantalizing detail would often lead to demands for full disclosure.

Like anything in life, a flat environment is all shades of gray; it's only black and white when you split it into different dimensions.


From: Ben Day
Subject: Locks and Full Disclosure

I wanted to comment on your analogy between master-key lock vulnerabilities and the digital security paradigms you've spoken on behalf of. While I find the case for "open source" electronic security (if I can call it that) entirely persuasive, there are serious problems with applying this model to "physical" security. This doesn't mean that I necessarily disagree with your criticism of the locksmithing community's reliance on secrecy, but I don't think disclosure would have the beneficial result it has in the world of digital security (forcing people to use better, less penetrable security mechanisms), because of the contingencies of lock security.

To explain: Matt Blaze's paper is, I think, less interesting than it appears: pin-tumbler locks are incredibly easy to pick -- anyone can find the famous MIT guide to lock-picking by "Ted the Tool" online and, with an hour or two of practice, start taking padlocks and doors apart using household items. Locks with master keys are even easier to pick, since several of the pins are usually cut in two places, and can thus be set at either of the cuts. Shaving an individualized key to get a master is like using a flame-thrower to light your cigarette when you've got a lighter in your pocket, although I'm sure both have happened for entertainment purposes.

The complication with Blaze's (and your) analysis of the vulnerability, though, comes in when we ask WHY ninety percent of the locks out there are tumblers, and are so easy to pick (or get into through other tricks, like the master-key scheme), when lock mechanisms exist that are essentially unpickable and resistant to other non-destructive exploits. The answer is: THEY ARE INTENTIONALLY MADE SO THAT THEY CAN BE BROKEN INTO. You and Blaze make the crucial mistake of assuming that the goal of locks is to prevent people from being able to get in without the proper key, but in fact, locks are made in most cases precisely so that people -- with slightly esoteric but not rare skills -- can open them without a key. This is simply because people don't want to have the locks on their house or school locker drilled open and replaced every time they lose their keys or lock them inside. Most cars made since the 1990s come with unpickable Illco-style locks (these take the keys with smooth, rounded bumps instead of sharp or squared teeth) -- but this is only because when you lock your keys in the car, nobody goes in through the lock, but picks or wedges open the door.
If the loopholes by which car doors can be broken into were closed, I guarantee you that the locks would be pickable -- cars have to be built so that they can be broken into, paradoxically.

This is where the "open source" approach becomes tricky. If lock-picking were to become "public knowledge" in the sense of public know-how (everyone knowing how to pick locks), most locks would be pretty useless. However, if the vulnerability of tumbler locks were to become public knowledge, this would not necessarily lead to "tighter" lock security, for the reasons described above. So there is a rationality in the locksmiths' desire for secrecy that doesn't apply to the digital world: people who know that their lock is not terribly secure will NOT necessarily go out and buy an unpickable lock, even if it costs the same. However, the more people there are who know how to pick locks easily, the less of a deterrent locks will be (and deterrence, not safeguarding per se, is essentially all that most locks attempt to accomplish).

I said above that I don't necessarily disagree with your criticism of the secrecy attempts, but I would agree mostly on the grounds of consumers' right to an accurate understanding of what the product they are purchasing does and does not do, not because I think this culture of secrecy is perpetuating crummy lock-making (the way secrecy in cryptography can easily perpetuate poor security protocols). I suspect that the "functional vulnerability" of most locks is not paralleled in the digital universe, since in the latter we are often not actually securing access to something, but securing TRANSMISSION of something that can be duplicated. In other words, we don't always "lose" data if we lose the key to it. In cases where security IS a matter of ultimate access -- logging on to operating systems, for example -- I think we find ourselves approaching the "crummy lock" paradigm, in which backdoors are intentionally built in, usually available to those with physical access to the machine the data is on. Although there are cases in the digital world, as in the physical world, where we want to totally and irreversibly limit access to those holding or remembering the proper key, this always comes with a high risk of loss, which means that we don't necessarily turn to this approach for more valuable information or items.


From: Douglas Albert Sibley
Subject: Signature Verification on Checks at Banks

For checks, aggregate costs may also be substantially lower if customers check their own records. It is reasonable to assume that it would be difficult for a potential attacker to get a blank check from a potential victim, and that customers will check their own records, so such fraud will not go undetected. Given the very high volume of checks processed legitimately, the fact that alteration of the amount is a bigger problem (and one where there is already some protection, since large-value checks are looked at), the variation in legitimate signatures, and the fact that many or most customers would not want to pay extra for writing checks, verifying signatures only on large checks is a good policy.
From: Dale Southard
Subject: Meganet

You forgot one important Meganet link. Here's the reverse-engineered implementation of VME, which, if correct, is easily as funny as their corporate Web site: or

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on . To subscribe, visit or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit .

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies worldwide.

Copyright (c) 2003 by Counterpane Internet Security, Inc.