CRYPTO-GRAM

July 15, 2003

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at . To subscribe, visit or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Copyright (c) 2003 by Counterpane Internet Security, Inc.

** *** ***** ******* *********** *************

In this issue:
How to Fight
The Doghouse: YTech
More E-mail Filtering Idiocy
News
Counterpane News
Security Notes from All Over: Red Wine
Password Safe
Crying Wolf
Comments from Readers

** *** ***** ******* *********** *************

How to Fight

I landed in Los Angeles at 11:30 PM, and it took me another hour to get to my hotel. The city was booked, and I was lucky to get a reservation where I did. When I checked in, the clerk insisted on making a photocopy of my driver's license. I tried fighting, but it was no use. I needed the hotel room. There was nowhere else I could go. The night clerk didn't really care if he rented the room to me or not. He had rules to follow, and he was going to follow them.

My wife needed a prescription filled. Her doctor called it in to a local pharmacy, and when she went to pick it up the pharmacist refused to fill it unless she disclosed her personal information for his database. The pharmacist even showed my wife the rule book. She found the part where it said that "a reasonable effort must be made by the pharmacy to obtain, record, and maintain at least the following information," and the part where it said: "If a patient does not want a patient profile established, the patient shall state it in writing to the pharmacist. The pharmacist shall not then be required to prepare a profile as otherwise would be required by this part." Despite this, the pharmacist refused. My wife was stuck. She needed the prescription filled. She didn't want to wait the few hours for her doctor to phone the prescription in somewhere else. The pharmacist didn't care; he wasn't going to budge.

I had to travel to Japan last year, and found a company that rented local cell phones to travelers. The form required either a Social Security number or a passport number. When I asked the clerk why, he said the absence of either sent up red flags. I asked how he could tell a real-looking fake number from an actual number. He said that if I didn't care to provide the number as requested, I could rent my cell phone elsewhere, and hung up on me. I went through another company to rent, but it turned out that they contracted through this same company, and the man declined to deal with me, even at a remove. I eventually got the cell phone by going back to the first company and giving a different name (my wife's), a different credit card, and a made-up passport number. Honor satisfied all around, I guess.

It's stupid security season. If you've flown on an airplane, entered a government building, or done any one of dozens of other things, you've encountered security systems that are invasive, counterproductive, egregious, or just plain annoying. You've met people -- guards, officials, minimum-wage workers -- who blindly force you to follow the most inane security rules imaginable.

Is there anything you can do?

In the end, all security is a negotiation among affected players: governments, industries, companies, organizations, individuals, etc.
The players get to decide what security they want, and what they're willing to trade off in order to get it. But it sometimes seems that we as individuals are not part of that negotiation. Security is more something that is done to us.

Our security largely depends on the actions of others and the environment we're in. For example, the tamper resistance of food packaging depends more on government packaging regulations than on our purchasing choices. The security of a letter mailed to a friend depends more on the ethics of the workers who handle it than on the brand of envelope we choose to use. How safe an airplane is from being blown up has little to do with our actions at the airport and while on the plane. (Shoe-bomber Richard Reid provided the rare exception to this.) The security of the money in our bank accounts, the crime rate in our neighborhoods, and the honesty and integrity of our police departments are out of our direct control.

We simply don't have enough power in the negotiations to make a difference. I had no leverage when trying to check in without giving up a photocopy of my driver's license. My wife had no leverage when she tried to fill her prescription without divulging a bunch of optional personal information. The only reason I had leverage renting a phone in Japan was because I deliberately sneaked around the system. If I try to protest airline security, I'm definitely going to miss my flight and I might get myself arrested.

There's no parity, because those who implement the security have no interest in changing it and no power to do so. They're not the ones who control the security system; it's best to think of them as nearly mindless robots. (The security system relies on them behaving this way, replacing the flexibility and adaptability of human judgment with a three-ring binder of "best practices" and procedures.)

It would be different if the pharmacist were the owner of the pharmacy, or if the person behind the registration desk owned the hotel. Or even if the policeman were a neighborhood beat cop. In those cases, there's more parity. I can negotiate my security, and he can decide whether or not to modify the rules for me. But in modern society, security more often comes from faceless corporations and mindless governments. It's implemented by people and machines that have enormous power, but only the power to implement what they're told to implement. And they have no real interest in negotiating. They don't need to. They don't care.

But there's a paradox. We're not only individuals; we're also consumers, citizens, taxpayers, voters, and -- if things get bad enough -- protestors and sometimes even angry mobs. Only in the aggregate do we have power, and the more we organize, the more power we have.

Even an airline president, while making his way through airport security, has no power to negotiate the level of security he'll receive and the tradeoffs he's willing to make. In an airport and on an airplane, we're all nothing more than passengers: an asset to be protected from a potential attacker.

The only way to change security is to step outside the system and negotiate with the people in charge. It's only outside the system that each of us has power: sometimes as an asset owner, but more often as another player. And it is outside the system that we will do our best negotiating. Outside the system we have power, and outside the system we can negotiate with the people who have power over the security system we want to change.
After my hotel stay, I wrote to the hotel management and told them that I was never staying there again. (Unfortunately, I am collecting an ever-longer list of hotels I will never stay in again.) My wife has filed a complaint against that pharmacist with the Minnesota Board of Pharmacy. John Gilmore has gone further: he hasn't flown since 9/11, and is suing the government for the constitutional right to fly within the U.S. without showing a photo ID.

Three points about fighting back.

First, one-on-one negotiations -- customer and pharmacy owner, for example -- can be effective, but they also allow all kinds of undesirable factors like class and race to creep in. It's unfortunate but true that I'm a lot more likely to engage in a successful negotiation with a policeman than a black person is. For this reason, more stylized complaints or protests are often more effective than one-on-one negotiations.

Second, naming and shaming doesn't work. Just as it doesn't make sense to negotiate with a clerk, it doesn't make sense to insult him. Instead say: "I know you didn't make the rule, but if the people who did ever ask you how it's going, tell them the customers think the rule is stupid and insulting and ineffective." While it's very hard to change one institution's mind when it is in the middle of a fight, it is possible to affect the greater debate. Other companies are making the same security decisions; they need to know that it's not working.

Third, don't forget the political process. Elections matter; political pressure by elected officials on corporations and government agencies has a real impact. One of the most effective forms of protest is to vote for candidates who share your ideals.

The more we band together, the more power we have. A large-scale boycott of businesses that demand photo IDs would bring about a change. (Conference organizers have more leverage with hotels than individuals do. The USENIX conferences won't use hotels that demand ID from guests, for example.) A large group of single-issue voters supporting candidates who worked against stupid security would make a difference.

Sadly, I believe things will get much worse before they get better. Many people seem not to be bothered by stupid security; it even makes some feel safer. In the U.S., people are now used to showing their ID everywhere; it's the new security reality post-9/11. They're used to intrusive security, and they believe those who say that it's necessary.

It's important that we pick our battles. My guess is that most of the effort fighting stupid security is wasted. No hotel has changed its practice because of my strongly worded letters or loss of business. Gilmore's suit will, unfortunately, probably lose in court. My wife will probably make that pharmacist's life miserable for a while, but the practice will probably continue at that chain pharmacy. If I need a cell phone in Japan again, I'll use the same workaround. Fighting might brand you as a troublemaker, which might lead to more trouble.

Still, we can make a difference. Gilmore's suit is generating all sorts of press, and raising public awareness. The Boycott Delta campaign had a real impact: passenger profiling is being revised because of public complaints. And due to public outrage, Poindexter's Terrorism (Total) Information Awareness program, while not out of business, is looking shaky.

When you see counterproductive, invasive, or just plain stupid security, don't let it slip by. Write the letter. Create a Web site. File a FOIA request. Make some noise.
You don't have to join anything; noise need not be more than individuals standing up for themselves.

You don't win every time. But you do win sometimes.

Privacy International's Stupid Security Awards:

Stupid Security Blog:

Companies Cry 'Security' to Get A Break From the Government:

Gilmore's suit:

Relevant Minnesota pharmacist rules:

How you can help right now:
Tell Congress to Get Airline Security Plan Under Control!
TIA Update: Ask Your Senators to Support the Data-Mining Moratorium Act of 2003!
Congress Takes Aim at Your Privacy
Total Information Awareness: Public Hearings Now!
Don't Let the INS Violate Your Privacy
Demand the NCIC Database Be Accurate
Citizens' Guide to the FOIA

** *** ***** ******* *********** *************

The Doghouse: YTech

YTech has the ShadowX algorithm. It's proprietary to the company, of course. This kind of thing is nothing new, and normally I wouldn't bother. But this sentence has me really worried: "Two modes of encryption 'Self Mode' and 'Key mode.'" Um, how secure can it possibly be if there isn't a key?

** *** ***** ******* *********** *************

More E-Mail Filtering Idiocy

I use Postini as a spam filter. Postini automatically scans all of my incoming e-mail. Anything it considers spam it shunts to another mailbox, which I check occasionally. There I can quickly scan my spam for legitimate e-mail, and specify certain e-mail addresses as ones that should be allowed rather than shunted. It's a good system. I see almost no spam anymore.

Not everyone else has such a nice spam filter. Crypto-Gram is fighting a seemingly endless battle against filters of various sorts. There are people who simply can't get this newsletter because it is tagged as spam or porn. (I don't think anyone on MSN gets Crypto-Gram anymore, for example.) Most of the time I never hear about this, but occasionally I get error messages back from corporate filters.

Some of them are entertaining. Some filters block Crypto-Gram if it is larger than 50K. Once, a filter blocked an issue that used the term "ILOVEYOU." Another was returned with the following message: "Body contains word(s)/phrase(s) 'bomb, gun.'" Another filter blocked an issue because the words "blow" and "job" appeared in the e-mail, even though they were in different paragraphs. The most recent issue was blocked by one filter because it contained more than two links to Geocities Web sites. (It seems that many Geocities Web sites are pornographic.) The same issue was also blocked by another filter for containing unspecified "dirty words"; the person involved pointed out that the same filter didn't block penis enlargement spam.

Sadly, the above paragraph will trigger all the same spam filters, so the people who don't get Crypto-Gram because of them will not get this issue either, and hence will never know why.

And my stories pale in comparison to Neil Gaiman's experience with the spam filter at DC Comics, publisher of Sandman. It seems that the filter automatically blocked all e-mail containing the word "Sandman" without informing either the sender or the receiver. Gaiman was unable to communicate with his publisher about his own writing.

The EFF's position on spam filters is: "Any measure for stopping spam must ensure that all non-spam messages reach their intended recipients." It's a laudable goal, but one that's very difficult to implement in practice. Newsletters like Crypto-Gram are problematic. I know that everyone who gets my newsletter has subscribed, but how does any filter know that?
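To make the blocking behavior above concrete, here is a minimal sketch of the kind of blunt, rule-based filtering these bounce messages suggest. The specific word list, size cap, and link threshold are taken from the examples above but are otherwise invented for illustration; this is not any particular vendor's code.

# Toy rule-based mail filter of the sort described above.  The rules are
# illustrative assumptions: a 50K size cap, a banned-word list, and a
# limit on Geocities links.  Context is ignored entirely, which is why
# "blow" and "job" in unrelated paragraphs still trigger a block.
import re

BANNED_WORDS = {"bomb", "gun", "blow", "job"}   # hypothetical word list
MAX_SIZE = 50 * 1024                            # block anything over 50K
MAX_GEOCITIES_LINKS = 2

def is_blocked(body):
    """Return (blocked, reason) for a message body."""
    if len(body.encode("utf-8")) > MAX_SIZE:
        return True, "message larger than 50K"

    words = set(re.findall(r"[a-z]+", body.lower()))
    hits = BANNED_WORDS & words
    if hits:
        return True, "body contains word(s)/phrase(s): " + ", ".join(sorted(hits))

    if len(re.findall(r"geocities\.com", body, re.I)) > MAX_GEOCITIES_LINKS:
        return True, "too many links to Geocities sites"

    return False, ""

# A newsletter that merely *discusses* the banned words is blocked like spam:
print(is_blocked("A filter once flagged the words blow and job in separate paragraphs."))

Nothing in rules like these can tell a solicited newsletter that quotes a "dirty word" from actual spam, which is exactly the problem.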
I send 80,000 of these out every month; the only difference between me and a spammer is that my recipients asked to receive this e-mail. But I'm sure that some of my recipients don't remember subscribing. To them, Crypto-Gram is unsolicited e-mail: spam.

Despite my personal difficulties with sending out Crypto-Gram, I have a lot of sympathy for spam filters. There's a lot of "throwing the baby out with the bathwater" going on, but the bathwater is so foul that many companies don't mind the occasional loss of baby. The spam problem is so bad that draconian solutions are the only workable ones right now.

EFF on spam filters: or

Neil Gaiman's story: or

Original article on e-mail filtering idiocy:

** *** ***** ******* *********** *************

News

Another DDOS variant:

British cryptanalysis work against Russian ciphers during World War II: or

Spammers are using Trojans to take over home PCs:

Long, but good, article on homeland security:

Erroneous timestamps on ATM withdrawals result in the arrest of three innocents:

June 25th was the 100th anniversary of George Orwell's birth.

For years I've been saying that securing data in servers is much harder than securing data in transit, and that encryption is an irrelevant security technology in many situations. Here's another essay that makes similar points:

A new California law requires companies to report security breaches: or

In the days after 9/11, lots of people took advantage of malfunctioning cash machines and stole millions. or

Vulnerability management. With so many vulnerabilities out there, you have to prioritize.

Web privacy policies confuse more than they enlighten, according to a survey. This is hardly surprising; I kind of figured confusion was the point.

It took just one week for the new Harry Potter book to be available online:

Security through diversity. Remember that this only works if your system is as secure as the union of the security of the diverse subsystems. If your system is only as secure as the intersection of the security of the diverse subsystems, then diversity is going to hurt rather than help. (In other words, diversity helps when an attacker has to break every one of the diverse subsystems to succeed, and hurts when breaking any single one of them is enough.)

** *** ***** ******* *********** *************

Counterpane News

Counterpane had an excellent second quarter. Read about it here:

Bruce Schneier is delivering the keynote speech at BlackHat: 7/31 at 8:00 AM in Las Vegas.

** *** ***** ******* *********** *************

Security Notes from All Over: Red Wine

"Some women dining out in Tegucigalpa's fancier restaurants always order red rather than white wine, I was told. That way, if a robber comes in with a gun, they can discreetly drop their rings and earrings into the wine glass where they will not be spotted as they would be in a glass of white."

This idea intrigues me. It's a simple security countermeasure, and one likely to be effective in a quick and stressful robbery. But why is wine required? Couldn't the women equally effectively use their napkins, their blouses, or the floor? I suppose moving to sip wine is a more natural, and therefore less noticed, maneuver. And I wonder if restaurants might start offering a cheap house red just for this purpose.

** *** ***** ******* *********** *************

Password Safe

Password Safe 1.92b is available.

Many computer users today have to keep track of dozens of passwords: for network accounts, online services, premium Web sites. Some write their passwords on a piece of paper, leaving their accounts vulnerable to thieves or in-house snoops. Others choose the same password for different applications, which makes life easy for intruders of all kinds.
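Conceptually, the answer taken by utilities like the one described next is to keep all of a user's passwords in one file, encrypted under a single master passphrase. Here is a minimal sketch of that general idea in Python, using the third-party "cryptography" package; the key-derivation parameters, file layout, and function names are my own illustrative choices, not Password Safe's actual file format or algorithms.

# Toy encrypted password store: one master passphrase unlocks everything.
# Illustrates the general approach only; not Password Safe's real design.
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase, salt):
    """Stretch the master passphrase into a 32-byte encryption key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def save_safe(path, passphrase, entries):
    salt = os.urandom(16)
    token = Fernet(derive_key(passphrase, salt)).encrypt(
        json.dumps(entries).encode())
    with open(path, "wb") as f:
        f.write(salt + token)      # store the salt alongside the ciphertext

def open_safe(path, passphrase):
    with open(path, "rb") as f:
        salt, token = f.read(16), f.read()
    return json.loads(Fernet(derive_key(passphrase, salt)).decrypt(token))

# Usage: one combination to remember, many passwords stored.
save_safe("demo.safe", b"correct horse battery staple",
          {"bank": "s3cret", "mail": "another-s3cret"})
print(open_safe("demo.safe", b"correct horse battery staple"))

The design point is the same one described below: the user has to remember exactly one strong secret, and everything else is stored encrypted under it.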
Password Safe is a free Windows utility (originally developed at Counterpane Labs) that allows users to keep their passwords securely encrypted on their computers. A single Safe Combination -- just one thing to remember -- unlocks them all.

Password Safe has always been free, but it only became open source last year. This April, Rony Shapiro took charge of the project. (Applause and accolades.) He's released a new version, based on work by a small team of volunteers.

Password Safe 1.92 has a number of small improvements, all of which make it easier to use and more customizable to each user's preferences. The changes include: a resizable main window, display of the username and notes in the main window, the ability to search the database for a given string, a listing of the last database opened, the ability to define generated-password policies, and the ability to pass the name of a database on the command line. The Release Notes list all the changes in gory detail. If you're a user of Password Safe 1.7 (the most recent version available on the Counterpane Web site), you'll have no trouble going back and forth with the same database.

Password Safe 2.0 is currently under development. The significant new features are: the ability to organize passwords in a hierarchical view, portability to other platforms (PocketPC, Linux, Palm, probably in that order), and an extensible database format (meaning that the developers will be able to add more features easily). The overall goal is to keep Password Safe a small and simple application. As with any non-commercial open source project, schedules are fluid. Right now, the end of this year is a good conservative estimate for a non-beta 2.0 release.

Password Safe Web site:

Download Password Safe 1.92b: or

Discussions on Password Safe 2.0:

** *** ***** ******* *********** *************

Crying Wolf

On July 2, both the U.S. government and ISS (a company that sells computer security products) sent out a story about something called the "Defacers Challenge." Supposedly thousands of Web sites would be defaced on July 6 as part of some game. The press picked the story up, and soon it was international news. At Counterpane we discounted it as nonsense, but when our customers started calling us we put out an advisory.

July 6 came and went; nothing happened. My guess is that it was all a hoax.

Not that we could do anything if something did happen. Most of the news reports and advisories told people to make sure their security was up to date and their patches current. That's good advice any day of the year. Worrying about July 6 didn't make it any less likely that Web sites would get attacked.

For years, the security industry has tried to survive on FUD: fear, uncertainty, and doubt. The basic idea is that if you scare your potential customers, they're going to buy your products. (Greed and fear are two major human motivators, and both are exploited endlessly by corporate -- and government -- marketers.) The problem is that FUD only works for a while. Eventually people realize that there's nothing to be scared about. Eventually people ignore the warnings. And when that happens, they ignore the real warnings as well as the hyped ones.

FUD is hard to prevent. Even those of us who knew better had to deal with the Defacers Challenge story. A few reporters covered it because it's kind of a cool story, and then everyone else had to follow. I remember talking to one reporter. He said that he ignored the story at first, realizing that it was FUD.
But when other papers picked it up, his editor demanded that he write about it, too. It didn't matter that it wasn't real news; it was news solely because it was reported elsewhere.

And in a weird way, the reporting made the threat real. Thousands of would-be Web site defacers, who would never have heard about the Defacers Challenge, read about it in the newspapers. "Sounds like fun," they might have thought.

Recently I've read several articles about why the computer security industry is in the doldrums. People, it seems, are not buying the new cool security products. There are half a dozen reasons for this, but FUD is a big one. We have threatened customers with the big bad nasties of the Internet. We have promised customers that -- this time for sure -- our products would solve their problems. But guess what? Customers have gotten cynical. They've noticed that it isn't all that bad out there. And they've noticed that they have problems whether or not they buy the products.

Here's my hint to anyone trying to sell computer security: demonstrate value. Demonstrate ROI. Demonstrate that your product enables customers to manage their risk better. FUD doesn't work anymore. It doesn't sell anything, and it pisses off your potential customers.

Unfortunately, the U.S. government is going to have to learn this same lesson. Since 9/11, the Department of Homeland Security has elevated the terrorist threat level to Orange twice (I think). Every time, we were told to be on our guard but to go about our business. And every time, nothing happened. Terrorist attacks are rare, and if the color-coded threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.

The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings.

If the color-alert system became something better defined, so that people knew exactly what caused the levels to change, what the change meant, and what actions they needed to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. There has to be some measurable result, even if there is no actual attack.

You can only cry wolf so many times before people ignore you.

Note: One excellent Web source for uncovering FUD has been Vmyths. For years, Vmyths has been a voice of reason in the security community. Now the site may close down because it can't support itself. If you're a company looking for a *good* PR boost, consider taking over this site.
News articles before: or

News articles after: or

Counterpane's alert:

Vmyths alert on the Defacers Challenge:

Vmyths may disappear:

What the government thinks those threat levels mean:

** *** ***** ******* *********** *************

Comments from Readers

From: Rob Lemos
Subject: Cyberterrorism

Whenever I talk about cyberterrorism, I point out that the Queensland consultant, Vitek Boden, released 1 million liters of pollution into an estuary that was cleaned up in a week. A couple of months later, a bird landed on a transformer in the Ohio River valley, blew itself and the transformer up, and released about 2.5 million gallons (call it 10 million liters) of sewage into the river.

So it seems that we should be more worried about birds than hackers. Or, to be less cheeky, about physical attacks than Internet attacks.

From: "Allan Dyer"
Subject: Teaching Viruses

It is not the teaching of how exploits, viruses, and worms work that is the problem. It is the unnecessary creation of self-replicating code. We need more people who understand viruses and how to combat them, but it is not necessary to create a virus to understand them. Additionally, knowing how to create a virus is nowhere near the complete skill set needed to combat them. Combined with the inherent dangers of self-replicating code, this makes virus-writing practicals unnecessary and unethical.

The inherent dangers are a result of three properties of self-replicating code: generality, range of effect, and persistence. These change how we need to think about security. In particular, if the precautions taken to prevent escape of the code from the secure laboratory fail, then there is no pre-determined limit on how much damage it can cause, or how long it can survive. As we know there are no absolute guarantees in security, the course organiser should therefore minimise the potential for damage by supplying anti-virus developers with samples of all the viruses created.

One University class of new viruses each year (say, 50 viruses) is not going to make a big difference to the total number of new viruses -- there are currently at least 50,000 known types. However, if this is a good and useful course, then every University, world-wide, should have a similar course, and we could see 50,000 new viruses a year, just from those courses.

So, is it possible to study viruses and worms without creating them? The feature that differentiates a virus from other programs is modifying other programs to include a copy of itself, but, in terms of studying techniques and understanding, what is the difference between:

i) modify program A to include a copy of program B;
ii) modify program A to include a copy of yourself.

Would the student's understanding of the techniques involved be reduced if he wrote a program to do (i) instead of (ii)? How do they compare in terms of safety? The program from (i) could be used by a miscreant to modify programs, perhaps creating Trojans with bad effects wherever the miscreant introduced the Trojans. The program from (ii) is a virus, and, as noted above, capable of spreading indefinitely, modifying other programs with unknown results. So: (i) is a tool that, when used with intent to damage, can cause harm -- no worse than an axe; (ii) can spread like wildfire from a single accident or careless incident. A dropped cigarette butt and an axe can both destroy a forest, but one takes a lot more work and intent.
So, new infection methods can be examined by creating programs that create arbitrary programs -- making the code self-replicating is not necessary for understanding the technique.

Universities should be teaching students how to work and research safely and ethically. Undergraduate medical students don't cut up live people; they learn anatomy cutting up dead people. When I was learning microbiology and genetic engineering, we learnt about containment of our experiments, how to sterilise our equipment before and after, and safe disposal of the cultures. Computer science students should be learning how to research computer viruses without creating them. We do need to teach this stuff, but that does not require virus-writing practicals, just as police officer training does not require murder practicals.

Understanding self-replicating code is different from writing it. In fact, reverse engineering is a much more important skill for an anti-virus researcher -- when presented with an unknown program, how do you work out everything it does without inadvertently allowing it to cause damage or escape?

I hope that makes it clearer why it is not necessary for students to write viruses, and why it is not responsible to do so. Many anti-virus researchers have a similar opinion, as can be seen from this open letter:

The signatories are not just anti-virus vendor insiders; many are from major players in the IT industry, and from IT users, including commercial and academic organisations. The University of Calgary has its academic freedom, but it should consider the reasons why so many of its peers, and those in the field it claims it is serving, object before proceeding.

From: Paul Kocher
Subject: Attacking VMs Using Memory Errors

At the end of your comment on the above topic, you write: "Now that the attack is known, it can easily be prevented. Simple measures like parity checking or error-correcting codes can defeat this technique."

Glitching attacks have been known for a long time (this is a creative example of one), and have proven extremely difficult to prevent. Error correction helps, but often just forces the attacker to whack the target harder until an error slips through. Error detection can also be helpful, but creates a new problem: reduced reliability. These approaches are well suited to RAM, but are much more difficult to apply to processors and the other portions of the system that can be glitched.

Finally, the suggestion that the problem will be fixed because it is known is also optimistic. Some vendors will do a great job, but others will ignore it completely unless their customers actually start defecting because of the problem.

From: George Robert Blakley III
Subject: Coins at Football Matches

When I was growing up in Buffalo, I used to go watch the Sabres play hockey. They weren't very good then, but they sure had mean fans. When a particularly despised opponent (e.g., the Boston Bruins) would come to town, fans would take coins from their pockets, heat them up by holding them in their hands for a minute or two, and throw them into the rink. Since the players wore lots of pads, helmets, etc., it wasn't likely that a coin was going to injure a player by impact, but that wasn't the point. The point was much more subtle -- a warm coin will sink into the ice a bit, at which point it becomes a significant impediment to the progress of an ice skate. Sometimes it took 30 or 40 minutes to get the pennies out of the ice and Zamboni the surface.
From: "Owen Minns"
Subject: Self-destructing DVDs

You suggest that the technology "solved the problem of needing an infrastructure to process DVD returns." In the US, perhaps, but it does not globally absolve Disney of this responsibility. This system might work in the US, where Disney and other companies can still convince consumers to buy expensive packaging and products that become garbage after a few days, but in the EU, progress has dictated that producers assume greater responsibility for the full life-cycle of their products, including recycling/disposal. Presumably Disney will be responsible for the management and disposal of "former-DVDs" in that more rational jurisdiction. One would hope that a company with the resources of Disney could develop reliable security measures without generating even more waste!

From: Greg Jennings
Subject: Telephoning Account Data

Your link to the DirecTV story (hacking customer privacy in DirecTV) in the June 15, 2003 Crypto-Gram reminded me of how a store clerk and an accomplice can get credit card information. I once purchased an expensive item with my Visa card. The computer apparently instructed the clerk to call Visa and then hand me the phone. The Visa representative had me verify my home phone and mother's maiden name and then allowed the transaction to go through. However, and it did not occur to me at the time, I had no way of verifying that the person on the other end of the phone was from Visa! It could just as easily have been someone in the back room, or anywhere else for that matter.

[This is the strangest piece of mail I have ever received, by several orders of magnitude. I reprint it here solely for entertainment purposes.]

From: Somewhere
Subject: I haven't a clue, really

On January 15, 2003, I was banking on-line at Lee bank in Lee, Massachusetts. Zone Alarm informed me on the computer (mostly everything I have is documented) that a "would be hacker" was trying to penetrate my account. I wrote down the port numbers, called the bank, and was told by a very young secretary that I would have to come in and change my password. The Lee Bank of course later denied it, wanting to pretend that our systems are all secure. I thought "oh, they are just changing their systems -- I'll call back in 15 minutes. I was told to come in and change my password. The bank of course, later denied it. The portal numbers were the same as the one I would run into later. Fifteen minutes later I was back to my on-line computer and there was my ex-husband's (and now wife's) yellow e-mail staring me in the face. He was mailing things back to himself as he had done over the years. He had all sorts of "spy ware" installed on the first computer in our house. When we outgrew our, "Windows 95," I decided to get Jake a new computer. (I have 2 children, Jake and Hallie, and had remarried in 2000.) The new Compaq was bought in 1999. I don't know how long he had been e-mailing things back to himself. What came through when I pressed file, was our daughter's picture. Then, I pressed source & view and print. Pages started printing out -- So many that I ran out of paper. I showed these to a computer forensic person in Boston. He said that the program might show that they were laundering money, running pornography or Chuck could have been stealing money from George Gilder's bank account. George Gilder is the man responsible for predicting the stocks on the Gilder Technology report. Please forgive this very unprofessional letter. My house was broken into night after night.
My jewelry was all changed with copper wire and numbered. Everything I touched looked like a little disk to hold information on it and it was covered in microchips in silver and copper. No one believed me. I had recently started taking medications for ADD. That made my second husband furious. Little did I know that he may have been involved in what I believe to be cryptography? I found a bag that the FBI will test for substances. I woke up groggy. I was followed by the same car day in and day out. They wanted to know when they could use my house. A private investigator from New York is coming tonight. The FBI will come tomorrow. I had a bag from New Mexico that I looked up on the internet. I was not allowed to use the computer when I wouldn't do my ex-husband's program. My calls were intercepted. We thought we had Verizon DSL. My computer was controlled by my ex-husband Edward Charles Frank. I had read in his notes of his running the v2ks. When I would wake up in the morning, floppy disks would be at my bedside, I was to run them and I am not a computer forensic person but I knew they weren't bible verses. Now comes the hard part. My house was broken into at least a dozen times. Watches, purses, coats, and my own belief in myself disappeared and reappeared on a daily basis. The Lee Police never visited my house one time. They, in fact, called in mental health -- one of the most humiliating experiences I have ever endured. The social worker said that my problems seemed to be called externally, the state police threw me out and I know how to ask calm and mannerly, as I am an opera singer. I stopped singing. They had already (I assume) been told that I was crazy, or maybe they were paid off. I just couldn't believe the treatment I received. When I called to tell them my purse was stolen out of my house in the night, I heard "Oh, you'll have to wait to talk to officer Buffis, he's handling this." For weeks the same cars followed me like hornets. Something on me told them my location. They had keys to my house and my cars. I had my locks changed. That night, even my bedroom lock and chain were penetrated. I heard a tape of my present husband testing the mikes and I also found a tape of myself in every room of the house, speaking distinctly. There is much more to the story and much more to be solved. I believe I am entitled to some compensation for the mental abuse and suffering I went through. 3 computers are at Kroll. Will you work with me? I started taking down license plates (about 7 or 8). Just this afternoon, all of the cars appeared across the street and seemed very angry. I have a lot of evidence, even the bag they used to drug my Labrador. I noticed a HUGE Verizon truck across the street at the way-station. Funny right, now we have no service at all.

[This letter arrived in a box, approximately 10 inches on a side, filled with a pile of CD-ROMs, pens, costume jewelry, bits of metal, a fishing lure, and assorted other garbage all individually wrapped and secured with tape. Thankfully, the box was sent not to my home or business address, but to a mail drop I maintain. It might be a hoax, but the writing seems too authentic. It's hard to fake delusional paranoia that well.]

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available at . To subscribe, visit or send a blank message to crypto-gram-subscribe@chaparraltree.com.
To unsubscribe, visit . Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security, Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide.

Copyright (c) 2003 by Counterpane Internet Security, Inc.