Crypto-Gram

January 2004


by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.


In this issue:

  1. Color-Coded Terrorist Threat Levels

  2. Crypto-Gram Reprints

  3. Fingerprinting Foreigners

  4. News

  5. Terrorists and Almanacs

  6. Counterpane News

  7. More "Beyond Fear" Reviews

  8. Security Notes from All Over: President Musharraf and Signal Jammers

  9. WEIS

  10. New Credit Card Scam

  11. Diverting Aircraft and National Intelligence

  12. Comments from Readers


Color-Coded Terrorist Threat Levels

From 21 December 2003 to 9 January 2004, the national threat level -- as established by the U.S. Department of Homeland Security -- was Orange. Orange is one level above Yellow, which is as low as the threat level has gotten since the scale was established in the months following 9/11. There are two levels below Yellow. There's one level above Orange.

This is what I wrote in "Beyond Fear": "The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings. If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down."

Living under Orange reinforces this. It didn't mean anything. Tom Ridge's admonition that Americans "be alert, but go about their business" is a case in point; it's nonsensical advice. I saw little that could be considered a good security trade-off, and a lot of draconian security measures and security theater.

I think the threat levels are largely motivated by politics. There are two possible reasons for the alert.

Reason 1: CYA. Governments are naturally risk averse, and issuing vague threat warnings makes sense from that perspective. Imagine if a terrorist attack actually did occur. If they didn't raise the threat level, they would be criticized for not anticipating the attack. As long as they raised the threat level, they could always say "We told you it was Orange," even though the warning didn't come with any practical advice for people.

Reason 2: To gain Republican votes. The Republicans spent decades running on the "Democrats are soft on Communism" platform. They've just discovered the "Democrats are soft on terrorism" platform. Voters who are constantly reminded to be fearful are more likely to vote Republican, or so the theory goes, because the Republicans are viewed as the party that is more likely to protect us.

(These reasons may sound cynical, but I believe that the Administration has not been acting in good faith regarding the terrorist threat, and their pronouncements in the press have to be viewed in that light.)

I can't think of any real security reasons for alerting the entire nation, and any putative terrorist plotters, that the Administration believes there is a credible threat.


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found at [http://www.schneier.com/crypto-gram.html]. These are a selection of articles that appeared in this calendar month in other years.

Militaries and Cyber-War

A cyber Underwriters Laboratories?

Code signing

Block and stream ciphers


Fingerprinting Foreigners

Imagine that you're going on vacation to some exotic country. You get your visa, plan your trip, and take a long flight. How would you feel if, at the border, you were photographed and fingerprinted? How would you feel if your biometrics stayed in that country's computers for years? If your fingerprints could be sent back to your home country? Would you feel welcomed by that country, or would you feel like a criminal?

This week the U.S. government began doing just that to an expected 23 million visitors to the U.S. The US-VISIT program is designed to capture biometric information at our borders. Only citizens of 27 countries who don't need a visa to enter the U.S., mostly in Europe, are exempt. Currently all 115 international airports and 14 seaports are covered, and over the next three years this program will be expanded to cover at least 50 land crossings, and also to screen foreigners exiting the country.

None of this comes cheaply. The program cost $380 million in 2003 and will cost at least the same in 2004. But that's just the start; the Department of Homeland Security's total cost estimate nears $10 billion.

According to the Bush administration, the measures are designed to combat terrorism. As a security expert, I find it hard to see how. The 9/11 terrorists would not have been deterred by this system; many of them entered the country legally on valid passports and visas. We have a 5,500-mile-long border with Canada, and another 2,000-mile-long border with Mexico. Two to three hundred thousand people enter the country illegally each year from Mexico. Two to three million people enter the country legally each year and overstay their visas. Capturing the biometric information of everyone entering the country doesn't make us safer.

And even if we could completely seal our borders, fingerprinting everyone still wouldn't keep terrorists out. It's not like we can identify terrorists in advance. The border guards can't say "this fingerprint is safe; it's not in our database," because there is no comprehensive fingerprint database for suspected terrorists.

More dangerous is the precedent this program sets. Today the program only affects foreign visitors with visas. The next logical step is to fingerprint all visitors to the U.S., and then everybody, including U.S. citizens.

Following this train of thought quickly leads to sinister speculation. There's no reason why the program should be restricted to entering and exiting the country; why shouldn't every airline flight be "protected"? Perhaps the program can be extended to train rides, bus rides, and entering and exiting government buildings. Ultimately the government will have a biometric database of every U.S. citizen -- face and fingerprints -- and will be able to track their movements. Do we want to live in that kind of society?

Retaliation is another worry. Brazil is now fingerprinting Americans who visit that country, and other countries are expected to follow suit. All over the world, totalitarian governments will use our fingerprinting regime to justify fingerprinting Americans who enter their countries. This means that your prints are going to end up on file with every tin-pot dictator from Sierra Leone to Uzbekistan. And Tom Ridge has already pledged to share security information with other countries.

Security is a trade-off. When deciding whether to implement a security measure, we must balance the costs against the benefits. Large-scale fingerprinting doesn't add much to our security against terrorism, and it costs an enormous amount of money that could be better spent elsewhere. Spending the money on compiling, sharing, and enforcing the terrorist watch list would be a far better security investment. As a security consumer, I'm getting swindled.

America's security comes from our freedoms and our liberty. For over two centuries we have maintained a delicate balance between freedom and the opportunity for crime. We deliberately put laws in place that hamper police investigations, because we know we are more secure because of them. We know that laws regulating wiretapping, search and seizure, and interrogation make us all safer, even if they make it harder to convict criminals.

The U.S. system of government has a basic unwritten rule: the government should be granted only limited power, and for limited purposes, because of the certainty that government power will be abused. We've already seen the USA PATRIOT Act powers, which were granted to the government to combat terrorism, being directed against common crimes. Allowing the government to create the infrastructure to collect biometric information on everyone it can is not a power we should grant lightly. It's something we would have expected in the former East Germany, Iraq, or the Soviet Union. In all of these countries greater government control meant less security for citizens, and the results in the U.S. will be no different. It's bad civic hygiene to build an infrastructure that can be used to facilitate a police state.

A version of this essay originally appeared in Newsday.

Office of Homeland Security webpage for the program

News articles:
[washtimes.com]
[washtimes.com]
[nytimes.com]
[gcn.com]
[sunspot.net]
[cnn.com]
[nytimes.com]
[ilw.com]
[theage.com.au]
[thestar.co.za]
[ilw.com]

Opinions:
[mysanantonio.com]
[rockymountainnews.com]
[shusterman.com]
[washingtontechnology.com]

Brazil fingerprints U.S. citizens in retaliation:
[msnbc.com]


News

Yahoo is planning on combating spam by requiring e-mail to be authenticated. The problem, they claim, is that there's no way of knowing who the sender really is. It seems obvious to me that this won't stop spam at all. Spammers are already breaking into computers and hijacking legitimate users' e-mail systems. Spammers are already sending mail out of random countries and stolen accounts. How exactly will this make things better?
[newscientist.com]
[cnn.com]
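
To make the idea concrete: sender-authentication proposals of this kind generally boil down to the receiving mail server checking whether the machine that connected to it is one the sender's domain has publicly declared as an authorized mail source. The Python sketch below shows only that core check; the addresses and record format are illustrative, not Yahoo's actual scheme, and fetching the domain's published record is omitted.

    import ipaddress

    def is_authorized_sender(connecting_ip, authorized_networks):
        """Return True if the connecting mail server's address falls inside
        any network block the sender's domain has published as authorized.
        How that list is published and fetched is outside this sketch."""
        ip = ipaddress.ip_address(connecting_ip)
        return any(ip in ipaddress.ip_network(net) for net in authorized_networks)

    # Illustrative data: example.com publishes two authorized mail-server ranges.
    published = ["192.0.2.0/28", "198.51.100.25/32"]
    print(is_authorized_sender("192.0.2.7", published))     # True: listed server
    print(is_authorized_sender("203.0.113.99", published))  # False: unknown server

Note that this check only ties a message to a domain's infrastructure; a spammer sending through a hijacked legitimate machine, or through a throwaway domain he registered himself, passes it without trouble, which is the objection raised above.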

I've regularly written that secrecy is more often harmful to security than helpful. This article discusses just that: the Bush Administration is using terrorism as an excuse to keep many aspects of government secret, but the real threat is more often the government itself.
[usnews.com]

Here's some good Microsoft news. The new update to Windows XP will include the Internet Connection Firewall (ICF). It will be on by default and more rigorous in its protection. Seems like a security improvement to me.
[eweek.com]

OnStar, the communications and navigation system in GM cars, can be used to surreptitiously eavesdrop on passengers:
[newsmax.com]

More TSA absurdity:
[post-gazette.com]

And the British react to a decision to put sky marshals on selected flights into the U.S.:
[channelnewsasia.com]

Interesting article on a computer security researcher who is using biological metaphors in an effort to create next-generation computer-security tools. This is interesting work, but I am skeptical about a lot of it. The biological metaphor works only marginally well in the computer world. Certainly the monoculture argument makes sense in the computer world, but biological security is generally based on sacrificing individuals for the good of the species -- which doesn't really apply in the computer world.
[computerworld.com]

There are two interesting aspects to this case. First, the judge ruled that a player has a claim of ownership to virtual property in a computer game. And second, the software company was partially liable for damages because of bugs in its code. The case was in China, which isn't much of a precedent for the rest of the world, but it is still interesting news.
[news.yahoo.com]

An interesting blackmail story. "Cyber blackmail artists are shaking down office workers, threatening to delete computer files or install pornographic images on their work PCs unless they pay a ransom, police and security experts said."
[www.cnn.com]

An article on the future of computer security. The moral is identical to what I've been saying: things will get better eventually, but before that things will get worse:
[www.computerworld.com]

A story about security in the National Football League:
[www.csoonline.com]

How to hack password-protected MS-Word documents. Not only can you view protected documents, you can also make changes to them and reprotect them. This is a huge security vulnerability.
[www.securityfocus.com]

Last month Bush snuck into law one of the provisions of the failed PATRIOT Act 2. The FBI can now obtain records from financial institutions without requiring permission from a judge. The institution can't tell the target person that his records were taken by the FBI. And the term "financial institution" has been expanded to include insurance companies, travel agencies, real estate agents, stockbrokers, the U.S. Postal Service, jewelry stores, casinos, and car dealerships.
[www.wired.com/]

Adobe has special code in its products to prevent counterfeiting. I think this is a great security countermeasure. It's not designed to defend against the professional counterfeiters, with their counterfeit plates and special paper. It's designed to defend against the amateur counterfeiter, the hobbyist. Color copiers have had anti-counterfeiting defenses for years. Raising the bar is a great defense here.
[www.miami.com]


Terrorists and Almanacs

It's so bizarre it's almost self-parody. The FBI issued a warning to police across the nation to be on the watch for people carrying almanacs, because terrorists may use these reference books "to assist with target selection and pre-operational planning."

Gadzooks! People with information are not suspect. Almanacs, reference books, and informational websites are not dangerous tools that aid terrorism. They're useful tools for all of us, and people who use them are better people because of them. I worry about alerts like these, because they reinforce the myth that information is inherently dangerous.

The FBI's bulletin

News article

Clever commentary


Counterpane News

Counterpane continues to offer its Enterprise Protection Suite, which combines Managed Security Monitoring with Managed Vulnerability Scanning, fully outsourced Device Management, and Security Consulting services, at a 15% discount to Crypto-Gram readers (and, by extension, everyone).

EMEA press release

Schneier was chosen as Best Industry Spokesman by Info Security Magazine

Q&A with Schneier in Infoworld

Schneier essay on Blaster and the blackout (Salon.com)

Schneier op-ed essay on semantic attacks (San Jose Mercury News)

Schneier's op-ed essay on casual surveillance and the loss of personal privacy (Minneapolis Star-Tribune)


More "Beyond Fear" Reviews

"Beyond Fear" continues to sell well. The book is going into its second printing, so if it's not at your local bookstore, be patient for a couple of weeks. A new review: [http://www.vnunet.com/Analysis/1151575]

Two different reviews from Computing Reviews

Book's website


Security Notes from All Over:
President Musharraf and Signal Jammers

Attackers wired a bridge in Pakistan with explosives, intending to detonate it when President Musharraf's motorcade drove over it. But according to a Pakistani security official, "The presidential motorcade has special jamming equipment, which blocks all remote-controlled devices in a 200-metre radius."

Unfortunately, now that this information has been published in the paper, the jamming equipment is unlikely to save him next time.

It's rare that secrecy is good for security, but this is an example of it. Musharraf's jamming equipment was effective precisely because the attackers didn't expect it. Now that they know about it, they're going to use some other detonation mechanism: wires, cell phone communications, timers, etc.

But maybe none of this is true.

Think about it: if the jamming equipment worked, why would the Pakistani security tell the press? There are several reasons. One, the bad guys found out about it, either when their detonator didn't work or through some other mechanism, so they might as well tell everybody. Two, to make the bad guys wonder what other countermeasures the motorcade has. Or three, because the press story is so cool that it's worth rendering the countermeasure ineffective. None of these explanations seems very likely.

There's another possible explanation: there's no jamming equipment. The detonation failed for some other, unexplained, reason, and Pakistani security forces are pretending that they can block remote detonations.

Deception is another excellent security countermeasure, and one that--at least to me--is a more likely explanation of events.

[www.salon.com]


WEIS

The Third Workshop on Economics and Information Security will be held in Minneapolis in May. This is currently my favorite security conference. I think that economics has a lot to teach computer security, and it is very interesting to get economists, lawyers, and computer security experts in the same room talking about issues.

Conference website

Websites for the First and Second Workshops, including many of the papers presented


New Credit Card Scam

This one is clever.

You receive a telephone call from someone purporting to be from your credit card company. They claim to be from something like the security and fraud department, and question you about a fake purchase for some amount close to $500.

When you say that the purchase wasn't yours, they tell you that they're tracking the fraudsters and that you will receive a credit. They tell you that the fraudsters are making fake purchases on cards for amounts just under $500, and that they're on the case.

They know your account number. They know your name and address. They continue to spin the story, and eventually get you to reveal the three extra numbers on the back of your card.

That's all they need. They then start charging your card for amounts just under $500. When you get your bill, you're unlikely to call the credit card company because you already know that they're on the case and that you'll receive a credit.

It's a really clever social engineering attack. They have to hit a lot of cards fast and then disappear, because otherwise they can be tracked, but I bet they've made a lot of money so far.


Diverting Aircraft and National Intelligence

Security can fail in two different ways. It can fail to work in the presence of an attack: a burglar alarm that a burglar successfully defeats. But security can also fail to work correctly when there's no attack: a burglar alarm that goes off even if no one is there.

Citing "very credible" intelligence regarding terrorism threats, U.S. intelligence canceled 15 international flights in the last couple of weeks, diverted at least one more flight to Canada, and had F-16s shadow others as they approached their final destinations.

These seem to have been a bunch of false alarms. Sometimes it was a case of mistaken identity. For example, one of the "terrorists" on an Air France flight was a child whose name matched that of a terrorist leader; another was a Welsh insurance agent. Sometimes it was a case of assuming too much; British Airways Flight 223 was detained once and canceled twice, on three consecutive days, presumably because that flight number turned up on some communications intercept somewhere. In response to the public embarrassment from these false alarms, the government is slowly leaking information about a particular person who didn't show up for his flight, and two non-Arab-looking men who may or may not have had bombs. But these seem more like efforts to save face than the very credible evidence that the government promised.

Security involves a trade-off: a balance of the costs and benefits. It's clear that canceling all flights, now and forever, would eliminate the threat from air travel. But no one would ever suggest that, because the trade-off is just too onerous. Canceling a few flights here and there seems like a good trade-off because the results of missing a real threat are so severe. But repeatedly sounding false alarms entails security problems, too. False alarms are expensive -- in money, time, and the privacy of the passengers affected -- and they demonstrate that the "credible threats" aren't credible at all. Like the boy who cried wolf, everyone from airport security officials to foreign governments will stop taking these warnings seriously. We're relying on our allies to secure international flights; demonstrating that we can't tell terrorists from children isn't the way to inspire confidence.

Intelligence is a difficult problem. You start with a mass of raw data: people in flight schools, secret meetings in foreign countries, tips from foreign governments, immigration records, apartment rental agreements, phone logs and credit card statements. Understanding these data, drawing the right conclusions -- that's intelligence. It's easy in hindsight but very difficult before the fact, since most data is irrelevant and most leads are false. The crucial bits of data are just random clues among thousands of other random clues, almost all of which turn out to be false or misleading or irrelevant.

In the months and years after 9/11, the U.S. government has tried to address the problem by demanding (and largely receiving) more data. Over the New Year's weekend, for example, federal agents collected the names of 260,000 people staying in Las Vegas hotels. This broad vacuuming of data is expensive, and completely misses the point. The problem isn't obtaining data, it's deciding which data is worth analyzing and then interpreting it. So much data is collected that intelligence organizations can't possibly analyze it all. Deciding what to look at can be an impossible task, so substantial amounts of good intelligence go unread and unanalyzed. Data collection is easy; analysis is difficult.

Many think the analysis problem can be solved by throwing more computers at it, but that's not the case. Computers are dumb. They can find obvious patterns, but they won't be able to find the next terrorist attack. Al-Qaida is smart, and excels in doing the unexpected. Osama bin Laden and his troops are going to make mistakes, but to a computer, their "suspicious" behavior isn't going to be any different than the suspicious behavior of millions of honest people. Finding the real plot among all the false leads requires human intelligence.

More raw data can even be counterproductive. With more data, you have the same number of "needles" and a much larger "haystack" to find them in. In the 1980s and before, East German police collected an enormous amount of data on 4 million East Germans, roughly a quarter of their population. Yet even they did not foresee the peaceful overthrow of the Communist government; they invested too heavily in data collection while neglecting data interpretation.

In early December, the European Union agreed to turn over detailed passenger data to the U.S. In the few weeks that the U.S. has had this data, we've seen 15 flight cancellations. We've seen investigative resources chasing false alarms generated by computer, instead of looking for real connections that may uncover the next terrorist plot. We may have more data, but we arguably have a worse security system.

This isn't to say that intelligence is useless. It's probably the best weapon we have in our attempts to thwart global terrorism, but it's a weapon we need to learn to wield properly. The 9/11 terrorists left a huge trail of clues as they planned their attack, and so, presumably, are the terrorist plotters of today. Our failure to prevent 9/11 was a failure of analysis, a human failure. And if we fail to prevent the next terrorist attack, it will also be a human failure.

Relying on computers to sift through enormous amounts of data, and investigators to act on every alarm the computers sound, is a bad security trade-off. It's going to cause an endless stream of false alarms, cost millions of dollars, unduly scare people, trample on individual rights and inure people to the real threats. Good intelligence involves finding meaning among enormous reams of irrelevant data, then organizing all those disparate pieces of information into coherent predictions about what will happen next. It requires smart people who can see connections, and access to information from many different branches of government. It can't be seen by the various individual pieces of bureaucracy; the whole picture is larger than any of them.

These airline disruptions highlight a serious problem with U.S. intelligence. There's too much bureaucracy and not enough coordination. There's too much reliance on computers and automation. There's plenty of raw material, but not enough thoughtfulness. These problems are not new; they're historically what's been wrong with U.S. intelligence. These airline disruptions make us look like a bunch of incompetents who cry wolf at the slightest provocation.

This essay originally appeared in Salon.

News articles:
[www.usnews.com]
[www.napanews.com]
[www.startribune.com]
[www.contracostatimes.com]
[www.reuters.com]
[www.smh.com.au]
[www.usatoday.com]


Comments from Readers

From: russfink@SAFe-mail.net
Subject: Blaster and the August 14th Blackout

I just read your article, and I have an additional question that seems worth asking.

The article's hypothesis is that the massive blackout was indirectly aided by alarm systems that failed due to MS Blast, and these failed alarm systems allowed other equipment failures and adverse conditions to go undetected by the power operators. Because the technicians didn't know about the adverse conditions, their hands were tied and massive cascading failure resulted.

My question is, under normal circumstances, assuming the alarm systems are operational, how often do equipment failures or adverse conditions normally occur such that the alarm systems detect them in time, and humans can intervene and head off massive cascading failures?

I suspect that if the computers were working that day, the technicians would have learned about the alarm conditions, and they could have headed off the catastrophe. I just want to know how often these alarm conditions occur on a day-to-day basis.

In other words, how many problems occur that we, the general public, don't ever hear about?

If we knew this probability metric, we could assess the relative hazard of worms leading to widespread blackouts as a function of alarm condition probability and alarm system/Internet interconnectedness.

I don't expect anyone to come forward to corroborate your hypothesis, as that would be tantamount to an admission of failure by the responsible IS/security staff, and likely grounds for dismissal. Perhaps some lone whistle blower might come out much later.


From: Andrew Odlyzko [odlyzko@dtc.umn.edu]
Subject: Computerized and Electronic Voting

The voting booth does provide some security against bribery and coercion, but only as long as we can stop camera phones from being used in them!


From: Fred Heutte [phred@sunlightdata.com]
Subject: Computerized and Electronic Voting

Thanks for your cogent thoughts on ballot security. I almost completely agree and was one of the first signers of David Dill's petition. I am also involved professionally in voter data -- from the campaign side, with voter files, not directly with voting equipment -- but we're close enough to the vote counting process to see how it actually works.

I would only disagree slightly in one area. Absentee voting is quite secure when looking at the overall approach and assessing the risks in every part of the process. As long as reasonable precautions like signature checking are done, it would be difficult and expensive to change the results of mail voting significantly.

For example, in Oregon, ballots are returned in an inside security envelope which is sealed by the voter. The outside envelope has a signature area on the back side. This is compared to the voter's signature on file at the elections office. The larger counties actually do a digitized comparison, and back that up with a manual comparison with a stratified random sample (to validate machine results on an ongoing basis), as well as a final determination for any questionable matches.
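
As a rough illustration of what such a stratified manual audit looks like (the county names, field layout, and 2 percent rate below are made up for the example, not Oregon's actual procedure), the idea is simply to re-check by hand a fixed fraction of the envelopes the matching software accepted, drawn separately from every county so no county escapes scrutiny:

    import random

    def stratified_audit_sample(accepted_envelopes, rate=0.02, seed=None):
        """Draw a manual-audit sample from envelopes the signature-matching
        software already accepted, stratified by county so each county is
        re-checked at the same rate.  `accepted_envelopes` is a list of
        (county, envelope_id) pairs; the rate and field names are illustrative."""
        rng = random.Random(seed)
        by_county = {}
        for county, envelope_id in accepted_envelopes:
            by_county.setdefault(county, []).append(envelope_id)
        sample = []
        for county, ids in by_county.items():
            k = max(1, round(len(ids) * rate))   # at least one envelope per county
            sample.extend((county, e) for e in rng.sample(ids, k))
        return sample

    # A large county and a small one both get proportionate manual re-checks.
    envelopes = [("CountyA", "A%d" % i) for i in range(1000)] + \
                [("CountyB", "B%d" % i) for i in range(50)]
    print(len(stratified_audit_sample(envelopes, seed=1)))  # 21 envelopes to verify by hand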

Certainly it is possible to forge a signature. However, this authentication process would greatly raise the cost of forged mail ballots, absent consent of the voter. In turn, interference or coercion with absentee voting would require much higher travel costs (at least) than doing so at a polling place, for a given change in the outcome.

It is true that precincts have poll watchers, and absentee voters do not. But consider this. Ballot boxes, which are often delivered by temporary poll workers from the precinct to the elections office, are occasionally stolen, but mail ballots are handled within a vast stream of other mail by employees with paychecks and pensions at stake. The relatively low level of mail fraud inside the postal system is a testament to its relative security, and the points where ballots are aggregated for delivery to the elections office are usually on public property and can also be watched by outside observers if need be.

Oregon has had some elections with 100% "vote by mail" since 1996, and all elections since 1999. So far, no verifiable evidence of voter fraud has emerged, despite many checks and some predictions by those with a political axe to grind that we would be engulfed in a wave of election fixing.

The reality is that Oregon's system, which is based on some common-sense security principles, has proven to be robust. The one lingering problem has been the need of some counties to make their voters use punch cards at home because of their antiquated vote counting equipment. But while this is a vote integrity issue -- since state statistics show a much higher undervote and spoiled ballot total for punch cards as compared to mark-sense ballots -- it is not a security issue per se. And with Help America Vote Act (HAVA) funding to convert to more modern vote counting systems, the Oregon chad remains in only one county and will go extinct after 2004.

The mark-sense ("fill in the ovals") ballots we have work well, and have low rates of over-votes and under-votes, despite the lack of automated machine checking that is possible in well-designed precinct voting systems. This suggests that reasonable visual design and human-friendly paper and pencil/pen home voting is a very reliable and secure system. When aided by automated counting equipment, we even have the additional benefit of very fast initial counts.

The increase in voter participation in Oregon since the advent of vote-by-mail -- 10 to 30 percentage points above national averages, depending on the kind of election -- leads to the only other issue, which is slow machine counts on election night after the polls close due to the surge of late ballots received at drop-off locations around the state. Oregon in fact isn't really "vote by mail," it's vote-at-home, with a paper ballot that can be mailed or left at any official drop-off point in the state, including county election offices, many schools and libraries, malls, town squares, etc.

The great advantage of the Oregon system is that it relies on the principle that if you appeal to the best instincts of the citizen, the overwhelming majority will "do our part" to ensure the integrity of the democratic voting process, whether it is full consideration of the candidates and issues before voting, watching to make sure all ballots are securely transferred and counted, or favoring those laws and policies that insure that everyone eligible can vote, that their votes are counted, and that the candidates and measures with the most votes win.

The system is also cheaper than running traditional precinct elections. What's not to like?


From: Paul Rubin [phr-2003@nightsong.com]
Subject: gambling machines vs. voting machines

The document at [gaming.state.nv.us] shows what anyone designing a new gambling machine (e.g., a video poker machine) has to do to get it certified in Nevada. Note that, per page 4, all source code for the game-specific parts of the machine must be submitted to the gaming commission along with enough framework for the commission to test it, and I'm told they actually examine it line by line (approval takes about six months). There are also specifications for the physical security of the machines.

After deployment, the audit department apparently does random spot checks, going into casinos and pulling out machines, making sure that the EPROM images actually running in them are the same as the images that were approved. Four or five other states apparently do similar examinations to certify equipment. The rest of the states then go along with what the main five or six gambling states decide.
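
The heart of that spot check is verifying that the image pulled from a machine on the casino floor is bit-for-bit identical to the image the commission approved. A minimal sketch of that comparison, assuming the two images have been dumped to files (the file names are hypothetical, and a real 2004-era audit would likely compare raw bytes or an older checksum rather than SHA-256):

    import hashlib

    def image_digest(path):
        """SHA-256 of a firmware image file; any single changed byte
        produces a different digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def firmware_matches(approved_path, dumped_path):
        """True only if the EPROM image dumped on the floor is identical
        to the image the regulator approved."""
        return image_digest(approved_path) == image_digest(dumped_path)

    # Hypothetical spot check of one machine:
    if not firmware_matches("approved/game_v1.bin", "dumped/floor_unit_17.bin"):
        print("Image does not match the approved build -- pull the machine.")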

It's bizarre that voting machine vendors squawk so much about getting their code audited, since they face the same issues as gambling machine vendors do (the purpose of the code review must be partly to make sure the machine isn't sneakily grabbing a few extra points of revenue), and the gambling machine vendors seem to tolerate the requirement.

There are also some federal standards about code certification for firmware running inside medical implants or in avionics. I'm trying to find out more about that. Voting machine code seems to have no standards at all.

http://gaming.state.nv.us/forms/frm141.pdf

From: Arno Schäfer [arno_schaefer@gmx.de]
Subject: Modem hack

> This is an old scam.  A man uses a computer virus to change
> Internet dial-up numbers of victims' computers to special
> premium rates, in an attempt to make a pile of money.  How he
> thought that he wouldn't get caught is beyond me.

That remark is interesting. In Germany, these so-called "dialer" programs are an enormous problem, so much so that the German parliament recently passed a special law in order to contain the deluge of these scams. Today, running a "dialer protection" tool is as essential to German Internet users as having virus protection and a personal firewall in place. Apparently, the danger of getting caught and prosecuted is small compared to the financial incentive for these people. One of the reasons for this is that it is often virtually impossible to find out who is behind one of these "premium rate" dial-up numbers. There is a whole industry of sellers and resellers for German premium rate numbers, many of which are in other countries, far from German jurisdiction. The fees for these calls (the most blatant of which go up to $100 US per minute or up to $1000 US per single call!) were collected with the regular phone bill. When someone discovered they had accidentally "contracted" a dialer program, it often was impossible to track down the culprits, or it was already too late and they had disappeared. Moreover, you had to prove that you had not deliberately activated the dialer program, as these were usually declared as a "service" (e.g., in order to access adult content). So in fact having someone actually prosecuted for this kind of fraud was rather the exception than the rule. Luckily, the legal position for victims of these scams has markedly improved by now in Germany.


From: John Viega [viega@securesoftware.com]
Subject: Amit Yoran

I was surprised in reading this month's Crypto-Gram to see you place Amit Yoran in the doghouse for the following quote:

"For example, should we hold software vendors accountable for the security of their code or for flaws in their code? In concept, that may make sense. But in practice, do they have the capability, the tools to produce more secure code?"

The only problem I personally see with this quote is that it doesn't have enough context attached to it to make an absolute determination of the intent. I do see how you could interpret it to mean, "It's impossible to produce more secure code than we do today." However, just from reading the quote, it seems that he's more saying that forcing companies to accept liability isn't going to solve the problem, because even with incentive to have perfectly secure software, companies will be unable to deliver, due to the complexities of software development and the lack of good tools and methodologies.

If that is the intent of Mr. Yoran's statement (which I'm sure it is, as I'll get to in a moment), then he is dead-on. While there are clearly easy things people can do that will help with the problem (e.g., use any language other than C), the goal of building a system without risk is more or less unachievable in practice. And the security industry has done little to make it easy on developers, for whom security can not ever be more than a part-time concern.

Not only have we, as an industry, not provided adequate tools to support designing, building, and maintaining more secure systems, but the "out of the box" security solutions we provide lend themselves to misuse as well. For example, while Java is sometimes billed as a "secure" language, I can tell you that we still tend to find one significant security risk for every thousand lines of code (or so) in Internet-enabled Java programs. Perhaps a better example is SSL/TLS, where the libraries we provide developers encourage misuse. The mental model developers need to use these libraries in a way that protects against simple man-in-the-middle attacks is far more complex than the model they tend to have (i.e., that SSL/TLS is a drop-in solution for securing network traffic). As a result, the vast majority of applications that deploy SSL/TLS get it wrong the first time, in a major way.
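
For concreteness, here is what getting it right looks like at the socket level in present-day Python (written long after this issue, and only a sketch): the client must both verify the certificate chain and check that the certificate's name matches the host it meant to reach. Skipping either check, which is the common mistake the letter describes, leaves the connection encrypted but wide open to a man-in-the-middle.

    import socket
    import ssl

    def open_verified_tls(host, port=443):
        """Open a TLS connection that resists a simple man-in-the-middle:
        the server certificate must chain to a trusted CA *and* its name
        must match the host we intended to reach."""
        context = ssl.create_default_context()   # CA-chain verification on by default
        context.check_hostname = True            # also require a matching hostname
        sock = socket.create_connection((host, port))
        return context.wrap_socket(sock, server_hostname=host)

    # Usage: the handshake fails loudly on a forged or mismatched certificate
    # instead of silently falling back to an unauthenticated connection.
    with open_verified_tls("www.example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])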

Yes, there are software security mistakes that nobody should ever be making, particularly the buffer overflow and its ilk. But I'm sure that you, of all people, know how many things can go wrong in networked applications (particularly when there are complex protocols involved) and how obscure some of the faults in software systems can be. For example, there was a recent problem in your own Password Safe that showed up despite a defensive design.

Moreover, I'm sure you're aware that design and analysis techniques for software security are still in their infancy. I'm currently working on putting together a consortium to develop design methodologies that integrate better with existing software engineering practices, because there is nothing effective out there yet. And, while static analysis technologies such as model checking are decades old, we've only been applying them to security problems for a few years now. And such technologies are still fairly far from the point where they'll be reasonably complete and integrate adequately with the workflow of finicky developers.

Even in a world with great design and analysis technologies, we're going to have a hard time educating developers on the world of risk around them to the point that social engineering attacks become totally impractical. It's not unreasonable to say that we're far away from the point where it would make financial sense to make software vendors liable for security mistakes.

I do know Amit Yoran personally, and I know him fairly well. He is extremely intelligent and understands the software security problem and the limits of current technology. He understands this problem so well that, before he accepted the job of Cybersecurity Tzar, he took a very active interest in the affairs of our startup and our analysis technology. As a result, I can say quite definitively that Amit not only understands the software security problem far better than most people do, he believes that it is important for the security industry to pioneer a trail toward liability by providing better technologies and methodologies.

I do see how you could have misinterpreted Amit's stance from the ambiguity in that one quote. I am surprised, though, that you would come to such a snap judgment based on it alone. Beyond the fact that you've undoubtedly been misquoted to your detriment on at least one occasion, a bit of diligence on Amit certainly would have turned up the fact that he's actually quite clued in on this subject, and is not in the same class as the typical snake-oil you expose on a monthly basis.


From: Mary Ann Davidson [mary.ann.davidson@oracle.com]
Subject: Amit Yoran

I am responding to a comment you made in the latest Cryptogram about a quote Amit Yoran made (I have not read the original interview, so please bear with me):

"'For example, should we hold software vendors accountable for the security of their code or for flaws in their code?' Mr. Yoran asked in an interview. 'In concept, that may make sense. But in practice, do they have the capability, the tools to produce more secure code?'"

"The sheer idiocy of this quote amazes me. Does he really think that writing more secure code is too hard for companies to manage? Does he really think that companies are doing absolutely the best they possibly can?"

I don't necessarily read that as indicating that it is too hard for companies (with some caveats I will explain below) to write better code. Referring to his comment about "the tools to produce more secure code," I think it is true that the tools are lacking to help make it easier to find nasty bugs in development.

This does not -- she said repeatedly -- excuse the overabundance of avoidable, preventable security faults, but the lack of good code scanning and QA tools does make it harder to do better, even if you want to do better. I have seen development groups that "get" security, have internalized it, who are all proud of themselves for checking input conditions to prevent buffer overflows, but they only checked 20 out of 21 input conditions. One mistake still leads to a buffer overflow, and is still really embarrassing and expensive to fix. If you can automate more of these checks, it obviously will lead to better code.
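
As a toy example of the kind of check she is saying should be automated (not any particular vendor's tool), even a few lines of Python can flag the classic unbounded C string calls in a nightly pass over a source tree; catching the 21st missed input check, by contrast, takes real parsing and data-flow analysis, which is exactly the tooling gap described here.

    import re
    import sys

    # Flag C library calls that copy into a buffer with no length bound.
    # A regex pass is only a toy -- real tools parse the code and track
    # data flow -- and this list of risky functions is far from complete.
    UNBOUNDED_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

    def scan(path):
        findings = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                match = UNBOUNDED_CALLS.search(line)
                if match:
                    findings.append((lineno, match.group(1)))
        return findings

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            for lineno, call in scan(path):
                print("%s:%d: unbounded %s() -- check the destination size" % (path, lineno, call))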

Most of the code scanning/vulnerability assessment tools I see are designed by consulting firms, which means that they are generally not designed to run against a huge code base every night, they don't work on source code, they are not designed to be extensible, they have too many false positives, and so on. Venture capitalists as a group are often more interested in funding "Band-Aid" companies with outrageous claims ("Our security product slices, dices, protects against all known julienne fry attacks, and makes your teeth whiter, too!") than vaccine companies ("scan code, find bugs, fix bugs before product ships so you don't need Band-Aids, or need fewer of them"). You can make more money on Band-Aids than on vaccines, which is probably one reason there are so many snake-oil security products out there instead of a few really good code scanning tools. Defense in depth is necessary, but we would not need so much of it if we all made better products.

Clearly, corporate will to do a better job is a prerequisite, or nobody would buy code-scanning tools, much less take the time to use them in development. Most of the security issues in industry come down to "crummy code," and writing less crummy code is a matter of culture and tools to do the job. What amazes me is that almost every discussion about this issue is prefaced with "...but we all know we can't build perfect code." That does not mean we should stop trying, or that the status quo is acceptable.

To answer your question (Does he really think that companies are doing absolutely the best they possibly can?), I've met Amit and talked to him a couple of times. No, he is not letting industry off the hook and no, I don't believe he thinks industry is doing everything they can. I've never read his comments that way, at any rate.

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on [www.schneier.com].

To subscribe, visit [Crypto-Gram] or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit [crypto-gram-faq].

Comments on CRYPTO-GRAM should be sent to schneier@counterpane.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See [www.schneier.com].

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide. See [www.counterpane.com].

Copyright © 2004 by Bruce Schneier.