Safely secured - when can a security program be trusted

Someone who wants to secure his computer system has plenty of security products to choose from. The accompanying brochures typically list the most exotic features and promise optimal security against all kinds of malicious attackers. But how can the average user know whether a security program can truly be trusted? And what is in fact important when choosing an encryption system?

Visible trust

When buying a word processor it is easy to establish whether the program works as advertised or not. If the button "Italics" in fact turns the text into a bold font, or saved files cannot be read back, it is clear that the program should be trashed. More subtle errors, such as a spreadsheet program that rounds the wrong way, usually only surface after a substantial amount of searching and checking. However, almost all of these errors can be and often are detected by users under normal circumstances. For security programs this is quite different.

A program that promises to encrypt files such that they cannot be cracked by anyone will usually, after prompting for a password, produce a file that appears to the user to be totally random. The user can look at this file and feed it to the decryption program, where he discovers that entering the same password gives back the original file. It would seem that the program operates in accordance with its manual. However, this raises the question: how can the user determine whether the method used to encrypt the file is truly "uncrackable"? There are many factors that can influence the trustworthiness of a security program.
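To see why a convincing round trip proves nothing, consider this toy Python sketch. It uses a deliberately weak repeating-key XOR "cipher" (not any real product's algorithm): the output looks random and decrypts perfectly with the same password, yet it leaks information about the plaintext without the attacker ever touching the key.

```python
def xor_encrypt(data: bytes, password: bytes) -> bytes:
    # Repeat the password over the data and XOR byte by byte.
    # The output looks random to the eye, and the same call decrypts.
    return bytes(b ^ password[i % len(password)] for i, b in enumerate(data))

plaintext = b"Attack at dawn. Attack at dawn. Attack at dawn."
ciphertext = xor_encrypt(plaintext, b"hunter2")

# The round trip works, just as the manual promises...
assert xor_encrypt(ciphertext, b"hunter2") == plaintext

# ...but repeating-key XOR is trivially broken: XORing the ciphertext
# with itself shifted by the key length cancels the key entirely.
shift = 7  # the key length, recoverable by frequency analysis in practice
leak = bytes(a ^ b for a, b in zip(ciphertext, ciphertext[shift:]))
# 'leak' depends only on the plaintext, not on the password at all.
```

A user who only checks that encryption and decryption round-trip would never notice any of this; only analysis of the algorithm itself reveals it.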

Home crafts

A common occurrence with security software is the use of encryption algorithms developed in-house. There are many well-known encryption algorithms, such as DES, IDEA, Blowfish or RSA, whose details are publicly available and which (possibly subject to patent licenses) may be used by anyone in their security products. These algorithms have been tested by mathematicians and cryptographic experts for many years. Sometimes mistakes were found in an algorithm. These would then be published, so that the developers could use the feedback to fix them and improve the algorithm.

Home-grown algorithms, which often are classified as company secrets, are almost never tested in such an extensive fashion. Usually someone who wants to verify the algorithm can only look at the input and the resulting encrypted file. He cannot study the algorithm itself to look for fundamental weaknesses. This makes evaluating such a program a very difficult matter. Still, many such secret encryption algorithms used in popular programs have been cracked in this fashion, which should tell you something about their (in)security.

So then, why would any security program be based on home-grown algorithms rather than publicly available and publicly evaluated ones? An important reason is development speed. It is not very easy to integrate a standard library into a particular program, as the respective interfaces usually differ slightly. Programmers then often choose to implement their own routines, which work exactly the way they want them to. Sometimes using someone else's algorithm costs money, as copyright or patent licenses have to be paid. Writing your own algorithm is then a convenient way to save some money.

Writing cryptographic software is not a trivial matter. For example, version 2.0 of the Netscape Web browser contained an error in the routines that generated session keys for use in secure transmissions between browsers and web servers. These keys should have been chosen randomly, but due to a very subtle bug the program would only use a very small number of different keys. Once this is known, it is easy to crack the encrypted transmissions by simply trying all the possible keys. Considering that these encrypted transmissions usually contain credit card numbers and other personal information, this error could have had serious consequences. Fortunately it was discovered and fixed in time.
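The class of flaw can be illustrated with a made-up generator in Python (this is not Netscape's actual code): if the only entropy behind a "random" session key is a coarse timestamp, an attacker simply replays every possible seed.

```python
import random

def weak_session_key(seed: int) -> bytes:
    # Hypothetical flawed generator: the only entropy is a timestamp
    # with one-second resolution, so at most 86,400 keys per day.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

# The victim generates a key at some second of the day unknown to us.
secret_second = 41_503
victim_key = weak_session_key(secret_second)

# The attacker tries every possible seed until the key matches.
recovered = next(
    s for s in range(86_400) if weak_session_key(s) == victim_key
)
assert recovered == secret_second
```

A 128-bit key generated this way offers not 2^128 possibilities but fewer than 2^17, which a single desktop machine exhausts in seconds.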

Using standard algorithms therefore serves as a good indicator of the trustworthiness of a program. While there is still a chance of errors in the implementation, the risks associated with home-grown libraries are much bigger.

Hack us!

An interesting marketing trick is to put up a server which people are invited to hack, or to offer an encrypted message that needs to be cracked. Usually the offer is accompanied by a financial reward. When after a number of months nobody has been able to hack into the server, or decrypt the message, the company can advertise this fact and make it look as if their product is safe.

Such a test proves nothing. Usually the reward is so low that people are not inclined to try seriously, except perhaps once for fun. Bruce Schneier, a well-known cryptographer and author of various encryption algorithms, once remarked that most cryptographers earn more for a day's work than the average reward, so they have no reason to make a serious attempt to crack the system. Additionally, it may very well be possible to crack the system in a slightly different situation. For example, it might be possible to recover the encryption key when a hundred different encrypted messages are compared, or when a small portion of the original message is known. Such a hacking test then only gives a false sense of security.

The company RSADSI, which holds the patent on RSA, has held a competition for years in which participants are invited to decrypt messages encrypted with RSA. This competition is not intended to show the world that RSA is secure, but rather to measure the state of the art in cryptography. In February 1999 a 465-bit RSA key was cracked in six weeks. In 1978, when RSA had just been invented, it was expected that a 256-bit key would be completely uncrackable. Today, 1024 bits is considered a minimum.
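Why small RSA keys fall can be sketched with "textbook" RSA on an absurdly small modulus. This is a toy only: real RSA uses padding schemes and primes hundreds of digits long, but the principle is the same, since whoever factors the modulus recovers the private key.

```python
# Toy RSA with a tiny (~40-bit) modulus; real keys are 1024+ bits.
p, q = 1_000_003, 1_000_033        # secret primes
n, e = p * q, 65_537               # public key
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent (Python 3.8+)

m = 42                             # a message
c = pow(m, e, n)                   # encrypt with the public key
assert pow(c, d, n) == m           # the key holder decrypts

# An attacker factors n by trial division. On this modulus that takes
# about a million divisions; on a 1024-bit modulus it is infeasible.
f = next(i for i in range(2, int(n ** 0.5) + 1) if n % i == 0)
phi_cracked = (f - 1) * (n // f - 1)
d_cracked = pow(e, -1, phi_cracked)
assert pow(c, d_cracked, n) == m   # private key fully recovered
```

Advances in factoring algorithms and in hardware are exactly what the RSADSI competition measures: the gap between "toy" and "infeasible" keeps moving.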

Distributed Computing Technologies, also known as distributed.net, organizes hacking competitions on the Internet in which anyone can participate. Every participant installs a program on his computer that runs in the background and tries to decrypt a message by simply trying out a large number of keys. A central server coordinates the efforts and ensures that everyone tries different keys. If enough people participate, then at some point the right key will be found. In January 1999 a message encrypted with DES was cracked in less than one day.
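The coordination scheme can be sketched in a few lines of Python, with a toy 20-bit "keyspace" and a stand-in for trial decryption (real clients test billions of actual DES keys per slice):

```python
KEY_BITS = 20                       # toy stand-in for DES's 56 bits
secret = 0xBCDE1                    # the key everyone is searching for

def check(key):
    # Stand-in for a trial decryption: does this key yield plausible
    # plaintext? Here we simply compare against the secret directly.
    return key == secret

def crack_slice(start, end):
    # One participant exhaustively searches the slice it was assigned.
    for key in range(start, end):
        if check(key):
            return key
    return None

# The server hands out disjoint slices that together cover the whole
# keyspace, so no two participants duplicate each other's work.
keyspace = 1 << KEY_BITS
slice_size = keyspace // 8          # e.g. eight participants
found = None
for start in range(0, keyspace, slice_size):
    found = crack_slice(start, start + slice_size)
    if found is not None:
        break
assert found == secret
```

In the real competition the slices run in parallel on thousands of machines, which is why the total search time drops to days or hours.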

The purpose of this competition is to demonstrate that small keys (such as the 56-bit key used in DES) are insecure. If a group of Internet users can crack a message in their spare time, then any dedicated organization with large supercomputers can do the same much faster. To prove this, the Electronic Frontier Foundation constructed a special computer in July 1998 that was capable of decrypting a DES-encrypted message in only 56 hours. In January 1999 this machine participated in the distributed.net challenge in which another DES-encrypted message was cracked in less than a day.

The key length is important when determining the trustworthiness of a program, although it is not as important as the algorithm used. A lot of software developed in the United States uses relatively short keys, due to export restrictions on software using longer keys. The maximum permitted key length (56 bits at the moment) is so short that any software using it should be regarded as inherently insecure, regardless of the algorithm used.
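The difference a few dozen bits make is plain arithmetic. Assuming hypothetical hardware that tests one billion keys per second (a figure chosen purely for illustration; dedicated machines can be much faster), the average search time grows astronomically with key length:

```python
RATE = 10 ** 9                      # keys per second (an assumption)
SECONDS_PER_YEAR = 31_557_600

for bits in (40, 56, 64, 128):
    keys = 2 ** bits
    # On average, half the keyspace must be searched before the key is found.
    avg_seconds = keys / 2 / RATE
    print(f"{bits:3d} bits: {avg_seconds / SECONDS_PER_YEAR:.2e} years on average")
```

Every extra bit doubles the work: a 56-bit search is on the order of a year for this hypothetical machine, while a 128-bit search runs to more than 10^21 years, far beyond the age of the universe.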

Open Source

A well-known phenomenon associated with Linux, but not unheard of under Windows either, is the notion of Open Source. This basically means that the source code of a computer program is freely available to all. Users may adapt the software to improve it or to expand its functionality. Examples are Netscape and of course the Linux operating system itself. Open Source is also important for cryptographic software, but for a different reason.

Using only an encrypted message it is usually very hard to crack an encryption system. Having knowledge of the algorithm and the implementation makes it a lot easier to crack the system. Many companies use this as an argument to keep the algorithm and source code a secret. However, hackers are often able to recover the algorithm and implementation anyway, and can then use this information to crack the system and steal data. At the same time it is impossible for ordinary users to verify whether the system is secure.

When a program is open source, its source code is available to anyone. While hackers are still able to crack the system, it is now also possible for a benevolent programmer to find mistakes and fix them. He can then spread information about the fix, so that the system becomes more secure for everyone. The encryption program PGP was one of the first security programs to use this approach. Thanks to dozens of programmers who corrected errors and implemented additional functionality, PGP is now one of the most secure programs in the world.

The fact that a program is open source does not automatically imply that it is secure, but the chances that it contains an error (intentional or not) are much smaller.

Digital signatures

Using digital signatures, a unique code can be added to any electronic file. This code is unique to the person who generated the digital signature and to the file to which it belongs. The recipient of the file can then use a certificate to verify whether the file has been modified and who placed the signature. This technique is thus well suited to protecting software. A manufacturer can accompany an update with a digital signature, so that users know that they are installing a real update and not a forgery or a Back Orifice server in disguise.
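The mechanics can be sketched with "textbook" RSA signatures in Python. This is a toy (the modulus is tiny and there is no padding, unlike real code-signing systems), but the shape is the same: the file is hashed, the hash is signed with the private exponent, and anyone can verify with the public one.

```python
import hashlib

# Toy RSA key pair; real signing keys are 1024 bits or more.
p, q = 1_000_003, 1_000_033
n, e = p * q, 65_537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(data: bytes) -> int:
    # Hash the file, then raise the digest to the private exponent.
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)        # only the key holder can do this

def verify(data: bytes, signature: int) -> bool:
    # Anyone with the public key (e, n) can check the signature.
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == digest

update = b"setup-2.0.exe contents"  # hypothetical update file
sig = sign(update)
assert verify(update, sig)                  # genuine update verifies
assert not verify(update + b"trojan", sig)  # a tampered file fails
```

Changing even one byte of the file changes the hash, so the signature no longer verifies; this is what lets a user distinguish a real update from a doctored one.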

Microsoft, for example, uses digital signatures in its Authenticode system, with which ActiveX controls are secured. An ActiveX control can do anything an ordinary Windows application can (ranging from opening files to rebooting the system or formatting the hard disk), but because of the digital signature the user of the software can identify the owner of the control. He then knows who to talk to if the control does not work as promised. A software developer only receives a certificate if he promises not to include malicious code in the ActiveX control.

In practice the system does not appear to work as intended. Fred McClain wrote an ActiveX control that could automatically reboot a user's computer, and was able to obtain the necessary certificate without any problems. Even worse, many certificates are not correct, because they have been issued by the wrong entity or because they are not associated with all versions of the control in question. Many installation instructions therefore recommend that the user ignore any messages about invalid signatures! Clearly this reduces the security offered by digital signatures to about zero.

Too much trust

There is, however, another important factor that determines the security of a system: the user himself. It is necessary that the user realizes what a security program protects him against and how strong that protection is. A good example of this problem is the privacy hole discovered by Richard Smith in April 1999 in the Anonymizer. This is a service that acts as a proxy server and allows people to surf the Web anonymously. The Web server only sees the IP address of the Anonymizer itself. The Anonymizer forwards the pages to the real user. The hole discovered by Smith was that a web page could use JavaScript to find out the real IP address of the user.

Strictly speaking this is not a bug in the Anonymizer. It is impossible for a service like this to check every web page for malicious scripts. The only solution would be to automatically remove all scripts from all pages, and this would not be acceptable to many people, because it would also remove useful scripts. Still, many people assumed that they were completely anonymous when they used the Anonymizer service. They did not take any additional security measures, like disabling JavaScript or preventing their browsers from automatically sending e-mail without confirmation.
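Why script checking is hopeless for a rewriting proxy can be sketched in Python (all names here are hypothetical, and this is far cruder than the real Anonymizer): the rewriter can only redirect URLs that appear literally in the page, while a script can assemble one at run time.

```python
import re

PROXY = "https://proxy.example/fetch?url="   # hypothetical service

def rewrite(html: str) -> str:
    # Prefix every literal href/src URL so the request goes through
    # the anonymizing proxy instead of straight to the target site.
    return re.sub(r'(href|src)="(http[^"]*)"',
                  lambda m: f'{m.group(1)}="{PROXY}{m.group(2)}"', html)

page = '''<a href="http://site.example/page">link</a>
<script>document.location = "http://" + "tracker.example/leak";</script>'''

rewritten = rewrite(page)
# The literal link is safely redirected through the proxy...
assert PROXY + "http://site.example/page" in rewritten
# ...but the URL the script assembles at run time never appears as a
# literal attribute, so the rewriter cannot touch it. The browser
# would fetch it directly, revealing the user's real IP address.
assert PROXY + "http://tracker.example/leak" not in rewritten
```

Catching every such trick would require the proxy to predict what arbitrary scripts do, which is why removing scripts entirely was the only watertight (but unpopular) option.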

Digital signatures can also be trusted too much. That a program is digitally signed does not mean that it is trustworthy or that one can really take legal action against its author. A 'secure' connection (like SSL) also does not guarantee that the system is secure; the user still has to trust the company on the other end of the connection.

Conclusion

While many different techniques, programs and protocols are available to secure Web surfing, e-mailing and other Internet-related activities, they are no more than aids with which a user can increase his trust in his system. Offering trust is the most important task of any security program.

When trying out such a program, care should be taken to evaluate a number of factors that can influence its security. The above-mentioned use of known algorithms, large key lengths and freely available source code are such factors. A digital signature can also offer additional security, although at this time digital signatures have only limited application.

The most important factor is still knowledge about the program itself. Knowing exactly what a program does, what it protects against and how strong this protection is, is of vital importance. The user should -- especially with open source programs -- check for himself whether any errors have been found in a program and make sure that he upgrades to the latest version. In addition he should determine for himself which threats are relevant to him, and on that basis select the program(s) that can defend him against them.