
Insecure Storage


Presentation Transcript


  1. Insecure Storage

  2. What is Insecure Storage? Insecure storage is the end result of improperly protecting the integrity of the data accessed by a web application. The following factors contribute to insecure storage: • Failure to encrypt critical data • Insecure handling of keys, certificates, and passwords • Improper storage of secrets in memory • Poor sources of randomness • Poor choice of algorithm • Attempting to invent a new encryption algorithm

  3. Encryption and Decryption • Encryption prevents access to the data unless the correct key and parameters, as governed by the encryption process, are supplied to decrypt it. • Encryption locks particular data, whereas decryption is the process that unlocks it. • Two approaches: • Secret, or symmetric, key encryption • Public, or asymmetric, key encryption

  4. Keys • Although keys are not physical like the keys for your car or home, they are nonetheless used for specific roles and applications. • Keys should be labeled to indicate when they were created, their purpose, their target application, their owner, and their access restrictions and privileges. • Keying material includes the encryption key and the initialization vector used to support the encryption. • Keying material should be generated using a true random number source or, at a minimum, a cryptographic random number generator (see the sketch below); both are explained in the random numbers section of this presentation. • The storage and distribution of keying material must be secured against tampering until the material is destroyed. Distribution in electronic form must use key wrapping, that is, encrypting the keying material with a master key or a public key encryption scheme. • Once the keying material is no longer needed it must be destroyed.
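
A minimal sketch of generating keying material in .NET, assuming the RijndaelManaged class discussed later in this deck; the variable names are illustrative, and key wrapping and distribution are only noted in comments rather than implemented:

    using System;
    using System.Security.Cryptography;

    class KeyMaterialSketch
    {
        static void Main()
        {
            // RijndaelManaged derives from SymmetricAlgorithm; GenerateKey and
            // GenerateIV draw from a cryptographic random number generator.
            using (RijndaelManaged alg = new RijndaelManaged())
            {
                alg.KeySize = 256;   // largest key size the algorithm supports
                alg.GenerateKey();   // keying material: the encryption key...
                alg.GenerateIV();    // ...and the initialization vector

                Console.WriteLine("Key: {0} bytes, IV: {1} bytes", alg.Key.Length, alg.IV.Length);

                // Before electronic distribution the key would be wrapped, i.e.
                // encrypted under a master key or a recipient's public key.
            }   // disposing the algorithm clears its key material
        }
    }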

  5. Secret, or Symmetric Key Encryption In secret key encryption, a single key is used for both the encryption and decryption of the data; this key is called the secret key. In this approach, the business partners who are involved in the transactions need to maintain the key and keep the key secret. Use the largest key size possible for the algorithm you are using. Larger key sizes make attacks against the key much more difficult, but can degrade performance. Use Rijndael (AES) or Triple Data Encryption Standard (3DES) when encrypted data needs to be persisted for long periods of time. Use the weaker (but quicker) RC2 and DES algorithms only to encrypt data that has a short lifespan, such as session data.
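
As a rough illustration of the symmetric approach described above, the sketch below round-trips a short string through Rijndael (AES) with the .NET CryptoStream class; the key and IV are generated on the spot purely for the demo, whereas a real application would load the key from protected key storage:

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    class SymmetricSketch
    {
        static void Main()
        {
            byte[] plaintext = Encoding.UTF8.GetBytes("data that must persist for a long time");

            using (RijndaelManaged aes = new RijndaelManaged())
            {
                aes.KeySize = 256;   // use the largest key size the algorithm supports

                byte[] ciphertext = Transform(plaintext, aes.CreateEncryptor());
                byte[] recovered  = Transform(ciphertext, aes.CreateDecryptor());

                Console.WriteLine(Encoding.UTF8.GetString(recovered));
            }
        }

        // Pushes the input through an encryptor or decryptor and returns the result.
        static byte[] Transform(byte[] input, ICryptoTransform transform)
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                using (CryptoStream cs = new CryptoStream(buffer, transform, CryptoStreamMode.Write))
                {
                    cs.Write(input, 0, input.Length);
                }
                return buffer.ToArray();
            }
        }
    }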

  6. Secret Key Limitations • The organization has to maintain a separate key for each customer. • The number of keys it must maintain grows as the volume of business transactions increases. • The exchange of the secret key must be kept very confidential.

  7. Public, or Asymmetric Key Encryption Public key encryption uses two keys: 1. Public key 2. Private (secret) key. Together they perform the encryption and decryption of the data. In simple terms, the public key is freely available and is used to encrypt the data, whereas the private key is the master key used to decrypt the encrypted data. The private key is not exposed to the outside world and is kept secret. Key-generation tools are used to generate this pair of keys.
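
A rough sketch of the asymmetric approach using the .NET RSACryptoServiceProvider; the 2048-bit key size and the use of OAEP padding are choices made for this example rather than recommendations from the deck:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class AsymmetricSketch
    {
        static void Main()
        {
            byte[] secret = Encoding.UTF8.GetBytes("a short secret, such as a session key");

            // Constructing the provider generates a new public/private key pair.
            using (RSACryptoServiceProvider owner = new RSACryptoServiceProvider(2048))
            {
                string publicOnly = owner.ToXmlString(false);   // public key, freely distributable

                // Anyone holding the public key can encrypt...
                using (RSACryptoServiceProvider sender = new RSACryptoServiceProvider())
                {
                    sender.FromXmlString(publicOnly);
                    byte[] ciphertext = sender.Encrypt(secret, true);   // true = OAEP padding

                    // ...but only the holder of the private key can decrypt.
                    byte[] recovered = owner.Decrypt(ciphertext, true);
                    Console.WriteLine(Encoding.UTF8.GetString(recovered));
                }
            }
        }
    }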

  8. Public Key Limitations 1. Keys can be lost 2. Keys can expire 3. Keys can be stolen 4. Not suitable for encrypting large volumes of data

  9. Secret and Public Key Algorithms Public Key Algorithms: RSA – 256 to 3072 bits; Diffie-Hellman – exponential; Elliptic Curve – 224 bits. Secret Key Algorithms: AES (Rijndael) – 128, 192, 256 bits; DES – 64 bits; 3DES – 128, 192 bits; Blowfish – 32 to 448 bits.

  10. .NET Symmetric Algorithms: .NET Symmetric Encryption Algorithm Properties

  Property             RC2                        DES                        3DES                             Rijndael
  Default mode         CBC                        CBC                        CBC                              CBC
  Allowed modes        CBC, CFB, ECB, OFB         CBC, CFB, ECB, OFB         CBC, CFB, ECB, OFB               CBC, CFB, ECB, OFB
  Unsupported modes    CTS                        CTS                        CTS                              CTS
  .NET class           RC2CryptoServiceProvider   DESCryptoServiceProvider   TripleDESCryptoServiceProvider   RijndaelManaged
  Legal key sizes      40 ~ 128                   64                         128, 192                         128, 192, 256
  Default key size     128                        64                         192                              256
  Legal block sizes    64                         64                         64                               128, 192, 256
  Default block size   64                         64                         64                               128
  Feedback size        8                          8                          8                                128

  The security of each of the algorithms is determined by the size and quality of the key. To the encryption algorithm, the key is nothing more than a very large binary number. The strength of any encryption is based upon the algorithm and the effective key size. Each of the symmetric algorithms has different performance and encryption strength properties that are best suited for specific scenarios.
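
The defaults in the table above can be read directly off the .NET classes; a small sketch (the class and property names are standard .NET Framework members, the loop itself is only illustrative):

    using System;
    using System.Security.Cryptography;

    class AlgorithmProperties
    {
        static void Main()
        {
            SymmetricAlgorithm[] algorithms =
            {
                new RC2CryptoServiceProvider(),
                new DESCryptoServiceProvider(),
                new TripleDESCryptoServiceProvider(),
                new RijndaelManaged()
            };

            foreach (SymmetricAlgorithm alg in algorithms)
            {
                // KeySize and BlockSize are reported in bits; Mode defaults to CBC.
                Console.WriteLine("{0}: key {1} bits, block {2} bits, mode {3}",
                    alg.GetType().Name, alg.KeySize, alg.BlockSize, alg.Mode);
                alg.Clear();   // releases the algorithm's key material
            }
        }
    }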

  11. .NET Symmetric Algorithms(Cont’d) • Which of these algorithms is best? • One must consider the business objectives, the performance requirements, the sensitivity of the data, and the value of the data. • Although all of these algorithms have had expert review and have been extensively tested over time, there are no guarantees of everlasting security. • The selection of a security mechanism will depend on the perceived adversaries and their perceived technical abilities and financial resources.

  12. .NET Symmetric Algorithms(Cont’d) • Algorithm Properties • RC2 was developed in 1987 by Ron Rivest of RSA Security. RC2 is a block cipher with a variable key length and a 64-bit block. It is a widely used encryption scheme that has been incorporated in many commercial products. • DES was developed by IBM in 1974 and eventually adopted by NIST in 1976. DES uses a 64 bit key, but since 8 bits are used for parity, the effective key strength is 56 bits. • The DES key is susceptible to a brute force attack within a few days. This algorithm is ideally suited for an application that needs to be backwards compatible with DES or for an application in which the information lifespan is very short (e.g., it becomes worthless within a day).

  13. .NET Symmetric Algorithms(Cont’d) • Algorithm Properties • The 3DES algorithm was also developed by IBM. In essence, 3DES is three iterations of DES, each iteration with a different key. The key size of 3DES is 168 bits (56 bits x 3), but because of a theoretical "meet-in-the-middle" attack, the effective key size is 112 bits (56 bits x 2). Because it has to go through the extra iterations, 3DES is slower than DES. • The Rijndael algorithm was developed by Daemen and Rijmen as a candidate for the U.S. Advanced Encryption Standard (AES). It employs key sizes of 128, 192, or 256 bits and block lengths of 128, 192, or 256 bits, and any combination of key size and block length can be used. It was designed to resist all known attacks and to offer design simplicity, code compactness, and speed on a wide variety of platforms.

  14. .NET Symmetric Algorithms(Cont’d) Modes of operation Because block ciphers encrypt one block at a time, two identical plaintext blocks produce identical ciphertext blocks. Such patterns give an attacker information about the plaintext. To avoid this, the previous ciphertext block is chained back into the encryption process, thereby modifying the next cipher block, and this continues until the entire plaintext is encrypted. There are several chaining modes that can be used; their acronyms are CBC, ECB, CFB, CTS, and OFB.

  15. .NET Symmetric Algorithms(Cont’d) Chaining Modes Cipher block chaining (CBC) is the default mode for the encryption algorithms included with the .NET Framework, and it is also one of the most secure. It takes the previous ciphertext block and XORs it with the current plaintext block before that block is encrypted to produce the next ciphertext block. The initialization vector (IV) is XORed with the first plaintext block before it is encrypted. If the plaintext always begins the same way (Dear Sir:) and the initialization vector never changes, the beginning of the ciphertext will also always be the same. This is why the IV should change from session to session.
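
A small sketch of configuring CBC and a per-session IV on the .NET symmetric classes; the class and property names are standard, and the comments restate the point made above:

    using System.Security.Cryptography;

    class CbcSetupSketch
    {
        static void Main()
        {
            using (RijndaelManaged aes = new RijndaelManaged())
            {
                aes.Mode = CipherMode.CBC;   // already the default in the .NET Framework
                aes.GenerateIV();            // fresh IV per session, so identical openings
                                             // ("Dear Sir:") do not encrypt identically
                // The IV is sent along with the ciphertext; it does not need to be secret.
            }
        }
    }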

  16. .NET Symmetric Algorithms(Cont’d) Chaining Modes Electronic code book (ECB) mode encrypts each block independently of the previous block. This creates a one-to-one relationship between the plaintext and the ciphertext: if there are duplicate blocks in the plaintext, there will be duplicate ciphertext blocks. This independence from the previous block makes it the highest-performance mode and also the weakest mode of operation in terms of data security. The plaintext must be larger than the block size.
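
To make the pattern leak concrete, the sketch below (an illustration, not part of the original deck) encrypts two identical all-zero blocks in ECB mode with padding disabled and shows that the two ciphertext blocks come out identical:

    using System;
    using System.Security.Cryptography;

    class EcbPatternLeak
    {
        static void Main()
        {
            byte[] plaintext = new byte[32];   // two identical 16-byte blocks of zeros

            using (RijndaelManaged aes = new RijndaelManaged())
            {
                aes.Mode = CipherMode.ECB;
                aes.Padding = PaddingMode.None;

                byte[] ciphertext;
                using (ICryptoTransform enc = aes.CreateEncryptor())
                {
                    ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
                }

                // Under ECB, identical plaintext blocks yield identical ciphertext blocks.
                bool identical = BitConverter.ToString(ciphertext, 0, 16) ==
                                 BitConverter.ToString(ciphertext, 16, 16);
                Console.WriteLine("Ciphertext blocks identical: " + identical);   // True
            }
        }
    }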

  17. .NET Symmetric Algorithms(Cont’d) Chaining Modes The cipher feedback (CFB) mode is similar to CBC except that it processes the data a single byte at a time rather than an entire block, which makes it well suited to streaming data. However, an error in a single encrypted byte corrupts the remainder of the plaintext block.

  18. .NET Symmetric Algorithms(Cont’d) Chaining Modes Ciphertext stealing (CTS) produces ciphertext that is the same size as the plaintext in cases where the plaintext is larger than the block size. If the plaintext is smaller than the block size, padding is added to the message before it is encrypted. CTS works like the CBC mode until the second-to-last block, at which point the last block and the second-to-last block are XORed with each other to produce the final encrypted block. The CTS mode is not supported by any of the symmetric encryption algorithms currently shipped with the .NET Framework BCL; it is included to support new symmetric algorithms that might derive from the SymmetricAlgorithm class at a later time.

  19. .NET Symmetric Algorithms(Cont’d) Chaining Modes The output feedback (OFB) mode is similar to CFB, differing only in how the feedback register is filled; the practical consequence is that an error in a single ciphertext bit corrupts only the corresponding plaintext bit rather than the remainder of the block.

  20. .NET Symmetric Algorithms(Cont’d) Initialization vector The initialization vector (IV) is a random sequence of bytes prepended to the plaintext before the initial block is encrypted. The IV plays a large role in reducing the chances of recovering the key through a chosen-plaintext attack. The IV does not need to be secret but should vary from session to session. For U.S. Federal Government usage, the IV should also be encrypted if the data encryption uses the CBC mode and the IV needs to be transmitted over an unsecured channel.

  21. .NET Symmetric Algorithms(Cont’d) Effective key space The effective key space is one of the determining factors of the encryption strength. The difference between effective key space and key space is that the effective key space represents the maximum work effort necessary to brute force recover the keys. The key space on DES is 64 bits; however, since 8 bits are used for parity, the maximum work effort to recover the key is based on an effective key space of 56 bits. Regardless of effective key space, if the method to generate keys is predictable, which means it has little entropy, recovering the keys could be relatively easy using statistical analysis.

  22. .NET Symmetric Algorithms(Cont’d) Entropy Entropy describes the amount of disorder in a sequence or system; higher entropy means greater disorder. In cryptographic terms, entropy is the amount of real randomness in the key bits. A 128 bit key may be equivalent to only about 30 bits of entropy if we base the key on a 20 character password or phrase entered by the user; in that case the effective key size is 30 bits even though the key space is 128 bits. For a standard English pass phrase, with each letter taking a full byte, the rule of thumb is about 1.3 bits of entropy per byte, because certain letters (e, r, s, t) occur with statistically higher frequency than others. A pass phrase would have to be 85 characters long to provide 110 bits of entropy, and even then you could not use those 110 bits directly without first distilling them to match the encryption key requirements (a key-derivation sketch follows below). To produce enough entropy to fill the 128-bit output of a hash function such as MD5, the pass phrase would need to be at least 99 characters long.
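
The "distilling" step is typically a key-derivation function. A minimal sketch using the .NET Rfc2898DeriveBytes (PBKDF2) class; the pass phrase, salt length, and iteration count are illustrative values, not figures from the deck:

    using System;
    using System.Security.Cryptography;

    class PassphraseToKeySketch
    {
        static void Main()
        {
            // A random, per-user salt; stored alongside wherever the derived key is used.
            byte[] salt = new byte[16];
            new RNGCryptoServiceProvider().GetBytes(salt);

            // PBKDF2 stretches and distills the pass phrase into keying material.
            Rfc2898DeriveBytes kdf =
                new Rfc2898DeriveBytes("correct horse battery staple", salt, 10000);
            byte[] key = kdf.GetBytes(16);   // a 128-bit key, whatever the phrase length

            Console.WriteLine("Derived key: {0} bytes", key.Length);
        }
    }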

  23. .NET Symmetric Algorithms(Cont’d) .NET cryptographic hashing algorithms Hashing algorithms are generally used to ensure data integrity by producing a unique numerical message digest, or fingerprint, that represents the data being hashed. Hashing takes an arbitrary amount of data and produces a message digest of fixed length. Hashing works in a single direction: you cannot reproduce the data from the message digest, and it is computationally infeasible to produce two documents that yield the same digest. This type of hashing is known as a Modification Detection Code (MDC).

  24. .NET Symmetric Algorithms(Cont’d) .NET cryptographic hashing algorithms Different hashing algorithms produce message digests of different lengths. The greater the length, the less likely it is that two documents will collide by producing the same message digest. MDC primitives can be used to detect changes to data, but if the message digest is simply sent along with the data, both pieces of information could be intercepted and altered in transit. A solution to this is to use a keyed hash primitive.

  25. .NET Symmetric Algorithms(Cont’d) .NET cryptographic hashing algorithms The .NET Framework Cryptographic Namespace contains a number of MDC primitives with varying hash sizes: 1) MD5 2) SHA1 3) SHA256 4) SHA384 5) SHA512
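
Each of the five primitives listed above is exposed as a class in System.Security.Cryptography and is used in the same way; a short sketch with SHA256, where the document text and variable names are purely illustrative:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class DigestSketch
    {
        static void Main()
        {
            byte[] document = Encoding.UTF8.GetBytes("Quarterly report, final version");

            using (SHA256 sha = SHA256.Create())
            {
                byte[] digest = sha.ComputeHash(document);   // fixed 256-bit (32-byte) digest
                Console.WriteLine(BitConverter.ToString(digest));
            }
        }
    }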

  26. .NET Symmetric Algorithms(Cont’d) Keyed hash primitives Keyed hash primitives are hashes that produce a message digest based on both the data and a secret key. Keyed hash algorithms are known as Message Authentication Codes (MAC). A MAC serves two purposes: to assure data integrity and to provide authentication. There are two types of MAC: 1) those based on hash algorithms such as SHA1 and 2) those based on encryption algorithms such as TripleDES. The main differences between the two types are the restrictions on the key and the size of the message digest.
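
A brief sketch of the first type, a hash-based MAC, using the .NET HMACSHA1 class; the key size and the message are illustrative:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class MacSketch
    {
        static void Main()
        {
            // The shared secret key; both parties must hold the same key.
            byte[] key = new byte[32];
            new RNGCryptoServiceProvider().GetBytes(key);

            byte[] message = Encoding.UTF8.GetBytes("transfer 100 to account 42");

            using (HMACSHA1 mac = new HMACSHA1(key))
            {
                byte[] tag = mac.ComputeHash(message);
                // Only a holder of the same key can recompute and verify the tag,
                // which gives both integrity and authentication.
                Console.WriteLine(BitConverter.ToString(tag));
            }
        }
    }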

  27. .NET Symmetric Algorithms(Cont’d) .NET cryptographic pseudo random number generator Generating random numbers is not an easy task. Random numbers are, in essence, a series of numbers that are arbitrary, unknowable, and unpredictable; every number has an equal probability of coming up. Because of these characteristics, using a predictable, deterministic machine such as a computer as the source of random data is not a preferred way to generate cryptographic key material. Random numbers generated by a computer come from pseudorandom number generators (PRNGs), which use a mathematical formula and an initial seed value.

  28. .NET Symmetric Algorithms(Cont’d) .NET cryptographic pseudo random number generator The preferable way to create random numbers is to use a non-deterministic source that produces randomness outside of human control. Examples of non-deterministic sources are sampling atmospheric noise from a radio or measuring radioactive decay; such sources produce genuine random numbers. Cryptographic PRNG algorithms use cryptographic hash or encryption functions to produce the random data, with a seed used to initialize the initialization vector and key. The problem with using a mathematical algorithm is that the same seed value reproduces the same series of numbers. Sampling the computer environment, such as typing rates and mouse movements, could potentially be tampered with by programs that take over control of the keyboard or mouse buffer.
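
For comparison, a minimal sketch of the framework's cryptographic PRNG, RNGCryptoServiceProvider, as opposed to the clock-seeded System.Random class; the 32-byte buffer size is an arbitrary choice for the example:

    using System;
    using System.Security.Cryptography;

    class RandomKeyMaterialSketch
    {
        static void Main()
        {
            // System.Random is a deterministic PRNG seeded from the clock: fine for
            // simulations, unsuitable for keys. RNGCryptoServiceProvider is the
            // framework's cryptographic random number generator.
            byte[] keyMaterial = new byte[32];
            new RNGCryptoServiceProvider().GetBytes(keyMaterial);
            Console.WriteLine(BitConverter.ToString(keyMaterial));
        }
    }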

  29. Storing Secrets in Memory Process 1. Acquire Data 2. Encrypt Data 3. Decrypt Data 4. Use Data 5. Scrub Data

  30. Storing Secrets in Memory (cont’d) Recommendations 1. Use application-level variables to store secrets. 2. On application start, initialize the variables from a remote data store. 3. After using the data, “scrub” it by overwriting the space where the secret is stored with zeroes.

  31. Storing Secrets in Memory (cont’d) • Secrets in memory need to be protected. • Encrypting secrets or wiping memory after you are done with the secret protects it from being revealed in a crash dump file or on a page file. • The process of encrypting secrets should be done using a proven cryptographic primitive. • Ideally, you should implement a double buffer, one for the plaintext secret and the other for the ciphertext. This is to protect against any possible race conditions that could use the secret before it is fully secured. • Any secrets in memory should be scrubbed.
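
One way to follow this advice on the full .NET Framework (Windows) is the ProtectedMemory class, which wraps DPAPI's in-memory protection; a sketch with an illustrative secret and buffer size (ProtectedMemory requires the buffer length to be a multiple of 16 bytes and ships in System.Security.dll):

    using System;
    using System.Security.Cryptography;   // ProtectedMemory: reference System.Security.dll
    using System.Text;

    class InMemorySecretSketch
    {
        static void Main()
        {
            // Buffer length must be a multiple of 16 bytes for ProtectedMemory.
            byte[] secret = new byte[32];
            Encoding.UTF8.GetBytes("p@ssw0rd").CopyTo(secret, 0);

            // Encrypt in place with a proven primitive (DPAPI) while the secret is idle...
            ProtectedMemory.Protect(secret, MemoryProtectionScope.SameProcess);

            // ...decrypt only for the moment it is actually used...
            ProtectedMemory.Unprotect(secret, MemoryProtectionScope.SameProcess);
            Console.WriteLine("Using a {0}-byte secret", secret.Length);

            // ...and scrub it as soon as it is no longer needed.
            Array.Clear(secret, 0, secret.Length);
        }
    }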

  32. Storing Secrets in Memory (cont’d) Scrubbing memory means overwriting the variable's value with a safe default as soon as you are done with it. The code required to scrub the memory depends on the data type.
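
For a byte array, the scrub is a single call; a sketch (the connection-string content is invented for illustration, and immutable System.String values cannot be scrubbed this way, which is why byte or char arrays are preferred for secrets):

    using System;
    using System.Text;

    class ScrubSketch
    {
        static void Main()
        {
            byte[] secret = Encoding.UTF8.GetBytes("Server=db;User=app;Pwd=secret");

            // ... use the secret ...

            // Scrub: overwrite the buffer with zeroes before it goes out of scope.
            Array.Clear(secret, 0, secret.Length);
        }
    }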

  33. Conclusion Please realize that whatever you design, develop, and implement, regardless of the implemented security measures, your code will be reverse engineered and your security mechanisms will be analyzed. Because of this, secrecy should depend entirely on a proven encryption algorithm and on the size and quality of the encryption key.
