Network Security -3

5.1 The History of Encryption

Encrypting communications is a very old idea. People have found the need to send private communications for most of the history of civilization. The need for privacy originally started from military and political needs, but has expanded beyond that. Businesses need to keep data private to maintain a competitive edge. People want to keep certain information, such as their medical records and financial records, private.

For much of human history, private communications meant encrypting written communications. Over the past century, that has expanded to radio transmission, telephone communications, and computer/Internet communications. In the past several decades, the encryption of computerized transmissions has become commonplace. In fact, computer/Internet communications are encrypted more often than phone or radio transmissions. The digital environment makes implementing a particular type of encryption much easier.

Whatever the nature of the data you are encrypting, or the mode of transmitting data, the basic concept is actually quite simple. Messages must be changed in such a way that they cannot be read easily by any party that intercepts them but can be decoded easily by the intended recipient. In this section, a few historical methods of encryption will be examined. Note that these are very old methods, and they cannot be used for secure communication today. An amateur could easily crack the methods discussed in this section. However, they are wonderful examples for conveying the concept of encryption without having to incorporate a great deal of math, which is required of the more complex encryption methods.

5.1.1 The Caesar Cipher

One of the oldest recorded encryption methods is the Caesar cipher. This name is based on a claim that ancient Roman emperors used this method. This method is simple to implement, requiring no technological assistance.

You choose some number by which to shift each letter of a text. For example, if the text is “A cat” and you choose to shift by two letters, then the message becomes “C ecv”. Or, if you choose to shift by three letters, it becomes “D fdw”.
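The shift described above can be sketched in a few lines of Python. This is a toy illustration of the concept, not secure encryption; the function name is my own.

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, preserving case; leave other characters alone."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

print(caesar("A cat", 2))   # -> "C ecv"
print(caesar("A cat", 3))   # -> "D fdw"
```

Decryption is just the same function with the shift negated.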

In this example, you can choose any shifting pattern you want: you can shift either to the right or to the left by any number of spaces you like. Because this is a simple method to understand, it makes a good place to start your study of encryption. It is, however, extremely easy to crack. You see, any language has a certain letter and word frequency, meaning that some letters are used more frequently than others. In the English language, the most common single-letter word is “a”. The most common three-letter word is “the”.

Knowing these two characteristics alone could help you decrypt a Caesar cipher. For example, if you saw a string of seemingly nonsense letters and noticed that a three-letter word was frequently repeated in the message, you might easily guess that this word was “the”—and the odds are highly in favour of this being correct.

Furthermore, if you frequently noticed a single-letter word in the text, it is most likely the letter “a”. You now have found the substitution scheme for a, t, h, and e. You can now either translate all of those letters in the message and attempt to surmise the rest or simply analyse the substitute letters used for a, t, h, and e and derive the substitution cipher that was used for this message. Decrypting a message of this type does not even require a computer. Someone with no background in cryptography could do it in less than ten minutes using pen and paper.

Caesar ciphers belong to a class of encryption algorithms known as substitution ciphers. The name derives from the fact that each character in the unencrypted message is substituted by one character in the encrypted text.

The particular substitution scheme used (for example, +2 or +1) in a Caesar cipher is called a substitution alphabet (that is, b substitutes for a, u substitutes for t, etc.). Because one letter always substitutes for one other letter, the Caesar cipher is sometimes called a mono-alphabet substitution method, meaning that it uses a single substitution for the encryption.

The Caesar cipher, like all historical ciphers, is simply too weak for modern use. It is presented here just to help you understand the concepts of cryptography.

5.1.2 ROT 13

ROT 13 is another mono-alphabet substitution cipher. All characters are rotated 13 characters through the alphabet. For example, the phrase “A CAT” becomes “N PNG”.
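Because 13 is exactly half the alphabet, ROT 13 is its own inverse: applying it twice returns the original text. Python's standard library happens to ship a ROT 13 codec, which makes this easy to demonstrate:

```python
import codecs

cipher = codecs.encode("A CAT", "rot13")
print(cipher)                           # -> "N PNG"
print(codecs.encode(cipher, "rot13"))   # -> "A CAT" (rotating by 13 twice is a full 26)
```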

5.1.3 Multi-Alphabet Substitution

Eventually, a slight improvement on the Caesar cipher was developed, called multi-alphabet substitution (also called polyalphabetic substitution). In this scheme, you select multiple numbers by which to shift letters (that is, multiple substitution alphabets). For example, if you select three substitution alphabets (+2, −2, +3), then “A CAT” becomes “C ADV”.

Notice that the fourth letter starts over with another +2, and you can see that the first A was transformed to C and the second A was transformed to D. This makes deciphering the underlying text more difficult. Although this is harder to decrypt than a Caesar cipher, it is not overly difficult to decode. It can be done with simple pen and paper and a bit of effort. It can be cracked quickly with a computer. In fact, no one would use such a method today to send any truly secure message, for this type of encryption is considered very weak.
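The repeating-shift idea can be sketched by cycling through a list of Caesar shifts, one per letter. Using the shifts +2, −2, +3 reproduces the “C ADV” example; the function name is my own.

```python
from itertools import cycle

def poly_substitute(text, shifts):
    """Apply a repeating sequence of Caesar shifts to the letters of `text`."""
    shift_stream = cycle(shifts)
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + next(shift_stream)) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(poly_substitute("A CAT", [2, -2, 3]))  # -> "C ADV"
```

Decrypting is the same operation with every shift negated.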

Multi-alphabet ciphers are more secure than single-substitution ciphers. However, they are still not acceptable for modern cryptographic usage. Computer-based cryptanalysis systems can crack historical cryptographic methods (both single alphabet and multi-alphabet) easily. The single-substitution and multi-substitution alphabet ciphers are discussed just to show you the history of cryptography, and to help you get an understanding of how cryptography works.

5.1.4 Rail Fence

All the preceding ciphers are substitution ciphers. Another approach to classic cryptography is the transposition cipher. The rail fence cipher may be the most widely known transposition cipher. You simply take the message you wish to encrypt and write its letters on alternating rows. So “attack at dawn” (ignoring the spaces) is written as

a t c a d w
 t a k t a n

Next, you write down the text reading each row from left to right, thus producing

atcadwtaktan

In order to decrypt the message, the recipient must write it out on rows:

a t c a d w
 t a k t a n

Then the recipient reconstructs the original message. Most texts use two rows as examples; however, this can be done with any number of rows you wish to use.
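The row-alternating scheme above can be sketched as follows. Note that this simple modulo version matches the two-row example in the text; the classic rail fence with more than two rails zigzags down and up, which this sketch simplifies to straight row cycling. The function names are my own.

```python
def rail_fence_encrypt(text, rails=2):
    """Write the letters onto `rails` rows in turn, then read the rows left to right."""
    rows = [''] * rails
    for i, ch in enumerate(text):
        rows[i % rails] += ch
    return ''.join(rows)

def rail_fence_decrypt(cipher, rails=2):
    """Rebuild the rows from their known lengths, then read the letters off in turn."""
    counts = [(len(cipher) - i + rails - 1) // rails for i in range(rails)]
    rows, pos = [], 0
    for count in counts:
        rows.append(cipher[pos:pos + count])
        pos += count
    return ''.join(rows[i % rails][i // rails] for i in range(len(cipher)))

print(rail_fence_encrypt("attackatdawn"))   # -> "atcadwtaktan"
print(rail_fence_decrypt("atcadwtaktan"))   # -> "attackatdawn"
```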

5.1.5 Vigenère

Vigenère is a polyalphabetic cipher that uses multiple substitutions in order to disrupt letter and word frequency. Let us consider a simple example. Remember that a Caesar cipher has a shift: for example, a shift of +2 (two to the right). A polyalphabetic substitution cipher would use multiple shifts, perhaps +2, −1, +1, +3. When you get to the fifth letter, you simply start over again. So, consider the word “Attack” being encrypted:

A (1) + 2 = 3 or C

T (20) –1 = 19 or S

T (20) +1 = 21 or U

A (1) +3 = 4 or D

C (3) +2 = 5 or E

K (11) –1 = 10 or J

Therefore, the ciphertext is “CSUDEJ”. Given that each letter has four possible substitutions, the letter and word frequency is significantly disrupted.

Perhaps the most widely known polyalphabetic cipher is the Vigenère cipher. This cipher was actually invented in 1553 by Giovan Battista Bellaso, though it is named after Blaise de Vigenère. It is a method of encrypting alphabetic text by using a series of different mono-alphabet ciphers selected, based on the letters of a keyword. Bellaso added the concept of using any keyword one might wish, thereby making the choice of substitution alphabets difficult to calculate.
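In the Vigenère cipher, each letter of the keyword names the shift for one plaintext letter (A = 0, B = 1, and so on), repeating the keyword as needed. A small sketch, assuming uppercase letters with no spaces; the keyword LEMON and message ATTACKATDAWN are the classic textbook example, not from this text:

```python
def vigenere(text, key, decrypt=False):
    """Vigenère cipher: each key letter supplies the shift for one plaintext letter."""
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord('A')
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))                  # -> "LXFOPVEFRNHR"
print(vigenere("LXFOPVEFRNHR", "LEMON", decrypt=True))    # -> "ATTACKATDAWN"
```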

5.1.6 Enigma

It is really impossible to have a discussion about cryptography and not talk about Enigma. Contrary to popular misconceptions, the Enigma is not a single machine but rather a family of machines. The first version was invented by German engineer Arthur Scherbius near the end of World War I. It was used by several different militaries, not just the Germans.

Some military texts encrypted using a version of Enigma were broken by Polish cryptanalysts Marian Rejewski, Jerzy Rozycki, and Henryk Zygalski. The three basically reverse engineered a working Enigma machine and used that information to develop tools for breaking Enigma ciphers, including one tool named the cryptologic bomb.

The core of the Enigma machine was the rotors: disks marked with 26 letters, arranged in a circle and lined up in a row. Essentially, each rotor represented a different single substitution cipher. You can think of the Enigma as a sort of mechanical polyalphabetic cipher. The operator of the Enigma machine would be given a message in plaintext and then type that message into Enigma. For each letter that was typed in, Enigma would provide a different ciphertext based on a different substitution alphabet. The recipient would type in the ciphertext, getting out the plaintext, provided both Enigma machines had the same rotor settings.

There were actually several variations of the Enigma machine. The Naval Enigma machine was eventually cracked by British cryptographers working at the now famous Bletchley Park. Alan Turing and a team of analysts were able to eventually break the Naval Enigma machine. Many historians claim this shortened World War II by as much as two years.

5.2 Modern Encryption Methods

Modern methods of encryption are more secure than the historical methods discussed in the previous section. All the methods discussed in this section are in use today and are considered reasonably secure. In some cases, the algorithm behind these methods requires a sophisticated understanding of mathematics.

Number theory often forms the basis for encryption algorithms. Fortunately, for our purposes, knowing the exact details of these encryption algorithms is not important, which means that you do not require a strong mathematics background to follow this material. More important is a general understanding of how a particular encryption method works and how secure it is.

5.2.1 Symmetric Encryption

Symmetric encryption refers to the methods where the same key is used to encrypt and decrypt the plaintext.

5.2.1.1 Binary Operations

Part of modern symmetric cryptography involves binary operations. Various operations on binary numbers (numbers made of only zeroes and ones) are well known to programmers and programming students. However, for those readers not familiar with them, a brief explanation follows. When working with binary numbers, there are three operations not found in ordinary math: AND, OR, and XOR. Each is illustrated next.

AND

To perform the AND operation, you take two binary numbers and compare them one place at a time. If both numbers have a “one” in a given place, then the resultant number is a “one”. If not, then the resultant number is a “zero”, as you see below:

1 1 0 1

1 0 0 1

——-

1 0 0 1

OR

The OR operation checks to see whether there is a “one” in either or both numbers in a given place. If so, then the resultant number is “one”. If not, the resultant number is “zero”, as you see here:

1 1 0 1

1 0 0 1

——-

1 1 0 1

XOR

The XOR operation impacts your study of encryption the most. It checks to see whether there is a “one” in a number in a given place, but not in both numbers at that place. If it is in one number but not the other, then the resultant number is “one”. If not, the resultant number is “zero”, as you see here:

1 1 0 1

1 0 0 1

——-

0 1 0 0

XORing has an interesting property in that it is reversible. If you XOR the resultant number with the second number, you get back the first number. In addition, if you XOR the resultant number with the first number, you get the second number.

0 1 0 0

1 0 0 1

——-

1 1 0 1

Binary encryption using the XOR operation opens the door for some rather simple encryption. Take any message and convert it to binary numbers and then XOR that with some key. Converting a message to a binary number is a simple two-step process. First, convert a message to its ASCII code, and then convert those codes to binary numbers.

Each letter/number will generate an eight-bit binary number. You can then use a random string of binary numbers of any given length as the key. Simply XOR your message with the key to get the encrypted text, and then XOR it with the key again to retrieve the original message.
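The encrypt-then-decrypt round trip described above can be sketched directly in Python, where each character of the message becomes one byte (its ASCII code, i.e. an eight-bit binary number). The key bytes and function name here are illustrative choices of my own:

```python
def xor_bytes(data, key):
    """XOR every byte of `data` with the repeating bytes of `key`."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = "Attack at dawn".encode("ascii")  # step 1: text -> ASCII codes (bytes)
key = b"\x5a\xc3\x2f"                       # toy key chosen for illustration only
cipher = xor_bytes(message, key)            # XOR with the key to encrypt
plain = xor_bytes(cipher, key)              # XOR with the key again to decrypt
print(cipher.hex())
print(plain.decode("ascii"))                # -> "Attack at dawn"
```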

This method is easy to use and great for computer science students; however, it does not work well for truly secure communications because the underlying letter and word frequency remains. This exposes valuable clues that even an amateur cryptographer can use to decrypt the message. Yet, it does provide a valuable introduction to the concept of single-key encryption.

Although simply XORing the text is not the method typically employed, single-key encryption methods are widely used today. For example, you could simply include a multi-alphabet substitution that was then XORed with some random bit stream—variations of which do exist in a few actual encryption methods currently used.

Modern cryptography methods, as well as computers, make decryption a rather advanced science. Therefore, encryption must be equally sophisticated in order to have a chance of success.

5.2.1.2 Data Encryption Standard

Data Encryption Standard, or DES as it is often called, was developed by IBM in the early 1970s and made public in 1976. DES uses a symmetric key system, which means the same key is used to encrypt and to decrypt the message. DES uses short keys and relies on complex procedures to protect its information. The actual DES algorithm is quite complex. The basic concept, however, is as follows:

1. The data is divided into 64-bit blocks, and those blocks are then reordered.

2. Reordered data are then manipulated by 16 separate rounds of encryption, involving substitutions, bit-shifting, and logical operations using a 56-bit key.

3. Finally, the data are reordered one last time.

DES uses a 56-bit cipher key applied to a 64-bit block. There is actually a 64-bit key, but one bit of every byte is used for error detection, leaving just 56 bits for actual key operations. The problem with DES is the same problem that all symmetric key algorithms have: How do you transmit the key without it becoming compromised? This issue led to the development of public key encryption.

5.2.1.3 Blowfish

Blowfish is a symmetric block cipher. This means that it uses a single key to both encrypt and decrypt the message and works on “blocks” of the message at a time. It uses a variable-length key ranging from 32 to 448 bits. This flexibility in key size allows you to use it in various situations. Blowfish was designed in 1993 by Bruce Schneier. It has been analysed extensively by the cryptography community and has gained wide acceptance. It is also a non-commercial (that is, free of charge) product, thus making it attractive to budget-conscious organisations.

5.2.1.4 AES (Advanced Encryption Standard)

Advanced Encryption Standard (AES) uses the Rijndael algorithm. The developers of this algorithm have suggested multiple alternative pronunciations for the name, including “reign dahl,” “rain doll,” and “rhine dahl.” This algorithm was developed by two Belgian researchers, Joan Daemen of Proton World International and Vincent Rijmen, a postdoctoral researcher in the Electrical Engineering Department of Katholieke Universiteit Leuven.

AES specifies three key sizes: 128, 192, and 256 bits. By comparison, DES keys are 56 bits long, and Blowfish allows varying lengths up to 448 bits. AES uses a block cipher. This algorithm is widely used, considered very secure, and therefore a good choice for many encryption scenarios.

5.2.2 Public Key Encryption

Public key encryption is essentially the opposite of single-key encryption. With any public key encryption algorithm, one key is used to encrypt a message (called the public key) and another is used to decrypt the message (the private key). You can freely distribute your public key so that anyone can encrypt a message to send to you, but only you have the private key and only you can decrypt the message. The actual mathematics behind the creation and applications of the keys is a bit complex and beyond the scope of this book. Many public key algorithms are dependent, to some extent, on large prime numbers, factoring, and number theory.

5.2.2.1 RSA

The RSA method is a widely used encryption algorithm. You cannot discuss cryptography without at least some discussion of RSA. This public key method was developed in 1977 by three mathematicians: Ron Rivest, Adi Shamir, and Len Adleman. The name RSA is derived from the first letter of each mathematician’s last name.

One significant advantage of RSA is that it is a public key encryption method. That means there are no concerns with distributing the keys for the encryption. However, RSA is much slower than symmetric ciphers. In fact, in general, asymmetric ciphers are slower than symmetric ciphers.

The steps to create the keys are as follows:

1. Generate two large random primes, p and q, of approximately equal size, chosen so that their product will be of the desired bit length (for example, 2048 bits or 4096 bits).

2. Multiply p and q to get the modulus: let n = pq.

3. Compute Euler’s totient of n. If you are not familiar with this concept, the Euler’s totient of a number is the count of numbers that are co-prime to it, where two numbers are considered co-prime if they have no common factors. It just so happens that for a prime number, the totient is always the number minus 1. For example, 7 has 6 numbers that are co-prime to it (if you think about this a bit you will see that 1, 2, 3, 4, 5, 6 are all co-prime with 7). Because p and q are both prime, let m = (p – 1)(q – 1).

4. Select another number; call this number e. You want to pick e so that it is co-prime to m.

5. Find a number d that, when multiplied by e and taken modulo m, yields 1; that is, find d such that de mod m ≡ 1. (Note: Modulo means to divide two numbers and return the remainder. For example, 8 modulo 3 would be 2.)

Now you publish e and n as the public key and keep d and n as the private key. To encrypt, you simply raise your message to the e power, modulo n:

C = M^e % n

To decrypt you take the ciphertext, and raise it to the d power modulo n:

P = C^d % n

RSA has become a popular encryption method. It is considered quite secure and is often used in situations where a high level of security is needed.
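The key-generation and round-trip steps above can be checked with a toy example. The tiny primes below (p = 61, q = 53) are a standard classroom illustration, not from this text; real RSA keys use primes hundreds of digits long.

```python
# Toy RSA with tiny primes; for illustration only.
p, q = 61, 53
n = p * q                # modulus: n = pq = 3233
m = (p - 1) * (q - 1)    # Euler's totient of n: m = 3120
e = 17                   # public exponent, co-prime to m
d = pow(e, -1, m)        # private exponent: (d * e) mod m == 1  (Python 3.8+)
M = 65                   # the "message", encoded as a number smaller than n
C = pow(M, e, n)         # encrypt: C = M^e % n
P = pow(C, d, n)         # decrypt: P = C^d % n
print(C, P)              # P equals the original message, 65
```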

5.2.2.2 Elliptic Curve

The Elliptic Curve algorithm was first described in 1985 by Victor Miller (IBM) and Neal Koblitz (University of Washington).

The security of Elliptic Curve cryptography is based on the fact that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is difficult to the point of being impractical to do.

The size of the elliptic curve determines the difficulty of the problem, and thus the security of the implementation. The level of security afforded by an RSA-based system with a large modulus can be achieved with a much smaller elliptic curve group. There are actually several ECC algorithms: there is an ECC version of Diffie-Hellman, an ECC version of DSA, and many others.

The U.S. National Security Agency has endorsed ECC (Elliptic Curve Cryptography) by including schemes based on it in its Suite B set of recommended algorithms and allows their use for protecting information classified up to top secret with 384-bit keys.

5.2.3 Digital Signatures and Certificates

A digital signature is not used to ensure the confidentiality of a message, but rather to guarantee who sent the message. This is referred to as non-repudiation. Essentially, a digital signature proves who the sender is. Digital signatures are actually rather simple, but clever. They simply reverse the asymmetric encryption process.

Recall that in asymmetric encryption, the public key (which anyone can have access to) is used to encrypt a message to the recipient, and the private key (which is kept secure, and private) can decrypt it. With a digital signature, the sender encrypts something with his or her private key. If the recipient is able to decrypt that with the sender’s public key, then it must have been sent by the person purported to have sent the message.
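The key reversal described above can be demonstrated with the same kind of toy RSA numbers used for encryption (tiny textbook primes, illustration only; real signatures sign a cryptographic hash of the message):

```python
# Digital signature sketch: the RSA flow run in reverse.
p, q = 61, 53
n = p * q
m = (p - 1) * (q - 1)
e = 17                            # public key: (e, n)
d = pow(e, -1, m)                 # private key: (d, n)  (Python 3.8+)

digest = 1234                     # stand-in for a hash of the message, reduced mod n
signature = pow(digest, d, n)     # the sender signs with the PRIVATE key
recovered = pow(signature, e, n)  # anyone can verify with the PUBLIC key
print(recovered == digest)        # True: only the private-key holder could have signed
```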

5.2.3.1 Digital Certificates

Public keys are widely distributed, and getting someone’s public key is fairly easy to do; public keys are also needed to verify a digital signature. As to how public keys are distributed, probably the most common way is through digital certificates. The digital certificate contains a public key and some means to verify whose public key it is.

X.509 is an international standard for the format and information contained in a digital certificate. X.509 is the most widely used type of digital certificate in the world. It is a digital document that contains a public key signed by a trusted third party, which is known as a certificate authority (CA). The contents of an X.509 certificate are:

  • Version
  • Certificate holder’s public key
  • Serial number
  • Certificate holder’s distinguished name
  • Certificate’s validity period
  • Unique name of certificate issuer
  • Digital signature of issuer
  • Signature algorithm identifier

A certificate authority issues digital certificates. The primary role of the CA is to digitally sign and publish the public key bound to a given user. It is an entity trusted by one or more users to manage certificates.

A registration authority (RA) takes some of the burden off a CA by handling verification prior to certificates being issued. RAs act as a proxy between users and CAs: they receive a request, authenticate it, and forward it to the CA.

A public key infrastructure (PKI) distributes digital certificates. This network of trusted CA servers serves as the infrastructure for distributing digital certificates that contain public keys. A PKI is an arrangement that binds public keys with respective user identities by means of a CA.

What if a certificate is expired, or revoked? A certificate revocation list (CRL) is a list of certificates that have been revoked for one reason or another. Certificate authorities publish their own certificate revocation lists. A newer method for verifying certificates is the Online Certificate Status Protocol (OCSP), a real-time protocol for verifying certificates.

There are several different types of X.509 certificates. They each have at least the elements listed at the beginning of this section, but are for different purposes. The most common certificate types are listed below:

  • Domain validation certificates are among the most common. These are used to secure communication with a specific domain. This is a low-cost certificate that website administrators use to provide TLS for a given domain.
  • Wildcard certificates, as the name suggests, can be used more widely, usually with multiple sub-domains of a given domain. So rather than have a different X.509 certificate for each sub-domain, you would use a wildcard certificate for all sub-domains.
  • Code-signing certificates are X.509 certificates used to digitally sign some type of computer code. These usually require more validation of the person requesting the certificate, before they can be issued.
  • Machine/computer certificates are X.509 certificates assigned to a specific machine. These are often used in authentication protocols. For example, in order for the machine to sign into the network, it must authenticate using its machine certificate.
  • User certificates are used for individual users. Like machine/computer certificates, these are often used for authentication. The user must present his or her certificate to authenticate prior to accessing some resource.
  • E-mail certificates are used for securing e-mail. Secure Multipurpose Internet Mail Extensions (S/MIME) uses X.509 certificates to secure e-mail communications.
  • A Subject Alternative Name (SAN) is not so much a type of certificate as a special field in X.509. It allows you to specify additional items to be protected by this single certificate. These could be additional domains or IP addresses.
  • Root certificates are used for root authorities. These are usually self-signed by that authority.

5.2.3.2 PGP Certificates

Pretty Good Privacy (PGP) is not a specific encryption algorithm, but rather a system. It offers digital signatures, asymmetric encryption, and symmetric encryption. It is often found in e-mail clients. PGP was introduced in the early 1990s, and it’s considered to be a very good system.

PGP uses its own certificate format. The main difference, however, is that PGP certificates are self-generated. They are not generated by any certificate authority.


5.3 Windows and Linux Encryption

Microsoft Windows provides encryption tools to prevent loss of confidential data.

  • Encrypting File System (EFS) encodes files so that anyone who obtains them cannot read them. The files are only readable when you sign in to the computer using your user account. You can use EFS to encrypt individual files and entire drives, although it is recommended to encrypt folders or drives rather than individual files. When you encrypt a folder or a drive, the files it contains are also encrypted, and even new files created in the encrypted folder are automatically encrypted.
  • BitLocker Drive Encryption provides another layer of protection by encrypting the entire hard drive. By linking this encryption to a key stored in a Trusted Platform Module (TPM), BitLocker reduces the risk of data being lost when a computer is stolen or when a hard disk is stolen and placed in another computer. In such a scenario, the thief would boot into an alternate operating system and try to retrieve data from the stolen drive or computer. With BitLocker, that type of offline attack is neutralized.
  • BitLocker To Go extends BitLocker encryption to removable media such as USB flash drives.

Linux provides a number of cryptographic techniques to protect data on physical devices such as hard disks or removable media. One such technique is Linux Unified Key Setup (LUKS), which allows the encryption of Linux partitions.

Using LUKS you can encrypt an entire block device, which is well suited to protecting data on removable storage or a laptop’s disk drive. LUKS uses the existing device-mapper kernel subsystem and also provides passphrase strengthening for protection against dictionary attacks.


5.4 Guided Exercise: Enabling BitLocker

Resources
Files: None
Machines: Windows 10

In this exercise you will enable BitLocker to encrypt the hard drive.

Log in to Windows 10. Once logged in, click the Start button, type gpedit.msc in the search box, and press Enter.

In the Local Group Policy Editor window, expand Computer Configuration -> Administrative Templates -> Windows Components -> BitLocker Drive Encryption -> Operating System Drives.

Double-click the option Require additional authentication at startup and, in the new window that opens, click Enabled (make sure the box Allow BitLocker without a compatible TPM is ticked), then click Apply -> OK. After that, close the Local Group Policy Editor window.

Open File Explorer and click This PC. Right-click Local Disk (C:) and click the option Turn on BitLocker.

BitLocker will verify that the computer meets the requirements and will ask how to unlock the drive at startup. Choose the option Enter a password.

Enter a password that is at least 8 characters long and contains uppercase and lowercase letters, symbols, and spaces.

Then you should save (back up) the recovery key. You can save the recovery key to a USB flash drive, print it, save it to a file, or save it to your Microsoft account. For this exercise, choose the option to print the recovery key.

On the Print window select the option Microsoft Print to PDF and click Print.

Provide a file name for the recovery key (BitLocker Key, for example) and save it to the Backup (E:) drive. Click Save and then OK. You will then be asked whether to run a BitLocker system check. Uncheck the box (although you can leave it checked to run the check) and click Start encrypting.

The encryption process will take some time to finish. A large SATA disk, such as a 1 TB drive, will take a long time, whereas SSDs take less time.

Click Close once the encryption process has finished.

Restart the computer to confirm that it asks for the BitLocker password at startup. Enter the password and press Enter.

Then the computer will ask for your password to log in.

Guided Exercise Video

 

5.5 Guided Exercise: Encrypting a Folder Using EFS

Resources
Files: None
Machines: Windows 10

In this exercise you will encrypt a folder along with its contents.

Log in to Windows 10 and open File Explorer. Once File Explorer opens, click the Documents folder in the Quick Access menu on the left.

Within the Documents folder, a subfolder exists with the name Accounting. Within the Accounting folder there are three files that are considered sensitive.

Right-click the Accounting folder and select Properties. In the Accounting Properties window, in the General tab, click Advanced.

On the Advanced Attributes window click the option “Encrypt contents to secure data”. Then click OK and then Apply.

Once you click Apply, a new window will open asking you to confirm the attribute changes. Select the option “Apply changes to this folder, subfolders and files” and click OK. Click OK again to close the folder properties.

You will notice that the Accounting folder name is now displayed in green.

Now that EFS is enabled, a small icon appears in the taskbar: the EFS key backup notice. It is always a good idea to back up the file encryption certificate and key. Click the EFS icon in the taskbar, and in the Encrypting File System window select Back up now.

On the Certificate Export Wizard window, click Next.

Ensure that the options Personal Information Exchange – PKCS, Include all certificates in the certification path if possible, and Enable certificate privacy are checked. Then click Next.

Check the box next to Password and enter a password. For this exercise, use P@ssw0rd123! and click Next.

Then click Browse to select a location and a filename for the exported key. Select the Backup drive and use exportedkey as the filename.

Click Save and then Next. On the next window simply review the File Name, File Format and other information of the certificate export. Then click Finish.

Guided Exercise Video

 

5.5 Hashing

A hash function, “H” for example, is a function that takes a variable-size input “m” and returns a fixed-size string. The value that is returned is called the hash value “h” or the digest. This can be expressed mathematically as “h = H(m)”. There are three properties a hash function should have:

  • Variable length input with fixed length output. In other words, no matter what you put into the hashing algorithm, the same sized output is produced.
  • H(x) is one-way; you cannot “un-hash” something.
  • H(x) is collision-free: two different input values should not produce the same output. A collision refers to a situation where two different inputs yield the same output; a hash function should not have collisions.

Hashing is how Windows stores passwords. For example, if your password is “password”, then Windows will first hash it, producing something like:

“0BD181063899C9239016320B50D3E896693A96DF”. 

It then stores that hash in the SAM (Security Accounts Manager) file in the Windows System directory. When you log on, Windows cannot “un-hash” your password, so what Windows does is take whatever password you type in, hash it, and then compare the result with what is in the SAM file. If they match (exactly) then you can log in.

Storing Windows passwords is just one application of hashing. There are others. For example, in computer forensics, hashing a drive before starting a forensic examination is common practice. Then later you can always hash it again to see whether anything was changed (accidentally or intentionally). If the second hash matches the first, then nothing has been changed.

In relationship to hashing, the term “salt” refers to random bits that are used as one of the inputs to the hash. Essentially, the salt is intermixed with the message that will be hashed. Salt data complicates dictionary attacks that use pre-encryption of dictionary entries. It is also effective against rainbow table attacks. For best security, the salt value is kept secret, separate from the password database/file.
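A minimal sketch of salting using Python's hashlib: because each password gets a random salt, two users with the same password end up with different hashes, which defeats precomputed dictionaries and rainbow tables. (A real system would also use a deliberately slow KDF such as PBKDF2 rather than a single SHA-256 pass.)

```python
import hashlib
import os

def hash_with_salt(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)   # random per-password salt
    # Real systems prefer a slow KDF, e.g. hashlib.pbkdf2_hmac("sha256", ...)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

s1, h1 = hash_with_salt("password")
s2, h2 = hash_with_salt("password")
print(h1 == h2)                                  # False: same password, different salts
print(hash_with_salt("password", s1)[1] == h1)   # True: reproducible given the salt
```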

5.5.1 MD5

MD5 is a 128-bit hash that is specified by RFC 1321. It was designed by Ron Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found with the design of MD5. Although it was not a clearly fatal weakness, cryptographers began recommending the use of other algorithms, such as SHA-1. The biggest problem with MD5 is that it is not collision resistant.

5.5.2 SHA

The Secure Hash Algorithm is perhaps the most widely used hash algorithm today. Several versions of SHA now exist. SHA-2 and SHA-3 are considered secure and collision resistant; SHA-1, like MD5, has had practical collisions demonstrated and should no longer be used for security purposes. The versions include:

  • SHA-1: This 160-bit hash function resembles the MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm.
  • SHA-2: This is actually two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words whereas SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224 and SHA-384. These were also designed by the NSA.
  • SHA-3: This is the latest version of SHA. The winning algorithm (Keccak) was selected in October 2012, and SHA-3 was standardized as FIPS 202 in August 2015.
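The digest sizes of the algorithms discussed above can be confirmed with Python's hashlib:

```python
import hashlib

msg = b"A cat"
for name in ("md5", "sha1", "sha256", "sha512", "sha3_256"):
    bits = len(hashlib.new(name, msg).digest()) * 8
    print(name, bits)
# md5 128, sha1 160, sha256 256, sha512 512, sha3_256 256
```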

5.6 Guided Exercise: Hashing

Resources
Files: None
Machines: Windows 10

In this exercise we will use a tool called HashGenerator to generate hashes.

In the Windows 10 machine open the desktop folder called Exercises and then the folder HashGenerator. Double click the file Setup_HashGenerator to install it.

On the Hash Generator Setup window click on Next.

Click Next again on the Choose a file location window.

Click Install on the Begin Installation of Hash generator window.

Click Yes on the User Account Control window.

Click Close on the Hash Generator Setup window.

On the Hash Generator tool, select Text as the Hash Input Type, enter the word password in the “Enter or Paste any Text” text box, and click Generate Hash.

Observe the different types of hashes and the actual hash values. 

Guided Exercise Video

5.7 Cracking Passwords

Cracking passwords is not the same as breaking encrypted transmissions. If an attacker successfully cracks a password, particularly the administrator/root password, then most other security measures become irrelevant.

5.7.1 John the Ripper

John the Ripper is a password cracker popular with both network administrators and hackers.

This product is completely command line–based and has no Windows interface. It enables the user to select text files for word lists to attempt cracking a password. Although John the Ripper is less convenient to use because of its command-line interface, it has been around for a long time and is well regarded by both the security and hacking communities.

John the Ripper works with password files rather than attempting to crack live passwords on a given system. Passwords are usually stored in hashed form in a file on the operating system. Hackers frequently try to get that file off the machine and download it to their own system so they can crack it at will. They might also look for discarded media in your dumpster in order to find old backup tapes that might contain password files. Each operating system stores that file in a different place:

  • In Linux, it is /etc/passwd and /etc/shadow.
  • In Windows 2000 and beyond, it is the SAM file, located in the Windows\System32\config directory.

After you have downloaded John the Ripper, you can run it by typing in (at a command line) the word john followed by the file you want it to try to crack:

  • john passwd
  • john --wordlist=/usr/share/wordlists/rockyou.txt --rules passwd
    Cracked passwords are printed to the terminal and saved in a file called john.pot, found in the directory into which you installed John the Ripper.

5.7.2 Rainbow Tables

In 1980 Martin Hellman described a cryptanalytic time-memory trade-off that reduces the time of cryptanalysis by using pre-calculated data stored in memory. Essentially, these types of password crackers work with pre-calculated hashes of all passwords available within a certain character space, be that “a-z” or “a-zA-Z” or “a-zA-Z0-9”, etc. These files are called rainbow tables. They are particularly useful when trying to crack hashes. Because a hash is a one-way function, the way to break it is to attempt to find a match. The attacker takes the hashed value and searches the rainbow tables seeking a match to the hash. If one is found, then the original text for the hash is found. Popular hacking tools such as Ophcrack depend on rainbow tables.
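The lookup idea can be illustrated with a toy precomputed table over a tiny keyspace. A real rainbow table stores hash chains with reduction functions rather than every hash, which trades lookup time for far less storage, but the principle is the same:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

# Precompute hash -> plaintext for every 3-letter lowercase "password"
# (26^3 = 17,576 entries; real tables cover far larger character spaces).
table = {}
for combo in product(ascii_lowercase, repeat=3):
    pw = "".join(combo)
    table[hashlib.md5(pw.encode()).hexdigest()] = pw

captured = hashlib.md5(b"cat").hexdigest()   # a hash taken from a password file
print(table.get(captured))                   # the original text is recovered: cat
```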

5.7.3 Brute Force

This method simply involves trying every possible key. It is guaranteed to work, but is likely to take so long that it is simply not usable. For example, to break a Caesar cipher there are only 26 possible keys, which you can try in a very short time. But consider AES, with its smallest key size of 128 bits. If you tried 1 trillion keys a second, it would take about 1.08 × 10^19 years (roughly 10 billion billion years) to try them all.
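The arithmetic can be checked directly:

```python
keyspace = 2 ** 128                    # number of possible AES-128 keys
rate = 10 ** 12                        # one trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / rate / seconds_per_year
print(f"{years:.2e} years")            # ~1.08e+19 years to exhaust the keyspace
```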

 

 

 

5.8 Guided Exercise: Cracking Passwords

Resources
Files: None
Machines: Windows 10

In this exercise you will use the tool called Ophcrack to crack password hashes.

Open the desktop folder called Exercises and then the folder Ophcrack.

Double click the file ophcrack to start it. Some rainbow tables are already installed within Ophcrack.

From the Ophcrack folder double click the file hashes to open it. 

Once you open the file copy the line that refers to jim.

In Ophcrack click Load and then Single hash, paste the hash in the Load Single Hash window, and click OK. Click Crack to start cracking the password hash. You will then see the actual password of the user jim.

Guided Exercise Video

 


6.1 Introduction to VPN

Virtual Private Networks (VPNs) are a common way to connect remotely to a network in a secure fashion. A VPN creates a private network connection over the Internet to connect remote sites or users together. Instead of using a dedicated connection, a VPN uses virtual connections routed through the Internet from the remote site or user to the private network. Security is accomplished by encrypting all the transmissions.

A VPN allows a remote user to have network access just as if the user were local to the private network. This means not only connecting the user to the network as if the user were local but also making the connection secure. Because most organisations have many employees traveling and working from home, remote network access has become an important security concern. Users want access, and administrators want security. The VPN is the current standard for providing both.

To accomplish its purpose, the VPN must emulate a direct network connection. This means it must provide both the same level of access and the same level of security as a direct connection. To emulate a dedicated point-to-point link, data is encapsulated, or wrapped, with a header that provides routing information allowing it to transmit across the Internet to reach its destination. This creates a virtual network connection between the two points. The data being sent is also encrypted, thus making that virtual network private.

A VPN does not require separate technology or direct cabling. It is a virtual private network, which means it can use existing connections to provide a secure connection. In most cases it is used over normal Internet connections.

A variety of methods are available for connecting one computer to another. At one time dialling up to an ISP via a phone modem was common. Now cable modems, cellular devices, and other mechanisms are more common. All of these methods have something in common: they are not inherently secure. All data being sent back and forth is unencrypted, and anyone can use a packet sniffer to intercept and view the data. Furthermore, neither end is authenticated. This means you cannot be completely certain who you are really sending data to or receiving data from. The VPN provides an answer to these issues.

This sort of arrangement is generally acceptable for an ISP. The customers connecting simply want a channel to the Internet and do not need to connect directly or securely to a specific network. However, this setup is inadequate for remote users attempting to connect to an organisation’s network. In such cases the private and secure connection a VPN provides is critical.

Individual remote users are not the only users of VPN technology. Many larger organisations have offices in various locations. Achieving reliable and secure site-to-site connectivity for such organisations is an important issue. The various branch offices must be connected to the central corporate network through tunnels that transport traffic over the Internet.

Using VPN technology for site-to-site connectivity enables a branch office with multiple links to move away from an expensive, dedicated data line and to simply utilize existing Internet connections.

 

 

 

6.2 VPN Protocols

Multiple ways exist to achieve the encryption needs of a VPN. Certain network protocols are frequently used for VPNs. The two most commonly used protocols for this purpose are Point-to-Point Tunnelling Protocol (PPTP) and Layer 2 Tunnelling Protocol (L2TP). The part of the connection in which the data is encapsulated is referred to as the tunnel. L2TP is often combined with IPSec to achieve a high level of security.

6.2.1 PPTP

PPTP is a tunnelling protocol that enables an older connection protocol, PPP (Point-to-Point Protocol), to have its packets encapsulated within Internet Protocol (IP) packets and forwarded over any IP network, including the Internet itself. PPTP is often used to create VPNs. PPTP is an older protocol than L2TP or IPSec. Some experts consider PPTP to be less secure than L2TP or IPSec, but it consumes fewer resources and is supported by almost every VPN implementation. It is basically a secure extension to PPP.

PPTP was originally proposed as a standard in 1996 by the PPTP Forum—a group of companies that included Ascend Communications, ECI Telematics, Microsoft, 3Com, and U.S. Robotics. This group’s purpose was to design a protocol that would allow remote users to communicate securely over the Internet.

Although newer VPN protocols are available, PPTP is still widely used, because almost all VPN equipment vendors support PPTP. Another important benefit of PPTP is that it operates at layer 2 of the OSI model (the data link layer), allowing different networking protocols to run over a PPTP tunnel.

When connecting users to a remote system, encrypting the data transmissions is not the only facet of security. You must also authenticate the user. PPTP supports two separate technologies for accomplishing this: Extensible Authentication Protocol (EAP) and Challenge Handshake Authentication Protocol (CHAP).

6.2.1.1 Extensible Authentication Protocol (EAP)

EAP was designed to work as part of PPP and operates from within PPP’s authentication protocol. It provides a framework for several different authentication methods and is meant to supplant proprietary authentication systems. The methods it supports include passwords, challenge-response tokens, and public key infrastructure certificates.

6.2.1.2 Challenge Handshake Authentication Protocol (CHAP)

CHAP is actually a three-way handshake (handshaking is a term used to denote authentication exchanges). After the link is established, the server sends a challenge message to the client machine originating the link. The originator responds by sending back a value calculated using a one-way hash function. The server checks the response against its own calculation of the expected hash value. If the values match, the authentication is acknowledged; otherwise, the connection is usually terminated. This means that the authorization of a client connection has three stages.

What makes CHAP particularly interesting is that it periodically repeats the process. This means that even after a client connection is authenticated, CHAP repeatedly seeks to re-authenticate that client, providing a robust level of security.

6.2.2 L2TP

Layer 2 Tunnelling Protocol is an extension or enhancement of the Point-to-Point Tunnelling Protocol that is often used to operate virtual private networks over the Internet. Essentially, it is a new and improved version of PPTP. As its name suggests, it operates at the data link layer of the OSI model (like PPTP). Both PPTP and L2TP are considered by many experts to be less secure than IPSec. However, seeing IPSec used together with L2TP to create a secure VPN connection is not uncommon.

Like PPTP, L2TP supports EAP and CHAP. However, it also offers support for other authentication methods, for a total of six:

  • EAP
  • CHAP
  • MS-CHAP
  • PAP
  • SPAP
  • Kerberos

6.2.2.1 MS-CHAP

As the name suggests, MS-CHAP is a Microsoft-specific extension to CHAP. Microsoft created MS-CHAP to authenticate remote Windows workstations. The goal is to provide the functionality available on the LAN to remote users while integrating the encryption and hashing algorithms used on Windows networks.

Wherever possible, MS-CHAP is consistent with standard CHAP. However, some basic differences between MS-CHAP and standard CHAP include the following:

  • The MS-CHAP response packet is in a format designed for compatibility with Microsoft’s Windows networking products.
  • The MS-CHAP format does not require the authenticator to store a clear-text or reversibly encrypted password.
  • MS-CHAP provides authenticator-controlled authentication retry and password-changing mechanisms. These retry and password-changing mechanisms are compatible with the mechanisms used in Windows networks.
  • MS-CHAP defines a set of reason-for-failure codes that are returned in the failure packet’s message field if the authentication fails. These are codes that Windows software is able to read and interpret, thus providing the user with the reason for the failed authentication.

6.2.2.2 PAP

Password Authentication Protocol (PAP) is the most basic form of authentication. With PAP, a user’s name and password are transmitted over a network and compared to a table of name-password pairs. Typically, the passwords stored in the table are encrypted. However, the passwords themselves are transmitted in clear text, unencrypted, which is the main weakness of PAP. The basic authentication feature built into the HTTP protocol is essentially PAP. This method is no longer used and is presented only for historical purposes.

6.2.2.3 SPAP

Shiva Password Authentication Protocol (SPAP) is a proprietary version of PAP. Most experts consider SPAP somewhat more secure than PAP because the username and password are both encrypted when they are sent, unlike with PAP.

Because SPAP encrypts passwords, someone capturing authentication packets will not be able to read the SPAP password. However, SPAP is still susceptible to playback attacks (that is, a person records the exchange and plays the message back to gain fraudulent access). Playback attacks are possible because SPAP always uses the same reversible encryption method to send the passwords over the wire.

6.2.2.4 Kerberos

Kerberos is one of the most well-known network authentication protocols. It was developed at MIT and is named after the mythical three-headed dog that guarded the gates of Hades.

Kerberos works by sending messages back and forth between the client and the server. The actual password (or even a hash of the password) is never sent, so it cannot be intercepted in transit. Instead, only the username is sent. The server looks up the stored hash of that user’s password and uses it as an encryption key to encrypt data, which it sends back to the client. The client then takes the password the user entered and uses it as a key to decrypt that data. If the user entered the wrong password, the data will never decrypt correctly. This is a clever way to verify the password without it ever being transmitted. Authentication happens over UDP (User Datagram Protocol) on port 88.
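This verify-without-transmitting idea can be sketched with a toy cipher. The XOR "encryption" below is illustration only (real Kerberos uses AES today), and "correct horse" is just a sample password:

```python
import hashlib

def derive_key(password):
    # Stand-in for the stored password hash used as an encryption key
    return hashlib.sha256(password.encode()).digest()

def toy_cipher(key, data):
    # XOR keystream: illustration only, NOT a secure cipher
    return bytes(d ^ k for d, k in zip(data, key))

session_key = b"session-key-0001"

# AS side: encrypt the session key under the user's stored password hash
message_a = toy_cipher(derive_key("correct horse"), session_key)

# Client side: only the correct password derives the key that decrypts it
print(toy_cipher(derive_key("correct horse"), message_a) == session_key)  # True
print(toy_cipher(derive_key("wrong guess"), message_a) == session_key)    # False
```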

After the user’s username is sent to the authentication service (AS), that AS will use the hash of the user password that is stored as a secret key to encrypt the following two messages that get sent to the client:

  • Message A: Contains Client/TGS (Ticket Granting Service) session key encrypted with secret key of client
  • Message B: Contains TGT (Ticket Granting Ticket) that includes client ID, client network address, and validity period

Remember, both of these messages are encrypted using the key the AS generated.

Then the user attempts to decrypt message A with the secret key generated by the client hashing the user’s entered password. If that entered password does not match the password the AS found in the database, then the hashes won’t match, and the decryption won’t work. If it does work, then message A contains the Client/TGS session key that can be used for communication with the TGS. Message B is encrypted with the TGS secret key and cannot be decrypted by the client.

Now the user is authenticated into the system. However, when the user actually requests a service, some more message communication is required. When requesting services, the client sends the following messages to the TGS:

  • Message C: Composed of the TGT from message B and the ID of the requested service
  • Message D: Authenticator (which is composed of the client ID and the timestamp), encrypted using the Client/TGS session key

Upon receiving messages C and D, the TGS retrieves message B out of message C. It decrypts message B using the TGS secret key. This gives it the “Client/TGS session key”. Using this key, the TGS decrypts message D (Authenticator) and sends the following two messages to the client:

  • Message E: Client-to-server ticket (which includes the client ID, client network address, validity period, and client/server session key) encrypted using the service’s secret key
  • Message F: Client/server session key encrypted with the Client/TGS session key

Upon receiving messages E and F from TGS, the client has enough information to authenticate itself to the Service Server (SS). The client connects to the SS and sends the following two messages:

  • Message E: From the previous step (the client-to-server ticket, encrypted using service’s secret key)
  • Message G: A new Authenticator, which includes the client ID and timestamp and is encrypted using the client/server session key

The SS decrypts the ticket (message E) using its own secret key to retrieve the client/server session key. Using the session key, the SS decrypts the Authenticator and sends the following message to the client to confirm its identity and willingness to serve the client:

  • Message H: The timestamp found in client’s Authenticator

The client decrypts the confirmation (message H) using the client/server session key and checks whether the timestamp is correct. If so, then the client can trust the server and can start issuing service requests to the server. The server provides the requested services to the client.

Below are some Kerberos terms to know:

  • Principal: A server or client that Kerberos can assign tickets to.
  • Authentication Service (AS): Service that authorizes the principal and connects them to the Ticket Granting Server. Note some books/sources say server rather than service.
  • Ticket Granting Service (TGS): Provides tickets.
  • Key Distribution Centre (KDC): A server that provides the initial ticket and handles TGS requests. Often it runs both AS and TGS services.

  • Realm: A boundary within an organisation. Each realm has its own AS and TGS.

  • Remote Ticket Granting Server (RTGS): A TGS in a remote realm.
  • Ticket Granting Ticket (TGT): The ticket that is granted during the authentication process.
  • Ticket: Used to authenticate to the server. Contains identity of client, session key, timestamp, and checksum. Encrypted with server’s key.
  • Session key: Temporary encryption key.
  • Authenticator: Proves session key was recently created. Often expires within 5 minutes.

 

 

6.3 IPSec

Internet Protocol Security (IPSec) is a technology used to create virtual private networks. IPSec works in conjunction with the IP protocol, adding security and privacy to TCP/IP communication. IPSec is incorporated with Microsoft operating systems as well as many other operating systems.

For example, the security settings in the Internet Connection Firewall that ships with Windows XP and later versions enables users to turn on IPSec for transmissions. IPSec is a set of protocols developed by the IETF (Internet Engineering Task Force; http://www.ietf.org) to support secure exchange of packets. IPSec has been deployed widely to implement VPNs.

IPSec has two encryption modes: transport and tunnel. The transport mode works by encrypting the data in each packet but leaves the header unencrypted. This means that the source and destination addresses, as well as other header information, are not encrypted. The tunnel mode encrypts both the header and the data.

This is more secure than transport mode but can work more slowly. At the receiving end, an IPSec-compliant device decrypts each packet. For IPSec to work, the sending and receiving devices must share a key, meaning IPSec relies on symmetric (shared-key) encryption. IPSec also offers two other protocols beyond the two modes already described:

  • Authentication Header (AH): The AH protocol provides a mechanism for authentication only. AH provides data integrity, data origin authentication, and an optional replay protection service. Data integrity is ensured by using a message digest that is generated by an algorithm such as HMAC-MD5 or HMAC-SHA. Data origin authentication is ensured by using a shared secret key to create the message digest.
  • Encapsulating Security Payload (ESP): The ESP protocol provides data confidentiality (encryption) and authentication (data integrity, data origin authentication, and replay protection). ESP can be used with confidentiality only, authentication only, or both confidentiality and authentication.

Either protocol can be used alone to protect an IP packet, or both protocols can be applied together to the same IP packet.


There are other protocols involved in making IPSec work. IKE, or Internet Key Exchange, is used in setting up security associations in IPSec. A security association is formed by the two endpoints of the VPN tunnel, once they decide how they are going to encrypt and authenticate. For example, will they use AES for encrypting packets, what protocol will be used for key exchange, and what protocol will be used for authentication?

All of these issues are negotiated between the two endpoints, and the decisions are stored in a security association (SA). This is accomplished via the IKE protocol. Internet Key Exchange (IKE and IKEv2) is used to set up an SA by handling negotiation of protocols and algorithms and to generate the encryption and authentication keys to be used.

The Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange. Once the IKE protocol sets up the SA, then it is time to actually perform the authentication and key exchange.

The first exchange between VPN endpoints establishes the basic security policy; the initiator proposes the encryption and authentication algorithms it is willing to use. The responder chooses the appropriate proposal and sends it to the initiator. The next exchange passes Diffie-Hellman public keys and other data.

Those Diffie-Hellman public keys will be used to encrypt the data being sent between the two endpoints. The third exchange authenticates the ISAKMP session. This process is called main mode. Once the IKE SA is established, IPSec negotiation (Quick Mode) begins.

Quick Mode IPSec negotiation, or Quick Mode, is similar to an Aggressive Mode IKE negotiation, except negotiation must be protected within an IKE SA. Quick Mode negotiates the SA for the data encryption and manages the key exchange for that IPSec SA.

In other words, Quick Mode uses the Diffie-Hellman keys exchanged in main mode, to continue exchanging symmetric keys that will be used for actual encryption in the VPN.

Aggressive Mode squeezes the IKE SA negotiation into three packets, with all data required for the SA passed by the initiator. The responder sends the proposal, key material, and ID, and authenticates the session in the next packet. The initiator replies by authenticating the session. Negotiation is quicker, and the initiator and responder ID pass in the clear.

 

 

6.4 SSL/TLS

A newer type of VPN solution uses SSL (Secure Sockets Layer) or TLS (Transport Layer Security) to provide VPN access through a web portal. Essentially, TLS and SSL are the protocols used to secure websites. If you see a website address beginning with HTTPS, then traffic to and from that website is encrypted using SSL or TLS. Today, we almost always mean TLS when we say SSL; many people simply became comfortable saying SSL, and the phrase stuck. This should be obvious from the brief history of SSL/TLS presented here:

  • Unreleased SSL v1 (Netscape).
  • Version 2 released in 1995 but had many flaws.
  • Version 3 released in 1996 (RFC 6101).
  • Standard TLS 1.0, RFC 2246, released in 1999.
  • TLS 1.1 defined in RFC 4346 in April 2006.
  • TLS 1.2 defined in RFC 5246 in August 2008. It is based on the earlier TLS 1.1 spec.
  • TLS 1.3 defined in RFC 8446, finalized in August 2018.

In some VPN solutions the user logs in to a website, one that is secured with SSL or TLS, and is then given access to a virtual private network. However, visiting a website that uses SSL or TLS does not mean you are on a VPN. As a general rule most websites, such as banking websites, give you access only to a very limited set of data, such as your account balances. A VPN gives you access to the network, the same or similar access to what you would have if you were physically on that network.

Whether you are using SSL to connect to an e-commerce website or to establish a VPN, the SSL handshake process is needed to establish the secure/encrypted connection:

1. The client sends the server the client’s SSL version number, cipher settings, session-specific data, and other information that the server needs to communicate with the client using SSL.

2. The server sends the client the server’s SSL version number, cipher settings, session-specific data, and other information that the client needs to communicate with the server over SSL. The server also sends its own certificate, and if the client is requesting a server resource that requires client authentication, the server requests the client’s certificate.

3. The client uses the information sent by the server to authenticate the server—e.g., in the case of a web browser connecting to a web server, the browser checks whether the received certificate’s subject name actually matches the name of the server being contacted, whether the issuer of the certificate is a trusted certificate authority, whether the certificate has expired, and, ideally, whether the certificate has been revoked. If the server cannot be authenticated, the user is warned of the problem and informed that an encrypted and authenticated connection cannot be established. If the server can be successfully authenticated, the client proceeds to the next step.

4. Using all data generated in the handshake thus far, the client (with the cooperation of the server, depending on the cipher in use) creates the pre-master secret for the session, encrypts it with the server’s public key (obtained from the server’s certificate, sent in step 2), and then sends the encrypted pre-master secret to the server.

5. If the server has requested client authentication (an optional step in the handshake), the client also signs another piece of data that is unique to this handshake and known by both the client and server. In this case, the client sends both the signed data and the client’s own certificate to the server along with the encrypted pre-master secret.

6. If the server has requested client authentication, the server attempts to authenticate the client. If the client cannot be authenticated, the session ends. If the client can be successfully authenticated, the server uses its private key to decrypt the pre-master secret, and then performs a series of steps (which the client also performs, starting from the same pre-master secret) to generate the master secret.

7. Both the client and the server use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity (that is, to detect any changes in the data between the time it was sent and the time it is received over the SSL connection).

8. The client sends a message to the server informing it that future messages from the client will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the client portion of the handshake is finished.

9. The server sends a message to the client informing it that future messages from the server will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the server portion of the handshake is finished.
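From an application's point of view, this entire handshake is handled by the TLS library. A short Python sketch of the client-side setup (no connection is made here; calling wrap_socket on a connected socket is what would run steps 1–9):

```python
import ssl

# A client-side TLS context encapsulates the client's half of the handshake:
# the trusted CA list (used in step 3) and supported versions/ciphers (step 1).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

print(context.verify_mode == ssl.CERT_REQUIRED)    # True: server cert must validate
print(context.check_hostname)                      # True: step 3 name check enabled

# context.wrap_socket(sock, server_hostname="example.com") would then perform
# the full handshake (steps 1-9) before any application data is exchanged.
```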

 

 

 

6.5 VPN Solutions

Regardless of which protocols you use for VPN, you must implement your choice in some software/hardware configuration. Many operating systems have built-in VPN server and client connections. These are generally fine for small office or home situations. However, they might not be adequate for larger scale operations in which multiple users connect via VPN. For those situations, a dedicated VPN solution might be necessary.

6.5.1 Cisco Solutions

Cisco offers VPN solutions, including a module that can be added to many of their switches and routers to implement VPN services. It also offers client-side hardware that is designed to provide an easy-to-implement yet secure client side for the VPN.

The main advantage of this solution is that it incorporates seamlessly with other Cisco products. Administrators using a Cisco firewall or Cisco router might find this solution to be preferable. However, this solution might not be right for those not using other Cisco products and those who do not have knowledge of Cisco systems. However, many attractive specifications for this product include the following:

  • It can use 3DES encryption (an improved version of DES). But AES is preferred and strongly recommended.
  • It can handle packets larger than 500 bytes.
  • It can create up to 60 new virtual tunnels per second, a good feature if a lot of users might be logging on or off.

6.5.2 Openswan

The Openswan product (www.openswan.org/) is an open source VPN solution available for Linux operating systems. As an open source product, one of its biggest advantages is that it is free. Openswan uses IPSec, making it a highly secure VPN solution.

Openswan supports remote users logging on via VPN as well as site-to-site connections. It also supports wireless connections. However, it does not support NAT (network address translation).


7.1 Configuring Windows

Properly configuring Windows (Windows 7, 8, 10 and Server Editions) consists of many facets. You must disable unnecessary services, properly configure the registry, enable the firewall, properly configure the browser, and more.

Previously we have discussed the firewall concepts and the processes of both stateful packet inspection and stateless packet inspection, and a later section of this chapter discusses browser security. For now, let’s go over the other important factors in Windows security configuration.

7.1.1 Accounts, Users, Groups and Passwords

Any Windows system comes with certain default user accounts and groups. These can frequently be a starting point for intruders who want to crack passwords for those accounts and gain entrance onto a server or network. Simply renaming or disabling some of these default accounts can improve your security.

Note: Windows tends to relocate items in the Control Panel with each version. Your version (7, 8, 8.1, 10, etc.) might have things in a different location. If you have not already done so, take some time to familiarize yourself with the location of utilities in your version of Windows.

In Windows 7 or Windows 8, you find user accounts by going to Start, Settings, Control Panel, Users and Groups. In Windows 10 go to Start, Settings, and Accounts.

7.1.1.1 Administrator Accounts

The default administrator account has administrative privileges, and hackers frequently seek to obtain logon information for an administrator account. Guessing a logon is a two-step process of first identifying the username, and then the password. Default accounts allow the hacker to bypass the first half of this process. Administrators should disable this account.

Because an account with administrative privileges is still necessary for maintaining your server, the next step is to add a new account with an innocuous name and give that account administrative privileges. Doing so makes a hacker's task more difficult: he must first discover which account actually has administrative privileges before he can even attempt to compromise it.

Some experts suggest simply renaming the administrator account, or using an administrator account whose username indicates its purpose. That approach is not recommended, for the following reasons:

  • The whole point is that a hacker should not be able to identify which username has administrative privileges.
  • Simply renaming the administrator account to a different name that still indicates its administrative rights does not help the situation.

7.1.1.2 Other Accounts

The administrator account is the one most often targeted by hackers, but Windows also includes other default user accounts. Treating all default accounts with equal suspicion is a good idea, as any default account can be a gateway for a hacker to compromise a system. A few accounts that you should pay particular attention to are:

  • IUSR_Machine name: When you are running IIS, a default user account is created for IIS. Its name is IUSR_ followed by the name of your machine. This is a common account for a hacker to attempt to compromise. Altering it in the manner suggested for the administrator account is advisable.
  • ASP.NET: If your machine is running ASP.NET, a default account is created for web applications. A hacker that is familiar with .NET could target this account.
  • Database accounts: Many relational database management systems, such as SQL Server, create default user accounts. An intruder, particularly one who wants to get at your data, could target these accounts.

When adding any new account, always give the new account’s user or group the least number and type of privileges needed to perform their job, even accounts for IT staff members. Below are some examples:

  • A PC technician does not need administrative rights on the database server. Even though the technician belongs to the IT department, he or she does not need access to everything in that department.
  • Managers may use applications that reside on a web server, but they certainly should not have rights on that server.
  • Just because a programmer develops applications that run on a server does not mean that the programmer should have full rights on that server.

These are just a few examples of things to consider when setting up user rights.

Remember: Always give the least access necessary for a person to do her job. This concept is often called least privilege, and it is a cornerstone of security.

7.1.2 Setting Security Policies

Setting appropriate security policies is the next step in hardening a Windows server. This does not refer to written policies an organisation might have regarding security standards and procedures. In this case, the term security policies refers to the individual machines’ policies.

The first matter of concern is setting secure password policies. The default settings for Windows passwords are not secure. The table below shows the default password policies. Maximum password age refers to how long a password is effective before the user is forced to change that password.

Enforce password history refers to how many previous passwords the system remembers, thus preventing the user from reusing passwords. Minimum password length defines the minimum number of characters allowed in a password.

Password complexity means that the user must use a password that combines numbers, letters, and other characters. These are the default security settings for all Windows versions from Windows NT 4.0 forward. If your system is managed within a business environment, the settings in Local Security Policy may be greyed out, indicating that you do not have permission to make changes.

Policy Default Setting
Enforce password history 1 password remembered
Maximum password age 42 days
Minimum password age 0 days
Minimum password length 0 characters
Passwords must meet complexity requirements Disabled
Store password using reversible encryption for all users in the domain Disabled

The default password policies are not secure enough, but what policies should you use instead? Different experts answer that question differently. The table below shows the recommendations of Microsoft and the National Security Agency.

Policy Microsoft NSA
Enforce password history 3 passwords 5 passwords
Maximum password age 42 days 42 days
Minimum password age 2 days 2 days
Minimum password length 8 characters 12 characters
Passwords must meet complexity requirements No recommendation Yes
Store password using reversible encryption for all users in the domain No recommendation No recommendation
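On a standalone Windows machine, several of these values can also be inspected or set from an elevated command prompt with the built-in net accounts command. The values below are illustrative only, loosely following the recommendations above; domain-wide policy should instead be set through Group Policy:

```
net accounts
net accounts /uniquepw:5 /maxpwage:42 /minpwage:2 /minpwlen:8
```

Run without arguments, net accounts displays the current local password policy; with switches, it updates the local account database on that machine.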

Developing appropriate password policies depends largely on the requirements of your network environment. If your network stores and processes highly sensitive data and is an attractive target to hackers, you must always skew your policies and settings toward greater security. However, bear in mind that if security measures are too complex, your users will find complying difficult. For example, very long, complex passwords (such as $%Tbx38T@_FgR$$) make your network quite secure, but such passwords are virtually impossible for users to remember.

7.1.3 Account Lockout Policies

When you open the Local Security Settings dialog, your options are not limited to setting password policies. You can also set account lockout policies. These policies determine how many times a user can attempt to log in before being locked out, and for how long to lock them out. The default Windows settings are shown in the table below.

Policy Default Settings
Account lockout duration Not defined
Account lockout threshold 0 invalid logon attempts
Reset account lockout counter after Not defined

These default policies are not secure. Essentially, they allow an unlimited number of log-in attempts, making the use of password crackers very easy and virtually guaranteeing that someone will eventually crack one or more passwords and gain access to your system. The table below provides the recommendations from Microsoft and the National Security Agency.

Policy Microsoft NSA
Account lockout duration 0, indefinite 15 hours
Account lockout threshold 5 attempts 3 attempts
Reset account lockout counter after 15 minutes 30 minutes

7.1.4 Registry Settings

The Windows Registry is a database used to store settings and options for Microsoft Windows operating systems. This database contains critical information and settings for all the hardware, software, users, and preferences on a particular computer. Whenever users are added, software is installed or any other change is made to the system (including security policies), that information is stored in the registry.

Secure registry settings are critical to securing a network. Unfortunately, that area is often overlooked. One thing to keep in mind is that if you do not know what you are doing in the registry, you can cause serious problems. So, if you are not very comfortable with the registry, do not touch it. Even if you are comfortable making registry changes, always back up the registry before any change.

7.1.5 Registry Basics

The physical files that make up the registry are stored differently depending on which version of Windows you are using. Older versions of Windows (that is, Windows 95 and 98) kept the registry in two hidden files in your Windows directory, called USER.DAT and SYSTEM.DAT. In all versions of Windows since XP, the physical files that make up the system portion of the registry are stored in %SystemRoot%\System32\Config, and each user's portion of the registry is stored in a file named ntuser.dat in that user's profile directory.

Regardless of the version of Windows you are using, you cannot edit the registry directly by opening and editing these files. Instead you must use a tool, regedit.exe, to make any changes. Another tool, regedt32.exe, also exists, although in modern versions of Windows it simply launches regedit. Many users find regedit's "find" option user friendly for searching the registry.

Although the registry is referred to as a “database,” it does not actually have a relational database structure (like a table in MS SQL Server or Oracle). The registry has a hierarchical structure similar to the directory structure on the hard disk. In fact, when you use regedit, you will note it is organized like Windows Explorer. To view the registry, go to Start, Run, and type regedit. You should see the Registry Editor dialog box as shown below. Some of the folders in your dialog box might be expanded.

Your Registry Editor dialog box will likely have the same five main folders as the one shown above in the screenshot. Each of these main branches of the registry is briefly described in the following list. These five main folders are the core registry folders. A system might have additions, but these are the primary folders containing information necessary for your system to run.

  • HKEY_CLASSES_ROOT: This branch contains all of your file association types, OLE information, and shortcut data.
  • HKEY_CURRENT_USER: This branch links to the section of HKEY_USERS appropriate for the user currently logged on to the PC.
  • HKEY_LOCAL_MACHINE: This branch contains computer-specific information about the type of hardware, software, and other preferences on a given PC.
  • HKEY_USERS: This branch contains individual preferences for each user of the computer.
  • HKEY_CURRENT_CONFIG: This branch links to the section of HKEY_LOCAL_MACHINE appropriate for the current hardware configuration.

If you expand a branch, you will see its subfolders. Many of these have, in turn, more subfolders, possibly as many as four or more before you get to a specific entry. A specific entry in the Windows Registry is referred to as a key. A key is an entry that contains settings for some particular aspect of your system. If you alter the registry, you are actually changing the settings of particular keys.

7.1.6 Restrict Null Session Access

Null sessions are a significant weakness that can be exploited through the various shares on the computer. A null session is Windows' way of designating anonymous connections, and any time you allow anonymous connections to a server, you invite significant security risks. You can control null session access to shares by adding RestrictNullSessAccess, a registry value that toggles null session shares on or off and determines whether the Server service restricts access to clients logged on to the system account without username and password authentication. Setting the value to "1" restricts null session access for unauthenticated users to all server pipes and shares except those listed in the NullSessionPipes and NullSessionShares entries.

Key Path: HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer

Action: Ensure that it is set to: Value = 1
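As a sketch, this setting can be captured in a .reg file and imported with regedit. Note that in current Windows versions the value is typically stored under the Parameters subkey of the path given above:

```
Windows Registry Editor Version 5.00

; Restrict null session access for unauthenticated users
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"RestrictNullSessAccess"=dword:00000001
```

As always, back up the registry before importing changes.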

7.1.7 Restrict Null Session Access Over Named Pipes

The null session access over named pipes registry setting should be changed for much the same reason as the preceding null session registry setting. Restricting such access helps to prevent unauthorised access over the network. To restrict null session access over named pipes and shared directories, edit the registry and delete the values, as shown below.

Key Path: HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer

Action: Delete all values

7.1.8 Restrict Anonymous Access

The anonymous access registry setting allows anonymous users to list domain user names and enumerate share names. It should be shut off. The possible settings for this key are:

  • 0—Allow anonymous users
  • 1—Restrict anonymous users
  • 2—Allow users with explicit anonymous permissions

Key Path: HKLM\SYSTEM\CurrentControlSet\Control\Lsa

Action: Set Value = 2

7.1.9 Remote Access to the Registry

Remote access to the registry is another potential opening for hackers. The Windows XP registry editing tools support remote access by default, but only administrators should have remote access to the registry. Fortunately, later versions of Windows turned this off by default. In fact, some experts advise that there should be no remote access to the registry for any person. This point is certainly debatable. If your administrators frequently need to remotely alter registry settings, then completely blocking remote access to them will cause a reduction in productivity of those administrators. However, completely blocking remote access to the registry is certainly more secure. To restrict network access to the registry:

1. Add the following key to the registry: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurePipeServers\winreg.

2. Select winreg, click the Security menu, and then click Permissions.

3. Set the Administrator’s permission to Full Control, make sure no other users or groups are listed, and then click OK.

Recommended Value = 0

7.1.10 Services

A service is a program that runs without direct intervention by the computer user. In Unix/Linux environments, these are referred to as daemons. Many items on your computer are run as services. Internet Information Services, FTP Service, and many system services are good examples. Any running service is a potential starting point for a hacker. Obviously, you must have some services running for your computer to perform its required functions. However, there are services your machine does not use. If you are not using a service, it should be shut down.

7.1.11 Encrypting File System

Beginning with Windows 2000, the Windows operating system has offered the Encrypting File System (EFS), which is based on public key encryption and takes advantage of the CryptoAPI architecture in Windows 2000.

This still exists in Windows 7, 8, and 10; however, with the later versions of Windows, EFS is only available in the upper-end editions of Windows such as Windows Professional. With this system, each file is encrypted using a randomly generated file encryption key, which is independent of a user's public/private key pair; this method makes the encryption resistant to many forms of cryptanalysis-based attacks. For our purposes the exact details of how EFS encryption works are not as important as the practical aspects of using it.

7.1.12 Security Templates

We have been discussing a number of ways for making a Windows system more secure, but exploring services, password settings, registry keys, and other tools can be a daunting task for the administrator who is new to security. Applying such settings to a host of machines can be a tedious task for even the most experienced administrator.

The best way to simplify this aspect of operating system hardening is to use security templates. A security template contains hundreds of possible settings that can control a single or multiple computers. Security templates can control areas such as user rights, permissions, and password policies, and they enable administrators to deploy these settings centrally by means of Group Policy Objects (GPOs).

Security templates can be customized to include almost any security setting on a target computer. A number of security templates are built into Windows. These templates are categorized for domain controllers, servers, and workstations. These security templates have default settings designed by Microsoft. All of these templates are located in the C:\Windows\Security\Templates folder. The following is a partial list of the security templates that you will find in this folder:

  • Hisecdc.inf: This template is designed to increase the security of communications with domain controllers.
  • Hisecws.inf: This template is designed to increase the security of communications for client computers and member servers.
  • Securedc.inf: This template is designed to increase the security of communications with domain controllers, but not to the level of the High Security DC security template.
  • Securews.inf: This template is designed to increase the security of communications for client computers and member servers, but not to the level of the High Security WS security template.
  • Setup security.inf: This template is designed to reapply the default security settings of a freshly installed computer. It can also be used to return a system that has been misconfigured to the default configuration.


7.2 Guided Exercise: Password Policies

Resources           
Files None
Machines Windows Server

In this exercise you will create a new user account and set its password policies.

Login to Windows Server, click the Start button, type gpedit.msc, and press Enter.

The Local Group Policy Editor window will open. Below Computer Configuration expand the folders Windows Settings -> Security Settings -> Account Policies.

Click on Password Policy, then right-click on Minimum password length and select Properties.

On the Minimum password length Properties window, change the value from 0 to 8, click Apply, and then click OK. Then close the Local Group Policy Editor.

Guided Exercise Video

7.3 Configuring Linux

An in-depth review of Linux security would be a lengthy task indeed. One reason is the diversity of Linux setups. Users could be using Debian, Red Hat, Ubuntu, or other Linux distributions. Some might be working from the shell, while others work from some graphical user interfaces such as KDE or GNOME. Fortunately, many of the same security concepts that apply to Windows can be applied to Linux. The only differences lie in the implementation, as explained in the following list:

  • User and account policies should be set up the same in Linux as they are in Windows, with only a few minor differences. These differences are more a matter of using different names in Linux than in Windows. For example, Linux does not have an administrator account; it has a root account.
  • All services (called daemons in Linux) not in use should be shut down.
  • The browser must be configured securely.
  • You must routinely patch the operating system.

In addition to some tactics that are common to Windows and Linux, a few approaches are different for the two operating systems:

  • No application should run as the root user unless absolutely necessary. Remember that the root user is equivalent to the administrator account in Windows. Also, remember that all applications in Linux run as if started by a particular user, and therefore having an application run as root user would give it all administrative privileges.
  • The root password must be complex and must be changed frequently. This is the same as with Windows administrator passwords.
  • Disable all console-equivalent access for regular users. This means blocking access to programs such as shutdown, reboot, and halt for regular users on your server.
  • Hide your system information. When you log in to a Linux box, it displays by default the Linux distribution name, version, kernel version, and the name of the server. This information can be a starting point for intruders. You should just prompt users with a “Login:” prompt.

7.3.1 Disable Services

Every service (daemon) that runs is executing code on the server. If there is a vulnerability within that code, it is a potential weakness that can be leveraged by an attacker; it is also consuming resources in the form of RAM and CPU cycles.

Many operating systems ship with a number of services enabled by default, many of which you may not use. These services should be disabled to reduce the attack surface on your servers. Of course you should not just start disabling services with reckless abandon—before disabling a service, it is prudent to ascertain exactly what it does and determine if you require it.

There are a number of ways to ascertain which services are running on a UNIX system, the easiest of which is to use the “ps” command to list running services. Exact argument syntax can vary between versions, but the “ps ax” syntax works on most systems and will list all currently running processes. For minor variations in syntax on your operating system, check the manual page for “ps” using the command “man ps”.
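For example, the following commands (runnable on most Linux systems) show the start of the process listing and count how many processes are running; any unfamiliar entry in the full listing is worth investigating before deciding whether to disable it:

```shell
# Show the header plus the first few running processes.
ps ax | head -n 5

# Count running processes (subtracting the header line).
count=$(ps ax | tail -n +2 | wc -l)
echo "running processes: $count"
```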

Services should be disabled in start-up scripts (“rc” or “init”, depending on the operating system) unless your system uses “systemd”, in which case you can refer to the following discussion on “systemd”. Using the “kill” command will merely stop the currently running service, which will start once more after a reboot. On Linux the commands are typically one of “rc-update”, “update-rc.d”, or “service”. On BSD-based systems, you typically edit the file /etc/rc.conf.

For example, on several flavours of Linux the service command can be used to stop the sshd service: service sshd stop

To start sshd (one time): service sshd start

And to disable it from starting after a reboot: update-rc.d -f sshd remove

Some Linux distributions have moved toward using “systemd” as opposed to SysV startup scripts to manage services. “systemd” can be used to perform other administrative functions with regards to services, such as reloading configuration and displaying dependency information.

To stop sshd (one time): systemctl stop sshd
To enable sshd upon every reboot: systemctl enable sshd

And to disable sshd upon further reboots: systemctl disable sshd

Older Unix/Linux operating systems may use inetd or xinetd to manage services rather than rc or init scripts. (x)inetd preserves system resources by running as one of the only persistent services and starting other services on demand, rather than leaving them all running all of the time. In that case, services can be disabled by editing the inetd.conf or xinetd.conf file, typically located in the /etc/ directory.
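For illustration, here is a hypothetical excerpt of such a configuration. In a classic inetd.conf, commenting a line out disables that service the next time inetd reloads its configuration (for example, after sending it a HUP signal):

```
# Hypothetical /etc/inetd.conf excerpt -- the telnet service is disabled
# by commenting out its line; ftp remains enabled.
#telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd
ftp      stream  tcp  nowait  root  /usr/sbin/in.ftpd     in.ftpd -l
```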

7.3.2 File Permissions

Most Unix/Linux file systems have a concept of permissions—that is, files which users and groups can read, write, or execute. Most also have the SETUID (set user ID upon execution) permission, which allows a nonroot user to execute a file with the permissions of the owning user, typically root. This is used by commands such as su and sudo, whose normal operation requires root privileges even when run by a nonroot user.

Typically, an operating system will set adequate file permissions on the system files during installation. However, as you create files and directories, permissions will be created according to your umask settings. As a general rule, the umask on a system should only be made more restrictive than the default. Cases where a less restrictive umask is required should be infrequent enough that chmod can be used to resolve the issue. Your umask settings can be viewed and edited using the umask command. See man umask for further detail on this topic.
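A quick sketch of the effect (the file name here is arbitrary): new files start from mode 666, which is then masked by the current umask, so a umask of 027 yields mode 640:

```shell
# Demonstrate how umask shapes the mode of newly created files.
tmpdir=$(mktemp -d)
cd "$tmpdir"

umask 027          # clear group write and all "other" bits
touch report.txt   # files start from 666; 666 & ~027 = 640

# GNU stat prints the octal mode with -c '%a'; BSD stat uses -f '%Lp'.
mode=$(stat -c '%a' report.txt 2>/dev/null || stat -f '%Lp' report.txt)
echo "$mode"
```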

Incorrect file permissions can leave files readable by users other than those for whom they are intended. Many people wrongly believe that because a user has to be authenticated to log in to a host, leaving world- or group-readable files on disk is not a problem. However, they do not consider that services also run using their own user accounts.

Take, for example, a system running a web server such as Apache, nginx, or lighttpd; these web servers typically run under a user ID of their own such as “www-data.” If files you create are readable by “www-data”, then, if configured to do so, accidentally or otherwise, the web server has permission to read that file and to potentially serve it to a browser. By restricting file system-level access, we can prevent this from happening—even if the web server is configured to do so, as it will no longer have permission to open the file.

As an example, the file test can be read and written to by the owner _www, it can be read and executed by the group staff, and can be read by anybody. This is denoted by the rw-, r-x, and r– permissions in the directory listing:

$ ls -al test

-rw-r-xr--  1 _www  staff  1228 16 Apr 05:22 test

In a Unix/Linux file system listing, the permission string contains 10 characters; the first indicates the file type, and the last 9 correspond to read, write, and execute permissions for owner, group, and other (everyone). A hyphen indicates the permission is not set; a letter indicates that it is set. Other special characters appear less often; for example, an s signifies that the SETUID flag has been set.

If we wish to ensure that others can no longer see this file, then we can modify the permissions. We can alter them using the chmod command (o= sets the other permissions to nothing):

$ sudo chmod o= test

$ ls -la test

-rw-r-x---  1 _www  staff  1228 16 Apr 05:22 test
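The same change can be expressed numerically. In this sketch (using a throwaway file rather than the file in the listing above), 640 grants rw- to the owner, r-- to the group, and nothing to others:

```shell
# Numeric (octal) chmod: 6=rw-, 4=r--, 0=---
tmpdir=$(mktemp -d)
touch "$tmpdir/test"
chmod 640 "$tmpdir/test"

ls -l "$tmpdir/test"    # shows -rw-r----- permissions
mode=$(stat -c '%a' "$tmpdir/test" 2>/dev/null || stat -f '%Lp' "$tmpdir/test")
echo "$mode"
```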

7.3.3 File Integrity

File Integrity Management tools monitor key files on the file system and alert the administrator in the event that they change. These tools can be used to ensure that key system files are not tampered with, as in the case of a rootkit, and that files are not added to directories or configuration files modified without the administrator's permission, as can be the case with backdoors in web applications, for example.

There are both commercial tools and free/open source tools available through your preferred package management tool. Examples of open source tools that perform file integrity monitoring include Samhain and OSSEC. If you are looking to spend money to obtain extra features like providing integration with your existing management systems, there are also a number of commercial tools available.

Alternatively, if you cannot for whatever reason install file integrity monitoring tools, many configuration management tools can be configured to report on modified configuration files on the file system as part of their normal operation. This is not their primary function and does not offer the same level of coverage, and so is not as robust as a dedicated tool. However, if you are in a situation where you cannot deploy security tools but do have configuration management in place, this may be of some use.

7.3.4 Separate Disk Partitions

Disk partitions within Unix/Linux can be used not only to distribute the file system across several physical or logical partitions, but also to restrict certain types of action depending on which partition they are taking place on. Options can be placed on each mount point in /etc/fstab.

There are some minor differences between different flavours of Unix/Linux with regards to the options, and so consulting the system manual page—using man mount—before using options is recommended.

Some of the most useful and common mount point options, from a security perspective, are:

nodev

Do not interpret any device special files. If no device special files are expected on the mount point, this option should be used. Typically only the /dev/ mount point would contain device special files.

nosuid

Do not allow setuid execution. Certain core system functions, such as su and sudo, require setuid execution, so this option should be used carefully. Attackers can use setuid binaries as a method of backdooring a system to quickly obtain root privileges from a standard user account. Setuid execution is probably not required outside of the system-installed bin and sbin directories. You can check for the location of setuid binaries using the following command:

$ sudo find / -perm -4000

Binaries that are specifically setuid root, as opposed to any setuid binary, can be located using the following variant:

$ sudo find / -user root -perm -4000

ro

Mount the file system read-only. If data does not need to be written or updated, this option may be used to prevent modification. This removes the ability for an attacker to modify files stored in this location such as config files and static website content.

noexec

Prevents execution, of any type, from that particular mount point. This can be set on mount points used exclusively for data and document storage. It prevents an attacker from using this as a location to execute tools he may load onto a system and it can defeat certain classes of exploit.
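Putting these options together, a hypothetical /etc/fstab excerpt (device names and mount points invented for illustration) might look like:

```
# <device>   <mount point>  <type>  <options>                     <dump> <pass>
/dev/sda3    /srv/data      ext4    defaults,nodev,nosuid,noexec  0      2
/dev/sda4    /srv/www       ext4    defaults,nodev,nosuid,ro      0      2
```

The data partition allows no device files, no setuid execution, and no execution at all; the static web content partition is additionally mounted read-only.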

7.3.5 Chroot

chroot alters the apparent root directory of a running process and any child processes. The most important aspect of this is that the process inside the chroot jail cannot access files outside its new apparent root directory, which is particularly useful for ensuring that a poorly configured or exploited service cannot access anything more than it needs to.

There are two ways in which chroot can be initiated:

The process in question can use the chroot system call and chroot itself voluntarily. Typically, these processes will contain chroot options within their configuration files, most notably allowing the user to set the new apparent root directory.

The chroot wrapper can be used on the command line when executing the command. Typically this would look something like:

sudo chroot /chroot/dir/ /chroot/dir/bin/binary -args

For details of specific chroot syntax for your flavor of Unix, consult man chroot.

It should be noted, however, that there is a common misconception that chroot offers some security features that it simply does not. Chroot jails are not impossible to break out of, especially if the process within the chroot jail is running with root privileges. Typically processes that are specifically designed to use chroot will drop their root privileges as soon as possible so as to mitigate this risk. Additionally, chroot does not offer the process any protection from privileged users outside of the chroot on the same system.

Neither of these are reasons to abandon chroot, but should be considered when designing use cases as it is not an impenetrable fortress, but more a method of further restricting file system access.

7.4 Guided Exercise: Linux File Permissions

Resources              
Files None
Machines Ubuntu Server

Login to Ubuntu Server with the following credentials:

Username: user
Password: Pa$$w0rd

Once logged in, click the terminal icon (last icon) on the left side menu.

Create a directory in /home called ateam with the command sudo mkdir /home/ateam. When prompted, enter the user password and then press Enter.

Create a user called ateam with the command sudo useradd ateam.

Change the user ownership of the ateam directory to the user ateam with the command sudo chown ateam /home/ateam

Create a group called admins with the command sudo groupadd admins

Ensure that on the ateam directory the user ateam and the group admins have full permissions, and that all other users of the system have no permissions. Use the command sudo chmod 770 /home/ateam.

Confirm that you have set the correct permissions using the command sudo ls -ld /home/ateam

In your home directory create a file named file1.txt using the command touch file1.txt

View the permissions of the file using the command ls -l file1.txt

Give the group write permission using the command chmod g+w file1.txt. Then confirm it using the command ls -l file1.txt
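To see how the octal modes used in this exercise map to the symbolic rwx strings shown by ls -l, here is a quick sketch on a throwaway file (it assumes GNU stat, as found on Ubuntu; the file itself is created only for the demonstration):

```shell
#!/bin/sh
# Show how an octal mode maps to the symbolic string displayed by ls -l.
demo=$(mktemp)

chmod 770 "$demo"
stat -c '%A' "$demo"   # -rwxrwx--- : user rwx, group rwx, others none

chmod 644 "$demo"
stat -c '%A' "$demo"   # -rw-r--r-- : user rw, group read, others read

rm -f "$demo"
```

Each octal digit is the sum of read (4), write (2), and execute (1), applied to user, group, and others in that order, so 770 means full permissions for user and group and none for anyone else.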

Guided Exercise Video


7.6 Operating System Patches

From time to time, security flaws are found in operating systems. As software vendors become aware of flaws, they usually write corrections to their code, known as patches or updates. Whatever operating system you use, you must apply these patches as a matter of routine.

Windows patches are probably the most well-known, but patches can be released for any operating system. You should patch your system any time a critical patch is released. You might consider scheduling a specific time simply to update patches. Some organisations find that updating once per quarter or even once per month is necessary.

7.6.1 Applying Patches

Applying patches means that the operating system, database management systems, development tools, Internet browsers, and so on are all checked for patches. In a Microsoft environment this should be easy because the Microsoft website has a utility that scans your system for any required patches to the browser, operating system, or office products. It is a very basic tenet of security to ensure that all patches are up-to-date.

This should be one of your first tasks when assessing a system. Regardless of the operating system or application vendor, you should be able to go to its website and find information regarding how to download and install the latest patches. But remember that everything must be patched—the operating system, applications, drivers, network equipment (switches, routers, etc.), literally everything.

Once you have ensured that all patches are up to date, the next step is to set up a system to ensure that they are kept up to date. One simple method is to initiate a periodic patch review where, at a scheduled time, all machines are checked for patches. There are also automated solutions that will patch all systems in your organisation. It is imperative that all machines be patched, not just the servers.

7.6.2 Automated Patch Systems

Manually patching machines can be quite cumbersome, and in larger networks, simply impractical. However, there are automated solutions that will patch all systems on your network. These solutions scan your systems at pre-set times and update any required patches.

7.6.3 Windows Update

For systems running Microsoft Windows, you can set up Windows to automatically patch your system. Recent versions of Windows have this turned on automatically. If your system is older, simply go to https://support.microsoft.com/en-us/help/12373/windows-update-faq and follow the instructions to keep your system updated. This will give that individual machine routine updates for the Windows operating system.

This approach does have a few shortcomings, the first being that it will only update Windows and not any other applications on your machine. The second drawback is that it does not provide any way to check patches on a test machine before deploying them to the entire network. Its main advantages are that it is free, and integrated with the Windows operating system.

Another commonly overlooked protection is a software update platform. Windows Server Update Services (WSUS), System Centre Configuration Manager (SCCM), and other third-party applications can keep endpoints up-to-date with the latest security patches. Beyond regular Windows system patches, there should also be a focus on outdated versions of commonly exploited software currently in use, such as Java, Adobe Reader, and Firefox.

7.6.4 Unix/Linux Software Updates

Unlike Microsoft environments, Unix-based environments typically use a system of package management to install the majority of third-party applications.

Package management and update tools vary depending not only on which flavor of Unix you are running, but also on which distribution you use. For example, Debian Linux and SUSE Linux use two different package management systems, and FreeBSD uses a third.

Despite the differences, there are common themes surrounding the package management systems. Typically, each host will hold a repository of packages that are available to install on the system via local tools. The system administrator issues commands to the package management system to indicate that she wishes to install, update, or remove packages. The package management system will, depending on configuration, either download and compile, or download a binary of the desired package and its dependencies (libraries and other applications required to run the desired application), and install them on the system.

The various package management systems are so comprehensive in a modern distribution that for many environments it would be unusual to require anything further. Deploying software via package management, as opposed to downloading from elsewhere, is the preference unless there is a compelling reason to do otherwise. This greatly simplifies the issue of staying up-to-date and tracking dependencies.

The same package management system can be used to perform upgrades. As the repository of available packages is updated, new versions of already installed packages appear in the package database. These new version numbers can be compared against the installed version numbers, and a list of applications due for an upgrade can be determined automatically, typically via a single command.
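The single upgrade command differs from system to system. As a hedged sketch, the snippet below detects which common package manager is present and prints the usual update-and-upgrade invocation for it; the tool list is illustrative rather than exhaustive, and the upgrade commands themselves need root, so they are only printed:

```shell
#!/bin/sh
# Detect the local package manager and print its usual upgrade commands.
# (The candidate list covers common systems but is not exhaustive.)
found=""
for pm in apt-get dnf yum zypper pkg; do
    command -v "$pm" >/dev/null 2>&1 && { found=$pm; break; }
done

echo "Package manager: ${found:-none found}"
case "$found" in
    apt-get) echo "Run as root: apt-get update && apt-get upgrade" ;;
    dnf|yum) echo "Run as root: $found check-update && $found upgrade" ;;
    zypper)  echo "Run as root: zypper refresh && zypper update" ;;
    pkg)     echo "Run as root: pkg update && pkg upgrade" ;;
esac
```

In every case the pattern is the same: refresh the local package database first, then apply the pending upgrades in one pass.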

This ease of upgrade means that, unless a robust system for checking and applying updates to installed applications is already in place, the package management system should be used as an easy, automated method of updating all packages on UNIX application servers.

Not only does this remove the need to manually track each application installed on the application servers, along with all their associated dependencies, but it (typically) means that it has already been tested and confirmed to work on that distribution. Of course, individual quirks between systems mean that you cannot be sure that everything will always work smoothly, and so the testing process should remain. However, the testing process may be entered with a good degree of confidence.

7.6.4.1 Core Operating System Updates

Many, but not all, UNIX systems have a delineation between the operating system and applications that are installed on it. As such, the method of keeping the operating system itself up-to-date will often differ from that of the applications. The method of upgrading will vary from operating system to operating system, but the upgrade methods fall into two broad buckets:

Binary update

Commercial operating systems particularly favour the method of applying a binary update; that is, distributing precompiled binary executables and libraries that are copied to disk, replacing the previous versions. Binary updates cannot make use of custom compiler options and must make assumptions about dependencies, but they require less work in general and are fast to install.

Update from source

Many open source operating systems favour updates from source, meaning that they are compiled locally from a copy of the source code and the previous versions on disk are replaced by the new binaries. Updating from source takes more time and is more complex; however, the operating system can include custom compiler optimizations and patches.

There are many debates over which system is better, and each has its pros and cons. For the purposes of this book, however, we will assume that you are sticking with the default of your operating system as the majority of arguments centre on topics unrelated to security.

Updates to the operating system are typically less frequent than updates to third-party software. They are also more disruptive, as they usually require a reboot because they often involve the kernel or other subsystems that load only at startup, unlike application updates, which can take effect via a restart of the appropriate daemon. Core operating system updates are nevertheless advisable, as vulnerabilities are often found in both operating systems and applications.

As with any other patch of this nature, it is advisable to have a rollback plan in place for any large update such as one for an operating system. In the case of virtualized infrastructure, this can be achieved simply by taking a snapshot of the file system prior to upgrade; thus a failed upgrade can be simply rolled back by reverting to the last snapshot. In physical infrastructure this can be more problematic, but most operating systems have mechanisms to cope with this issue, typically by storing a copy of the old binaries and replacing them if required.

Nevertheless, patches to the operating system are often required in order to close security gaps, so you should have a process defined to cope with this. As with applications, the effort to upgrade the operating system is lower the more up-to-date a system already is, so we recommend remaining as current as is reasonable, leaving only small increments to update at any one time.

 
