Access Control
The goal of access control is to allow access by authorized individuals and devices and to disallow access to all others.
Authorized individuals may be employees, technology service provider (TSP) employees, vendors, contractors, customers, or visitors. Access should be authorized and provided only to individuals whose identity is established, and their activities should be limited to the minimum required for business purposes.
Authorized devices are those whose placement on the network is approved in accordance with institution policy. Change controls are typically used for devices inside the external perimeter, and to configure institution devices to accept authorized connections from outside the perimeter.
An effective control mechanism includes numerous controls that safeguard and limit access to key information system assets at all layers in the network stack. This section addresses logical and administrative controls, including access rights administration for individuals and network access issues. A subsequent section addresses physical security controls.
Access Rights Administration
System devices, programs, and data are system resources. Each system resource may need to be accessed by individuals (users) in order for work to be performed. Access beyond the minimum required for work to be performed exposes the institution’s systems and information to a loss of confidentiality, integrity, and availability. Accordingly, the goal of access rights administration is to identify and restrict access to any particular system resource to the minimum required for work to be performed. The financial institution’s security policy should address access rights to system resources and how those rights are to be administered.
Management and information system administrators should critically evaluate information system access privileges and establish access controls to prevent unwarranted access. Access rights should be based upon the needs of the applicable user to carry out legitimate and approved activities on the financial institution’s information systems. Policies, procedures, and criteria need to be established for both the granting of appropriate access rights and for the purpose of establishing those legitimate activities.
Formal access rights administration for users consists of four processes:
- An enrollment process to add new users to the system
- An authorization process to add, delete, or modify authorized user access to operating systems, applications, directories, files, and specific types of information
- An authentication process to identify the user during subsequent activities
- A monitoring process to oversee and manage the access rights granted to each user on the system
The enrollment process establishes the user’s identity and anticipated business needs for information and systems. New employees, IT outsourcing relationships, and contractors may also be identified, and the business need for access determined during the hiring or contracting process.
During enrollment and thereafter, an authorization process determines user access rights. In certain circumstances the assignment of access rights may be performed only after the manager responsible for each accessed resource approves the assignment and documents the approval. In other circumstances, the assignment of rights may be established by the employee’s role or group membership, and managed by pre-established authorizations for that group. Customers, on the other hand, may be granted access based on their relationship with the institution.
Authorization for privileged access should be tightly controlled. Privileged access refers to the ability to override system or application controls.
Good practices for controlling privileged access include:
- Identifying each privilege associated with each system component
- Implementing a process to allocate privileges and allocating those privileges either on a need-to-use or an event-by-event basis
- Documenting the granting and administrative limits on privileges
- Finding alternate ways of achieving the business objectives
- Assigning privileges to a unique user ID apart from the one used for normal business use
- Logging and auditing the use of privileged access
- Reviewing privileged access rights at appropriate intervals and regularly reviewing privilege allocations
- Prohibiting shared privileged access by multiple users
The access rights process programs the system to allow the users only the access rights they were granted. Since access rights do not automatically expire or update, periodic updating and review of access rights on the system is necessary. Updating should occur when an individual’s business needs for system use change. Many job changes can result in an expansion or reduction of access rights. Job events that would trigger a removal of access rights include transfers, resignations, and terminations. When these job events occur, institutions should take particular care to promptly remove the access rights of users who have remote access privileges, who have access to customer information, or who perform administration functions for the institution’s systems.
Because updating may not always be accurate, periodic review of user accounts is a good control to test whether the access right removal processes are functioning and whether users exist who should have their rights rescinded or reduced. Financial institutions should review access rights on a schedule commensurate with risk.
Access rights to new software and hardware present a unique problem. Typically, hardware and software are shipped with default users, with at least one default user having full access rights. Easily obtainable lists identify the default users and passwords for popular software, enabling anyone with access to the system to obtain the default user’s access. Default user accounts should either be disabled, or the authentication to the account should be changed. Additionally, access to these default accounts should be monitored more closely than other accounts.
Sometimes software installs with a default account that allows anonymous access. Anonymous access is appropriate, for instance, where the general public accesses an informational Web server. Systems that allow access to or store sensitive information, including customer information, should be protected against anonymous access.
The access rights process also constrains user activities through an acceptable-use policy (AUP). Users who can access internal systems typically are required to agree to an AUP before using a system. An AUP details the permitted system uses and user activities and the consequences of noncompliance. AUPs can be created for all categories of system users, from internal programmers to customers. An AUP is a key control for user awareness and administrative policing of system activities.
Examples of AUP elements for internal network and stand-alone users include:
- The specific access devices that can be used to access the network
- Hardware and software changes the user can make to their access device
- The purpose and scope of network activity
- Network services that can be used and those that cannot be used
- Information that is allowable and not allowable for transmission using each allowable service
- Bans on attempting to break into accounts, crack passwords, or disrupt service
- Responsibilities for secure operation
- Consequences of noncompliance
Depending on the risk associated with the access, authorized internal users should generally receive a copy of the policy and appropriate training, and signify their understanding and agreement with the policy before management grants access to the system.
Customers may be provided with a Web site disclosure as their AUP. Based on the nature of the Web site, the financial institution may require customers to demonstrate knowledge of and agreement to abide by the terms of the AUP. That evidence can be paper based or electronic.
Authorized users may seek to extend their activities beyond what is allowed in the AUP, and unauthorized users may seek to gain access to the system and move within the system. Network security controls provide many of the protections necessary to guard against those threats.
Authentication
Authentication is the verification of identity by a system based on the presentation of unique credentials to that system. The unique credentials are in the form of something the user knows, something the user has, or something the user is. Those forms exist as shared secrets, tokens, or biometrics. More than one form can be used in any authentication process. Authentication that relies on more than one form is called multi-factor authentication and is generally stronger than any single-factor authentication method. Authentication contributes to the confidentiality of data and the accountability of actions performed on the system by verifying the unique identity of the system user.
Authentication over the Internet banking delivery channel presents unique challenges. That channel does not benefit from physical security and controlled computing and communications devices like internal local area networks (LANs), and is used by people whose actions cannot be controlled by the institution. The agencies consider the use of single-factor authentication in that environment, as the only control mechanism, to be inadequate for high-risk transactions involving access to customer information or the movement of funds to other parties. Financial institutions should perform risk assessments of their environment and, where the risk assessments indicate the use of single-factor authentication is inadequate, the institutions should implement multi-factor authentication, layered security, or other controls reasonably calculated to mitigate risk.
Authentication is not identification as that term is used in the USA PATRIOT Act (31 USC 5318(l)). Authentication does not provide assurance that the initial identification of a system user is proper. Procedures for the initial identification of a system user are beyond the scope of this booklet.
Shared secret systems uniquely identify the user by matching knowledge on the system to knowledge that only the system and user are expected to share. Examples are passwords, pass phrases, or current transaction knowledge. A password is one string of characters (e.g., “t0Ol@Tyme”). A pass phrase is typically a string of words or characters (e.g., “My car is a shepherd”) that the system may shorten to a smaller password by means of an algorithm. Current transaction knowledge could be the account balance on the last statement mailed to the user/customer. The strength of shared secret systems is related to the lack of disclosure of and about the secret, the difficulty in guessing or discovering the secret, and the length of time that the secret exists before it is changed.
A strong shared secret system involves only the user and the system in the generation of the shared secret. In the case of passwords and pass phrases, the user should select them without any assistance from any other user, such as the help desk. One exception is the creation of new accounts, where a temporary shared secret could be given to the user for the first log-in, after which the system requires the user to create a different password. Controls should prevent any user from re-using shared secrets that may have been compromised or that they recently used.
Passwords are the most common authentication mechanism. Passwords are generally made difficult to guess when they are composed from a large character set, contain a large number of characters, and are frequently changed. However, since hard-to-guess passwords may be difficult to remember, users may take actions that weaken security, such as writing the passwords down. Any password system must balance the password strength with the user’s ability to maintain the password as a shared secret. When the balancing produces a password that is not sufficiently strong for the application, a different authentication mechanism should be considered. Pass phrases are one alternative to consider. Due to their length, pass phrases are generally more resistant to attack than passwords. The length, character set, and time before enforced change are important controls for pass phrases as well as passwords.
Shared secret strength is typically assured through the use of automated tools that enforce the password selection policy. Authentication systems should force changes to shared secrets on a schedule commensurate with risk.
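A minimal sketch of such an automated policy check appears below; the length, character-class, and re-use rules are illustrative, not prescriptive, and would be set commensurate with risk.

```python
import re

# Illustrative policy values; an institution would tune these to its risk assessment.
MIN_LENGTH = 12
REQUIRED_CLASSES = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]

def violates_policy(candidate: str, recent_secrets: set[str]) -> list[str]:
    """Return the list of policy rules the candidate password breaks."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if any(not re.search(pattern, candidate) for pattern in REQUIRED_CLASSES):
        problems.append("does not draw on the full character set")
    if candidate in recent_secrets:
        problems.append("re-uses a recent or compromised secret")
    return problems

if __name__ == "__main__":
    # Strong character mix, but too short under this illustrative policy.
    print(violates_policy("t0Ol@Tyme", {"password1"}))
```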
Passwords that are either not changed or changed infrequently are known as static passwords. While all passwords are subject to disclosure, static passwords are significantly more vulnerable. An attacker can obtain a password through technical means and through social engineering. Internet banking customers are targeted for such social engineering through phishing attacks. Institution employees and contractors may be similarly targeted. Static passwords are appropriate in systems whose data and connectivity are considered low risk, and in systems that employ effective compensating controls such as physical protections, device authentication, mutual authentication, host security, user awareness, and effective monitoring and rapid response.
Weaknesses in static password mechanisms generally relate to the ease with which an attacker can discover the secret.
Attack methods vary.
- A keylogger or other monitoring device on the user’s computer may record shared secrets and transmit them to the attacker. Keyloggers and other similar devices are an emerging problem for e-banking applications because financial institutions do not control the customer’s computer.
- Controls to protect against keyloggers include using different authentication methods such as dynamic passwords.
- A dictionary attack is one common and successful way to discover passwords. In a dictionary attack, the attacker obtains the system password file and compares the password hashes against hashes of commonly used passwords (a minimal sketch of this attack appears after this list).
- Controls against dictionary attacks include securing the password file from compromise, detection mechanisms to identify a compromise, heuristic intrusion detection to detect differences in user behavior, and rapid reissuance of passwords should the password file ever be compromised. While extensive character sets and storing passwords as one-way hashes can slow down a dictionary attack, those defensive mechanisms primarily buy the financial institution time to identify and react to the password file compromises.
- An additional attack method targets a specific account and submits passwords until the correct password is discovered.
- Controls against these attacks are account lockout mechanisms, which commonly lock out access to the account after a risk-based number of failed login attempts.
- A variation of the previous attack uses a popular password and tries it against a wide range of usernames.
- Controls against this attack on the server are a high ratio of possible passwords to usernames, randomly generated passwords, and scanning the Internet Protocol (IP) addresses of authentication requests and client cookies for submission patterns.
- Password guessing attacks also exist. These attacks generally consist of an attacker gaining knowledge about the account holder and password policies and using that knowledge to guess the password.
- Controls include training in and enforcement of password policies that make passwords difficult to guess. Such policies address the secrecy, length of the password, character set, prohibition against using well-known user identifiers, and length of time before the password must be changed. Users with greater authorization or privileges, such as root users or administrators, should have longer, more complex passwords than other users.
- Some attacks depend on patience, waiting until the logged-in workstation is unattended.
- Controls include automatically logging the workstation out after a period of inactivity and heuristic intrusion detection.
- Attacks can take advantage of automatic log-in features, allowing the attacker to assume an authorized user’s identity merely by using a workstation.
- Controls include prohibiting and disabling automatic log-in features, screensaver activation that requires user re-authentication, and heuristic intrusion detection.
- Users’ inadvertent or unthinking actions can compromise passwords. For instance, when a password is too complex to readily memorize, the user could write the password down but not secure the paper. Frequently, written-down passwords are readily accessible to an attacker under mouse pads or in other places close to the user’s machine. Additionally, attackers frequently succeed in obtaining passwords by using social engineering and tricking the user into giving up their password.
- Controls include user training, heuristic intrusion detection, and simpler passwords combined with another authentication mechanism.
- Attacks can also become much more effective or damaging if different network devices share the same or a similar password.
- Controls include a policy that forbids the same or similar password on particular network devices.
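The dictionary attack described in the list above can be sketched in a few lines. This example assumes an unsalted SHA-256 password file; real password stores vary, and the usernames and passwords are invented.

```python
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical stolen password file: username -> unsalted hash.
stolen_file = {"alice": sha256_hex("letmein"), "bob": sha256_hex("K@tq9..zR")}

# The attacker hashes a list of likely passwords once, then compares.
dictionary = ["123456", "password", "letmein", "qwerty"]
precomputed = {sha256_hex(word): word for word in dictionary}

for user, stored_hash in stolen_file.items():
    if stored_hash in precomputed:
        print(f"{user}: password is '{precomputed[stored_hash]}'")
    else:
        print(f"{user}: not in dictionary")
```

Because the comparison runs against every stolen hash at once, even a short dictionary recovers any account that chose a common password, which is why the controls above emphasize protecting the password file itself.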
Passwords can also be dynamic. Dynamic passwords typically use seeds, or starting points, and algorithms to calculate a new shared secret for each access. Because each password is used for only one access, dynamic passwords can provide significantly more authentication strength than static passwords. In most cases, dynamic passwords are implemented through tokens. A token is a physical device, such as an ATM card, smart card, or other device that contains information used in the authentication process.
Token Systems
Token systems typically authenticate the token and assume that the user who was issued the token is the one requesting access. One example is a token that generates dynamic passwords after a set number of seconds. When prompted for a password, the user enters the password generated by the token. The token’s password-generating system is identical and synchronized to that in the system, allowing the system to recognize the password as valid. The strength of this system of authentication rests in the frequent changing of the password and the inability of an attacker to guess the seed and password at any point in time.
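The time-synchronized approach can be sketched as follows, loosely following the time-based one-time password construction standardized in RFC 6238; the seed, interval, and digit count here are illustrative.

```python
import hashlib
import hmac
import struct
import time

def time_based_password(seed: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short-lived password from a shared seed and the current time."""
    counter = int(time.time()) // interval           # changes every `interval` seconds
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Token and server hold the same seed from enrollment; because their clocks
# are synchronized, both compute the same value for the current interval.
seed = b"shared-seed-provisioned-at-enrollment"
print(time_based_password(seed))
```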
Another example of a token system uses a challenge/response mechanism. In this case, the user identifies him/herself to the system, and the system returns a code to enter into the password-generating token. The token and the system use identical logic and initial starting points to separately calculate a new password. The user enters that password into the system. If the system’s calculated password matches that entered by the user, the user is authenticated. The strengths of this system are the frequency of password change and the difficulty in guessing the challenge, seed, and password.
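The challenge/response exchange can be sketched the same way; the HMAC-based logic below is an assumption for illustration, as actual tokens use vendor-specific algorithms and seeds.

```python
import hashlib
import hmac
import secrets

shared_seed = b"seed-embedded-in-token-and-server"

def respond(seed: bytes, challenge: bytes) -> str:
    """Token and server run this identical logic on the same challenge."""
    return hmac.new(seed, challenge, hashlib.sha256).hexdigest()[:8]

# The server issues a random challenge, the user keys it into the token...
challenge = secrets.token_bytes(8)
token_response = respond(shared_seed, challenge)
# ...and the server authenticates the user if the two results match.
assert hmac.compare_digest(token_response, respond(shared_seed, challenge))
print("user authenticated")
```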
Other token methods involve multi-factor authentication, or the use of more than one authentication method. For instance, an ATM card is a token. The magnetic stripe on the back of the card contains a code that is recognized in the authentication process. However, the user is not authenticated until he or she also provides a PIN, or shared secret. This method is two-factor, using both something the user has and something the user knows. Two-factor authentication is generally stronger than single-factor authentication. This method can allow the institution to authenticate the user as well as the token.
Weaknesses in token systems relate to theft or loss of the token either during delivery to the user or while in the possession of the user; ease in guessing any password-generating algorithm within the token; ease of successfully forging any authentication credential that unlocks the token; the reverse engineering, or cloning, of the token; and “man-in-the-middle” attacks. Each of these weaknesses can be addressed through additional control mechanisms. Token theft or loss generally is protected against by policies that require prompt reporting and cancellation of the token’s ability to allow access to the system, and monitoring of token delivery and use. Additionally, the impact of token theft is reduced when the token is used in multi-factor authentication; for instance, the password from the token is paired with a password known only by the user and the system. This pairing reduces the risk posed by token loss, while increasing the strength of the authentication mechanism. Forged credentials are protected against by the same methods that protect credentials in non-token systems. Protection against reverse engineering requires physical and logical security in token design. For instance, token designers can increase the difficulty of opening a token without causing irreparable damage, or obtaining information from the token either by passive scanning or active input/output. Man-in-the-middle attacks can be protected against through the use of public key infrastructure (PKI).
Token systems can also incorporate public key infrastructure and biometrics.
Public Key Infrastructure
Public key infrastructure, if properly implemented and maintained, can provide a strong means of authentication. By combining a variety of hardware components, system software, policies, practices, and standards, PKI can provide for authentication, data integrity, defenses against customer repudiation, and confidentiality. The system is based on public key cryptography in which each user has a key pair—a unique electronic value called a public key and a mathematically related private key. The public key is made available to those who need to verify the user’s identity.
The private key is stored on the user’s computer or a separate device such as a smart card. When the key pair is created with strong encryption algorithms and input variables, the probability of deriving the private key from the public key is extremely remote. The private key must be stored in encrypted text and protected with a password or PIN to avoid compromise or disclosure. The private key is used to create an electronic identifier called a digital signature that uniquely identifies the holder of the private key and can only be authenticated with the corresponding public key.
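A minimal sketch of creating and verifying a digital signature, using the third-party Python `cryptography` package; the library choice, key size, and padding scheme are illustrative assumptions, not a prescribed toolkit.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair generation; in practice the private key is created and kept on the
# user's computer, smart card, or other protected device.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer $500 to account 1234"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key can verify; verify() raises on any mismatch,
# so altering the message or the signature causes authentication to fail.
try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: message came from the private-key holder")
except InvalidSignature:
    print("signature invalid")
```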
The certificate authority (CA), which may be the financial institution or its service provider, plays a key role by attesting with a digital certificate that a particular public key and the corresponding private key belong to a specific user or system. It is important when issuing a digital certificate that the registration process for initially verifying the identity of users is adequately controlled. The CA attests to the individual user’s identity by signing the digital certificate with its own private key, known as the root key. Each time the user establishes a communication link with the financial institution’s systems, a digital signature is transmitted with a digital certificate. These electronic credentials enable the institution to determine that the digital certificate is valid, identify the individual as a user, and confirm that transactions entered into the institution’s computer system were performed by that user.
The user’s private key exists electronically and is susceptible to being copied over a network as easily as any other electronic file. If it is lost or compromised, the user can no longer be assured that messages will remain private or that fraudulent or erroneous transactions would not be performed. User AUPs and training should emphasize the importance of safeguarding a private key and promptly reporting its compromise.
PKI minimizes many of the vulnerabilities associated with passwords because it does not rely on shared secrets to authenticate customers, its electronic credentials are difficult to compromise, and user credentials cannot be stolen from a central server. The primary drawback of a PKI authentication system is that it is more complicated and costly to implement than user names and passwords. Whether the financial institution acts as its own CA or relies on a third party, the institution should ensure its certificate issuance and revocation policies and other controls discussed below are followed.
When utilizing PKI policies and controls, financial institutions need to consider the following:
- Defining within the certificate issuance policy the methods of initial verification that are appropriate for different types of certificate applicants and the controls for issuing digital certificates and key pairs
- Selecting an appropriate certificate validity period to minimize transactional and reputation risk exposure—expiration provides an opportunity to evaluate the continuing adequacy of key lengths and encryption algorithms, which can be changed as needed before issuing a new certificate
- Ensuring that the digital certificate is valid by such means as checking a certificate revocation list before accepting transactions accompanied by a certificate
- Defining the circumstances for authorizing a certificate’s revocation, such as the compromise of a user’s private key or the closing of user accounts
- Updating the database of revoked certificates frequently, ideally in real-time mode
- Employing stringent measures to protect the root key including limited physical access to CA facilities, tamper-resistant security modules, dual control over private keys and the process of signing certificates, as well as the storage of original and back-up keys on computers that do not connect with outside networks
- Requiring regular independent audits to ensure controls are in place, public and private key lengths remain appropriate, cryptographic modules conform to industry standards, and procedures are followed to safeguard the CA system
- Recording in a secure audit log all significant events performed by the CA system, including the use of the root key, where each entry is time/date stamped and signed
- Regularly reviewing exception reports and system activity by the CA’s employees to detect malfunctions and unauthorized activities; and
- Ensuring the institution’s certificates and authentication systems comply with widely accepted PKI standards to retain the flexibility to participate in ventures that require the acceptance of the financial institution’s certificates by other CAs
The encryption components of PKI are addressed more fully under Encryption.
Biometrics
Biometrics can be implemented in many forms, including tokens. Biometrics verifies the identity of the user by reference to unique physical or behavioral characteristics. A physical characteristic can be a thumbprint or iris pattern. A behavioral characteristic is the unique pattern of key depression strength and pauses made on a keyboard when a user types a phrase. The strength of biometrics is related to the uniqueness of the physical characteristic selected for verification. Biometric technologies assign data values to the particular characteristics associated with a certain feature. For example, the iris typically provides many more characteristics to store and compare than the face, making it more distinctive. Unlike other authentication mechanisms, a biometric authenticator does not rely on a user’s memory or possession of a token to be effective. Additional strengths are that biometrics do not rely on people to keep their biometric secret or physically secure their biometric. Biometrics is the only authentication methodology with these advantages.
Enrollment is a critical process for the use of biometric authentication. The user’s physical characteristics must be reliably recorded. Reliability may require several samples of the characteristic and a recording device free of lint, dirt, or other interference. The enrollment device must be physically secure from tampering and unauthorized use.
When enrolled, the user’s biometric is stored as a template. Subsequent authentication is accomplished by comparing a submitted biometric against the template, with results based on probability and statistical confidence levels. Practical usage of biometric solutions requires consideration of how precise systems must be for positive identification and authentication. More precise solutions increase the chance that a valid user is falsely rejected. Conversely, less precise solutions can result in the wrong person being identified or authenticated as a valid user (i.e., a higher false acceptance rate). The equal error rate (EER) is a composite rating that considers the false rejection and false acceptance rates. Lower EERs mean more consistent operations. However, EER is typically based upon laboratory testing and may not be indicative of actual results due to factors that can include the consistency of biometric readers in capturing data over time, variations in how users present their biometric sample (e.g., occasionally pressing harder on a finger scanner), and environmental factors.
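The trade-off between false acceptance and false rejection, and the EER as the point where the two rates cross, can be illustrated with invented matcher scores; all numbers below are hypothetical.

```python
# Hypothetical similarity scores (0-100) from a biometric matcher.
genuine_scores = [78, 85, 90, 60, 88, 95, 82, 76]   # same-person comparisons
impostor_scores = [40, 55, 62, 35, 58, 80, 50, 45]  # different-person comparisons

def rates(threshold: int) -> tuple[float, float]:
    """False acceptance rate and false rejection rate at a given threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Sweep thresholds; the EER is where FAR and FRR are closest to equal.
eer_threshold = min(range(101), key=lambda t: abs(rates(t)[0] - rates(t)[1]))
far, frr = rates(eer_threshold)
print(f"threshold={eer_threshold}  FAR={far:.2f}  FRR={frr:.2f}")
# Raising the threshold lowers FAR but raises FRR, and vice versa.
```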
Weaknesses in biometric systems relate to the ability of an attacker to submit false physical characteristics or to take advantage of system flaws to make the system erroneously report a match between the characteristic submitted and the one stored in the system. In the first situation, an attacker might submit to a thumbprint recognition system a copy of a valid user’s thumbprint. The control against this attack involves ensuring a live thumb was used for the submission. That can be done by physically controlling the thumb reader, for instance having a guard at the reader to make sure no tampering or fake thumbs are used. In remote entry situations, logical liveness tests can be performed to verify that the submitted data is from a live subject.
Attacks that involve making the system falsely deny or accept a request take advantage of either the low degrees of freedom in the characteristic being tested, or improper system tuning. Degrees of freedom relate to measurable differences between biometric readings, with more degrees of freedom indicating a more unique biometric. Facial recognition systems, for instance, may have only nine degrees of freedom while other biometric systems have over one hundred. Similar faces may be used to fool the system into improperly authenticating an individual. Similar irises, however, are difficult to find, and it is even more difficult to use them to fool a system into improperly authenticating an individual.
Attacks against system tuning also exist. Any biometric system has rates at which it will falsely accept a reading and falsely reject a reading. The two rates are inseparable; for any given system improving one worsens the other. Systems that are tuned to maximize user convenience typically have low rates of false rejection and high rates of false acceptance. Those systems may be more open to successful attack.
Authenticator Reissuance
Authorized users may need to have authenticators reissued. Many situations create that need, such as the user forgetting the shared secret, losing a token, or the change of a biometric identifier. Prior to reissuing an authenticator, institutions should appropriately verify the identity of the receiving individual. The strength of the verification should be appropriate to mitigate the risk of impersonation. For example, the comparison of Internet-banking customer responses to questions regarding readily available public information generally is not an adequate risk mitigator.
Behavioral Authentication
Behavioral authentication is the assurance gained from comparing connection-related and activity-related information with expectations. For example, many institutions may expect Internet banking activity from certain Internet Protocol (IP) ranges to use certain user agents, to traverse the Web site in a certain manner, and to submit transactions that have certain characteristics. Although behavioral authentication does not provide strong assurance that individuals are who they claim to be, it may provide a strong indication that authenticators presented are from an imposter. Accordingly, behavioral authentication is frequently useful to supplement other means of authentication.
Device Authentication
Device authentication typically takes place either as a supplement to the authentication of individuals or when assurance is needed that the device is authorized to be on the network.
Devices are authenticated through either shared secrets, such as pre-shared keys, or the use of PKI. Authentication can take place at the network level and above. At the network level, IPv6 has the built-in ability to authenticate each device.
Device authentication is subject to the same shared-secret and PKI weaknesses as user authentication, and is subject to similar offsetting controls. Additionally, similar to user authentication, if the device is under the attacker’s control or if the authentication mechanism has been compromised, communications from the device should not be trusted.
Mutual Authentication
Mutual authentication occurs when all parties to a communication authenticate themselves to the other parties. Authentication can be single-factor or multi-factor. An example of mutual authentication is the identification of an Internet banking user to the institution, the display of a shared secret from the institution to the user, and the presentation of a shared secret from the user back to the institution. An advantage of mutual authentication is the assurance that communications are between trusted parties. However, various attacks, such as man-in-the-middle attacks, can thwart mutual authentication schemes.
Authentication for Single Sign-On Protocols
Several single sign-on protocols are in use. Those protocols allow clients to authenticate themselves once to obtain access to a range of services. An advantage of single sign-on systems is that users do not have to remember or possess multiple authentication mechanisms, potentially allowing for more complex authentication methods and fewer user-created weaknesses. Disadvantages include the broad system authorizations potentially tied to any given successful authentication, the centralization of authenticators in the single sign-on server, and potential weaknesses in the single sign-on technologies.
When single sign-on systems allow access for a single log-in to multiple instances of sensitive data or systems, financial institutions should employ robust authentication techniques, such as multi-factor, PKI, and biometric techniques. Financial institutions should also employ additional controls to protect the authentication server and detect attacks against the server and server communications.
Examples of Common Authentication Weaknesses, Attacks, and Offsetting Controls
All authentication methodologies display weaknesses. Those weaknesses are of both a technical and a nontechnical nature. Many of the weaknesses are common to all mechanisms. Examples of common weaknesses include warehouse attacks, social engineering, client attacks, replay attacks, man-in-the-middle attacks, and hijacking.
Warehouse attacks result in the compromise of the authentication storage system and the theft of the authentication data. Frequently, the authentication data is encrypted; however, dictionary attacks make decryption of even a few passwords in a large group a trivial task. A dictionary attack uses a list of likely authenticators, such as passwords, runs the likely authenticators through the encryption algorithm, and compares the result to the stolen, encrypted authenticators. Any matches are easily traceable to the pre-encrypted authenticator.
Dictionary and brute force attacks are viable due to the speeds with which comparisons are made. As microprocessors increase in speed, and technology advances to ease the linking of processors across networks, those attacks will become even more effective. Because those attacks are effective, institutions should take great care in securing their authentication databases. Institutions that use one-way hashes should consider the insertion of random bits (known as “salt”) to increase the difficulty of decrypting the hash. The salt has the effect of increasing the number of potential authenticators that attackers must check for validity, thereby making the attacks more time consuming and creating more opportunity for the institution to identify and react to the attack.
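A minimal sketch of salted one-way password storage using Python’s standard library; the iteration count and salt length are illustrative.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative work factor; set commensurate with risk

def store(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); only these values are kept in the database."""
    salt = secrets.token_bytes(16)  # random per-account bits
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, digest = store("My car is a shepherd")
assert check("My car is a shepherd", salt, digest)
# The per-account salt forces an attacker to rerun the entire dictionary
# against every stolen hash rather than compare each against one
# precomputed table, buying the institution time to detect and respond.
```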
Warehouse attacks typically compromise an entire authentication mechanism. Should such an attack occur, the financial institution might have to deny access to all or nearly all users until new authentication devices can be issued (e.g., new passwords). Institutions should consider the effects of such a denial of access and appropriately plan for large-scale reissuance of authentication devices.
Social engineering involves an attacker obtaining authenticators by simply asking for them. For instance, the attacker may masquerade as a legitimate user who needs a password reset or as a contractor who must have immediate access to correct a system performance problem. By using persuasion, being aggressive, or using other interpersonal skills, the attackers encourage a legitimate user or other authorized person to give them authentication credentials. Controls against these attacks involve strong identification policies and employee training.
Client attacks are an area of vulnerability common to all authentication mechanisms. Passwords, for instance, can be captured by hardware- or software-based keystroke capture mechanisms. PKI private keys could be captured or reverse-engineered from their tokens. Protection against these attacks primarily consists of physically securing the client systems, and, if a shared secret is used, changing the secret on a frequency commensurate with risk. While physically securing the client system is possible within areas under the financial institution’s control, client systems outside the institution may not be similarly protected.
Replay attacks occur when an attacker eavesdrops and records the authentication as it is communicated between a client and the financial institution system and then later uses that recording to establish a new session with the system and masquerade as the true user. Protections against replay attacks include changing cryptographic keys for each session, using dynamic passwords, expiring sessions through the use of time stamps, expiring PKI certificates based on dates or number of uses, and implementing liveness tests for biometric systems.
Man-in-the-middle attacks place the attacker’s computer in the communication line between the server and the client. The attacker’s machine can monitor and change communications. Controls against man-in-the-middle attacks include prevention through host and client hardening, appropriate hardening and monitoring of domain name service (DNS) servers and other network infrastructure, authentication of the device communicating with the server, and the use of PKI.
Hijacking is an attacker’s use of an authenticated user’s session to communicate with system components. Controls against hijacking include encryption of the user’s session and the use of encrypted cookies or other devices to authenticate each communication between the client and the server.
Network Access
Network security requires effective implementation of several control mechanisms to adequately secure access to systems and data. Financial institutions must evaluate and appropriately implement those controls relative to the complexity of their network. Many institutions have increasingly complex and dynamic networks stemming from the growth of distributed computing.
Security personnel and network administrators have related but distinct responsibilities for ensuring secure network access across a diverse deployment of interconnecting network servers, file servers, routers, gateways, and local and remote client workstations. Security personnel typically lead or assist in the development of policies, standards, and procedures, and monitor compliance. They also lead or assist in incident-response efforts. Network administrators implement the policies, standards, and procedures in their day-to-day operational role.
Internally, networks can host or provide centralized access to mission-critical applications and information, making secure access an organizational priority. Externally, networks integrate institution and third-party applications that grant customers and insiders access to their financial information and Web-based services. Financial institutions that fail to restrict access properly expose themselves to increased operational, reputation, and legal risk from threats including the theft of customer information, data alteration, system misuse, or denial-of-service attacks.
Computer networks often extend connectivity far beyond the financial institution and its data center. Networks provide system access and connectivity between business units, affiliates, TSPs, business partners, customers, and the public. This increased connectivity requires additional controls to segregate and restrict access between various groups and information users.
An effective approach to securing a large network involves dividing the network into logical security domains. A logical security domain is a distinct part of a network with security policies that differ from other domains, and perimeter controls enforcing access at a network level. The differences may be far broader than network controls, encompassing personnel, host, and other issues.
Before establishing security domains, financial institutions should map and configure the network to identify and control all access points. Network configuration considerations could include the following actions:
- Identifying the various applications and systems accessed via the network
- Identifying all access points to the network, including various telecommunications channels (e.g., wireless, Ethernet, frame relay, dedicated lines, remote dial-up access, extranets, Internet)
- Mapping the internal and external connectivity between various network segments
- Defining minimum access requirements for network services (most often referenced as a network services access policy)
- Determining the most appropriate network configuration to ensure adequate security and performance
With a clear understanding of network connectivity, the financial institution can avoid introducing security vulnerabilities by minimizing access to less-trusted domains and employing encryption for less secure connections. Institutions can then determine the most effective deployment of protocols, filtering routers, firewalls, gateways, proxy servers, and/or physical isolation to restrict access. Some applications and business processes may require complete segregation from the corporate network (e.g., no connectivity between corporate network and wire transfer system). Others may restrict access by placing the services that must be accessed by each zone in their own security domain, commonly called a DMZ.
Security domains are bounded by perimeters. Typical perimeter controls include firewalls that operate at different network layers, malicious code prevention, outbound filtering, intrusion detection and prevention devices, and controls over infrastructure services such as DNS. The perimeter controls may exist on separate devices or be combined or consolidated on one or more devices. Consolidation on a single device could improve security by reducing administrative overhead. However, consolidation may increase risk through a reduced ability to perform certain functions and the existence of a single point of failure.
Additionally, devices that combine prevention and detection present unique risks. Traditionally, if a prevention device fails, the detection device may alert on any resulting malicious activity. If the detection device fails, the prevention device still may function. If both functions are on the same device, and the device fails, the otherwise protected part of the institution’s network may be exposed.
Firewalls
A firewall is a collection of components (computers, routers, and software) that mediate access between different security domains. All traffic between the security domains must pass through the firewall, regardless of the direction of the flow. Since the firewall serves as an access control point for traffic between security domains, it is ideally situated to inspect and block traffic and to coordinate activities with network intrusion detection systems (IDSs).
Financial institutions have four primary firewall types from which to choose: packet filtering, stateful inspection, proxy servers, and application-level firewalls. Any product may have characteristics of one or more firewall types. The selection of firewall type is dependent on many characteristics of the security zone, such as the amount of traffic, the sensitivity of the systems and data, and applications. Additionally, consideration should be given to the ease of firewall administration, degree of firewall monitoring support through automated logging and log analysis, and the capability to provide alerts for abnormal activity.
Typically, firewalls block or allow traffic based on rules configured by the administrator. Rulesets can be static or dynamic. A static ruleset is an unchanging statement applied to packet headers, such as blocking all incoming traffic with certain source addresses. A dynamic ruleset often is the result of coordinating a firewall and an IDS. For example, an IDS that alerts on malicious activity may send a message to the firewall to block the incoming IP address. The firewall, after ensuring the IP is not on a “white list,” creates a rule to block the IP. After a specified period of time, the rule expires and traffic is once again allowed from that IP.
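The firewall/IDS coordination just described can be sketched as follows; the white list, expiry window, and interfaces are illustrative, not any vendor’s API.

```python
import time

WHITE_LIST = {"203.0.113.10"}    # critical hosts that must never be blocked
BLOCK_SECONDS = 600              # illustrative expiry window
blocked: dict[str, float] = {}   # ip -> time the dynamic rule expires

def ids_alert(ip: str) -> None:
    """Called by the IDS when it sees malicious activity from `ip`."""
    if ip not in WHITE_LIST:     # white-listed addresses are never blocked
        blocked[ip] = time.time() + BLOCK_SECONDS

def allow(ip: str) -> bool:
    """Firewall check: drop traffic while a dynamic block rule is active."""
    expiry = blocked.get(ip)
    if expiry is None:
        return True
    if time.time() >= expiry:    # rule has expired; traffic flows again
        del blocked[ip]
        return True
    return False

ids_alert("198.51.100.7")
print(allow("198.51.100.7"))     # False while the rule is active
print(allow("203.0.113.10"))     # True: spoofing a critical host cannot block it
```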
Firewalls are subject to failure. When firewalls fail, they typically should fail closed, blocking all traffic, rather than failing open and allowing all traffic to pass.
Packet Filter Firewalls
Packet filter firewalls evaluate the headers of each incoming and outgoing packet to ensure it has a valid internal address, originates from a permitted external address, connects to an authorized protocol or service, and contains valid basic header instructions. If the packet does not match the pre-defined policy for allowed traffic, then the firewall drops the packet. Packet filters generally do not analyze the packet contents beyond the header information. Many routers contain access control lists (ACLs) that allow for packet-filtering capabilities.
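A minimal sketch of this header-only filtering under a default-deny policy; the ruleset format and addresses are invented for illustration.

```python
from ipaddress import ip_address, ip_network

# Illustrative ruleset: allow listed (source network, destination port,
# protocol) combinations; everything else is dropped by default.
RULES = [
    (ip_network("192.0.2.0/24"), 443, "tcp"),  # partner extranet to HTTPS
    (ip_network("0.0.0.0/0"), 25, "tcp"),      # inbound mail to the SMTP relay
]

def permit(src_ip: str, dst_port: int, protocol: str) -> bool:
    """Decide from header fields alone; packet contents are never examined."""
    for network, port, proto in RULES:
        if ip_address(src_ip) in network and dst_port == port and protocol == proto:
            return True
    return False  # default deny

print(permit("192.0.2.15", 443, "tcp"))   # True: matches the first rule
print(permit("198.51.100.9", 23, "tcp"))  # False: telnet not expressly allowed
```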
Dynamic packet filtering incorporates stateful inspection, primarily for performance benefits. Rather than evaluating every packet against the full ruleset, the firewall checks each packet as it arrives to determine whether it is part of an existing connection. If it verifies that the packet belongs to an established connection, it forwards the packet without subjecting it to the firewall ruleset.
Weaknesses associated with packet filtering firewalls include the following:
- The system is unable to prevent attacks that exploit application-specific vulnerabilities and functions because the packet filter does not examine packet contents
- Logging functionality is limited to the same information used to make access control decisions
- Most do not support advanced user authentication schemes
- Firewalls are generally vulnerable to attacks and exploitation that take advantage of vulnerabilities in network protocols
- They are easy to misconfigure, which allows traffic to pass that should be blocked
Packet filtering offers less security but faster performance than application-level firewalls. Packet filter firewalls are appropriate in high-speed environments where logging and user authentication with network resources are not as important. They are also useful in enforcing security zones at the network level. Packet filter firewalls are also commonly used in small office/home office (SOHO) systems and default operating system firewalls.
Institutions internally hosting Internet-accessible services should consider implementing additional firewall components that include application-level screening.
Stateful Inspection Firewalls
Stateful inspection firewalls are packet filters that monitor the state of the TCP connection. Each TCP session starts with an initial “handshake” communicated through TCP flags in the header information. When a connection is established the firewall adds the connection information to a table. The firewall can then compare future packets to the connection or state table. This essentially verifies that inbound traffic is in response to requests initiated from inside the firewall.
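A minimal sketch of the connection (state) table; real firewalls also track TCP flags, sequence numbers, and timeouts, which are omitted here.

```python
# Connection table keyed by the 4-tuple of the session.
state_table: set[tuple[str, int, str, int]] = set()

def outbound(src: str, sport: int, dst: str, dport: int, syn: bool) -> None:
    """An inside host opening a connection adds an entry to the table."""
    if syn:  # initial TCP handshake seen from inside the firewall
        state_table.add((src, sport, dst, dport))

def inbound_allowed(src: str, sport: int, dst: str, dport: int) -> bool:
    """Inbound packets pass only if they answer a connection we initiated."""
    return (dst, dport, src, sport) in state_table

outbound("10.0.0.5", 50000, "93.184.216.34", 443, syn=True)
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 50000))  # True: a reply
print(inbound_allowed("198.51.100.9", 443, "10.0.0.5", 50001))   # False: unsolicited
```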
Proxy Server Firewalls
Proxy servers act as an intermediary between internal and external IP addresses and block direct access to the internal network. Essentially, they rewrite packet headers to substitute the IP of the proxy server for the IP of the internal machine and forward packets to and from the internal and external machines. Because of this limited capability, proxy servers are commonly employed behind other firewall devices. The primary firewall receives all traffic, determines which application is being targeted, and hands off the traffic to the appropriate proxy server. Common proxy servers are the domain name server (DNS), Web server (HTTP), and mail (SMTP) server. Proxy servers frequently cache requests and responses, providing potential performance benefits.
Additionally, proxy servers provide another layer of access control by segregating the flow of Internet traffic to support additional authentication and logging capability, as well as content filtering. Web and e-mail proxy servers, for example, are capable of filtering for potential malicious code and application-specific commands (see “Malicious Code”). They may implement anti-virus and anti-spam filtering, disallow connections to potentially malicious servers, and disallow the downloading of files in accordance with the institution’s security policy.
Proxy servers are increasing in importance as protocols are tunneled through other protocols. For example, a protocol-aware proxy may be designed to allow Web server requests to port 80 of an external Web server, but disallow other protocols encapsulated in the port 80 requests.
Application-Level Firewalls
Application-level firewalls perform application-level screening, typically including the filtering capabilities of packet filter firewalls with additional validation of the packet content based on the application. Application-level firewalls capture and compare packets to state information in the connection tables. Unlike a packet filter firewall, an application-level firewall continues to examine each packet after the initial connection is established for specific applications or services such as telnet, FTP, HTTP, and SMTP. The application-level firewall can provide additional screening of the packet payload for commands, protocols, packet length, authorization, content, or invalid headers. Application-level firewalls provide the strongest level of security, but are slower and require greater expertise to administer properly.
The primary disadvantages of application-level firewalls are as follows:
- The time required to read and interpret each packet slows network traffic. Traffic of certain types may have to be split off before the application-level firewall and passed through different access controls.
- Any particular firewall may provide only limited support for new network applications and protocols. It may also simply allow traffic from those applications and protocols to pass through the firewall.
Firewall Services and Configuration
Firewalls may provide some additional services:
- Network address translation (NAT)—NAT readdresses outbound packets to mask the internal IP addresses of the network. Untrusted networks see a different host IP address from the actual internal address. NAT allows an institution to hide the topology and address schemes of its trusted network from untrusted networks (a minimal sketch of the rewriting appears after this list).
- Dynamic host configuration protocol (DHCP)—DHCP assigns IP addresses to machines that will be subject to the security controls of the firewall.
- Virtual Private Network (VPN) gateways—A VPN gateway provides an encrypted tunnel between a remote external gateway and the internal network. Placing VPN capability on the firewall and the remote gateway protects information from disclosure between the gateways but not from the gateway to the terminating machines. Placement on the firewall, however, allows the firewall to inspect the traffic and perform access control, logging, and malicious code scanning.
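As noted in the NAT item above, outbound packets are readdressed so that untrusted networks never see internal addresses. A minimal sketch of that rewriting, simplified to a port-translation table with invented addresses:

```python
import itertools

PUBLIC_IP = "203.0.113.1"           # the only address untrusted networks see
next_port = itertools.count(40000)  # illustrative public-port allocator
translations: dict[int, tuple[str, int]] = {}  # public port -> (inside ip, port)

def rewrite_outbound(inside_ip: str, inside_port: int) -> tuple[str, int]:
    """Replace the internal source address with the firewall's public one."""
    public_port = next(next_port)
    translations[public_port] = (inside_ip, inside_port)
    return PUBLIC_IP, public_port

def rewrite_inbound(public_port: int) -> tuple[str, int]:
    """Map a reply arriving at the public address back to the hidden host."""
    return translations[public_port]

masked = rewrite_outbound("10.0.0.7", 51515)
print(masked)                       # ('203.0.113.1', 40000): topology hidden
print(rewrite_inbound(masked[1]))   # ('10.0.0.7', 51515)
```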
One common firewall implementation in financial institutions hosting Internet applications is a DMZ, which is a neutral, Internet-accessible zone typically separated by two firewalls. One firewall is between the institution’s private network and the DMZ, and another firewall is between the DMZ and the outside public network. The DMZ constitutes one logical security domain, the outside public network is another security domain, and the institution’s internal network may be composed of one or more additional logical security domains. An adequate and effectively managed firewall can ensure that an institution’s computer systems are not directly accessible to anyone on the Internet.
Firewall Policy
A firewall policy states management’s expectations for how the firewall should function and is a component of the overall security policy. It should establish rules for traffic coming into and going out of the security domain and how the firewall will be managed and updated. Therefore, it is a type of security policy for the firewall and forms the basis for the firewall rules. The firewall selection and the firewall policy should stem from the ongoing security risk assessment process. Accordingly, management needs to update the firewall policy as the institution's security needs and the risks change.
At a minimum, the policy should address:
- Firewall topology and architecture
- Type of firewall(s) being utilized
- Physical placement of the firewall components
- Monitoring firewall traffic
- Permissible traffic (generally based on the premise that all traffic not expressly allowed is denied, detailing which applications can traverse the firewall and under what exact circumstances such activities can take place)
- Firewall updating
- Coordination with security monitoring and intrusion response mechanisms
- Responsibility for monitoring and enforcing the firewall policy
- Protocols and applications permitted
- Regular auditing of a firewall’s configuration and testing of the firewall’s effectiveness
- Contingency planning
Financial institutions should also appropriately train, manage, and monitor their staff to ensure the firewall policy is implemented properly. Alternatively, institutions can outsource firewall management while ensuring that the outsourcer complies with the institution’s specific firewall policy.
Firewalls are an essential control for a financial institution with an Internet connection and provide a means of protection against a variety of attacks. Firewalls should not be relied upon, however, to provide full protection from attacks. Institutions should complement firewalls with strong security policies and a range of other controls.
In fact, firewalls are potentially vulnerable to attacks including:
- Spoofing trusted IP addresses
- Denial of service by overloading the firewall with excessive requests or malformed packets
- Sniffing of data that is being transmitted outside the network
- Hostile code embedded in legitimate HTTP, SMTP, or other traffic that meets all firewall rules
- Attacks on unpatched vulnerabilities in the firewall hardware or software
- Attacks through flaws in the firewall design providing relatively easy access to data or services residing on firewall or proxy servers
- Attacks against computers and communications used for remote administration
Financial institutions can reduce their vulnerability to these attacks through network configuration and design, sound implementation of a firewall architecture that includes multiple filter points, active firewall monitoring and management, and integrated security monitoring. In many cases, additional access controls within the operating system or application will provide an additional means of defense.
Given the importance of firewalls as a means of access control, good practices include:
- Hardening the firewall by removing all unnecessary services and appropriately patching, enhancing, and maintaining all software on the firewall unit (see “Systems Development, Acquisition, and Maintenance”)
- Restricting network mapping capabilities through the firewall, primarily by blocking inbound ICMP (Internet Control Message Protocol) traffic
- Using a ruleset that disallows all inbound and outbound traffic that is not specifically allowed (a minimal ruleset sketch follows this list)
- Using NAT and split DNS to hide internal system names and addresses from external networks (split DNS uses two domain name servers, one to communicate outside the network, and the other to offer services inside the network)
- Using proxy connections for outbound HTTP connections
- Filtering malicious code
- Backing up firewalls to internal media rather than to servers on protected networks
- Logging activity, with daily administrator review (see “Logging and Data Collection”)
- Using security monitoring devices and practices to monitor actions on the firewall and to monitor communications allowed through the firewall (see “Security Monitoring”)
- Administering the firewall using encrypted communications and strong authentication, accessing the firewall only from secure devices, and monitoring all administrative access
- Limiting administrative access to a few individuals
- Making changes only through well-administered change control procedures
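The default-deny premise in the policy and ruleset items above can be expressed compactly. The following Python sketch evaluates traffic against a short allow list and drops everything else; the specific rules and the function name are assumptions for illustration, since a production ruleset is derived from the institution's firewall policy.

    # Minimal sketch of a default-deny ruleset (illustrative only).
    ALLOW_RULES = {
        ("inbound", "tcp", 443),   # assumed public HTTPS service
        ("outbound", "tcp", 443),  # assumed proxied web browsing
        ("outbound", "udp", 53),   # assumed DNS from the resolver
    }

    def evaluate(direction, protocol, port):
        """Allow only traffic that is expressly permitted; deny all else."""
        if (direction, protocol, port) in ALLOW_RULES:
            return "allow"
        return "deny"  # the default for everything not expressly allowed

    print(evaluate("inbound", "tcp", 443))  # allow
    print(evaluate("inbound", "tcp", 23))   # deny: Telnet is not in the ruleset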
Malicious Code Filtering
Perimeters may contain proxy firewalls or other servers that act as a control point for Web browsing, e-mail, P2P, and other communications. Those firewalls and servers frequently are used to enforce the institution’s security policy over incoming communications. Enforcement is through anti-virus, anti-spyware, and anti-spam filtering, the blocking of executable file downloads, and other actions. To the extent that filtering is done on a signature basis, frequent updating of the signatures may be required.
Outbound Filtering
Perimeter servers also serve to inspect outbound communications for compliance with the institution’s security policy. Perimeter routers and firewalls can be configured to enforce policies that forbid the origination of outbound communications from certain computers. Additionally, proxy servers could be configured to identify and block customer data and other data that should not be transmitted outside the security domain.
Network Intrusion Prevention Systems
Network Intrusion Prevention Systems (nIPS) are access control mechanisms that allow or disallow access based on an analysis of packet headers and packet payloads. They are similar to firewalls because they are located in the communications line, compare activity to preconfigured or preprogrammed decisions about which packets to pass or drop, and respond with preconfigured actions. IPS units generally detect security events in a manner similar to IDS units (see “Activity Monitoring” in the Security Monitoring section of this booklet) and are subject to the same limitations. After detection, however, an IPS unit may take actions beyond simple alerting to potential malicious activity and logging of packets. For example, the IPS unit may block traffic flows from the offending host. The ability to sever communications can be useful when the activity can clearly be identified as malicious. When the activity cannot be clearly identified, for example where a false positive may exist, IDS-like alerting commonly is preferable to blocking.
Although IPS units are access control devices, many implement a security model that is different from firewalls. Firewalls typically allow only the traffic necessary for business purposes, or only “known good” traffic. IPS units typically are configured to disallow traffic that triggers signatures, or “known bad” traffic, while allowing all else. However, IPS units can be configured to more closely mimic a device that allows only “known good” traffic.
IPS units also contain a “white list” of IP addresses that should never be blocked. The list helps ensure that an attacker cannot achieve a denial of service by spoofing the IP address of a critical host.
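A minimal Python sketch of the in-line IPS decision just described, including the never-block list; the addresses and function name are assumptions for illustration.

    # Sketch of an in-line IPS decision with a never-block list (illustrative only).
    NEVER_BLOCK = {"192.0.2.10", "192.0.2.11"}  # assumed critical hosts

    blocked_hosts = set()

    def handle_packet(source_ip, signature_matched, clearly_malicious):
        """Block clear attacks, but fall back to IDS-like alerting otherwise."""
        if not signature_matched:
            return "pass"
        if source_ip in NEVER_BLOCK:
            return "alert"  # a spoofed critical host must never be blocked
        if clearly_malicious:
            blocked_hosts.add(source_ip)
            return "block"
        return "alert"  # possible false positive: alert rather than block

    print(handle_packet("198.51.100.7", True, True))  # block
    print(handle_packet("192.0.2.10", True, True))    # alert: whitelisted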
Quarantine
Quarantining a device protects the network from potentially malicious code or actions. Typically, a device connecting to a security domain is queried for conformance to the domain’s security policy. If the device does not conform, it is placed in a restricted part of the network until it does conform. For example, if the patch level is not current, the device is not allowed into the security domain until the appropriate patches are downloaded and installed.
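The admission decision described above reduces to a conformance check at connection time. The following Python sketch assumes a patch-level and anti-virus policy and hypothetical VLAN names; it illustrates the quarantine concept, not any particular network access control product.

    # Sketch of a quarantine admission check (illustrative only).
    REQUIRED_PATCH_LEVEL = 42  # assumed current patch level

    def admit(device):
        """Place nonconforming devices in a restricted network segment."""
        conforms = (device["patch_level"] >= REQUIRED_PATCH_LEVEL
                    and device["antivirus_current"])
        return "production-vlan" if conforms else "quarantine-vlan"

    # A device behind on patches is restricted until it conforms.
    print(admit({"patch_level": 41, "antivirus_current": True}))  # quarantine-vlan
    print(admit({"patch_level": 42, "antivirus_current": True}))  # production-vlan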
DNS Placement
Effective protection of the institution’s DNS servers is critical to maintaining the security of the institution’s communications. Much of the protection is provided by host security (see the “Systems Development, Acquisition, and Maintenance” section of this booklet). However, the placement of the DNS servers is also an important factor. The optimal placement is split DNS, in which one firewalled DNS server serves public domain information to the outside and does not perform recursive queries, and a second DNS server, located in an internal security domain and not the DMZ, performs recursive queries for internal users.
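The split-DNS behavior can be summarized as two servers with different answers and different recursion rules. The Python sketch below is illustrative only; the zone data, domain names, and functions are assumptions, not a DNS server implementation.

    # Sketch of split-DNS behavior (illustrative only).
    PUBLIC_ZONE = {"www.example-bank.com": "203.0.113.20"}  # assumed public records

    def external_server(name, recursion_requested):
        """Serve public records only, and never recurse for outside queries."""
        if recursion_requested:
            return "REFUSED"
        return PUBLIC_ZONE.get(name, "NXDOMAIN")

    def internal_server(name, resolve_upstream):
        """Recurse on behalf of internal users; internal names stay hidden."""
        return resolve_upstream(name)

    print(external_server("www.example-bank.com", recursion_requested=False))
    print(external_server("intranet.example-bank.com", recursion_requested=True))
    print(internal_server("example.org", lambda n: "resolved recursively"))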
Wireless Issues
Wireless networks are difficult to secure because they do not have a well-defined perimeter or well-defined access points. Unlike on wired networks, unauthorized monitoring and denial-of-service attacks can be performed without a physical wire connection. Additionally, unauthorized devices can potentially connect to the network, perform man-in-the-middle attacks, or connect to other wireless devices. To mitigate those risks, wireless networks rely on extensive use of encryption to authenticate users and devices and to shield communications. If a financial institution uses a wireless network, it should carefully evaluate the risk and implement appropriate additional controls.
Examples of additional controls may include one or more of the following:
- Treating wireless networks as untrusted networks, allowing access through protective devices similar to those used to shield the internal network from the Internet environment
- Using end-to-end encryption in addition to the encryption provided by the wireless connection
- Using strong authentication and configuration controls at the access point and on all clients
- Using an application server and dumb terminals
- Shielding the area in which the wireless LAN operates to protect against stray emissions and signal interference
- Monitoring and responding to unauthorized wireless access points and clients
Operating System Access
Financial institutions must control access to system software within the various network clients and servers as well as stand-alone systems. System software includes the operating system and system utilities. The computer operating system manages all of the other applications running on the computer. Common operating systems include IBM z/OS, OS/400, AIX, Linux, various versions of Microsoft Windows, and Sun Solaris. Security administrators and IT auditors need to understand the common vulnerabilities and appropriate mitigation strategies for their operating systems. Application programs and data files interface through the operating system. System utilities are programs that perform repetitive functions such as creating, deleting, changing, or copying files. System utilities also could include numerous types of system management software that can supplement operating system functionality by supporting common system tasks such as security, system monitoring, or transaction processing.
System software can provide high-level access to data and data processing. Unauthorized access could result in significant financial and operational losses. Financial institutions should restrict privileged access to sensitive operating systems. While many operating systems have integrated access control software, third-party security software also is available. In the case of many mainframe systems, these programs are essential to ensure effective access control and can often integrate the security management of both the operating system and the applications. Network security software can allow institutions to improve the effectiveness of administration and security policy compliance across large numbers of servers, often spanning multiple operating system environments.
The critical aspects for access control software, whether included in the operating system or additional security software, are that management has the capability to:
- Restrict access to sensitive or critical system resources or processes and have the capability, depending on the sensitivity, to extend protection at the program, file, record, or field level.
- Log user or program access to sensitive system resources including files, programs, processes, or operating system parameters (a field-level sketch follows this list).
- Filter logs for potential security events and provide adequate reporting and alerting capabilities.
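The first two capabilities, restriction at the field level and logging of every access attempt, can be combined in one check. The Python sketch below is illustrative only; the grant table, user, and field names are assumptions, not the behavior of any particular security software product.

    # Sketch of field-level restriction with access logging (illustrative only).
    import logging

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    GRANTS = {("tellerA", "customer_record", "balance")}  # assumed read grants

    def read_field(user, resource, field):
        allowed = (user, resource, field) in GRANTS
        # Every access attempt against a sensitive resource is logged.
        logging.info("user=%s resource=%s field=%s allowed=%s",
                     user, resource, field, allowed)
        if not allowed:
            raise PermissionError("%s may not read %s.%s" % (user, resource, field))
        return "<protected value>"

    read_field("tellerA", "customer_record", "balance")  # permitted and logged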
Additional operating system access controls include the following actions:
- Ensure system administrators and security professionals have adequate expertise to securely configure and manage the operating system.
- Ensure effective authentication methods are used to restrict system access to both users and applications.
- Activate and utilize operating system security and logging capabilities and supplement with additional security software where supported by the risk assessment process.
- Restrict operating system access to specific terminals in physically secure and monitored locations.
- Lock or remove external drives from system consoles or terminals residing outside physically secure locations.
- Restrict and log access to system utilities, especially those with data-altering capabilities.
- Restrict access to operating system parameters.
- Prohibit remote access to sensitive operating system functions, where feasible, and at a minimum require strong authentication and encrypted sessions before allowing remote support.
- Limit the number of employees with access to sensitive operating systems and grant only the minimum level of access required to perform routine responsibilities.
- Segregate operating system access, where possible, to limit full or root-level access to the system.
- Monitor operating system access by user, terminal, date, and time of access.
- Update operating systems with security patches, using appropriate change control mechanisms. (See “Systems Development, Acquisition, and Maintenance.”)
Application Access
Sensitive or mission-critical applications should incorporate appropriate access controls that restrict which application functions are available to users and other applications. The most commonly referenced applications from an examination perspective support the information processing needs of the various business lines. These computer applications allow authorized users or other applications to interface with the related database. Effective application access control can enforce both segregation of duties and dual control. Access rights to sensitive or critical applications and their databases should ensure that employees or applications have the minimum level of access required to perform their business functions. Effective application access control involves a partnership between the security administrators, the application programmers (including TSPs and vendors), and the business owners.
Some security software programs will integrate access control for the operating system and some applications. Such software is useful when applications do not have their own access controls, and when the institution wants to rely on the security software instead of the application’s access controls. Examples of such security software products for mainframe computers include RACF, CA-ACF2, and CA-TopSecret. Institutions should understand the functionality and vulnerabilities of their application access control solutions and consider those issues in their risk assessment process.
Institution management should consider a number of issues regarding application access control. Many of these issues also could apply to oversight of operating system access:
- Implementing a robust authentication method consistent with the criticality and sensitivity of the application. Historically, the majority of applications have relied solely on user IDs and passwords, but increasingly applications are using other forms of authentication. Multi-factor authentication, such as token- and PKI-based systems coupled with a robust enrollment process, can reduce the potential for unauthorized access.
- Maintaining consistent processes for assigning new user access, changing existing user access, and promptly removing access for departing employees.
- Communicating and enforcing the responsibilities of programmers (including TSPs and vendors), security administrators, and business line owners for maintaining effective application access control. Business line managers are responsible for the security and privacy of the information within their units. They are in the best position to judge the legitimate access needs of their area and should be held accountable for doing so. However, they require support in the form of adequate security capabilities provided by the programmers or vendor and adequate direction and support from security administrators.
- Monitoring existing access rights to applications to help ensure that users have the minimum access required for the current business need. Typically, business application owners must assume responsibility for determining the access rights assigned to their staff within the bounds of the acceptable use policy (AUP). Regardless of the process for assigning access, business application owners should periodically review and approve the application access assigned to their staff.
- Setting time-of-day or terminal limitations for some applications or for the more sensitive functions within an application. The nature of some applications requires limiting the location and number of workstations with access. These restrictions can support the implementation of tighter physical access controls.
- Logging access and events (see “Log Transmission, Normalization, Storage, and Protection” in the Activity Monitoring section of this booklet).
- Easing the administrative burden of managing access rights by utilizing software that supports group profiles (a sketch follows this list). Some financial institutions manage access rights individually, and this approach often leads to inappropriate access levels. By grouping employees with similar access requirements under a common access profile (e.g., tellers, loan operations, etc.), business application owners and security administrators can better assign and oversee access rights. For example, a teller performing a two-week rotation as a proof operator does not need year-round access to perform both jobs. With group profiles, security administrators can quickly reassign the employee from a teller profile to a proof operator profile. Note that group profiles are used only to manage access rights; accountability for system use is maintained through individuals being assigned their own unique identifiers and authenticators.
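A minimal Python sketch of the group-profile approach in the last item, assuming hypothetical profile names and rights: rights attach to profiles, while each individual keeps a unique identifier for accountability, so the rotation example becomes a single reassignment.

    # Sketch of group-profile administration (illustrative only).
    PROFILE_RIGHTS = {
        "teller": {"post-deposit", "cash-check"},
        "proof-operator": {"balance-batch", "encode-items"},
    }

    user_profile = {"jdoe": "teller"}  # unique user ID -> current profile

    def rights_for(user_id):
        return PROFILE_RIGHTS[user_profile[user_id]]

    print(rights_for("jdoe"))                # teller rights only
    user_profile["jdoe"] = "proof-operator"  # two-week rotation: one reassignment
    print(rights_for("jdoe"))                # proof-operator rights only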
Remote Access
Many financial institutions provide employees, vendors, and others with access to the institution’s network and computing resources through external connections. Those connections are typically established through modems, the Internet, or private communications lines. The access may be necessary to remotely support the institution’s systems or to support institution operations at remote locations. In some cases, remote access is required periodically by vendors to make emergency program fixes or to support a system.
Remote access to a financial institution’s systems provides an attacker with the opportunity to subvert the institution’s systems from outside the physical security perimeter. Accordingly, management should establish policies restricting remote access and be aware of all remote-access devices attached to their systems. These devices should be strictly controlled.
Good controls for remote access include the following actions:
- Disallow remote access by policy and practice unless a compelling business justification exists.
- Require management approval for remote access.
- Regularly review remote access approvals and rescind those that no longer have a compelling business justification.
- Appropriately configure remote access devices.
- Appropriately secure remote access devices against malware (see “Malicious Code Prevention”).
- Patch, update, and maintain all software on remote access devices appropriately and in a timely manner.
- Use encryption to protect communications between the access device and the institution and to protect sensitive data residing on the access device.
- Periodically audit the access device configurations and patch levels.
- Use VLANs, network segments, directories, and other techniques to restrict remote access to authorized network areas and applications within the institution.
- Log remote access communications, analyze them in a timely manner, and follow up on anomalies.
- Centralize modem and Internet access to provide a consistent authentication process and to subject the inbound and outbound network traffic to appropriate perimeter protections and network monitoring.
- Log and monitor the date, time, user, user location, duration, and purpose for all remote access.
- Require a two-factor authentication process for remote access (e.g., a PIN-based token card with a one-time random password generator, or token-based PKI; a one-time password sketch follows at the end of this list).
- Implement controls consistent with the sensitivity of remote use. For example, remote use to administer sensitive systems or databases may include the following controls:
  - Restrict the use of the access device by policy and configuration
  - Require two-factor user authentication
  - Require authentication of the access device
  - Ascertain the trustworthiness of the access device before granting access
  - Log and review all activities (e.g., keystrokes)
- If remote access is through modems:
  - Require an operator to leave the modems unplugged or disabled by default, to enable modems only for specific and authorized external requests, and to disable the modem immediately when the requested purpose is completed.
  - Configure modems not to answer inbound calls, if modems are for outbound use only.
  - Use automated callback features so the modems call only one number (although this control can be defeated by call-forwarding schemes).
  - Install a modem bank where the outside number to the modems uses a different prefix than internal numbers and does not respond to incoming calls.
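To illustrate the one-time password generators referenced in the two-factor authentication item above, the following Python sketch implements a time-based one-time password (TOTP) per RFC 6238 using only the standard library. The shared secret is an assumption for illustration; real deployments use vetted token products and enrollment processes, not hand-rolled code.

    # Sketch of a time-based one-time password generator (TOTP, RFC 6238).
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret, interval=30, digits=6):
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)                 # 8-byte counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    shared_secret = b"assumed-shared-secret"             # provisioned on the token
    print(totp(shared_secret))  # the second factor, valid for one 30-second window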
Access Control References
ISO 17799 defines Access Control objectives to control access to information; prevent unauthorized access to information systems; ensure the protection of networked services; prevent unauthorized computer access; detect unauthorized activities; and ensure information security when using mobile computing and teleworking facilities. This section provides templates for information security standards that are required to comply with ISO Access Control objectives and support the objectives established in the Asset Protection Policy, Acceptable Use Policy, and Threat Assessment and Monitoring Policy.
1. Sample ISO Access Control Standard: The Access Control Standard is required to comply with ISO Access Control objectives and builds on the objectives established in the Asset Protection Policy by providing specific requirements and instructions for controlling access to information assets.
2. Sample ISO Threat Monitoring Standard: The Threat Monitoring Standard is required to comply with ISO Access Control objectives and builds on the objectives established in the Threat Assessment and Monitoring Policy by providing specific requirements for periodically identifying, analyzing, and prioritizing threats to sensitive information. The Threat Monitoring Standard also provides specific requirements for performing real-time intrusion detection monitoring and periodic intrusion detection analysis to detect threat and intrusion activity.
3. Sample ISO Auditing Standard: The Auditing Standard is required to comply with ISO Access Control objectives and builds on the objectives established in the Asset Protection Policy by providing specific auditing and logging requirements, including activation, protection, retention, and storage.
4. Sample ISO Telecommunications Acceptable Use Standard: The Telecommunications Acceptable Use Standard is required to comply with ISO Access Control objectives and builds on the objectives established in the Acceptable Use Policy by providing specific instructions and requirements on the proper and appropriate business use of telecommunications resources.