ISMS: Operations security

Operations security involves planning and sustaining the day-to-day “rubber meets the road” processes that are critical to maintaining the security of organizations’ information environments. The extent and complexity of security operations will vary between organizations based on organizational risk tolerances and resource levels. However, each of the control areas in this chapter must be addressed in some manner to help mitigate common, ubiquitous risks. The most important aspect of operations security is that the operations themselves need to be repeatable, reliable, and consistently performed. If you are just starting an information security program or looking to evaluate and improve operations security, the following approach can be very helpful:

  1. Review the following areas to assess the confidentiality, integrity, and availability of operations centre controls:
    1. Operational procedures and responsibilities: Review documentation and evaluate guidance with regard to change management, capacity management, and the separation of development, test, and production environments.
    2. Malware detection and prevention controls: Evaluate their level of effectiveness.
    3. Data centre backup strategy: Evaluate whether backup procedures and methods (e.g., encryption) are effective both for on- and off-premises backup management.
    4. Audit trails and logging: Review whether they are implemented effectively so that security reviews can be conducted to detect tampering and unauthorized access, and to record user activities.
    5. Installation of software on operational systems: Ensure licensing requirements are met.
  2. Implement a formal vulnerability management program to proactively test IT infrastructure for vulnerabilities that can be exploited, and ensure that there is an effective process in place to manage corrective actions in collaboration with stakeholders.
  3. Prepare in advance for IT controls audits to avoid service disruptions.

A.12 Operations security

A.12.1 Operational procedures and responsibilities

Objective:

To ensure correct and secure operations of information processing facilities.

A.12.1.1 Documented operating procedures

Control:

Operating procedures should be documented and made available to all users who need them.

Implementation Guidance

Documented procedures should be prepared for operational activities associated with information
processing and communication facilities, such as computer start-up and close-down procedures, backup, equipment maintenance, media handling, computer room and mail handling management and safety. The operating procedures should specify the operational instructions, including:

  1. the installation and configuration of systems;
  2. processing and handling of information both automated and manual;
  3. backup;
  4. scheduling requirements, including interdependencies with other systems, earliest job start and latest job completion times;
  5. instructions for handling errors or other exceptional conditions, which might arise during job execution, including restrictions on the use of system utilities;
  6. support and escalation contacts including external support contacts in the event of unexpected operational or technical difficulties;
  7. special output and media handling instructions, such as the use of special stationery or the management of confidential output, including procedures for secure disposal of output from failed jobs;
  8. system restart and recovery procedures for use in the event of system failure;
  9. the management of audit-trail and system log information;
  10. monitoring procedures.

Operating procedures and the documented procedures for system activities should be treated as formal documents, and changes should be authorized by management. Where technically feasible, information systems should be managed consistently, using the same procedures, tools and utilities.

A.12.1.2 Change management

Control:

Changes to the organization, business processes, information processing facilities and systems that affect information security should be controlled.

Implementation Guidance

In particular, the following items should be considered:

  • identification and recording of significant changes;
  • planning and testing of changes;
  • assessment of the potential impacts, including information security impacts, of such changes;
  • formal approval procedure for proposed changes;
  • verification that information security requirements have been met;
  •  communication of change details to all relevant persons;
  • fall-back procedures, including procedures and responsibilities for aborting and recovering from unsuccessful changes and unforeseen events;
  • provision of an emergency change process to enable quick and controlled implementation of changes needed to resolve an incident.

Formal management responsibilities and procedures should be in place to ensure satisfactory control of all changes. When changes are made, an audit log containing all relevant information should be retained. Inadequate control of changes to information processing facilities and systems is a common cause of system or security failures. Changes to the operational environment, especially when transferring a system from development to operational stage, can impact the reliability of applications.
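
To make this concrete, here is a minimal Python sketch of how a change record might be captured and appended to an audit log; the field names and file path are hypothetical rather than a prescribed format.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ChangeRecord:
    """Hypothetical structure for one change-management audit entry."""
    change_id: str
    description: str
    requested_by: str
    approved_by: str       # formal approval recorded before implementation
    security_impact: str   # outcome of the information security assessment
    rollback_plan: str     # fall-back procedure if the change is unsuccessful
    implemented_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(record: ChangeRecord, path: str = "change_audit.log") -> None:
    # Append one JSON object per line so the log is easy to parse and review.
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_to_audit_log(ChangeRecord(
    change_id="CHG-2024-0042",
    description="Tighten TLS configuration on the public web server",
    requested_by="ops.team",
    approved_by="change.advisory.board",
    security_impact="Low; reviewed against the baseline hardening guide",
    rollback_plan="Restore the previous configuration from version control",
))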

A.12.1.3 Capacity management

Control:

The use of resources should be monitored, tuned and projections made of future capacity requirements to ensure the required system performance.

Implementation Guidance

Capacity requirements should be identified, taking into account the business criticality of the concerned system. System tuning and monitoring should be applied to ensure and, where necessary, improve the availability and efficiency of systems. Detective controls should be put in place to indicate problems in due time. Projections of future capacity requirements should take account of new business and system requirements and current and projected trends in the organization’s information processing capabilities. Particular attention needs to be paid to any resources with long procurement lead times or high costs; therefore managers should monitor the utilization of key system resources. They should identify trends in usage, particularly in relation to business applications or information systems management tools. Managers should use this information to identify and avoid potential bottlenecks and dependence on key personnel that might present a threat to system security or services, and plan appropriate action. Providing sufficient capacity can be achieved by increasing capacity or by reducing demand. Examples of managing capacity demand include:

  1. deletion of obsolete data (disk space);
  2. decommissioning of applications, systems, databases or environments;
  3. optimising batch processes and schedules;
  4. optimising application logic or database queries;
  5. denying or restricting bandwidth for resource-hungry services if these are not business critical (e.g. video streaming).

A documented capacity management plan should be considered for mission-critical systems.

Other Information:

This control also addresses the capacity of human resources, as well as offices and facilities.

A.12.1.4 Separation of development, testing and operational environments

Control
Development, testing, and operational environments should be separated to reduce the risks of unauthorized access or changes to the operational environment.

Implementation Guidance

The level of separation between operational, testing, and development environments that is necessary to prevent operational problems should be identified and implemented.
The following items should be considered:

  1. rules for the transfer of software from development to operational status should be defined and documented
  2. development and operational software should run on different systems or computer processors and in different domains or directories;
  3. changes to operational systems and applications should be tested in a testing or staging environment prior to being applied to operational systems;
  4. other than in exceptional circumstances, testing should not be done on operational systems;
  5. compilers, editors and other development tools or system utilities should not be accessible from operational systems when not required;
  6. users should use different user profiles for operational and testing systems, and menus should display appropriate identification messages to reduce the risk of error;
  7. sensitive data should not be copied into the testing system environment unless equivalent controls are provided for the testing system (a simple data-masking sketch follows below).

Development and testing activities can cause serious problems, e.g. unwanted modification of files or the system environment, or system failure. There is a need to maintain a known and stable environment in which to perform meaningful testing and to prevent inappropriate developer access to the operational environment. Where development and testing personnel have access to the operating system and its information, they may be able to introduce unauthorized and untested code or alter operational data. On some systems, this capability could be misused to commit fraud or introduce untested or malicious code, which can cause serious operational problems. Development and testing personnel also pose a threat to the confidentiality of operational information. Development and testing activities may cause unintended changes to software or information if they share the same computing environment. Separating development, testing and operational environments is therefore desirable to reduce the risk of accidental change or unauthorized access to operational software and business data.
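
As a simple illustration of item 7 above, the following Python sketch masks sensitive fields with one-way hashes before a record is copied into a test environment; the field names are hypothetical, and real masking rules should follow your own data classification.

import hashlib

SENSITIVE_FIELDS = {"email", "national_id", "phone"}  # hypothetical field names

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive values replaced by truncated
    one-way hashes, so test data keeps referential consistency without
    exposing the real values."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS and value is not None:
            masked[key] = hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:16]
        else:
            masked[key] = value
    return masked

production_row = {"user_id": 1001, "email": "j.doe@example.org", "plan": "standard"}
print(mask_record(production_row))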

Operating procedures must be documented and readily available to the teams for which they have relevance. These procedures should cover methods that reduce the likelihood of introducing or enhancing risks due to accidental or ill-advised changes. Before authoring documentation, it is often very important to identify up-front who the intended audience is. For instance, documentation that is intended to have value for new hires (continuity) often requires a greater degree of detail than steps for staff who regularly perform operations tasks. It is very important that operating procedures be treated as formal documents that are maintained and managed with version and approval processes and controls in place. As technology and our systems infrastructure change, it is an absolute certainty that operational procedures will become out of date or inaccurate. By adopting formal documentation and review processes, we can help reduce the likelihood of outdated procedures that bring risks of their own: loss of availability, failure of data integrity, and breaches of confidentiality.

What should we document?

As mentioned before, the decision on what areas deserve documentation must be informed by an understanding of organizational risks, including issues that have previously been observed. However, a good list of items to consider includes the following:

  • Configuration and build procedures for servers, networking equipment, and desktops.
  • Automated and Manual Information Processing
  • Backup procedures
  • System scheduling dependencies
  • Error handling
  • Change Management Processes
  • Capacity Management & Planning Processes
  • Support and escalation procedures
  • System Restart and Recovery
  • Special Output
  • Logging & Monitoring Procedures

Common Challenges

 “Not enough time.” Very often operations teams already have considerable responsibility and may indicate that there is simply not enough time for documenting processes. The allocation of time for documentation efforts is a management issue and for this reason, it is important that IT leaders have an understanding of risks associated with outdated or informal operational procedures. In addition, defining a mandatory requirement that documentation efforts be completed before closing a project or significant change can help.

Change Management Procedures

Change management processes are essential for ensuring that risks associated with significant revisions to software, systems, and key processes are identified, assessed, and weighed in the context of an approval process. It is critical that information security considerations be included as part of a change review and approval process alongside other objectives such as support and service level management. Change management is a broad subject matter, however, some important considerations from an information security perspective include:

  • Helping to ensure that changes are identified and recorded.
  • Assessing and reporting on information security risks relevant to proposed changes.
  • Helping classify changes according to the overall significance of the change in terms of risk.
  • Helping establish or evaluate planning, testing, and “back out” steps for significant changes.
  • Helping ensure that change communications are handled in a structured manner.
  • Helping ensure that emergency change processes are well defined and communicated, and that a security evaluation of these changes is also performed post-change.

Common Challenges

 “Change Management Takes Too Much Time.” Change management processes are notoriously susceptible to becoming overly complex. Staff who conduct changes are more likely to attempt to bypass change management processes they feel are too burdensome by intentionally classifying their changes at low levels or even not reporting them. If you are starting a Change Management program, it is often helpful to first focus on modelling large-scale changes and then work to find the right change-level definitions that balance risk reduction with operational agility and efficiency.

Capacity Management Procedures

Formal capacity management processes involve conducting system tuning, monitoring the use of present resources and, with the support of user planning input, projecting future requirements. Controls in place to detect and respond to capacity problems can help lead to a timely reaction. This is often especially important for communications networks and shared resource environments (virtual infrastructure), where sudden changes in utilization can result in poor performance and dissatisfied users. To address this, regular monitoring processes should be employed to collect, measure, analyse and predict capacity metrics, including disk capacity, transmission throughput, and service/application utilization. Also, periodic testing of capacity management plans and assumptions (whether tabletop exercises or direct simulations) can help proactively identify issues that may need to be addressed to preserve a high level of availability for critical services.
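
As an illustration of this kind of detective control, here is a minimal Python sketch that checks disk utilization and makes a naive headroom projection; the threshold and growth rate are placeholder values that would normally come from observed trend data.

import shutil
from datetime import datetime

ALERT_THRESHOLD = 0.85          # illustrative: alert when a volume is more than 85% full
DAILY_GROWTH_ESTIMATE_GB = 2.0  # assumed average daily growth, e.g. taken from trend data

def check_capacity(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    used_ratio = usage.used / usage.total
    free_gb = usage.free / 1024**3

    if used_ratio > ALERT_THRESHOLD:
        print(f"{datetime.now():%Y-%m-%d %H:%M} ALERT: {path} is {used_ratio:.0%} full")

    # Naive linear projection: days until the volume is exhausted at the assumed rate.
    days_remaining = free_gb / DAILY_GROWTH_ESTIMATE_GB
    print(f"{path}: {used_ratio:.0%} used, roughly {days_remaining:.0f} days of headroom "
          f"at {DAILY_GROWTH_ESTIMATE_GB} GB/day")

check_capacity("/")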

A.12.2 Protection from malware

Objective

To ensure that information and information processing facilities are protected against malware.

A.12.2.1 Controls against malware

Control:

Detection, prevention and recovery controls to protect against malware should be implemented, combined with appropriate user awareness.

Implementation Guidance

Protection against malware should be based on malware detection and repair software, information security awareness and appropriate system access and change management controls. The following guidance should be considered:

  1. establishing a formal policy prohibiting the use of unauthorized software;
  2. implementing controls that prevent or detect the use of unauthorized software (e.g. application whitelisting; see the allowlisting sketch below);
  3. implementing controls that prevent or detect the use of known or suspected malicious websites (e.g. blacklisting);
  4. establishing a formal policy to protect against risks associated with obtaining files and software either from or via external networks or on any other medium, indicating what protective measures should be taken;
  5. reducing vulnerabilities that could be exploited by malware, e.g. through technical vulnerability management;
  6. conducting regular reviews of the software and data content of systems supporting critical business processes; the presence of any unapproved files or unauthorized amendments should be formally investigated;
  7. installation and regular update of malware detection and repair software to scan computers and media as a precautionary control, or on a routine basis; the scan carried out should include:
    1. scanning any files received over networks or via any form of storage medium for malware before use;
    2. scanning electronic mail attachments and downloads for malware before use; this scan should be carried out at different places, e.g. at electronic mail servers, desktop computers and when entering the network of the organization;
    3. scanning web pages for malware;
  8. defining procedures and responsibilities to deal with malware protection on systems, training in their use, reporting and recovering from malware attacks;
  9. preparing appropriate business continuity plans for recovering from malware attacks, including all necessary data and software backup and recovery arrangements;
  10. implementing procedures to regularly collect information, such as subscribing to mailing lists or verifying websites giving information about new malware;
  11. implementing procedures to verify information relating to malware and ensure that warning bulletins are accurate and informative; managers should ensure that qualified sources, e.g. reputable journals, reliable internet sites or suppliers producing software protecting against malware, are used to differentiate between hoaxes and real malware; all users should be made aware of the problem of hoaxes and what to do on receipt of them;
  12. isolating environments where catastrophic impacts may result.

Other Information:

The use of two or more software products protecting against malware across the information processing environment from different vendors and technology can improve the effectiveness of malware protection. Care should be taken to protect against the introduction of malware during maintenance and emergency procedures, which may bypass normal malware protection controls. Under certain conditions, malware protection might cause disturbance within operations. The use of malware detection and repair software alone as a malware control is not usually adequate and commonly needs to be accompanied by operating procedures that prevent the introduction of malware.
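
To illustrate the application whitelisting idea mentioned in item 2 of the guidance above, here is a minimal Python sketch that compares a file's SHA-256 digest against a list of approved hashes; the allowlist contents and file name are placeholders.

import hashlib
from pathlib import Path

# Placeholder allowlist; in practice this would be generated and maintained centrally.
APPROVED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # SHA-256 of an empty file
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(path: Path) -> bool:
    """Return True only if the file's digest appears on the allowlist."""
    return sha256_of(path) in APPROVED_SHA256

candidate = Path("downloaded_installer.exe")  # placeholder file name
if candidate.exists() and not is_approved(candidate):
    print(f"{candidate} is not on the allowlist and should not be executed")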

It’s rare for modern hackers to physically enter building premises because they may be caught or apprehended. Physical facility controls have a limited purpose in information security: simply to provide a local barrier against physical intrusion. These localized barriers protect against common crimes by persons entering and leaving the facility. With the advent of the Internet, a smaller percentage of criminals will chance the risks of committing a physical crime. The new persistent threat is through electronic attacks. A hacker can commit the crime at a safe distance without fear of being physically caught. Attacks may originate from anywhere in the world or even be sponsored by a foreign government to gather intelligence data. Technical controls to protect against electronic attacks are usually spotty and inconsistent because of a lack of awareness of specific threats or lopsided implementations. It is very easy for the technical staff to inadvertently focus on only a few areas, thereby neglecting serious threats that still exist in others. Technical threats against software are usually difficult for lay-persons to visualize in the physical world. The adage “out of sight, out of mind” also means outside of the budget. Let’s take a moment to understand how the electronic threat will manifest in our clients’ environment.

  1. Malware: This title refers to every malicious software program ever created, whether it exploits a known vulnerability or creates its own. There are so many different ones, it’s easier to just call the entire group by the title of malware. The king of the malware threat is known as the Trojan horse.
  2. Trojan Horse: A revised concept of the historical Trojan horse has been adapted to attack computers. In a tale from the Trojan war, soldiers hid inside a bogus gift known as the Trojan horse. The unassuming recipients accepted the horse and brought it inside their fortress, only to be attacked by enemy soldiers hiding within. Malicious programs frequently use the Trojan horse concept to deliver viruses, worms, logic bombs, and rootkits through downloaded files.
  3. Virus: The goal of a virus is to disrupt operations. Users inadvertently download a program built like a Trojan horse containing the virus. The attacker’s goal is usually to damage your programs or data files. Viruses append themselves to the end-of-file (EOF) marker on computerized files.
  4. Internet Worm: An Internet worm operates in a similar manner to the Trojan or virus, with one major exception. Worm programs can freely travel between computers because they exploit unprotected data transfer ports (software programming sockets) to access other systems. Internet worms started by trying to access the automatic update (file transfer) function through software ports with poor authentication or no authentication mechanism. It is the responsibility of the IS programmer to implement security of the ports and protocols. IT technicians for hardware and operating system support cannot fix poor programming implementations. For IT technicians, the only choice is to disable software ports, but that won’t happen if the programmer requires the port left open for the user’s application program to operate.
  5. Logic Bomb: The concept of the logic bomb is designed around a dormant program code that is waiting for a trigger event to cause detonation. Unlike a virus or worm, logic bombs do not travel. The logic bomb remains in one location, awaiting detonation. Logic bombs are difficult to detect. Some logic bombs are intentional, and others are the unintentional result of poor programming. Intentional logic bombs can be set to detonate after the perpetrator is gone.
  6. Time Bomb: Programmers can install time bombs in their program to disable the software upon a predetermined date. Time bombs might be used to kill programs on symbolic dates such as April Fools’ Day or the anniversary of a historic event. Free trial evaluation versions of software use the time bomb mechanism to disable their program after 30–60 days with the intention of forcing the user to purchase a license. Time bombs can be installed by the vendor to eliminate perpetual customer support issues by forcing upgrades after a few years. The software installation utility will no longer run or install, because the programmer’s time bomb setting disabled the program. Now when trying to run the software, a message directs the user to contact customer support to purchase an upgrade. Hackers use the same technique to disrupt operations.
  7. Trapdoor: Computer programmers frequently install a shortcut, also known as a trapdoor, for use during software testing. The trapdoor is a hidden access point within the computer software. A competent programmer will remove the majority of trapdoors before releasing a production version of the program. However, several vendors routinely leave a trapdoor in a computer program to facilitate user support. Commercial encryption software began to change in 1996 with the addition of “key recovery” features. This is basically a trap door feature to recover lost encryption keys and to allow the government to read encrypted files, if necessary.
  8. RootKit: One of the most threatening attacks is the secret compromise of the operating system kernel. Attackers embed a rootkit into the downloadable software. This malicious software will subvert security settings by linking itself directly into the kernel processes, system memory, address registers, and swap space. Rootkits operate in stealth to hide their presence. Hackers designed rootkits to never display their execution as running applications. The system resource monitor does not show any activity related to the presence of the rootkit. After the rootkit is installed, the hacker has control over the system. The computer is completely compromised. Automatic update features use the same techniques as malicious rootkits to allow the software vendor to bypass your security settings. Vendors know that using the term rootkit may alarm users. The software agent is just another name for a rootkit.
  9. Brute Force Attack: Brute force is the use of extreme effort to overcome an obstacle. For example, an amateur could discover the combination to a safe by dialling all of the 63,000 possible combinations. There is a mathematical likelihood that the actual combination will be determined after trying less than one-third of the possible combinations. Brute force attacks are frequently used against user login IDs and passwords. In one particular attack, all of the encrypted computer passwords are compared against a list of all the words encrypted from a language dictionary. After a match is identified, the attacker will use the unencrypted word that created the password match. This is why it is important to use passwords that do not appear in any language dictionary.
  10. Denial of Service (DoS): Attackers can disable a computer by rendering legitimate use impossible. The objective is to remotely shut down service by overloading the system or disable the user environment (shell) and thereby prevent the normal user from processing anything on the computer. Denial-of-service (DoS) attacks may look similar to the loss of service while your system is downloading and installing vendor updates. The message “please wait, installing update 6 of 41…” makes your system unavailable for an hour or more. That is exactly how DoS operates.
  11. Distributed Denial of Service (DDoS): The denial of service has evolved to use multiple systems for targeted attacks against another computer, to force its crash. This type of attack, distributed denial of service (DDoS), is also known as the reflector attack. Your own computer is being used by the hacker to launch remote attacks against someone else. Hackers start the attack from unrelated systems that the hacker has already compromised. The attacking computers and target are drawn into the battle—similar in concept to starting a vicious rumour between two strangers, which leads them to fight each other. The hackers sit safely out of the way while this battle wages.

There are numerous ways to protect computers from malware and to remove it. No one method is enough to ensure the computer is secure. The more layers of defence, the harder it is for hackers to misuse the computer.

  1. Install a Firewall:
    A firewall acts as a security guard. There are two types of firewalls: a software firewall and a hardware firewall. Each serves similar, but different, purposes. A firewall is the first step in providing security to the computer. It creates a barrier between the computer and any unauthorized program trying to come in through the Internet. If you are using a system at home, turn on the firewall permanently. It makes you aware of any unauthorized efforts to use your system.
  2. Install Antivirus Software:
    Antivirus software is another means to protect the computer. It helps to protect the computer from unauthorized code or software that creates a threat to the system, such as viruses, keyloggers, and trojans. These threats might slow down the processing speed of your computer, delete important files, and access personal information. Even if your system is currently virus free, you should install antivirus software to protect it from future attacks. Antivirus software plays a major role in real-time protection, and its ability to detect threats helps keep the computer and the information on it safe. Some advanced antivirus programs provide automatic updates, which further helps protect the PC from newly created viruses.
  3. Install Anti-Spyware Software:
    Spyware is a software program that collects personal information, or information about an organization, without approval and redirects it to a third-party website. Spyware is designed in such a way that it is not easy to remove. Anti-spyware software is solely dedicated to combating spyware. Similar to antivirus software, anti-spyware software offers real-time protection: it scans all incoming information and helps block a threat once detected.
  4. Use Complex and Secure Passwords:
    The first line of defence in maintaining system security is to have strong and complex passwords. Complex passwords are difficult for hackers to guess. Use a password that is at least 8 characters in length and includes a combination of numbers, letters that are both upper and lower case, and a special character (a simple complexity check is sketched after this list). Hackers use certain tools to break easy passwords in a few minutes. One study showed that a 6-character password made up of all lower-case letters can be broken in under 6 minutes.
  5. Check on the Security Settings of the Browser:
    Browsers have various security and privacy settings that you should review and set to the level you desire. Recent browsers also give you the ability to tell websites not to track your movements, increasing your privacy and security.
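
The complexity rules described in item 4 can be expressed as a simple check. The following Python sketch applies the length and character-class rules mentioned above; the exact rules should of course follow your own password policy.

import re

def is_strong_password(password: str, min_length: int = 8) -> bool:
    """Check length plus upper case, lower case, digit and special character."""
    checks = [
        len(password) >= min_length,
        re.search(r"[a-z]", password) is not None,
        re.search(r"[A-Z]", password) is not None,
        re.search(r"[0-9]", password) is not None,
        re.search(r"[^A-Za-z0-9]", password) is not None,
    ]
    return all(checks)

for candidate in ("sunshine", "Sunsh1ne!", "Tr0ub4dor&3"):
    print(candidate, "->", "ok" if is_strong_password(candidate) else "too weak")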

A.12.3 Backup

Objective:

To protect against loss of data.

A.12.3.1 Information backup

Control:

Backup copies of information, software and system images should be taken and tested regularly in accordance with an agreed backup policy.

Implementation Guidance

A backup policy should be established to define the organization’s requirements for backup of information, software and systems. The backup policy should define the retention and protection requirements. Adequate backup facilities should be provided to ensure that all essential information and software can be recovered following a disaster or media failure.

When designing a backup plan, the following items should be taken into consideration:

  1. accurate and complete records of the backup copies and documented restoration procedures should be produced;
  2. the extent (e.g. full or differential backup) and frequency of backups should reflect the business requirements of the organization, the security requirements of the information involved and the criticality of the information to the continued operation of the organization;
  3. the backups should be stored in a remote location, at a sufficient distance to escape any damage from a disaster at the main site;
  4. backup information should be given an appropriate level of physical and environmental protection consistent with the standards applied at the main site;
  5. backup media should be regularly tested to ensure that they can be relied upon for emergency use when necessary; this should be combined with a test of the restoration procedures and checked against the restoration time required (a simple restore-verification sketch follows below). Testing the ability to restore backed-up data should be performed onto dedicated test media, not by overwriting the original media, in case the backup or restoration process fails and causes irreparable data damage or loss;
  6. for institutions where confidentiality is of importance, backups should be protected by means of encryption.

Operational procedures should monitor the execution of backups and address failures of scheduled backups to ensure completeness of backups according to the backup policy. Backup arrangements for individual systems and services should be regularly tested to ensure that they meet the requirements of business continuity plans. In the case of critical systems and services, backup arrangements should cover all systems information, applications and data necessary to recover the complete system in the event of a disaster. The retention period for essential business information should be determined, taking into account any requirement for archive copies to be permanently retained.
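
As a small illustration of item 5 above (testing restores onto dedicated media rather than the original), here is a Python sketch that creates an archive and immediately verifies that it can be unpacked into a scratch directory; the source and backup paths are illustrative.

import tarfile
import tempfile
from datetime import datetime
from pathlib import Path

SOURCE = Path("data")        # illustrative directory to back up
BACKUP_DIR = Path("backups")

def create_backup() -> Path:
    BACKUP_DIR.mkdir(exist_ok=True)
    archive = BACKUP_DIR / f"backup-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    return archive

def verify_restore(archive: Path) -> bool:
    """Restore into a throwaway directory and confirm the archive unpacks cleanly."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        return any(Path(scratch).rglob("*"))

archive = create_backup()
print("restore test passed" if verify_restore(archive) else "restore test FAILED")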

One of the best ways to protect your computer and data from malware attacks is to make regular backups. You should always create at least two backups: one to keep offline and another to keep in the cloud. You can create a full backup using the System Image Backup tool to make a copy of your entire machine, including files, settings, apps, and OS installation. Alternatively, if you don’t have a lot of files, you could just make regular copies of your documents on a USB flash drive. If you’re a light user and files don’t change very often, you should at least be making a backup once a week. On the other hand, if you’re dealing with business files, you should be making backups at least once or twice a day.

Online backup

There are many ways to make backups online. OneDrive is a common example of online backup, but this solution should only be considered to protect your data against hardware failure, theft, or natural accidents. If your device gets infected with ransomware or another type of malware, OneDrive is likely to sync the changes, making the files stored in the cloud unusable. A better solution is subscribing to a third-party online backup service, such as CrashPlan or IDrive, that allows you to schedule or trigger backups on demand to prevent syncing infected or encrypted files. The only caveat is that most cloud storage services don’t offer bare-metal recovery. If that’s something you need, you could create a full backup as you would normally do and then upload the package to a paid cloud storage service, such as Amazon Drive, Google Drive, etc.

Have an offline backup

Your recovery plan must include a full backup of your system and data to keep offline using an external hard drive or a local network location (e.g. Network-Attached Storage (NAS)). This is the kind of backup that will ensure you can recover from any malware, hardware failure, errors, and natural accidents. Remember that there is no such thing as too many backups. If you can make a backup of the backup and store it offsite, do it. After creating a backup, always disconnect the external drive and store it in a safe location, or disconnect the network location where you store the backup, because if the drive stays online and accessible from your computer, malware can still infect those files.

A.12.4 Logging and monitoring

Objective:

To record events and generate evidence.

A.12.4.1 Event logging

Control:

Event logs recording user activities, exceptions, faults and information security events should be produced, kept and regularly reviewed.

Implementation Guidance

Event logs should include, when relevant:

  1. user IDs;
  2. system activities;
  3. dates, times and details of key events, e.g. log-on and log-off;
  4. device identity or location if possible and system identifier;
  5. records of successful and rejected system access attempts;
  6. records of successful and rejected data and other resource access attempts;
  7. changes to system configuration;
  8. use of privileges;
  9. use of system utilities and applications;
  10. files accessed and the kind of access;
  11. network addresses and protocols;
  12. alarms raised by the access control system;
  13. activation and de-activation of protection systems, such as anti-virus systems and intrusion detection systems;
  14. records of transactions executed by users in applications.

Event logging sets the foundation for automated monitoring systems which are capable of generating consolidated reports and alerts on system security.
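
As an illustration, here is a minimal Python sketch that emits structured event records carrying several of the fields listed above (timestamp, user ID, event type, outcome and network address); the field names are illustrative rather than a required schema.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
security_log = logging.getLogger("security.events")

def log_event(event_type: str, user_id: str, outcome: str, source_ip: str) -> None:
    # One JSON object per event keeps the log easy to parse and consolidate later.
    security_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. logon, logoff, privilege_use
        "user_id": user_id,
        "outcome": outcome,         # success / failure
        "source_ip": source_ip,
    }))

log_event("logon", user_id="jdoe", outcome="failure", source_ip="203.0.113.7")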

Other Information:

Event logs can contain sensitive data and personally identifiable information. Appropriate privacy protection measures should be taken. Where possible, system administrators should not have permission to erase or de-activate logs of their own activities.

A.12.4.2 Protection of log information

Control:

Logging facilities and log information should be protected against tampering and unauthorized access.

Implementation Guidance

Controls should aim to protect against unauthorized changes to log information and operational problems with the logging facility including:

  1. alterations to the message types that are recorded;
  2. log files being edited or deleted;
  3. the storage capacity of the log file media being exceeded, resulting in either the failure to record events or over-writing of past recorded events.

Some audit logs may be required to be archived as part of the record retention policy or because of requirements to collect and retain evidence.

Other Information:

System logs often contain a large volume of information, much of which is extraneous to information security monitoring. To help identify significant events for information security monitoring purposes, the copying of appropriate message types automatically to a second log, or the use of suitable system utilities or audit tools to perform file interrogation and rationalization, should be considered. System logs need to be protected, because if the data they contain can be modified or deleted, their existence may create a false sense of security. Real-time copying of logs to a system outside the control of a system administrator or operator can be used to safeguard logs.
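
A minimal Python sketch of the real-time copying idea: events are written both to a local file and to a remote collector over syslog, so a local administrator cannot silently alter the only copy. The collector address and port are placeholders, and a TLS-protected transport should be preferred where the collector supports it.

import logging
import logging.handlers

# Placeholder collector address (documentation range); replace with your log collector.
remote_handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(remote_handler)                          # remote, tamper-resistant copy
audit_log.addHandler(logging.FileHandler("local_audit.log"))  # local working copy

audit_log.info("admin jdoe changed firewall rule set FW-DMZ-01")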

A.12.4.3 Administrator and operator logs

Control

System administrator and system operator activities should be logged and the logs protected and regularly reviewed.

Implementation Guidance

Privileged user account holders may be able to manipulate the logs on information processing facilities under their direct control; therefore it is necessary to protect and review the logs to maintain accountability for the privileged users.

Other Information:

An intrusion detection system managed outside of the control of the system and network administrators can be used to monitor system and network administration activities for compliance.

A.12.4.4 Clock synchronisation

Control:

The clocks of all relevant information processing systems within an organization or security domain should be synchronised to a single reference time source.

Implementation Guidance

External and internal requirements for time representation, synchronisation and accuracy should be documented. Such requirements can be legal, regulatory, contractual requirements, standards compliance or requirements for internal monitoring. A standard reference time for use within the organization should be defined. The organization’s approach to obtaining a reference time from external sources and how to synchronise internal clocks reliably should be documented and implemented.

Other Information:

The correct setting of computer clocks is important to ensure the accuracy of audit logs, which may be required for investigations or as evidence in legal or disciplinary cases. Inaccurate audit logs may hinder such investigations and damage the credibility of such evidence. A clock linked to a radio time broadcast from a national atomic clock can be used as the master clock for logging systems. A network time protocol can be used to keep all of the servers in synchronisation with the master clock.
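
As a simple illustration, the following Python sketch checks a host's clock offset against a reference NTP source, assuming the third-party ntplib package is installed; the tolerance value is illustrative.

import ntplib  # third-party package: pip install ntplib

MAX_ACCEPTABLE_OFFSET_SECONDS = 1.0  # illustrative tolerance for reliable log correlation

def check_clock(reference: str = "pool.ntp.org") -> None:
    response = ntplib.NTPClient().request(reference, version=3)
    offset = abs(response.offset)  # local clock vs. reference, in seconds
    if offset > MAX_ACCEPTABLE_OFFSET_SECONDS:
        print(f"WARNING: clock differs from {reference} by {offset:.2f}s")
    else:
        print(f"Clock within tolerance ({offset:.3f}s offset)")

check_clock()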

System logs generated by servers and other network apparatus can produce data in vast quantities, and sooner or later, attempts at managing such information in an off-the-cuff fashion are no longer viable. Consequently, information systems managers are tasked with devising strategies for taming these volumes of log data to remain compliant with company IT policy and to gain holistic visibility across all IT systems deployed throughout the organization. Log management means defining what you need to log, how to log it, and how long to retain the information. This ultimately translates into requirements for hardware, software, and, of course, policies. There are five parameters to a complete log management process. These are as follows:

  1. Collection
    The organization needs to collect logs over encrypted channels. Its log management solution should ideally come equipped with multiple means to collect logs, but the most reliable means of doing so should be preferred. In general, organizations should use agent-based collection whenever possible, as this method is generally more secure and reliable than its agentless counterparts.
  2. Storage
    Once they have collected them, organizations need to preserve, compress, encrypt, store, and archive their logs. Companies can look for additional functionality in their log management solution such as the ability to specify where they can store their logs geographically. This type of feature can help meet their compliance requirements and ensure scalability.
  3. Search
    Organizations need to make sure they can find their logs once they’ve stored them, so they should index their records in such a way that they are discoverable via plaintext, REGEX, and API queries. A comprehensive log management solution should enable companies to optimize each log search with filters and classification tags. It should also allow them to view raw logs, conduct broad and detailed queries, and compare multiple queries at once.
  4. Correlation
    Organizations need to create rules that they can use to detect interesting events and perform automated actions. Of course, most events don’t occur on a single host in a single log. For that reason, companies should look for a log management solution that lets them create correlation rules according to the unique threats and requirements their environments face. They should also seek out a tool that allows them to import other data sources such as vulnerability scans and asset inventories. (A simple correlation-rule sketch follows this list.)
  5. Output
    Finally, companies need to be able to distribute log information to different users and groups using dashboards, reports and email. Their log management solution should facilitate that exchange of data with other systems and the security team.
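
To illustrate the correlation step (point 4 above), here is a minimal Python sketch of a rule that flags an account once a threshold of failed logons is seen inside a time window; the threshold and window are illustrative.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # illustrative correlation window
THRESHOLD = 5                   # failed attempts before raising an alert

recent_failures = defaultdict(deque)

def on_failed_logon(user: str, when: datetime) -> None:
    """Record a failed logon and alert when too many occur inside the window."""
    attempts = recent_failures[user]
    attempts.append(when)
    while attempts and when - attempts[0] > WINDOW:
        attempts.popleft()
    if len(attempts) >= THRESHOLD:
        print(f"ALERT: {len(attempts)} failed logons for '{user}' within {WINDOW}")

start = datetime(2024, 1, 1, 9, 0)
for i in range(6):
    on_failed_logon("jdoe", start + timedelta(seconds=30 * i))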

Effective logging allows us to reach back in time to identify events, interactions, and changes that may have relevance to the security of information resources. A lack of logs often means that we lose the ability to investigate events (e.g. anomalies, unauthorized access attempts, excessive resource use) and perform root cause analysis to determine causation. In the context of this control area, logs can be interpreted very broadly to include automated and handwritten logs of administrator and operator activities taken to ensure the integrity of operations in information processing facilities, such as data and network centres.

How do we protect the value of log information?

Effective logging strategies must also consider how log data can be protected against tampering, sabotage, or deletion that devalues the integrity of log information. This usually involves consideration of role-based access controls that partition the ability to read and modify log data based on business needs and position responsibilities. In addition, timestamp information is extremely critical when performing correlation analysis between log sources. One essential control needed to assist with this is ensuring that institutional systems all have their clocks synchronized to a common source (often achieved via an NTP server) so that timelining of events can be performed with high confidence.

What should we log?

The question of what types of events to log must take into consideration a number of factors, including relevant compliance obligations, institutional privacy policies, data storage costs, access control needs, and the ability to monitor and search large data sets in an appropriate time frame. When considering your overall logging strategy it can very often be helpful to “work backwards”. Rather than initially attempting to catalogue all event types, it can be useful to frame investigatory questions beginning with those issues that occur on a regular basis or have the potential to be associated with significant risk events (e.g. abuse/attacks on ERP systems). These questions can then lead to a focused review of the security event data that has the most relevance to these particular questions and issues. Ideally, event logs should include key information such as:

  • User IDs, System Activities; Dates, Times and Details of Key Events
  • Device identity or location, Records of Successful and Rejected System Access Attempts;
  • Records of Successful and Rejected Resource Access Attempts; Changes to System Configurations; Use of Privileges,
  • Use of System Utilities and Applications; Files Accessed and the Kind of Access; Network Addresses and Protocols;
  • Alarms raised by the access control system, Activation and De-activation of Protection systems, such as AV & IDS

A.12.5 Control of operational software

Objective:

To ensure the integrity of operational systems.

A.12.5.1 Installation of software on operational systems

Control:

Procedures should be implemented to control the installation of software on operational systems.

Implementation Guidance

The following guidelines should be considered to control changes in software on operational systems:

  1. the updating of the operational software, applications and program libraries should only be performed by trained administrators upon appropriate management authorization;
  2. operational systems should only hold approved executable code, and not development code or compilers;
  3. applications and operating system software should only be implemented after extensive and successful testing; the tests should cover usability, security, effects on other systems and user-friendliness, and should be carried out on separate systems; it should be ensured that all corresponding program source libraries have been updated;
  4. a configuration control system should be used to keep control of all implemented software as well as the system documentation;
  5. a rollback strategy should be in place before changes are implemented;
  6. an audit log should be maintained of all updates to operational program libraries;
  7. previous versions of application software should be retained as a contingency measure;
  8. old versions of software should be archived, together with all required information and parameters, procedures, configuration details and supporting software for as long as the data are retained in the archive.

Vendor supplied software used in operational systems should be maintained at a level supported by the supplier. Over time, software vendors will cease to support older versions of the software. The organization should consider the risks of relying on unsupported software. Any decision to upgrade to a new release should take into account the business requirements for the change and the security of the release, e.g. the Introduction of new information security functionality or the number and severity of information security problems affecting this version. Software patches should be applied when they can help to remove or reduce information security weaknesses.

Physical or logical access should only be given to suppliers for support purposes when necessary and with management approval. The supplier’s activities should be monitored. Computer software may rely on externally supplied software and modules, which should be monitored and controlled to avoid unauthorized changes, which could introduce security weaknesses.

Make sure to establish and maintain documented procedures to manage the installation of software on operational systems. Operational system software installations should only be performed by qualified, trained administrators. Updates to operating system software should utilize only approved and tested executable code. It is ideal to utilize a configuration control system and have a rollback strategy prior to any updates. Audit logs of updates and previous versions of updated software should be maintained. Third parties that require access to perform software updates should be monitored and access removed once updates are installed and tested.
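
As a small illustration of keeping operational software under configuration control, here is a Python sketch that compares the software found on a system against an approved baseline; the package names and versions are placeholder data.

# Approved baseline, e.g. exported from the configuration control system (placeholder data).
APPROVED_BASELINE = {
    "openssl": "3.0.13",
    "nginx": "1.24.0",
}

def audit_installed(installed: dict[str, str]) -> list[str]:
    """Return human-readable findings for anything that deviates from the baseline."""
    findings = []
    for name, version in installed.items():
        if name not in APPROVED_BASELINE:
            findings.append(f"unapproved package installed: {name} {version}")
        elif version != APPROVED_BASELINE[name]:
            findings.append(f"{name}: installed {version}, approved {APPROVED_BASELINE[name]}")
    for name in APPROVED_BASELINE.keys() - installed.keys():
        findings.append(f"approved package missing: {name}")
    return findings

print(audit_installed({"openssl": "3.0.13", "nginx": "1.25.3", "netcat": "1.10"}))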

A.12.6 Technical vulnerability management

Objective:

To prevent the exploitation of technical vulnerabilities.

A.12.6.1 Management of technical vulnerabilities

Control:

Information about technical vulnerabilities of information systems being used should be obtained in a timely fashion, the organization’s exposure to such vulnerabilities evaluated and appropriate measures taken to address the associated risk.

Implementation Guidance

A current and complete inventory of assets is a prerequisite for effective technical vulnerability management. Specific information needed to support technical vulnerability management includes the software vendor, version numbers, current state of deployment (e.g. what software is installed on what systems) and the person(s) within the organization responsible for the software.
Appropriate and timely action should be taken in response to the identification of potential technical vulnerabilities. The following guidance should be followed to establish an effective management process for technical vulnerabilities:

  1. the organization should define and establish the roles and responsibilities associated with technical vulnerability management, including vulnerability monitoring, vulnerability risk assessment, patching, asset tracking and any coordination responsibilities required;
  2. information resources that will be used to identify relevant technical vulnerabilities and to maintain awareness about them should be identified for software and other technology based on the asset inventory list: these information resources should be updated based on changes in the inventory or when other new or useful resources are found;
  3. a timeline should be defined to react to notifications of potentially relevant technical vulnerabilities;
  4. once a potential technical vulnerability has been identified, the organization should identify the associated risks and the actions to be taken; such action could involve patching of vulnerable systems or applying other controls;
  5. depending on how urgently a technical vulnerability needs to be addressed, the action taken should be carried out according to the controls related to change management or by following information security incident response procedures;
  6. if a patch is available from a legitimate source, the risks associated with installing the patch should be assessed. The risks posed by the vulnerability should be compared with the risk of installing the patch; patches should be tested and evaluated before they are installed to ensure they are effective and do not result in side effects that cannot be tolerated; if no patch is available, other controls should be considered, such as:
    1. turning off services or capabilities related to the vulnerability;
    2. adapting or adding access controls, e.g. firewalls, at network borders.
    3. increased monitoring to detect actual attacks;
    4. raising awareness of the vulnerability;
  7. an audit log should be kept for all procedures undertaken;
  8. the technical vulnerability management process should be regularly monitored and evaluated in order to ensure its effectiveness and efficiency;
  9. systems at high risk should be addressed first;
  10. an effective technical vulnerability management process should be aligned with the incident
    management activities, to communicate data on vulnerabilities to the incident response function and provide technical procedures to be carried out should an incident occur;
  11. define a procedure to address the situation where a vulnerability has been identified but there is no suitable countermeasure. In this situation, the organization should evaluate risks relating to the known vulnerability and define appropriate detective and corrective actions.

Other Information:

Technical vulnerability management can be viewed as a sub-function of change management and as such can take advantage of the change management processes and procedures. Vendors are often under significant pressure to release patches as soon as possible. Therefore, there is a possibility that a patch does not address the problem adequately and has negative side effects. Also, in some cases, uninstalling a patch cannot be easily achieved once the patch has been applied. If adequate testing of the patches is not possible, e.g. because of costs or lack of resources, a delay in patching can be considered to evaluate the associated risks, based on the experience reported by other users.

A.12.6.2 Restrictions on software installation

Control:

Rules governing the installation of software by users should be established and implemented.

Implementation Guidance

The organization should define and enforce a strict policy on which types of software users may install. The principle of least privilege should be applied. If granted certain privileges, users may have the ability to install software. The organization should identify what types of software installations are permitted (e.g. updates and security patches to existing software) and what types of installations are prohibited (e.g. software that is only for personal use and software whose pedigree with regard to being potentially malicious is unknown or suspect). These privileges should be granted having regard to the roles of the users concerned.

Other Information:

Uncontrolled installation of software on computing devices can lead to introducing vulnerabilities and then to information leakage, loss of integrity or other information security incidents, or to the violation of intellectual property rights.

A vulnerability is defined in the ISO 27002 standard as “A weakness of an asset or group of assets that can be exploited by one or more threats”. Vulnerability management is the process in which vulnerabilities in IT are identified and the risks of these vulnerabilities are evaluated. This evaluation leads to correcting the vulnerabilities and removing the risk or a formal risk acceptance by the management of an organization (e.g. in case the impact of an attack would be low or the cost of correction does not outweigh possible damages to the organization). The term vulnerability management is often confused with vulnerability scanning. Despite the fact both are related, there is an important difference between the two. Vulnerability scanning consists of using a computer program to identify vulnerabilities in networks, computer infrastructure or applications. Vulnerability management is the process surrounding vulnerability scanning, also taking into account other aspects such as risk acceptance, remediation etc.

Depending on the size and structure of the institution, the approach to vulnerability scanning might differ. Small institutions that have a good understanding of IT resources throughout the enterprise might centralize vulnerability scanning. Larger institutions are more likely to have some degree of decentralization, so vulnerability scanning might be the responsibility of individual units. Some institutions might have a blend of both centralized and decentralized vulnerability assessment. Regardless, before starting a vulnerability scanning program, it is important to have authority to conduct the scans and to understand the targets that will be scanned. Vulnerability scanning tools and methods are often somewhat tailored to varied types of information resources and vulnerability classes. The table below shows several important vulnerability classes and some relevant tools.

Common types of technical vulnerabilities and relevant assessment tools:

  • Application vulnerabilities: web application scanners (static and dynamic), web application firewalls
  • Network layer vulnerabilities: network vulnerability scanners, port scanners, traffic profilers
  • Host/system layer vulnerabilities: authenticated vulnerability scans, asset and patch management tools, host assessment and scoring tools

Common Challenges

  • “Scanning Can Cause Disruptions.” IT operations teams are quite reasonably very sensitive about how vulnerability scans are conducted and keen to understand any potential for operational disruptions. Often legacy systems and older equipment can have issues even with simple network port scans. To help with this issue, it can often be useful to build confidence in the scanning process by partnering with these teams to conduct risk evaluations before initiating or expanding a scanning program. It is also often important to discuss the “scan windows” when these vulnerability assessments will occur to ensure that they do not conflict with regular maintenance schedules.
  • “Drowning In Vulnerability Data and False Positives.” Technical vulnerability management practices can produce very large data sets, and the fact that a tool reports a vulnerability does not mean it is real: follow-up evaluations are frequently needed to validate findings. Reviewing every reported vulnerability is infeasible for many teams, so it is important to develop a vulnerability prioritization plan before initiating a large number of scans. These priority plans should be risk driven, so that teams spend their time on the most important vulnerabilities in terms of both likelihood of exploitation and impact (a minimal prioritization sketch follows this list).
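To make the prioritization idea concrete, the sketch below ranks hypothetical scan findings by a simple risk score that combines the CVSS base score with an assumed asset-criticality weighting. The field names, weights and sample findings are assumptions for illustration, not part of any standard or tool.

```python
from dataclasses import dataclass

# Assumed asset-criticality weights; a real programme would take these from the asset inventory.
CRITICALITY_WEIGHT = {"high": 3.0, "medium": 2.0, "low": 1.0}

@dataclass
class Finding:
    host: str
    title: str
    cvss: float          # CVSS base score reported by the scanner
    criticality: str     # asset criticality from the inventory: high / medium / low
    validated: bool      # set True after a follow-up check rules out a false positive

def risk_score(finding: Finding) -> float:
    """Simple likelihood-times-impact proxy: CVSS weighted by asset criticality."""
    return finding.cvss * CRITICALITY_WEIGHT.get(finding.criticality, 1.0)

findings = [
    Finding("web01.example.edu", "Outdated TLS configuration", 5.3, "high", True),
    Finding("lab07.example.edu", "Missing OS patch", 9.8, "low", False),
    Finding("erp01.example.edu", "SQL injection", 8.6, "high", True),
]

# Work validated findings first, highest risk score first.
for f in sorted(findings, key=lambda x: (not x.validated, -risk_score(x))):
    print(f"{risk_score(f):5.1f}  validated={f.validated}  {f.host}: {f.title}")
```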

A.12.7 Information systems audit considerations

Objective: To minimise the impact of audit activities on operational systems.

A.12.7.1 Information systems audit controls

Control:

Audit requirements and activities involving verification of operational systems should be carefully planned and agreed to minimise disruptions to business processes.

Implementation Guidance

The following guidelines should be observed:

  1. audit requirements for access to systems and data should be agreed with appropriate management;
  2. the scope of technical audit tests should be agreed and controlled;
  3. audit tests should be limited to read-only access to software and data;
  4. access other than read-only should only be allowed for isolated copies of system files, which should be erased when the audit is completed, or given appropriate protection if there is an obligation to keep such files under audit documentation requirements;
  5. requirements for special or additional processing should be identified and agreed;
  6. audit tests that could affect system availability should be run outside business hours;
  7. all access should be monitored and logged to produce a reference trail.

It is important to ensure that all IT controls and information security audits are planned events rather than reactive ‘on-the-spot’ challenges. Most universities undergo a series of audits each year, ranging from financial IT controls reviews to targeted assessments of critical systems. Audits that include testing activities can prove disruptive to campus users if unforeseen outages occur as a result of testing or assessments. By working with campus leadership, it should be possible to determine when audits will occur and obtain relevant information in advance about the specific IT controls that will be examined or tested. Develop an ‘audit plan’ for each audit that provides information relevant to each system and area to be assessed. These audit plans should take into consideration (a simple audit-plan sketch follows this list):

  • Asset Inventory with contact information for system administrators/owners;
  • Requirements for testing/maintenance windows;
  • Information about backups (if applicable) in case systems later need to be restored due to unplanned outages;
  • Checklists or other materials provided in advance by auditors, etc.
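The audit-plan items above can be captured as a small structured record per audit. The sketch below is one possible shape; the field names, systems and contacts are hypothetical examples, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AuditPlan:
    """Minimal audit-plan record covering the points listed above (illustrative only)."""
    audit_name: str
    systems_in_scope: list[str]
    system_owners: dict[str, str]          # system -> owner/administrator contact
    testing_window: str                    # agreed testing/maintenance window
    backup_verified: bool = False          # backups confirmed before any testing begins
    auditor_checklists: list[str] = field(default_factory=list)

plan = AuditPlan(
    audit_name="Annual financial IT controls review",
    systems_in_scope=["erp01.example.edu", "hr-db.example.edu"],
    system_owners={
        "erp01.example.edu": "finance-sysadmin@example.edu",
        "hr-db.example.edu": "hr-sysadmin@example.edu",
    },
    testing_window="Saturday 02:00-06:00",
    backup_verified=True,
    auditor_checklists=["Access control checklist v3"],
)
print(plan)
```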

If applicable, work with IT and other departments to provide audit preparation services so that everyone understands their role in the audit and how to respond to auditors’ questions, issues and concerns. Protecting sensitive information during audits is critical, and documents provided to auditors should, where possible, be recovered shortly before audits are completed. Audit activity that assesses an operational system should always be managed to minimise any impact on the system during required hours of operation, and any testing that could adversely affect the system should be conducted outside business hours.

Example of Information Security Operations Management Procedure

A. Operational Procedures and Responsibilities

1. Documented Operating Procedures

1.1 Operating procedures and responsibilities for information systems must be authorised, documented and maintained.
1.2 Information Owners and System Owners must ensure that Standard Operating Procedures (SOP) and standards are:

  1. documented;
  2. approved by the appropriate authority;
  3. consistent with University policies;
  4. reviewed and updated periodically;
  5. reviewed and updated when there are changes to equipment/systems or changes in business services and the supporting information systems operations; and
  6. reviewed and updated following a related security incident investigation.

1.3 The documentation must contain detailed instructions regarding:

  1. information processing and handling;
  2. system restart and recovery procedures;
  3. backup and recovery including on-site and off-site storage;
  4. exceptions handling, including a log of exceptions;
  5. output and media handling, including secure disposal or destruction;
  6. management of audit and system log information;
  7. change management including scheduled maintenance and interdependencies;
  8. computer room management and safety;
  9. Information Incident Management Process;
  10. Disaster Recovery;
  11. Business Continuity Plan; and
  12. contact information for operations, technical, emergency and business personnel.

2. Change Management

2.1 Changes to business processes and information systems that affect information security must be controlled.
2.2 All changes to the Organization’s ICT services and systems environment, including provisioning and de-provisioning of assets, promotion of code, configuration changes and changes to Standard Operating Procedures must be authorised by the IT Change Advisory Board (CAB).
2.3 The change management process must follow the guidelines, approvals and templates provided as per the Transition Process.
2.4 Changes must be controlled by:

  1. identifying and recording significant changes;
  2. assessing the potential impact, including that on security, of the changes;
  3. obtaining approval of changes from those responsible for the information system;
  4. planning and testing changes including the documentation of rollback procedures;
  5. communicating change details to relevant personnel, Users and stakeholders; and
  6. evaluating that planned changes were implemented as intended.

2.5 Information Owners and System Owners must plan for changes by:

  1. assessing the potential impact of the proposed change on security by conducting a security review and a Threat and Risk Assessment;
  2. identifying the impact on agreements with business partners and external parties including information sharing agreements, licensing and provision of services;
  3. preparing change implementation plans that include testing and contingency plans in the event of problems;
  4. obtaining approvals from affected Information Owners; and
  5. training operational staff as necessary.

2.6 Information Owners and System Owners must implement changes by:

  1. notifying affected internal parties, business partners and external parties;
  2. following the documented implementation plans;
  3. training users if necessary;
  4. documenting the process throughout the testing and implementation phases; and
  5. confirming that the changes have been performed and that no unintended changes took place (a minimal sketch of such a change record follows).
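As a minimal illustration of the controls in 2.4 to 2.6, the sketch below models a change record that cannot move to an implemented state without a recorded impact assessment, an approval and a documented rollback plan. The states, field names and example values are assumptions for illustration, not the CAB’s actual workflow or schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """Illustrative change record; field names are assumptions, not the CAB's schema."""
    change_id: str
    description: str
    impact_assessment: Optional[str] = None  # security review / Threat and Risk Assessment
    approved_by: Optional[str] = None        # responsible Information/System Owner
    rollback_plan: Optional[str] = None
    status: str = "recorded"

    def approve(self, approver: str) -> None:
        if not self.impact_assessment:
            raise ValueError("Impact (including security) must be assessed before approval.")
        self.approved_by = approver
        self.status = "approved"

    def implement(self) -> None:
        if self.status != "approved" or not self.rollback_plan:
            raise ValueError("Change needs approval and a documented rollback plan.")
        self.status = "implemented"

cr = ChangeRequest("CHG-0042", "Promote new payroll release to production")
cr.impact_assessment = "Low risk; security review completed"
cr.rollback_plan = "Redeploy previous release from the artefact store"
cr.approve("information.owner@example.edu")
cr.implement()
print(cr.status)  # -> implemented
```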

3. Capacity Management

3.1 The use of information system resources must be monitored and optimised with projections made of future capacity requirements.
3.2 Information Owners and System Owners are responsible for implementing capacity management processes by:

  1. documenting capacity requirements and capacity planning processes;
  2. including capacity requirements in service agreements; and
  3. monitoring and optimising information systems to detect impending capacity limits.

3.3 Information Owners and System Owners must project future capacity requirements based on:

  1. new business and information systems requirements;
  2. statistical or historical capacity requirements; and
  3. current and expected trends in information processing capabilities (e.g. the introduction of more efficient hardware or software).

3.4 Information Owners and System Owners must use trend information from the capacity management process to identify and remediate potential bottlenecks that present a threat to system security or services, as illustrated in the sketch below.
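One simple way to turn the historical data described in 3.3 into a forward projection is a linear trend fit, as in the sketch below. The utilisation figures and the alerting threshold are made-up values for illustration only.

```python
# Minimal linear-trend capacity projection (all figures are illustrative assumptions).
history = [52.0, 55.5, 58.0, 61.5, 64.0, 67.5]   # last six months of storage utilisation (%)
THRESHOLD = 85.0                                  # assumed capacity alerting threshold (%)

n = len(history)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(history) / n
numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
denominator = sum((x - mean_x) ** 2 for x in xs)
slope = numerator / denominator                   # growth in percentage points per month
intercept = mean_y - slope * mean_x

# Months from now until the fitted trend line crosses the threshold.
months_to_threshold = (THRESHOLD - intercept) / slope - (n - 1)
print(f"Growth is roughly {slope:.1f} points per month; "
      f"the {THRESHOLD:.0f}% threshold is reached in about {months_to_threshold:.1f} months.")
```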

4. Separation of Development, Testing and Production Environments

4.1 Development, testing and production environments must be separated to reduce the risks of unauthorised access or changes to the production environment.
4.2 Information Owners and System Owners must:

  1. separate production environments from test and development environments by using different servers, networks and where possible different domains;
  2. ensure that production servers do not host test or development services or applications;
  3. prevent the use of test and development identities as credentials for production systems;
  4. store source code in a secure location away from the production environment and restrict access to specified personnel;
  5. prevent access to compilers, editors and other tools from production systems;
  6. use approved change management processes for promoting software from development/test to production;
  7. prohibit the use of production data in development, test or training systems; and
  8. prohibit the use of sensitive information in development, test or training systems in accordance with the System Acquisition, Development and Maintenance Procedure

B. Protection from Malware

1. Controls Against Malicious Code

1.1 Detection, prevention and recovery controls – supported by user awareness procedures – must be implemented to protect against malware.
1.2 Information Owners and System Owners must protect University information systems from malicious code by:

  1. installing, updating and using software designed to scan, detect, isolate and delete malicious code;
  2. preventing unauthorised Users from disabling installed security controls;
  3. prohibiting the use of unauthorised software;
  4. checking files, email attachments and file downloads for malicious code before use;
  5. maintaining business continuity plans to recover from malicious code incidents;
  6. maintaining a critical incident management plan to identify and respond to malicious code incidents;
  7. maintaining a register of specific malicious code countermeasures (e.g. blocked websites, blocked file extensions, blocked network ports) including a description, rationale, approval authority and the date applied (see the sketch after this list); and
  8. developing user awareness programs for malicious code countermeasures.
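The countermeasure register in item 7 can drive simple automated checks. The sketch below screens attachment filenames against an assumed register of blocked extensions; the extensions shown are examples only, not the University’s actual register.

```python
from pathlib import PurePosixPath

# Assumed entries from the countermeasure register; the real register records a description,
# rationale, approval authority and date applied, as required above.
BLOCKED_EXTENSIONS = {".exe", ".js", ".scr", ".vbs", ".bat"}

def is_blocked(filename: str) -> bool:
    """Return True if the attachment's extension appears in the register."""
    return PurePosixPath(filename.lower()).suffix in BLOCKED_EXTENSIONS

for name in ["quarterly-report.pdf", "invoice.PDF.exe", "notes.txt"]:
    print(f"{name}: {'blocked' if is_blocked(name) else 'allowed'}")
```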

1.3 The IT Security staff are responsible for communicating technical advice and providing information and awareness activities regarding malicious code

C. Backup

1. Information Backup

1.1 Backup copies of information, software and system images must be made, secured, and be available for recovery.
1.2 Information Owners and System Owners must define and document backup and recovery processes that consider the confidentiality, integrity and availability requirements of information and information systems.
1.3 Backup and recovery processes must comply with:

  1. University business continuity plans (if applicable);
  2. policy, legislative, regulatory and other obligations; and
  3. records management requirements (refer Records Management Policy).

1.4 The documentation for backup and recovery must include:

  1. types of information to be backed up;
  2. schedules for the backup of information and information systems;
  3. backup media management;
  4. methods for performing, validating and labelling backups; and
  5. methods for validating the recovery of information and information systems.

1.5 Backup media and facilities must be appropriately secured based on a security review or Risk Assessment. Controls to be applied include:

  1. use of approved encryption;
  2. physical security;
  3. access controls;
  4. methods of transit to and from off-site locations;
  5. appropriate environmental conditions while in storage; and
  6. off-site locations at a sufficient distance to escape damage from an event at the main site (a backup validation sketch follows this section).
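A concrete way to support the validation requirements in 1.4 is to record a checksum manifest when a backup is taken and to re-check it before restoring, as sketched below. The paths are placeholders, and encryption is assumed to be handled separately by an approved tool; this is an illustration, not a complete backup solution.

```python
import hashlib
import tarfile
from pathlib import Path

SOURCE_DIR = Path("/srv/app-data")               # placeholder source directory
BACKUP_FILE = Path("/backups/app-data.tar.gz")   # placeholder backup destination

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def create_backup(source: Path, destination: Path) -> str:
    """Create a compressed archive and return its checksum for the backup manifest."""
    with tarfile.open(destination, "w:gz") as archive:
        archive.add(source, arcname=source.name)
    return sha256_of(destination)

def verify_backup(path: Path, expected: str) -> bool:
    """Re-compute the checksum before restore to detect corruption or tampering."""
    return sha256_of(path) == expected

if __name__ == "__main__":
    checksum = create_backup(SOURCE_DIR, BACKUP_FILE)
    print("backup checksum recorded:", checksum)
    print("verification passed:", verify_backup(BACKUP_FILE, checksum))
```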

D. Log Management

1. Event Logging

1.1 Event logs recording user activities, exceptions, faults and information security events must be produced, kept and regularly reviewed.
1.2 Information Owners must ensure that event logs are used to record user and system activities, exceptions and events (security and operational). The degree of detail to be logged must be based on the value and sensitivity of the information and the criticality of the system. The resources required to analyse the logs must also be considered. Where applicable, event logs must include:

  1. user ID;
  2. system activities;
  3. dates, times and details of key events (e.g. login, logoff);
  4. device identity and location;
  5. login method;
  6. records of successful and unsuccessful system access attempts;
  7. records of successful and unsuccessful data and other resource access attempts;
  8. changes to system configuration;
  9. use of elevated privileges;
  10. use of system utilities and applications;
  11. network addresses and protocols;
  12. alarms raised by the access control system;
  13. activation and de-activation of protection systems (e.g. anti-virus, intrusion detection); and
  14. records of transactions executed by users in applications.

1.3 Event logs may contain sensitive information and therefore must be safeguarded in accordance with the requirements of the section on the Protection of Log Information.
1.4 System administrators must not have the ability to modify, erase or de-activate logs of their own activities.
1.5 If event logging is disabled, the decision must be documented, including the name and position of the approver, the date and the rationale for de-activating the log.
1.6 Event logs may be configured to raise an alert when certain events or signatures are detected. Information Owners and System Owners must establish and document alarm response procedures to ensure alarms are responded to immediately and consistently. Normally, the response to an alarm will include:

  1. identification of the event;
  2. isolation of the event and affected assets;
  3. identification and isolation of the source;
  4. corrective action;
  5. forensic analysis;
  6. action to prevent recurrence; and
  7. securing event logs as evidence (a minimal event-logging sketch follows this list).
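To make the logging fields in 1.2 and the alarm handling in 1.6 concrete, the sketch below emits structured events with the Python standard logging module and calls a simple alert hook for certain event types. The field names, event types and alert trigger are illustrative assumptions, not a prescribed schema.

```python
import json
import logging

logger = logging.getLogger("security-events")
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Assumed event types that should trigger the documented alarm response procedure.
ALERT_EVENT_TYPES = {"login_failure", "privilege_use", "config_change"}

def raise_alarm(event: dict) -> None:
    """Placeholder hook for the documented alarm response procedure."""
    print("ALERT raised for event:", event["event_type"])

def log_event(event_type: str, user_id: str, device: str, outcome: str) -> None:
    event = {
        "event_type": event_type,   # e.g. login, logoff, privilege_use
        "user_id": user_id,
        "device": device,
        "outcome": outcome,         # success / failure
    }
    logger.info(json.dumps(event))  # structured record: who, what, where, result
    if event_type in ALERT_EVENT_TYPES:
        raise_alarm(event)

log_event("login_failure", "jdoe", "lib-pc-014", "failure")
log_event("logoff", "jdoe", "lib-pc-014", "success")
```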

2. Protection of Log Information

2.1 Information system logging facilities and log information must be protected against tampering and unauthorised access.
2.2 Information Owners must implement controls to protect logging facilities and log files from unauthorised modification, access or destruction. Controls must include:

  1. physical security safeguards;
  2. restrictions on the ability of administrators and operators to erase or de-activate logs;
  3. multifactor authentication for access to highly-restricted records;
  4. backup of audit logs to off-site facilities;
  5. automatic archiving of logs to remain within storage capacity; and
  6. scheduling the audit logs as part of the records management process.

2.3 Event logs must be retained in accordance with the records retention schedule for the information system.
2.4 System logs for University critical IT infrastructure (P1 list) must be retained for at least 30 days online and archived for 90 days.
2.5 Datacentre physical access logs must be made available for at least 90 days and CCTV records must be retained for at least 30 days.
2.6 Logs must be retained indefinitely if an investigation has commenced or it is known that evidence may be obtained from them (a tamper-evidence sketch follows).
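One common technique for making log records tamper-evident, consistent with 2.1 and 2.2, is to chain each entry to the previous one with a keyed hash. The sketch below is a minimal illustration; the in-code key and in-memory storage are simplifying assumptions, not a complete design, and in practice the key would live in a protected key store.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-protected-key"  # assumption: kept in a secured key store

def chain_entry(previous_mac: str, record: str) -> str:
    """Return an HMAC that links this record to the previous record's MAC."""
    return hmac.new(SECRET_KEY, (previous_mac + record).encode(), hashlib.sha256).hexdigest()

def append_records(records: list[str]) -> list[tuple[str, str]]:
    chained, mac = [], ""
    for record in records:
        mac = chain_entry(mac, record)
        chained.append((record, mac))
    return chained

def verify(chained: list[tuple[str, str]]) -> bool:
    """Re-compute the chain; any edited or deleted entry breaks every later MAC."""
    mac = ""
    for record, stored_mac in chained:
        mac = chain_entry(mac, record)
        if mac != stored_mac:
            return False
    return True

log = append_records(["admin jdoe deleted user account x123", "operator backup completed"])
print(verify(log))  # True until any record is altered or removed
```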

3. Administrator and Operator Logs

3.1 Activities of privileged users must be logged and the log subject to regular independent review.
3.2 The activities of system administrators, operators and other privileged users must be logged including:

  1. the time an event (e.g. success or failure) occurred;
  2. event details, including files accessed, modified or deleted, errors and corrective action taken;
  3. the account and the identity of the privileged user involved; and
  4. the system processes involved.

3.3 Logs of the activities of privileged users must be checked by the Information Owner or delegate. Checks must be conducted regularly and randomly. The frequency must be determined by the value and sensitivity of the information and criticality of the system. Following verification of the logs, they must be archived in accordance with the applicable records retention schedule.

4. Clock Synchronisation

4.1 Computer clocks must be synchronised for accurate recording.
4.2 System administrators must synchronise information system clocks to the local router gateway or a University-approved host.
4.3 System administrators must confirm system clock synchronisation following power outages and as part of incident analysis and event log review (see the sketch below).
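The check in 4.3 can be automated with a small NTP query. The sketch below assumes the third-party ntplib package is available and uses pool.ntp.org as a stand-in for the University-approved time source; both the package choice and the drift tolerance are assumptions for illustration.

```python
# Requires the third-party "ntplib" package (pip install ntplib) -- an assumption here.
import ntplib

APPROVED_TIME_SOURCE = "pool.ntp.org"   # placeholder for the University-approved host
MAX_DRIFT_SECONDS = 1.0                 # assumed tolerance before resynchronisation is required

def check_clock_drift() -> None:
    response = ntplib.NTPClient().request(APPROVED_TIME_SOURCE, version=3)
    drift = abs(response.offset)        # local clock offset in seconds
    if drift > MAX_DRIFT_SECONDS:
        print(f"Clock drift {drift:.3f}s exceeds tolerance; investigate and resynchronise.")
    else:
        print(f"Clock drift {drift:.3f}s within tolerance.")

if __name__ == "__main__":
    check_clock_drift()
```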

E. Control of Operational Software

1. Installation of Software on Production Systems

1.1 The installation of software on production information systems must be controlled.
1.2 To minimise the risk of damage to production systems Information Owners must implement the following procedures when installing software:

  1. updates of production systems must be planned, approved, assessed for impacts, tested and logged;
  2. a Change and Release Coordinator must be appointed to coordinate the install and update of software, applications and program libraries;
  3. operations personnel and end users must be notified of the changes, potential impacts and, if required, given additional training;
  4. production systems must not contain development code or compilers;
  5. user acceptance testing must be extensively and successfully conducted on a separate system prior to production implementation;
  6. a rollback strategy must be in place and previous versions of application software retained;
  7. old software versions must be archived with configuration details and system documentation; and
  8. updates to program libraries must be logged

F. Vulnerability Management

1. Management of Technical Vulnerabilities

1.1 Regular assessments must be conducted to evaluate information system vulnerabilities and the management of associated risk.
1.2 To support technical vulnerability management, Information Owners and System Owners must maintain an inventory of information assets in accordance with the Information Security Asset Management Procedure. Specific information must be recorded including:

  1. the software vendor;
  2. version numbers;
  3. the current state of deployment; and
  4. the person(s) responsible for the system.

1.3 Vulnerabilities which impact the information systems must be addressed in a timely manner to mitigate or minimise the impact on the operations. The IT Security Team shall ensure that vulnerability assessments (VA) are conducted for the organization’s ICT services and systems on a regular basis.
1.4 Vulnerability remediation efforts, including patch implementations, shall be coordinated and processed according to the Patch Management Procedure and Risk Management Framework.
1.5 All internal and external organization ICT systems and resources are covered in this Procedure:
(a) Internal Vulnerability Assessments

  1. Servers used for internal hosting and supporting Infrastructure
  2. Servers which will be accessed through the reverse proxy
  3. Research specific servers and applications
  4. Research devices and systems
  5. Desktops and workstations

(b) External Vulnerability Assessments

  1. Perimeter network devices exposed to the internet
  2. All external facing servers and services
  3. Network appliances, streaming devices and essential IP assets that are internet facing.
  4. Public facing research applications and devices
  5. Cloud-based services

2. Vulnerability Management Cycle

2.1 Asset Discovery

  1. An asset discovery scan will be executed monthly or quarterly on the segments to determine the live assets connected to the network.
  2. Network team will share the IP segments of all assets within the University, including datacentres and other virtual LANs, with the IT Security Team.
  3. IT Security Team will perform an asset discovery scan on the segments.
  4. Any assets added or removed from the segment will be detected in the asset discovery scan.
  5. IT Security Team will share with Network team the addition/removal of servers/devices for reconfirmation based on the discovery scan.
  6. The final list of IPs/IP segments will be scanned for vulnerabilities (a minimal discovery-sweep sketch follows).
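As a minimal illustration of the discovery step, the sketch below sweeps an IP segment with short TCP connection attempts on a couple of common ports and reports hosts that respond. The segment and probe ports are placeholders, and, as noted earlier, scans must be authorised before they are run.

```python
import ipaddress
import socket

SEGMENT = ipaddress.ip_network("10.10.20.0/28")   # placeholder segment from the Network team
PROBE_PORTS = [22, 443]                           # illustrative probe ports
TIMEOUT_SECONDS = 0.5

def host_responds(address: str) -> bool:
    """Treat a host as live if any probe port accepts a TCP connection."""
    for port in PROBE_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(TIMEOUT_SECONDS)
            if sock.connect_ex((address, port)) == 0:
                return True
    return False

live_assets = [str(ip) for ip in SEGMENT.hosts() if host_responds(str(ip))]
print("Live assets discovered:", live_assets)
```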

2.2 Scan – Remediate – Rescan

  1. IT Security team shall perform Vulnerability Analysis Scan on all University Critical Infrastructure Servers on a monthly basis and non-critical assets on at least a quarterly basis.
  2. IT Security Team will perform a risk assessment to map the risk, threat, likelihood and impact rating for the vulnerabilities noted.
  3. The University Risk Management Framework shall be followed to perform the risk assessment.
  4. IT Security Team shall inform the System Owners regarding the results of the scans and share the vulnerability reports with the Responsible Administrators for each system.
  5. All vulnerabilities identified in the VA Scan shall be remediated according to the Remediation Timeline.
  6. The System Owners shall inform IT Security Team regarding the completion of vulnerability remediation.
  7. Vulnerabilities that cannot be actioned within the defined timeframe will need an exception approved.

2.3 Ad-Hoc Scans
Ad-hoc scans include scans on any new infrastructure devices/servers/services prior to production deployment as per the following process.

  1. New service owners shall complete a Service Desk request ticket and submit to the IT Security Team for action.
  2. The IT Security Team shall perform Vulnerability Analysis Scan of specific systems (including servers) as per the environment and technology used for the system.
  3. The VA report shall be submitted to the Business Owner and the respective System Owner or team.
  4. IT Security Team lead will validate with respective System Owners on the closure of all the vulnerabilities and then perform a rescan.
  5. Vulnerabilities that cannot be actioned within the defined timeframe will need an exception approved, with risk acceptance and compensating controls implemented and documented.
  6. Assets/services/devices can be released to production only after final sign-off by the IT Security Team.

3. Classification of Vulnerabilities

3.1 Vulnerabilities are classified based on their impact in a given environment, on data/information, or on the University’s reputation, as follows:

(The vendor rating shown in brackets is the corresponding Red Hat, Microsoft and Adobe severity label.)

  • Critical (vendor rating: Critical; typical CVSS score: 10): A vulnerability whose exploitation could allow code execution or complete system compromise without user interaction. These scenarios include self-propagating malware or unavoidable common use scenarios where code execution occurs without warnings or prompts. This could include browsing to a web page, opening an email, or no action at all.
  • High (vendor rating: Important; typical CVSS score: 7.0 – 9.9): A vulnerability whose exploitation could result in compromise of the confidentiality, integrity, or availability of user data, or of the integrity or availability of processing resources. This includes common use scenarios where a system is compromised with warnings or prompts, regardless of their provenance, quality, or usability. Sequences of user actions that do not generate prompts or warnings are also covered.
  • Medium (vendor rating: Moderate; typical CVSS score: 4.0 – 6.9): The impact of the vulnerability is mitigated to a significant degree by factors such as authentication requirements or applicability only to non-default configurations. The vulnerability is normally difficult to exploit.
  • Low (vendor rating: Low; typical CVSS score: < 4.0): This classification applies to all other issues that have a security impact. These are vulnerabilities that are believed to require unlikely circumstances to be exploited, or where a successful exploit would have minimal consequences.
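Because the classification above maps directly onto CVSS score ranges, it can be applied automatically to scanner output. The sketch below simply encodes the bands listed above; it is an illustration, not an official classification tool.

```python
def classify(cvss_score: float) -> str:
    """Map a CVSS base score to the University rating bands listed above."""
    if cvss_score >= 10.0:
        return "Critical"
    if cvss_score >= 7.0:
        return "High"
    if cvss_score >= 4.0:
        return "Medium"
    return "Low"

for score in [10.0, 8.1, 5.0, 2.2]:
    print(score, "->", classify(score))
```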

4. Remediation Timeline and Risk Acceptance

4.1 All vulnerabilities identified in a VA Scan shall be addressed within the timelines described below. If a particular vulnerability cannot be remediated within this timeframe, the risk of data loss or attack on the device must be formally documented and accepted by the authority listed below. Remediation times and risk acceptance for identified vulnerabilities shall be as follows (a lookup sketch follows this list):

  • Critical: remediate within 1 week for both external facing and internal devices; risk acceptance by the CIO or Risk Management Office.
  • High: remediate within 2 weeks for both external facing and internal devices; risk acceptance by the CIO.
  • Medium: remediate within 3 weeks for external facing devices, or by the next maintenance window for internal devices; risk acceptance by the Information Owner.
  • Low: remediate by the next maintenance window for both external facing and internal devices; risk acceptance by the Information Owner.
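The remediation timelines and risk-acceptance authorities above can likewise be expressed as a small lookup, as sketched below. “Next maintenance window” is kept as a label because the actual date depends on the maintenance schedule; the function and variable names are illustrative only.

```python
# severity -> (external-facing deadline, internal deadline, risk acceptance authority)
REMEDIATION_RULES = {
    "Critical": ("1 week", "1 week", "CIO or Risk Management Office"),
    "High":     ("2 weeks", "2 weeks", "CIO"),
    "Medium":   ("3 weeks", "next maintenance window", "Information Owner"),
    "Low":      ("next maintenance window", "next maintenance window", "Information Owner"),
}

def remediation_rule(severity: str, external_facing: bool) -> tuple[str, str]:
    """Return the applicable deadline and risk-acceptance authority for a finding."""
    external, internal, authority = REMEDIATION_RULES[severity]
    return (external if external_facing else internal, authority)

deadline, authority = remediation_rule("Medium", external_facing=False)
print(f"Remediate by: {deadline}; risk acceptance (if needed): {authority}")
```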

5. Third Party Scans

5.1 A third party must be engaged annually to perform vulnerability assessment & penetration testing covering all internet facing organization ICT services and systems and critical internal non-internet facing ICT services and systems.

6. Vulnerability Management Roles and Responsibilities

6.1 IT Security Team

  1. Perform asset discovery and execute the Vulnerability Management Process.
  2. Approve the Vulnerability Assessment Schedule.
  3. Oversee vulnerability remediation.
  4. Drive vulnerability program maturity through metrics development.
  5. Monitor security sources for vulnerability announcements and emerging threats that correspond to the system inventory.

6.2 System Owners

  1. implementing remediation actions defined in response to detected vulnerabilities;
  2. testing and evaluating options to mitigate or minimise the impact of vulnerabilities;
  3. applying corrective measures to address the vulnerabilities; and
  4. reporting to the IT Security Team on progress in responding to vulnerabilities.

6.3 Depending on how urgently a technical vulnerability needs to be addressed, the action taken should be carried out according to the change management controls or by following the Information Security Incident Management Guidelines.
6.4 Responsibilities for vulnerability response must be included in service agreements with suppliers.

7. Restrictions on Software Installation

7.1 Rules governing the installation of software by users must be established and implemented.
7.2 Users are not allowed to install software on University devices unless specifically authorised by a System Owner or a system administrator. System Owners are responsible for the installation of software, updates and patches.

G. Information Security Audit Considerations

1. Information Systems Audit Controls

1.1 Audit requirements and activities involving checks on production systems must be planned and approved to minimise disruption to business processes.
1.2 Prior to commencing compliance checking activities such as audits or security reviews of production systems, the CIO and the Information Owner must define, document and approve the activities. Among the items upon which they must agree are:

  1. the audit requirements and scope of the checks;
  2. audit personnel must be independent of the activities being audited;
  3. the checks must be limited to read-only access to software and data, except for isolated copies of system files, which must be erased or given appropriate protection if required when the audit is complete;
  4. the resources performing the checks must be explicitly identified;
  5. existing security metrics will be used where possible;
  6. all access must be monitored and logged and all procedures, requirements and responsibilities must be documented;
  7. audit tests that could affect system availability must be run outside business hours; and
  8. appropriate personnel must be notified in advance in order to be able to respond to any incidents resulting from the audit.

………………………………………..End of Example ……………………………….
