Information security refers to the confidentiality, integrity, and availability of information. Within the software security space, it refers to the confidentiality, integrity, and availability of a specific software component or application.
Confidentiality describes who can access information and how it can be used. Users provide information to the system, and a set of rules defines who can access that data. Even in a private message board application, the idea of a draft implies some level of confidentiality to the user. In its simplest form: unless a user says the information they provide can be used, it should not be used. Honoring the confidentiality of data increases the level of trust in the system and, by extension, the company.
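As a minimal sketch of this idea, the hypothetical can_read_draft check below enforces a single confidentiality rule: only the author of a draft may read it. The Draft type and the rule itself are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    author_id: int
    body: str

def can_read_draft(user_id: int, draft: Draft) -> bool:
    # Confidentiality rule: only the author may see an unpublished draft,
    # even on an otherwise-shared message board.
    return user_id == draft.author_id

draft = Draft(author_id=42, body="unfinished post")
assert can_read_draft(42, draft)      # the author can read their draft
assert not can_read_draft(7, draft)   # everyone else is denied
```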
Integrity is about the accuracy of the data at any given point in time. Nearly all systems have a period of time when the data will be inaccurate. An example is when a transfer, withdrawal, or deposit is made at a bank. The money is not moved immediately; there is a process that occurs behind the scenes. The shorter that time period, the better. This is why systems that rely on “eventual consistency” have become popular. The challenge arises when the delays grow long or the data is inaccurate frequently and by large amounts. Imagine if Amazon regularly accepted payments for orders only to find out that the orders could not be fulfilled due to stock issues.
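The toy account below sketches that window of inaccuracy, assuming a simplified model where deposits are acknowledged immediately but only applied later by a separate settle step; the class and method names are hypothetical.

```python
class Account:
    """A toy bank account where deposits settle asynchronously."""

    def __init__(self, balance: float) -> None:
        self.settled_balance = balance
        self.pending: list[float] = []  # accepted but not yet applied

    def deposit(self, amount: float) -> None:
        # The deposit is acknowledged immediately, but the balance is not
        # updated yet: this is the window where the data is inaccurate.
        self.pending.append(amount)

    def settle(self) -> None:
        # The process that occurs behind the scenes; the shorter the gap
        # between deposit() and settle(), the better.
        while self.pending:
            self.settled_balance += self.pending.pop()

acct = Account(100.0)
acct.deposit(50.0)
print(acct.settled_balance)  # 100.0 -- the deposit is not yet reflected
acct.settle()
print(acct.settled_balance)  # 150.0 -- eventually consistent
```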
Availability refers to the ability of the system to perform when it is required to. It is important to note that this doesn’t necessarily mean the system is up 100% of the time; it might not need to be. As long as the work expected by the user of the system can be done when it is expected to be done, the system is available.
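As a worked example of why 100% uptime isn’t always the target, the snippet below computes how much downtime a given availability percentage actually allows per year; the function name is illustrative.

```python
def allowed_downtime_minutes_per_year(availability_pct: float) -> float:
    # Minutes in a (non-leap) year, times the fraction of time the
    # system is allowed to be unavailable.
    return 365 * 24 * 60 * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes_per_year(pct)
    print(f"{pct}% availability allows ~{minutes:.0f} minutes of downtime per year")
```

Even “three nines” (99.9%) leaves nearly nine hours of acceptable downtime a year, which may be perfectly fine if the work users expect can still be done when they expect to do it.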
The security of an application can be weakened in two different ways: through architecture flaws and through software bugs. Architecture flaws tend to show themselves as pervasive problems throughout the software product, while a software bug tends to be the result of a localized coding error.
An example of an architecture flaw is using a weak strategy for protecting customer data in a database: improper configuration of the database, failing to use the appropriate data protection strategy (encryption, hashing, obfuscation), or some other architecture decision. An example of a software bug is one that lets a user buy a product for free because some input validation was missed, a boundary check was not implemented correctly, or something similar.
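The hypothetical checkout function below shows how a missed boundary check becomes exactly this kind of bug: a negative quantity turns an order total into a credit. The names and price are invented for illustration.

```python
PRICE = 25.00

def order_total_buggy(quantity: int) -> float:
    # Missing boundary check: a quantity of -1 produces a negative total,
    # which a naive checkout flow might apply as a credit -- a free product.
    return quantity * PRICE

def order_total_fixed(quantity: int) -> float:
    # Validate the input at the boundary before using it.
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return quantity * PRICE

print(order_total_buggy(-1))  # -25.0: the software bug in action
print(order_total_fixed(2))   # 50.0
```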
Software security isn’t a one-time activity; it occurs continuously within the software development lifecycle (SDL), and the testing strategies will be repeated and updated as new security findings emerge. The different testing strategies may be performed at different points in the SDL, and most of these tests can be performed manually or be automated. We will dive into the types of tests when we talk about the current processes.
Information security has definitions for various aspects of these findings. The finding itself is a vulnerability: a weakness in the system which can lead to a negative impact on the confidentiality, integrity, or availability of the application. The system or person who tries to take advantage of the vulnerability is called a threat. When the threat actually uses the weakness against the system, it is called an exploit. The odds that an exploit is going to occur are called the likelihood. The amount of damage likely to be caused by an exploited vulnerability is called the risk.
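A common simplification, not something this chapter prescribes, is to score risk as the product of likelihood and impact; the sketch below uses hypothetical 1-to-5 scales for both.

```python
def risk_score(likelihood: int, impact: int) -> int:
    # Risk grows with both how probable an exploit is (likelihood)
    # and how much damage it would cause (impact). With 1-5 scales,
    # scores range from 1 (ignore) to 25 (drop everything).
    return likelihood * impact

# A hard-to-reach vulnerability with severe consequences...
print(risk_score(likelihood=1, impact=5))  # 5
# ...versus an easily exploited one with the same consequences.
print(risk_score(likelihood=5, impact=5))  # 25
```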
A team has four options for handling the risk identified with a vulnerability. The team can remediate the vulnerability (“fix the issue”), mitigate the vulnerability (“put in a workaround that adds protection”), transfer the risk (“make someone else accountable for the consequences”), or accept the risk (“what will be will be”). The actions taken on the risk are called controls, and they can be preventative, detective, or corrective.
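One way to make these options concrete is to record each decision as data; the enums and RiskDecision record below are a hypothetical bookkeeping sketch, not a standard taxonomy or API.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    REMEDIATE = "fix the issue"
    MITIGATE = "put in a workaround that adds protection"
    TRANSFER = "make someone else accountable"
    ACCEPT = "what will be will be"

class ControlType(Enum):
    PREVENTATIVE = "stops the exploit from happening"
    DETECTIVE = "notices the exploit when it happens"
    CORRECTIVE = "repairs the damage afterward"

@dataclass
class RiskDecision:
    vulnerability: str
    treatment: Treatment
    control: ControlType | None  # an accepted risk may carry no control

decision = RiskDecision(
    vulnerability="SQL injection in the search endpoint",
    treatment=Treatment.MITIGATE,
    control=ControlType.PREVENTATIVE,  # e.g., a web application firewall rule
)
print(decision.treatment.value)  # put in a workaround that adds protection
```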
Software security does not end when the software is deployed to production. Whether the software is delivered as software as a service or run on a customer’s machine, there are responsibilities that must be carried out: logging and monitoring of the application, disaster recovery, and incident response. While the people performing these actions may or may not be part of the organization that built the software, they play an integral part in the security of the software.
Logging and monitoring refer to the ability to understand and evaluate the health of the software while it is live. The software should send relevant information to the logs so that administrators can trace any potential issues, while not disclosing customer-specific information. System utilization should also be tracked in order to identify any unusual spikes in compute, memory, network, or disk usage.
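The sketch below shows one way to log a traceable event without disclosing customer-specific information, by masking the identifier before it reaches the log; mask_email and the "payments" logger are hypothetical names.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")

def mask_email(email: str) -> str:
    # Keep just enough to trace an issue without disclosing the customer.
    name, _, domain = email.partition("@")
    return f"{name[:1]}***@{domain}"

def process_payment(email: str, amount: float) -> None:
    # Log the event with the masked identifier -- never the raw value.
    log.info("payment started: user=%s amount=%.2f", mask_email(email), amount)

process_payment("alice@example.com", 19.99)
# INFO:payments:payment started: user=a***@example.com amount=19.99
```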
Disaster recovery is the ability of a system to be restored to a usable state after coming to an unexpected end. In some cases, this could be an automatic restart. In other cases it may require a set of steps to switch to different servers located in different geographical regions. It all depends on what caused the system to stop functioning and how segmented the system is.
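For the automatic-restart case, a minimal supervisor might look like the sketch below, which reruns a failed command with backoff; the command and restart limit are placeholders, and regional failover would of course require far more than this.

```python
import subprocess
import time

MAX_RESTARTS = 3

def supervise(command: list[str]) -> None:
    # Simplest form of disaster recovery: if the process comes to an
    # unexpected end (non-zero exit), restart it, up to a limit.
    for attempt in range(1, MAX_RESTARTS + 1):
        result = subprocess.run(command)
        if result.returncode == 0:
            return  # clean exit; nothing to recover from
        print(f"process failed (attempt {attempt}); restarting...")
        time.sleep(2 ** attempt)  # back off before trying again

# Replace with the command that starts your service.
supervise(["python", "-c", "print('service ran')"])
```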
Incident response deals with a vulnerability being exploited. Once a vulnerability has been exploited, there is a process that must be followed (and it will almost certainly include lawyers, for the organization’s own protection). This process includes bringing in people from the necessary teams, gathering as much information as possible about the impact of the exploit, recovering from the exploit, and communicating with customers.