The last of the three pillars of information security is availability. Availability simply means that data is available when the system says it will be. In some cases that means always available and on demand. In other cases it may be within a few minutes, and in others it may be days. What matters is that the expectation of when the data will be there is communicated to the consumer, and that the system meets or exceeds that expectation.
Availability has to be managed at the network, server, and application layers. At the network layer, this means ensuring the network can handle the expected number of requests. From a server perspective, it means that processing power, memory, and disk space are configured to keep the software running at an optimal level. The application itself can also improve its own availability.
Implementing the right protections at the right places effectively implements a fail-fast design technique. The fail-fast methodology limits the amount of processing a system has to do before rejecting a bad request. A network protection prevents the request from ever reaching the server; a server protection shields the application.
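The fail-fast idea can be sketched in a few lines. This is a hypothetical request handler (the action names and size limit are assumptions for illustration): the cheapest checks run first, so an invalid request is rejected with minimal processing before any expensive work begins.

```python
MAX_BODY_BYTES = 10_000          # assumed limit for illustration
ALLOWED_ACTIONS = {"enroll", "drop", "view_grades"}

def handle_request(action: str, body: bytes) -> str:
    # Fail fast: run the cheapest checks first and reject immediately,
    # so bad requests never reach the expensive processing below.
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action!r}")
    if len(body) > MAX_BODY_BYTES:
        raise ValueError("request body too large")
    # Only now do the (comparatively) expensive processing.
    return f"processed {action} ({len(body)} bytes)"
```

The ordering is the point: each guard protects everything after it, which is the same relationship the network has to the server and the server has to the application.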
The network helps ensure availability through the use of firewalls, load balancing across servers, and the sizing of the network itself. How much load is on a network affects how that network performs. Distributed denial-of-service attacks intentionally flood both the network and the server with so many requests that the system is unable to respond. From our educational software's perspective, network connectivity can be designed around which components are allowed to talk to which. For example, if it is known that only certain servers should talk to the database, then the network can prevent other servers from doing so.
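The "only certain servers may talk to the database" rule is usually enforced by a firewall or security group, but the logic it encodes is simple. A minimal sketch, assuming a made-up application-server subnet:

```python
import ipaddress

# Hypothetical sketch: only hosts in the application-server subnet may
# open connections to the database tier. The subnet is an assumption.
APP_SERVER_SUBNET = ipaddress.ip_network("10.0.2.0/24")

def may_reach_database(source_ip: str) -> bool:
    # Mirrors an allowlist firewall rule: permit only the app-server subnet.
    return ipaddress.ip_address(source_ip) in APP_SERVER_SUBNET
```

A rule like this belongs in the network layer itself, not in application code; the sketch only shows the decision the network is making on the application's behalf.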
The processes running on a server impact its performance. It is extremely important to understand which processes are needed to handle requests effectively and efficiently. If a server will only be used as a web server, it is reasonable to ensure no other services are running that do not support that role. This allows the web server to leverage system resources in the best way possible and minimizes the risk of other processes preventing efficient use of the server. The server should also be configured so that only those who need access to it have it.
Lastly, the application has many ways to help ensure it is available when needed. The first is access controls, followed by input validation and controls on expected usage.
Access controls are preventive controls that enable applications to manage resource consumption. By not allowing a user or system to access a part of the application, the resources that would have been consumed are freed for other work. This isn't just about the user interface or network controls; it also applies to file systems and databases.
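As a preventive control, an access check does its availability work by running before anything else. A minimal sketch, with made-up roles and operations for our educational system:

```python
# Hypothetical role-to-permission mapping; the names are assumptions.
PERMISSIONS = {
    "student": {"view_courses"},
    "registrar": {"view_courses", "edit_courses"},
}

def perform(role: str, operation: str) -> str:
    # The access check happens first, so a denied request consumes
    # almost no resources beyond this set lookup.
    if operation not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role!r} may not {operation!r}")
    return f"{operation} done"
```

The same pattern applies at the file-system and database layers: the earlier the denial, the less work the system does for requests it was never going to serve.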
Input validation is the first line of defense for an application. While there is some nuance to how much defense it provides, there is no doubt it provides some. The first strategy is to minimize the surface area by limiting the amount and types of input allowed. Anywhere hard rules can be defined and validated, they should be. And while trust boundaries will require validation to be repeated on each side, performing the validation where appropriate is important. For instance, in our educational software, the browser can validate the number of courses a student may sign up for. Validating on the client side helps with usability and potentially resource management on the client. This check, however, also needs to be performed when the user attempts to submit the data to the server.
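The server-side half of that check might look like the following sketch. The course limit is an assumption for illustration; the point is that the server validates again even though the browser already did, because the client sits outside the server's trust boundary.

```python
MAX_COURSES_PER_TERM = 6   # assumed limit for illustration

def validate_enrollment(course_ids: list) -> list:
    # Repeated on the server regardless of any client-side check,
    # since requests can bypass the browser entirely.
    if not course_ids:
        raise ValueError("at least one course is required")
    if len(course_ids) > MAX_COURSES_PER_TERM:
        raise ValueError(
            f"cannot enroll in more than {MAX_COURSES_PER_TERM} courses"
        )
    return course_ids
```

Rejecting the request here also protects availability: the enrollment logic, and the database behind it, never runs for input that was going to fail anyway.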
Finally, there is managing resource consumption. It is important that software applications understand their intended usage contexts. One of the more popular examples is allowing users to upload files. Typically, uploads are restricted by the type of file, the size of the file, and possibly how many times a file can be uploaded. Imagine a scenario where a user of our educational system can hold multiple simultaneous logins and upload gigabytes of images for a project without checks and balances. This could easily lead to a denial of service, not to mention a possibly significant compute or storage cost.
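Those three restrictions can be sketched together. All of the limits and file types below are assumptions for illustration, not recommendations:

```python
# Assumed limits for illustration only.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}
MAX_FILE_BYTES = 5 * 1024 * 1024      # 5 MB per file
MAX_FILES_PER_USER = 20               # cap on uploads per user

_upload_counts = {}                   # in-memory tally of uploads per user

def accept_upload(user: str, filename: str, size_bytes: int) -> bool:
    # Check type, size, and per-user count before accepting any bytes,
    # so one user cannot exhaust storage or compute.
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False
    if size_bytes > MAX_FILE_BYTES:
        return False
    if _upload_counts.get(user, 0) >= MAX_FILES_PER_USER:
        return False
    _upload_counts[user] = _upload_counts.get(user, 0) + 1
    return True
```

In a real system the per-user tally would live in shared storage rather than process memory, and the checks would run before the upload is streamed; the sketch only shows which questions get asked.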
As with confidentiality and integrity, ensuring availability is maintained is hard. There are a lot of moving parts. That said, it doesn't need to be difficult. Fundamentally, it starts with understanding how the components will be used, who will be using them, and when they will be needed. By asking those questions, the product team can architect with certain strategies and put appropriate controls in place to meet those goals.