Testing Non Functional Requirements Information Technology Essay

Published: November 30, 2015 Words: 3202

Non-functional requirements form a crucial part of a software system. Their testing is usually treated as a non-vital part of the testing cycle, and little attention is paid to it. Different types of non-functional testing exist that, if enforced, can help ensure the success of a software system. Different tools are also available to perform non-functional testing.

Introduction

Characteristics such as performance, quality and security, among others, are vital to the success of software systems. In the information age we live in, software systems are becoming a commodity that any business requires in order to function. When dealing with software, it is not only the functionality that plays a crucial role in the success of a software system, but also the non-functional requirements of the system. To ensure that a system is of the best quality, testing of the non-functional requirements is a necessity.

Defining non-functional requirements

According to Kotonya and Sommerville (1998), non-functional requirements are "requirements which are not specifically concerned with the functionality of a system". Antón (1997) says non-functional requirements describe the non-behavioural aspects of a system, capturing the properties and constraints under which a system must operate.

If we take Antón, Kotonya and Sommerville's views into consideration, we can conclude that non-functional requirements are requirements that contribute to the success of a software system in ways that the functional requirements cannot.

Non-functional requirements can be divided into two main categories, namely: execution qualities and evolution qualities. Execution qualities are all those that are visible at run-time of a software system, like usability and security. Evolution qualities are embodied in the static structure of the software system like testability, maintainability, extensibility and scalability. (Wiegers 2003)

Execution qualities

The first execution quality, usability, refers to the extent to which the product can be used for a specific purpose by the target audience. Usability of a software system is based purely on the target audience. Thus, before a system can be developed, a series of questions has to be asked to determine the appropriate complexity of the user interface. Questions that need to be asked include: Who is the target audience? What skills does the average user have? How easily can the user learn the system?

The second execution quality, security, refers to the protection against criminal activity, damage, loss and danger involving data. Security consists of six basic concepts: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

Confidentiality is a security measure which protects against the release of information to people other than the intended receiver.

Integrity is the degree to which the receiver can verify that the information provided is accurate and has not been tampered with.

Authentication is a type of security testing in which different combinations of usernames and passwords are entered to check whether only lawful users are able to gain access.

Availability is the assurance that information and communication services will be ready for use when needed.

Authorization is the process of determining that a requester is allowed to obtain a service or execute an operation.

Non-repudiation is a measure intended to prevent a party from later denying that an action or communication took place.

Evolution qualities

The first evolution quality, testability, can be defined as the property that measures how easily a piece of functionality can be tested. A testable artefact ensures complete implementation of the test scripts. Assuming that good test coverage is applied, most of the defects will be uncovered and fixed before the product is released.

Testing a requirement or a piece of functionality depends on observability and controllability. Observability refers to the ability of a user or tester to view the internal and external components of the software. Controllability refers to how easily a user or tester can create complex test cases to exercise the software under complex circumstances.

Creating testable software usually depends on the methodology used to create the software. In a traditional waterfall model, testability can be incorporated into various stages, and performing some of these steps can lead to more advanced ways of addressing this non-functional requirement. During the Software Specification phase of the waterfall model, the testing team should be explicitly cross-examined on their understanding of the requirements and how the requirements map into the various functionalities. In the Detailed Design phase, inputs and expected outputs should be clearly stated. In the Coding phase, code and algorithms should be subjected to thorough unit testing. Test harnesses can also be generated so that the testing team can start some testing at the program and component level. In the Testing phase, the test plan and the test scripts should be thoroughly exercised along with the other functional and non-functional requirements. Testability can be addressed at this phase by using specific queries, generating stubs and drivers for integration testing, and using test harnesses for specific modules or components.

Testability is not a very difficult property to incorporate in software or components. Testable software makes it easier to execute tests and ensures that the software is relatively error free.

The second evolution quality, maintainability, refers to the ease with which an artefact can be maintained in order to correct flaws, adhere to new requirements, make future maintenance easier and to handle a different environment. Correcting flaws is typically only a small part of the overall maintenance process. The majority of maintenance tasks are concerned with implementing new functionality and adapting to new environments.

Two ways of measuring software maintainability are mean time to repair and mean time to modify. Mean time to repair is a basic measure of the maintainability of repairable items. It represents the average time necessary to repair a failed component. Expressed mathematically, it is the total corrective maintenance time divided by the total number of corrective maintenance actions during a given period of time. Mean time to modify is the analogous measure for modifiable items.
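The mean-time-to-repair calculation above is straightforward to express in code. A minimal sketch, assuming a hypothetical log of corrective maintenance durations in hours:

```python
def mean_time_to_repair(repair_times_hours):
    """MTTR = total corrective maintenance time / number of corrective actions."""
    return sum(repair_times_hours) / len(repair_times_hours)

# Hypothetical durations (in hours) of corrective maintenance actions in one quarter.
repairs = [2.0, 4.5, 1.5, 4.0]
mttr = mean_time_to_repair(repairs)
print(f"MTTR: {mttr:.2f} hours")  # -> MTTR: 3.00 hours
```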

The third evolution quality, extensibility, refers to a quality of design that takes probable future advances into consideration and attempts to accommodate them.

This quality is extremely difficult to test. To test extensibility, you need to ask questions such as: What features might be incorporated into the system in the future? Can the current software incorporate different data input types? Can it handle new input types?

The fourth evolution quality, scalability, is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner or its ability to be enlarged (Bondi 2000).

Scalability can be measured in various dimensions, such as: Load scalability, Geographic scalability, Administrative scalability, Functional scalability.

Load scalability refers to the ability of a distributed system to easily expand and contract its resource pool to accommodate heavier loads.

Geographic scalability refers to the ability to uphold performance, usefulness, or usability regardless of growth from concentration in a local area to a more distributed geographic pattern.

Administrative scalability is the ability for an increasing number of organizations to easily share a single distributed system.

Functional scalability refers to the ability to enhance the system by adding new functionality at minimal effort.

Non-functional testing

Non-functional testing is the testing of a software application for its non-functional requirements. Non-functional testing focuses on performance, dependability, operational aspects and readiness issues of the software application.

Objectives set out by non-functional testing include: improving the quality of the product in areas of stability, performance, resilience, operability etc. It also tries to reduce production risks and costs associated with the non-functional aspects of the software application. Non-functional testing tries to optimise the way the software application is installed, executed, configured, managed and monitored. In-depth knowledge of the software application's behaviour and the technologies in use can be improved and enhanced with non-functional testing.

Non-functional testing can be broken down into the following categories: performance; installation and upgrade; usability; security; availability and resilience; operability/interoperability; stability; scalability; configuration; and maintainability.

Testing for performance can be done in various ways. Volume testing refers to testing a software system with a specific amount of data. Volume testing is performed to find faults in the software application and to give credible information about the state of the component, on which business decisions can be made. Volume testing needs two things: firstly, clear expectations of how the software should behave for a given level of data; secondly, a large amount of data.

Another way to test the performance of software is load testing. Load testing refers to modelling the expected usage of a software application by simulating multiple users accessing the program at the same time. Load testing assesses software intended for a multi-user target group by exposing the software to different amounts of virtual and live users while monitoring performance measurements under these different loads. Load and performance testing is usually conducted in a test environment identical to the production environment before the software system is permitted to go live.
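Simulating multiple simultaneous users, as described above, can be sketched with threads. In this Python sketch, `simulated_request` is a placeholder for a real request against the system under test:

```python
import threading
import time

def simulated_request(results, i):
    """Stand-in for one virtual user hitting the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real request's latency
    results[i] = time.perf_counter() - start

def run_load(n_users):
    """Launch n_users concurrent virtual users and collect their response times."""
    results = [0.0] * n_users
    threads = [threading.Thread(target=simulated_request, args=(results, i))
               for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

latencies = run_load(20)
print(f"max latency under a 20-user load: {max(latencies):.3f}s")
```

Dedicated load testing tools perform the same simulation at far larger scale, with ramp-up profiles and far richer measurement than this sketch.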

Stress testing can also be performed to test the performance of software. Stress testing is a form of testing that is used to determine the stability of a given system. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the outcome. Reasons to perform stress testing include: if the software being tested leads to failure, it can have disastrous consequences; stress testing can find flaws in concurrent operations of the software.

Installation testing is a kind of quality assurance work in the software industry that focuses on what users will need to do to install and set up the new software successfully. The testing process may involve full, partial or upgrades install/uninstall processes.

Usability testing refers to the technique of allowing users to evaluate the system. The goal of usability testing is to observe the users while they are testing the system and to measure how they respond in four major areas: performance, accuracy, recall and emotional response. In the performance area, users are measured by the time they take to perform a task; accuracy measures the number of errors made by the user; recall measures the user's ability to remember how to perform a specific task after a period of time; and emotional response refers to how the user feels about completing a task in a specific manner.
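The quantitative areas above, performance and accuracy in particular, reduce to simple summary statistics over session observations. A sketch with hypothetical data (recall and emotional response are qualitative and are not modelled here):

```python
import statistics

# Hypothetical observations from one usability session: time taken (seconds)
# and number of errors each user made while completing the same task.
observations = [
    {"user": "u1", "task_time_s": 42.0, "errors": 1},
    {"user": "u2", "task_time_s": 55.0, "errors": 3},
    {"user": "u3", "task_time_s": 38.0, "errors": 0},
]

mean_time = statistics.mean(o["task_time_s"] for o in observations)
mean_errors = statistics.mean(o["errors"] for o in observations)
print(f"mean task time: {mean_time:.1f}s, mean errors per user: {mean_errors:.2f}")
```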

Various methods for usability testing exist, including hallway testing, remote testing and expert review. Hallway testing is when five or six random people are selected to interact with the system. Remote testing involves online users completing surveys; this is done to retrieve quantifiable data for analysis. Expert review is when an expert evaluates the usability of the software system.

Usability testing usually takes place in usability labs designed to capture every detail needed to draw conclusions on the usability of a software system. The usability lab contains video cameras to record the user's emotional response to a certain task, as well as screen-capturing software to record exactly how the user uses the system. After the data is gathered at the lab, conclusions about the usability of the software can be formulated from it and the necessary changes can be made.

When performing security testing, vulnerability scanners, otherwise known as penetration testing tools, are used to automate the security testing of HTTP requests and responses; however, this is not a substitute for actual source code review. Reviews of an application's source code can be accomplished by hand or in a computerised manner; the size of an application can be so huge that the human brain cannot always find the vulnerabilities in it. Two types of testing exist, namely black-box testing and white-box testing. Black-box testing refers to a practice where an ethical hacker who has no knowledge of the system tries to penetrate it. White-box testing refers to the practice where an ethical hacker who knows the system tries to simulate an attack as an insider.

Availability testing is the system testing of an integrated application against its operational availability requirements. It is performed to determine whether the application meets those requirements, how stable the application is and whether idle time is required for maintenance purposes.

Interoperability testing comprises the steps taken to verify that end-to-end functionality between (at least) two communicating systems is as required by the standards those systems follow. Interoperability testing requires that the other systems and/or equipment are accessible during the testing phase. Should this equipment not be accessible, simulators of the missing equipment should be built, tested and then used to represent these other units in the interoperability tests.

A stability test is basically a stress test for a software application. The idea is to stress the component to the extreme to determine how well it performs under pressure, and to establish performance parameters.

Scalability testing focuses on the ability of a system to meet future efficiency requirements, which may be beyond those currently required. The objective of the tests is to judge the system's ability to grow without exceeding agreed limits or failing. Once these limits are known, threshold values can be set and monitored in production to provide a warning of approaching problems.
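The threshold monitoring mentioned above can be sketched as a routine check of current metrics against the agreed limits. The metric names and the 80% warning level below are illustrative assumptions:

```python
def check_thresholds(metrics, limits, warn_fraction=0.8):
    """Return a warning for each metric that has reached warn_fraction of its agreed limit."""
    warnings = []
    for name, value in metrics.items():
        limit = limits[name]
        if value >= warn_fraction * limit:
            warnings.append(f"{name} approaching limit: {value}/{limit}")
    return warnings

limits = {"concurrent_users": 10000, "requests_per_second": 500}
current = {"concurrent_users": 8500, "requests_per_second": 300}
warnings = check_thresholds(current, limits)
print(warnings)  # warns about concurrent_users only
```

In production, such a check would run continuously against live monitoring data rather than a static snapshot.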

Compatibility and configuration testing is performed to check that an application functions properly across various hardware and software environments. Often, the strategy is to run the functional acceptance tests, or a subset of the task-oriented functional tests, on a range of software and hardware configurations. Another strategy is to create specific tests that take into account the error risks associated with configuration differences.

Various techniques to test maintainability exist and should be performed, namely: dynamic maintainability testing; analysability; changeability, stability and testability; and portability testing.

Dynamic maintainability testing focuses on the documented processes developed for maintaining a specific application. Selections of maintenance scenarios are used as test cases to guarantee the required service levels are attainable with the documented processes.

Analysability focuses on determining the time taken to identify and fix problems recognised within a system. A simple measure can be the average time taken to diagnose and fix a documented fault.

The maintainability of a system can also be defined in terms of the effort necessary to make changes to that system. Since the effort required is dependent on a number of factors such as software design methodology, coding standards etc., this form of maintainability testing may also be performed by analysis or review. Testability relates specifically to the work required to test the changes made. Stability relates specifically to the system's response to change.

Portability tests in general relate to the ease with which software can be transferred into its intended location, either initially or from an existing location.

Other forms of testing

Conformance testing is testing to conclude whether a system meets some identified standard that has been developed for efficiency or interoperability. To assist in this, many test procedures and test setups have been established, either by the standard's maintainers or external organizations, specifically for testing conformance to standards.

Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under continued use.
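A soak test can be sketched as repeating a workload while watching for gradual resource growth. Here `process_transaction` is a stand-in for the real workload, and the run is kept tiny, whereas a real soak test would run for hours or days:

```python
import tracemalloc

def process_transaction():
    """Stand-in for one unit of continued work on the system under test."""
    return sum(range(1000))

def soak(iterations):
    """Run the workload repeatedly and track memory to expose gradual leaks."""
    tracemalloc.start()
    for _ in range(iterations):
        process_transaction()
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current, peak

current, peak = soak(10000)
print(f"memory still held: {current} B, peak during soak: {peak} B")
```

A steadily climbing "still held" figure across checkpoints would indicate a leak that only continued use reveals.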

Localization testing is a part of software testing that focuses on the internationalization and localization features of software. Localization is the process of adapting a globalized application to a particular locale. Localizing an application requires a basic understanding of the character sets typically used in modern software development and of the issues associated with them.

Recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems. Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is accurately performed.
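Forcing a failure and verifying recovery, as described above, can be sketched with a checkpoint file. The checkpoint path and state layout here are illustrative assumptions:

```python
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "app_checkpoint.json")

def save_state(state):
    """Persist application state so it can survive a crash."""
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def recover_state():
    """Reload the last checkpoint after a (simulated) crash."""
    with open(CHECKPOINT) as f:
        return json.load(f)

state = {"processed": 128, "last_id": "order-991"}
save_state(state)
del state  # simulate the crash: in-memory state is lost
recovered = recover_state()
print("recovered:", recovered)
```

A real recovery test would kill the process or pull the plug on hardware rather than merely discarding a variable, and would verify that no transactions were lost or duplicated.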

Tools for testing

During non-functional testing, a variety of testing tools can be used to make the testing phase easier and more thorough. These tools include debuggers, performance and load testing tools, profiling tools, memory management tools, monitoring tools, tuning tools, simulators, test automation and envelopes and data creation tools.

A debugger or debugging tool is a computer program that is used to test and debug other programs. Using a debugger can be useful in performance testing, as most debuggers tell you the instruction path lengths, which can influence the performance of the application. Debuggers can also be used to identify concurrency issues that may occur in the application.

Load testing tools simulate real-life workload conditions for an application. They help you determine various characteristics of the application working under an enormous workload. Load tests can be run several times with different load levels to find out how various parts of your application react to the fluctuating load.

Profiling tools refer to all those software tools that provide an overview of related information about an input sequence. Profiling tools can be used to see the input sequences that a user can provide, and based on those sequences, various steps can be taken to make the application more user-friendly and the information more accessible.

Memory management tools manage and monitor the memory usage of the application. This can assist performance testing by monitoring the memory the application uses and identifying functions that consume too much memory.

Monitoring tools are used to observe software behaviour to determine whether it complies with its intended behaviour. Monitoring tools allow you to analyse and recover from detected faults, providing additional defence against disastrous failure.

Tuning tools assess a component and try to "tune" it to produce more desirable results in terms of output and performance.

Simulators are used to represent a certain feature or requirement of the application to check if it is feasible to implement the system.

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions (Kolawa et al. 2007).
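Kolawa's definition above maps directly onto a unit testing framework: preconditions in `setUp`, actual-versus-predicted comparisons in assertions, and automated execution and reporting by the runner. A minimal sketch, where `apply_discount` is a hypothetical function under test:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def setUp(self):
        # test precondition, set up automatically before each test
        self.base_price = 100.0

    def test_ten_percent(self):
        # comparison of actual outcome to predicted outcome
        self.assertEqual(apply_discount(self.base_price, 10), 90.0)

    def test_zero_percent(self):
        self.assertEqual(apply_discount(self.base_price, 0), 100.0)

# automated execution and reporting
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"tests run: {result.testsRun}, failures: {len(result.failures)}")
```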

Data creation tools are used to simulate and produce data for the application that can be used for testing different test cases of the application.
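A data creation tool can be as simple as a seeded generator of synthetic records, where the fixed seed makes the data set reproducible across test runs. The record layout below is an illustrative assumption:

```python
import random
import string

def make_test_users(n, seed=42):
    """Generate n synthetic user records for driving test cases."""
    rng = random.Random(seed)  # fixed seed -> identical data on every run
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({"id": i,
                      "name": name,
                      "email": f"{name}@example.com",
                      "age": rng.randint(18, 90)})
    return users

users = make_test_users(1000)
print(f"generated {len(users)} users; first: {users[0]['name']}")
```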

The purpose of testing

Testing usually has two main purposes: To find defects in the software so that they can be corrected, and to provide general assessment of quality.

Testing non-functional requirements leads to a better-quality product that not only fulfils the functional requirements but also caters to the user's every need. Without non-functional testing, major flaws in the software can slip through the cracks unnoticed, which can cause a system to fail.

Conclusion

Non-functional testing plays a crucial role in the development and success of a software application. It tests all the non-functional requirements, such as usability, maintainability and scalability. Different tests are used to test different non-functional requirements; usability testing, for example, tests the usability of the software. Different tools exist to perform the testing and to ensure the validity of the tests being performed.

Testing the non-functional requirements won't detect all the faults and limitations of the software, but it will identify the current flaws and limitations, and based on the test results, a better way of implementing and producing the software can be identified.