Importance And Purpose Of System Testing Information Technology Essay


Abstract- This term paper explains system testing: its importance, its purpose, and a range of system testing techniques.

I. INTRODUCTION

System testing is the process of performing a variety of tests on a system to explore functionality or to identify problems. System testing is usually required before and after a system is put in place. A series of systematic procedures is referred to while testing is being performed. These procedures tell the tester how the system should perform and where common mistakes may be found. Testers usually try to "break the system" by entering data that may cause the system to malfunction or return incorrect information. For example, a tester may enter a city in a search engine designed to accept only states, to see how the system responds to the incorrect input.

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.[1][2]

II. Testing the whole system

System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase in which the focus is almost destructive: it tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).[2]

III. PURPOSE OF SYSTEM TESTING

A. Goal

The goal of custom application system testing should be to ensure that the work units or modules identified during unit testing interact with one another as designed. While unit testing procedures ensure that each component works independently, the system test process ensures that interactions between those units have no unintended consequences. Specifically, the end-to-end business functionality, including both front- and back-end components, will be tested. While sometimes broken out, the system test plan could include security testing and performance testing as well; these are just additional test cases in additional sections. The goal for each component should be to define quantifiable test cases that cover all interrelated functionality of the application.

B. Approach

The system test approach starts with a plan and test cases. The plan should include sections that relate to all application components. Those sections should include test cases that relate back to each and every functional requirement component. This assumes that all functional requirements have been written in a way that makes them testable, which should be the case. Your system test plan should not be a redo of the unit test. In many cases, this will add too much time to system testing and cloud the focus on the larger system. The system test plan should be written with an overview of the technical approach, but not based on coding specifics. For reports it might mean watching a trend over simulated time or comparing different reports to verify continuity. The key is that individual pieces of code logic are not the focus; the overall system function as a whole is the key.

C. Execution

System test execution takes on many forms, from packaged applications designed specifically for testing, recording and notifying, to the old standby of Excel and email. The goal and approach are the keys, but the execution can pull it all together and make the difference between success and acceptance test disappointment. From the get-go, develop your execution plan so that you won't need to reinvent the wheel each time you need to execute a system test for your application. Create scripts or use a tool to verify that you have all necessary data and scenarios to perform all tests. If you don't have that data, develop a reusable process to mock up the data entry or source system, as opposed to the data in your application. Create a process that will execute the scripts in a batch if possible, and keep a record of historic results. Write a process document that explains how to execute the system test automation as part of the system test plan. These steps will help to ensure a robust and thorough regression test as well as new functionality validation.
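As one hedged illustration of such a reusable execution harness, the Python sketch below runs a batch of test callables and appends the outcomes to a history file. The test functions, case IDs, and file name are hypothetical placeholders rather than part of any particular tool.

```python
import csv
from datetime import datetime, timezone

# Hypothetical system test cases: each callable returns True on pass.
def test_order_search_returns_results():
    return "widget" in ["widget", "gadget"]      # stand-in for a real end-to-end check

def test_report_totals_are_consistent():
    return sum([10, 20, 30]) == 60               # stand-in for a report-continuity check

TEST_CASES = {
    "ST-001": test_order_search_returns_results,
    "ST-002": test_report_totals_are_consistent,
}

def run_batch(history_file="system_test_history.csv"):
    """Execute every test case in one batch and append the outcomes to a history file."""
    run_at = datetime.now(timezone.utc).isoformat()
    with open(history_file, "a", newline="") as f:
        writer = csv.writer(f)
        for case_id, case in TEST_CASES.items():
            try:
                passed = case()
            except Exception:
                passed = False                   # an unexpected error counts as a failure
            writer.writerow([run_at, case_id, "PASS" if passed else "FAIL"])

if __name__ == "__main__":
    run_batch()
```

Because each run appends rather than overwrites, the history file accumulates results across regression cycles, which supports the record-keeping goal described above.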

System testing is the development team's conscience. The focus is on the entire application meeting the overall technical requirements. Independently verify that all the pieces fit together and find the holes before users have their first look. First impressions are crucial to early user adoption. Users seem to forgive and forget delays and agree to scope cuts much quicker than they forgive (and maybe never forget) a bug-ridden application that misses service level agreements and causes problems. A good system test can make this difference, so don't cut time here when the project timeline is squeezed. I've never seen such a "shortcut" lead to a more successful, earlier deployment. In fact, it usually takes longer once the issues unfold in acceptance testing. Follow the process, be complete, and focus on reusability and automation for a system test process that will help make a successful application.[3]

IV. IMPORTANT POINTS

A. Prerequisites for System Testing

The prerequisites for System Testing are:

All the components should have been successfully Unit Tested.

All the components should have been successfully integrated and Integration Testing should be completed.

An Environment closely resembling the production environment should be created.

When necessary, several iterations of System Testing are done in multiple environments.

B. Steps in System Testing

The following steps are important to perform System Testing:

Step 1: Create a System Test Plan

Step 2: Create Test Cases

Step 3: Carefully Build Data used as Input for System Testing

Step 4: If applicable, create scripts to build the environment and to automate the execution of test cases

Step 5: Execute the test cases

Step 6: Fix the bugs, if any, and retest the code

Step 7: Repeat the test cycle as necessary

C. System Test Plan

The System Test Plan document typically describes the following:

- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary

D. System Test Case

A Test Case describes exactly how the test should be carried out.

The System test cases help us verify and validate the system.

The System Test Cases are written such that:

- They cover all the use cases and scenarios
- The Test Cases validate the technical requirements and specifications
- The Test Cases verify whether the application/system meets the specified business and functional requirements
- The Test Cases may also verify whether the system meets the performance standards

Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be detailed and unambiguous. Detailed test cases help the testers carry out the testing as specified without any ambiguity.

The format of the System Test Cases may be like that of all other test cases, as illustrated below:

Test Case ID

Test Case Description

--What to Test?

--How to Test?

Input Data

Expected Result

Actual Result

Sample Test Case Format:

Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail

(one row is filled in for each test case)

Additionally, the following information may also be captured:

a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration [4]
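To make the format concrete, here is a minimal sketch (in Python, with purely illustrative field names and values) of a test case record that mirrors the columns and the additional information listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemTestCase:
    """A record mirroring the test case format above (field names are illustrative)."""
    test_case_id: str
    what_to_test: str
    how_to_test: str
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None
    passed: Optional[bool] = None
    # Additional information that may also be captured
    test_suite_name: str = ""
    tested_by: str = ""
    date: str = ""
    test_iteration: int = 1

# Example entry with purely illustrative values.
tc = SystemTestCase(
    test_case_id="ST-010",
    what_to_test="City entered in a state-only search field",
    how_to_test="Submit 'Chicago' in the state field and observe the response",
    input_data="Chicago",
    expected_result="Validation error is shown; no search is performed",
)
print(tc.test_case_id, tc.expected_result)
```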

V. Factors Affecting System Testing

A. Test Coverage

System Testing will be effective only to the extent of the coverage of the test cases. What is test coverage? Adequate test coverage implies that the scenarios covered by the test cases are sufficient. The test cases should "cover" all scenarios, use cases, business requirements, technical requirements, and performance requirements. The test cases should enable us to verify and validate that the system or application meets the project goals and specifications.

B. Defect Tracking

The defects found during the process of testing should be tracked. Subsequent iterations of test cases verify if the defects have been fixed.

C. Execution

The Test cases should be executed in the manner specified. Failure to do so results in improper Test Results.

D. Build Process Automation

A lot of errors occur due to an improper build. A 'build' is the compilation and assembly of the various components that make up the application, deployed in the appropriate environment. The test results will not be accurate if the application is not 'built' correctly or if the environment is not set up as specified. Automating this process may help reduce manual errors.
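A minimal sketch of what automating the build step might look like is shown below; the build command is a hypothetical placeholder for whatever the project actually uses, and the point is simply to fail fast before any test case is executed.

```python
import subprocess
import sys

BUILD_COMMAND = ["make", "build"]   # hypothetical build command; substitute the real one

def build_and_verify():
    """Run the build and abort the test run immediately if it fails."""
    result = subprocess.run(BUILD_COMMAND, capture_output=True, text=True)
    if result.returncode != 0:
        print("Build failed; aborting system testing.", file=sys.stderr)
        print(result.stderr, file=sys.stderr)
        sys.exit(1)
    print("Build succeeded; environment is ready for system testing.")

if __name__ == "__main__":
    build_and_verify()
```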

E. Test Automation

Automating the Test process could help us in many ways:

The test can be repeated with fewer errors of omission or oversight.

Some scenarios can be simulated if the tests are automated, for instance simulating a large number of users or simulating increasingly large amounts of input/output data.

F. Documentation

Proper Documentation helps keep track of Tests executed. It also helps create a knowledge base for current and future projects. Appropriate metrics/Statistics can be captured to validate or verify the efficiency of the technical design architecture.

VI. SYSTEM TESTING

Involves integrating components to create a system or sub-system.

May involve testing an increment to be delivered to the customer.

Two phases:

Integration testing - the test team have access to the system source code. The system is tested as components are integrated.

Release testing - the test team test the complete system to be delivered as a black-box.

A. Integration Testing

Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs rather than all at once.

Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis.

You can do integration testing in a variety of ways but the following are three common strategies:

The top-down approach to integration testing requires the highest-level modules to be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. However, the need for stubs complicates test management, and low-level utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.

The bottom-up approach requires the lowest-level units to be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The downside, however, is that the need for drivers complicates test management and high-level logic and data flow are tested late. Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality.

The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern discussed above. The outputs for each function are then integrated in the top-down manner. The primary advantage of this approach is the degree of support for early release of limited functionality. It also helps minimize the need for stubs and drivers. The potential weaknesses of this approach are significant, however, in that it can be less systematic than the other two approaches, leading to the need for more regression testing.

To simplify error localisation, systems should be incrementally integrated.
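To make the stub and driver terminology used above concrete, the following hypothetical Python sketch shows a top-down style test that replaces a not-yet-integrated lower-level unit with a stub, and a bottom-up style driver that exercises a low-level utility directly. The units and values are invented for illustration only.

```python
# Hypothetical units used only to illustrate stubs and drivers.

def tax_rate_for(state):
    """Low-level utility (would normally query a rates table)."""
    return {"CA": 0.0725, "NY": 0.04}.get(state, 0.0)

def order_total(subtotal, state, rate_lookup=tax_rate_for):
    """Higher-level unit; the rate lookup can be swapped for a stub."""
    return round(subtotal * (1 + rate_lookup(state)), 2)

# Top-down style: test order_total with a stub standing in for the real lookup.
def stub_rate_lookup(state):
    return 0.10                                # fixed, predictable stub value

assert order_total(100.0, "CA", rate_lookup=stub_rate_lookup) == 110.0

# Bottom-up style: a small driver exercises the low-level utility directly.
def driver_for_tax_rate():
    for state, expected in [("CA", 0.0725), ("NY", 0.04), ("TX", 0.0)]:
        assert tax_rate_for(state) == expected

driver_for_tax_rate()
print("Integration-style stub and driver checks passed.")
```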

B. Incremental Testing

Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

C. Release Testing

The process of testing a release of a system that will be distributed to customers.

Primary goal is to increase the supplier's confidence that the system meets its requirements.

Release testing is usually black-box or functional testing:

--Based on the system specification only;

--Testers do not have knowledge of the system implementation.

VII. APPROACHES TO TESTING

Architectural validation

--Top-down integration testing is better at discovering errors in the system architecture.

System demonstration

--Top-down integration testing allows a limited demonstration at an early stage in the development.

Test implementation

--Often easier with bottom-up integration testing.

Test observation

--Problems with both approaches. Extra code may be required to observe tests.

VIII. GUIDELINES TO TESTING

Testing guidelines are hints for the testing team to help them choose tests that will reveal defects in the system:

Choose inputs that force the system to generate all error messages.

Design inputs that cause buffers to overflow;

Repeat the same input or input series several times;

Force invalid outputs to be generated;

Force computation results to be too large or too small.

IX. Types of tests to include in system testing

The following examples are different types of testing that should be considered during System testing:

GUI software testing

Usability testing

Performance testing

Compatibility testing

Error handling testing

Load testing

Volume testing

Stress testing

Security testing

Scalability testing

Sanity testing

Smoke testing

Exploratory testing

Ad hoc testing

Regression testing

Reliability testing

Installation testing

Maintenance testing

Recovery testing and failover testing.[2]

X. Different Testing Techniques

A. Black-Box Testing

Black-box testing is a method of testing software that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure and programming knowledge in general is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and design to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.


This method of testing can be applied to all levels of software testing: unit, integration, functional, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.
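As a small, hedged illustration, suppose the specification (echoing the search-engine example from the introduction) says a field accepts only US state names and rejects anything else. The tests below are derived from that specification alone, with no reference to the implementation; the validate_state function and the abbreviated state list are hypothetical.

```python
import unittest

US_STATES = {"California", "New York", "Texas"}    # abbreviated set for illustration

def validate_state(value):
    """Hypothetical unit under test: accept state names, reject anything else."""
    return value in US_STATES

class BlackBoxStateFieldTests(unittest.TestCase):
    """Tests derived purely from the specification, not from the code's structure."""

    def test_valid_state_is_accepted(self):
        self.assertTrue(validate_state("Texas"))

    def test_city_is_rejected(self):
        self.assertFalse(validate_state("Chicago"))   # a city, per the introduction's example

    def test_empty_input_is_rejected(self):
        self.assertFalse(validate_state(""))

if __name__ == "__main__":
    unittest.main()
```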

B. White-Box Testing

White-box testing is a method of testing software that tests internal structures or workings of an application as opposed to its functionality (black-box testing). An internal perspective of the system, as well as programming skills, are required and used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. It is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the system testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White-box test design techniques include:

Control flow testing

Data flow testing

Branch testing

Path testing
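As an illustration of the branch testing technique listed above, the following sketch reads a small, hypothetical function and chooses one input per branch so that every branch executes at least once. The function and thresholds are invented for the example.

```python
def shipping_cost(weight_kg):
    """Hypothetical unit: flat rate below 5 kg, per-kilogram rate at or above 5 kg."""
    if weight_kg < 5:
        return 10.0          # branch A
    return weight_kg * 2.5   # branch B

# White-box branch tests: one input per branch, chosen by reading the code.
assert shipping_cost(2) == 10.0      # exercises branch A
assert shipping_cost(8) == 20.0      # exercises branch B
print("Both branches exercised.")
```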

C. Performance Testing

Performance Testing covers a broad range of engineering or functional evaluations where a material, product, system, or person is not specified by detailed material or component specifications: rather, emphasis is on the final measurable performance characteristics. Testing can be a qualitative or quantitative procedure.

Performance testing can refer to the assessment of the performance of a human examinee. For example, a behind-the-wheel driving test is a performance test of whether a person is able to perform the functions of a competent driver of an automobile.

In the computer industry, software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.
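A minimal sketch of that quantitative side is measuring response time over repeated calls of a stand-in operation; the operation and the run count below are illustrative assumptions.

```python
import statistics
import time

def operation_under_test():
    """Stand-in for the operation whose response time is being measured."""
    return sum(i * i for i in range(10_000))

def measure_response_time(runs=100):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation_under_test()
        timings.append(time.perf_counter() - start)
    print(f"mean: {statistics.mean(timings):.6f}s  max: {max(timings):.6f}s")

if __name__ == "__main__":
    measure_response_time()
```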

D. Stress Testing

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.

E. Regression Testing

Regression testing is a type of system testing that seeks to uncover software errors after changes to the program (for example, bug fixes or new functionality) have been made, by retesting the program. The intent of regression testing is to assure that a change, such as a bug fix, did not introduce new bugs. Regression testing can be used to test the system efficiently by systematically selecting the appropriate minimum suite of tests needed to adequately cover the affected change. Common methods of regression testing include rerunning previously run tests and checking whether program behaviour has changed and whether previously fixed faults have re-emerged. "One of the main reasons for regression testing is that it's often extremely difficult for a programmer to figure out how a change in one part of the software will echo in other parts of the software." This is done by comparing the results of previous tests to the results of the current tests being run.[2]
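One hedged way to apply the "compare previous results to current results" idea is a baseline file: record the expected outputs once, then rerun the same inputs after every change and flag any difference. The function, inputs, and file name below are illustrative.

```python
import json
import os

def price_after_discount(price, percent):
    """Hypothetical function whose behaviour must not change unintentionally."""
    return round(price * (1 - percent / 100), 2)

BASELINE_FILE = "regression_baseline.json"
INPUTS = [(100.0, 10), (49.99, 0), (20.0, 50)]

def run_regression():
    current = {f"{p}:{d}": price_after_discount(p, d) for p, d in INPUTS}
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f)                  # first run records the baseline
        print("Baseline recorded.")
        return
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    changed = {k: (baseline.get(k), v) for k, v in current.items() if baseline.get(k) != v}
    print("No regressions." if not changed else f"Behaviour changed: {changed}")

if __name__ == "__main__":
    run_regression()
```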

F. Volume Testing

Volume Testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file; this interaction could be reading and/or writing on to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.
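A rough sketch of the interface-file example: generate a sample file of the required size, then exercise the application's read path against it. The row count and the processing function are illustrative stand-ins, not a prescribed procedure.

```python
import csv
import tempfile

def generate_sample_file(path, rows=100_000):
    """Build an interface file of the size required for the volume test."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(rows):
            writer.writerow([i, f"customer-{i}", i % 100])

def process_interface_file(path):
    """Stand-in for the application's file-reading functionality."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f))

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
        path = tmp.name
    generate_sample_file(path)
    assert process_interface_file(path) == 100_000
    print("Application handled the full interface file.")
```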

G. Security Testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended.

The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such a Security Taxonomy helps us to understand these different approaches and meanings by providing a base level to work from.

H. Recovery Testing

In software testing, recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.

Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs. Recovery testing is basically done in order to check how quickly and how well the application can recover from any type of crash or hardware failure. The type and extent of recovery are specified in the requirement specifications.

Examples of recovery testing:

While an application is running, suddenly restart the computer, and afterwards check the application's data integrity.

While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.

Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is able to recover all of them.

I. Component Testing

Component or unit testing is the process of testing individual components in isolation. It is a defect testing process. Components may be:

Individual functions or methods within an object

Object classes with several attributes and methods

Composite components with defined interfaces used to access their functionality.

J. Object-Class Testing

Complete test coverage of a class involves

Testing all operations associated with an object;

Setting and interrogating all object attributes;

Exercising the object in all possible states.

Inheritance makes it more difficult to design object class tests as the information to be tested is not localised.
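A minimal sketch of what exercising all operations, attributes, and states can mean for a small, hypothetical class (the Account class and its two states are invented for illustration):

```python
class Account:
    """Hypothetical object class with two states: open and closed."""
    def __init__(self, balance=0.0):
        self.balance = balance
        self.closed = False

    def deposit(self, amount):
        if self.closed:
            raise ValueError("account is closed")
        self.balance += amount

    def close(self):
        self.closed = True

# Exercise every operation and interrogate every attribute in each state.
acct = Account()
assert acct.balance == 0.0 and acct.closed is False     # initial state
acct.deposit(50.0)
assert acct.balance == 50.0                              # open-state behaviour
acct.close()
assert acct.closed is True                               # state transition
try:
    acct.deposit(10.0)                                   # closed-state behaviour
except ValueError:
    pass
else:
    raise AssertionError("deposit should fail on a closed account")
print("All operations and states exercised.")
```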

K. Interface Testing

Objectives are to detect faults due to interface errors or invalid assumptions about interfaces. Particularly important for object-oriented development as objects are defined by their interfaces.

L. Load Testing

Load testing is required at every stage in the life cycle of an application, in particular web applications. Load tests help identify performance problems during system design and development stages. For existing applications, the simulation becomes an aid to maintain, fine-tune and forecast future system needs.

Load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system's behavior under anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation. When the load placed on the system is raised beyond normal usage patterns, in order to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although no clear boundary exists when an activity ceases to be a load test and becomes a stress test.

There is little agreement on what the specific goals of load testing are. The term is often used synonymously with software performance testing, reliability testing, and volume testing. Load testing is a type of non-functional testing.[2]
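As a hedged sketch of putting demand on a component and measuring its response, the following simulates a number of concurrent "users" invoking the same operation and reports the timings. The operation, user count, and thread-based approach are illustrative assumptions rather than a prescribed load-testing tool.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for one simulated user's request against the system under test."""
    start = time.perf_counter()
    sum(i * i for i in range(50_000))          # simulated work for this request
    return time.perf_counter() - start

def load_test(concurrent_users=50):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(handle_request, range(concurrent_users)))
    print(f"{concurrent_users} users  mean: {statistics.mean(timings):.4f}s  "
          f"max: {max(timings):.4f}s")

if __name__ == "__main__":
    load_test()
```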

M. Maintenance Testing

Maintenance testing is testing performed to identify equipment problems, diagnose equipment problems, or confirm that repair measures have been effective. It can be performed at the system level, the equipment level, or the component level.[2]