Testing and Accepting a System

Testing Responsibilities

The primary purpose of testing is to determine and communicate the quality of the system under test. Testing is generally performed by a developer, quality assurance tester, user, trainer, and/or business analyst. On a project team, a resource may be assigned the Test Coordinator role, which oversees and coordinates the scope of testing for the project. The Test Coordinator's responsibilities may include, but are not limited to, gathering business requirements, defining the Test Strategy, developing Test Plans, executing tests, managing system defects, and facilitating User Acceptance Testing (UAT).

NOTE: Although testing, test planning and the development of test cases can be performed by the Business Analyst, these functions will not earn experiential credit towards the IIBA CBAP or CCBA certifications. Review the certification requirements in the IIBA Handbooks at http://www.iiba.org.

Testing Types

Unit Testing

Unit testing is the first level of testing performed to verify that a specific unit of code works as expected. It is usually executed by the developer while developing the code.
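
For illustration, a minimal unit test might look like the following sketch in Python, using the standard unittest module. The calculate_total function is a hypothetical unit under test, not part of any system described here.

  import unittest

  def calculate_total(prices, tax_rate):
      # Hypothetical unit under test: sums the prices and applies a tax rate.
      return round(sum(prices) * (1 + tax_rate), 2)

  class CalculateTotalTest(unittest.TestCase):
      def test_applies_tax_to_sum(self):
          # Verify the unit produces the expected result for known inputs.
          self.assertEqual(calculate_total([10.00, 5.00], 0.10), 16.50)

      def test_empty_price_list_returns_zero(self):
          # Verify the unit handles the edge case of no input.
          self.assertEqual(calculate_total([], 0.10), 0.0)

  if __name__ == "__main__":
      unittest.main()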

Integration Testing

Integration testing is the second level of testing performed to verify that the interfaces between integrated components work as expected. It is generally executed by the developer once all components are fully developed.
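
As a sketch of how this differs from unit testing, an integration test exercises two components together through their shared interface. The OrderService and InMemoryOrderRepository classes below are hypothetical, assumed only for this Python example.

  import unittest

  class InMemoryOrderRepository:
      # Hypothetical persistence component.
      def __init__(self):
          self._orders = {}

      def save(self, order_id, order):
          self._orders[order_id] = order

      def find(self, order_id):
          return self._orders.get(order_id)

  class OrderService:
      # Hypothetical business component that depends on the repository.
      def __init__(self, repository):
          self._repository = repository

      def place_order(self, order_id, items):
          self._repository.save(order_id, {"items": items, "status": "PLACED"})
          return self._repository.find(order_id)

  class OrderIntegrationTest(unittest.TestCase):
      def test_service_and_repository_work_together(self):
          # Exercise the interface between the two components rather than each in isolation.
          service = OrderService(InMemoryOrderRepository())
          order = service.place_order("A-100", ["widget"])
          self.assertEqual(order["status"], "PLACED")

  if __name__ == "__main__":
      unittest.main()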

System Testing

System testing is also known as end-to-end testing. It is the third level of testing performed to verify that the system meets the defined business requirements. It should be executed (generally by a quality assurance tester or developer) once the system is fully integrated and all integration testing is completed successfully.
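
As an end-to-end sketch, a system test might drive the fully integrated system through its external interface and check the outcome against a business requirement. The base URL and endpoints below are hypothetical, and the example assumes the third-party Python requests library is available in the test environment.

  import unittest
  import requests

  BASE_URL = "http://orders.test.example.com"  # hypothetical test-environment URL

  class OrderSystemTest(unittest.TestCase):
      def test_submitted_order_can_be_retrieved(self):
          # Create an order through the system's public interface...
          created = requests.post(BASE_URL + "/orders", json={"items": ["widget"]})
          self.assertEqual(created.status_code, 201)
          order_id = created.json()["id"]
          # ...then verify the end-to-end result meets the business requirement.
          fetched = requests.get(BASE_URL + "/orders/" + str(order_id))
          self.assertEqual(fetched.json()["status"], "PLACED")

  if __name__ == "__main__":
      unittest.main()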

Acceptance Testing

Acceptance testing is also known as User Acceptance Testing (UAT) and is the final level of testing performed. It is executed by the users of the system once system testing is completed successfully. It is a final validation that the requirements are met prior to taking ownership of the system.

See Also: Software Testing Wikipedia Site - http://en.wikipedia.org/wiki/Software_testing

Test Strategy

To assist with test coordination, a Test Strategy may be appropriate. A Test Strategy communicates to key stakeholders, such as the Project Manager and Technical Lead, what the overall test approach will be. It defines the scope of testing, the systems to be tested, the resources required for testing, the desired types of testing, defect management details, testing tasks, and testing timelines. The Test Strategy should be completed once the business requirements are finalized, and it is advisable to identify a Test Coordinator to oversee its completion. It should then be approved by the Project Manager or Technical Lead to ensure its accuracy. If changes occur to the scope, requirements, timelines, or resources, the Test Coordinator should review and update the Test Strategy as necessary.

See Also: Test Strategy Wikipedia Site - http://en.wikipedia.org/wiki/Test_strategy

Test Case and Test Plan

Once the business requirements are final, the test cases should be defined. Each business requirement should be covered by at least two test cases: one positive and one negative. A test case defines the information a tester needs to determine whether the system is working as expected. Each test case should include the following (a structured example appears after the list):

  • test case identifier
  • reference to requirements being tested
  • conditions needed in order to execute the test
  • steps to perform the test
  • input test data
  • expected results of the test
  • actual results (completed after the test)
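
As an illustration of these fields, a single test case could be recorded in a structured form such as the following Python sketch; the requirement, steps, and values are hypothetical.

  # Hypothetical test case record covering the fields listed above.
  test_case = {
      "id": "TC-042",                        # test case identifier
      "requirement": "REQ-07: an order must contain at least one item",
      "preconditions": "Tester is logged in with a standard user account",
      "steps": [
          "Open the new-order screen",
          "Leave the item list empty",
          "Click Submit",
      ],
      "input_data": {"items": []},
      "expected_result": "System rejects the order and displays a validation message",
      "actual_result": None,                 # completed after the test is executed
  }

This example is a negative test case; a matching positive test case would submit a valid order and expect it to be accepted.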

See Also: Test Case Wikipedia Site - https://en.wikipedia.org/wiki/Test_case

The test plan is a formal document that lists all the test cases necessary to test a given system. The term "test plan" is sometimes also used to describe a Test Strategy. The Test Coordinator will work with the Technical Lead to coordinate development of the test cases and the test plan.

See Also: Test Plan Wikipedia Site - http://en.wikipedia.org/wiki/Test_plan

Defect Management

Defect management, also known as defect tracking, is the process of identifying, recording, and resolving system defects. It begins when a defect is identified and ends when the defect is fixed and closed.

A defect can be defined as “an error in coding, logic or the assembly of application components that causes a program to malfunction or to produce incorrect/unexpected results.” It is also described as “a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations.” A defect is also known as a bug.

Source: http://softwaretestingfundamentals.com/defect/

Important data about the defect should be logged; the list below shows the data that can be captured when recording a defect, along with a definition of each item.

  • Summary: A short description of the defect.
  • Applications: The specific application(s) affected by the defect.
  • Environment Found: The environment in which the defect was originally found.
  • Testing Type: The type of testing being performed when the defect was found.
  • Severity: The business impact of the defect, as determined by the business users/owners or the tester.
  • Priority: The priority of addressing the defect relative to the other defects that exist for that application.
  • Description: A detailed description of all pertinent information, including record/transaction examples. It should describe when and how the defect was detected and what the actual result was, as well as what the expected result should have been.
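
For example, a defect record capturing these fields might be logged in a structure like the following Python sketch; every value shown is hypothetical.

  # Hypothetical defect record using the fields described above.
  defect = {
      "summary": "Order total does not include tax for multi-item orders",
      "applications": ["Order Entry"],
      "environment_found": "Test",
      "testing_type": "System Testing",
      "severity": "High",   # business impact, as judged by the business users/owners or the tester
      "priority": 2,        # relative to the other open defects for this application
      "description": (
          "While submitting an order with two items during system testing, "
          "the displayed total excluded tax. Expected result: the total "
          "includes the applicable tax as stated in the requirements."
      ),
  }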