Applied Programming/Testing

Overview

Software Testing

Testing Approach

Static, dynamic, and passive testing

There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.[1][2]

Static testing is often implicit, like proofreading; it also occurs when programming tools/text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is running. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules.[1][2] Typical techniques for this are either using stubs/drivers or execution from a debugger environment.[2]
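
As a concrete illustration of the stub/driver technique, here is a minimal Python sketch that exercises an order-handling function before its payment dependency exists. All names (PaymentGatewayStub, place_order) are hypothetical, not from the cited sources.

    # A stub stands in for the unfinished payment module so that
    # dynamic testing of place_order() can begin early.
    class PaymentGatewayStub:
        def charge(self, amount):
            # Always succeeds; the real gateway is tested separately.
            return {"status": "ok", "amount": amount}

    def place_order(total, gateway):
        """Unit under test: depends on a gateway that may not exist yet."""
        if total <= 0:
            raise ValueError("total must be positive")
        return gateway.charge(total)["status"] == "ok"

    # Driver: exercises the unit in isolation.
    if __name__ == "__main__":
        assert place_order(25.00, PaymentGatewayStub())
        print("order path tested against stub")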

Static testing involves verification, whereas dynamic testing also involves validation.[2]

Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions.[3] This is related to offline runtime verification and log analysis.
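
For illustration, the following hedged sketch mines a log for recurring error patterns without sending any input to the system under test. The log format and the recurrence threshold are assumptions made for this example, not part of the cited sources.

    # Passive testing sketch: inspect existing logs, interact with nothing.
    import re
    from collections import Counter

    LOG_LINES = [
        "2024-01-05 10:00:01 INFO  request handled in 12ms",
        "2024-01-05 10:00:02 ERROR timeout contacting auth service",
        "2024-01-05 10:00:03 ERROR timeout contacting auth service",
        "2024-01-05 10:00:04 INFO  request handled in 9ms",
    ]

    def mine_errors(lines):
        """Count distinct error messages so recurring failures stand out."""
        pattern = re.compile(r"ERROR\s+(.*)")
        return Counter(m.group(1) for line in lines
                       if (m := pattern.search(line)))

    if __name__ == "__main__":
        for message, count in mine_errors(LOG_LINES).items():
            if count >= 2:  # assumed threshold for calling it a pattern
                print(f"recurring failure ({count}x): {message}")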

Exploratory approach

Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution. Cem Kaner, who coined the term in 1984,[4]:2 defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."[4]:36

The "box" approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing may also be applied to software testing methodology.[5][6] With the concept of grey-box testing—which develops tests from specific design elements—gaining prominence, this "arbitrary distinction" between black- and white-box testing has faded somewhat.[7]

White-box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs.[5][6] This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level.[7] It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Techniques used in white-box testing include:[6][8]

  • API testing – testing of the application using public and private APIs (application programming interfaces)
  • Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
  • Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
  • Mutation testing methods
  • Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[9] Code coverage as a software metric can be reported as a percentage for:[5][9][10]

  • Function coverage, which reports on functions executed
  • Statement coverage, which reports on the number of lines executed to complete the test
  • Decision coverage, which reports on whether both the True and the False branch of a given test has been executed

100% statement coverage ensures that every statement is executed at least once, but it does not by itself guarantee that every branch or path (in terms of control flow) is taken; an if statement without an else, for example, can reach full statement coverage while its false branch is never exercised. Coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.[11] Pseudo-tested functions and methods are those that are covered but not specified (it is possible to remove their body without breaking any test case).[12]
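
The sketch below shows what a pseudo-tested function looks like in practice. The first test executes every statement and branch of classify() (a hypothetical example function) yet asserts nothing, so the function body could be emptied without breaking it; the second test pins the behavior down.

    def classify(x):
        if x >= 0:
            return "non-negative"
        return "negative"

    def test_classify_covered_but_unchecked():
        # Executes both branches: 100% statement and branch coverage,
        # but no assertion, so classify() is merely pseudo-tested.
        classify(1)
        classify(-1)

    def test_classify_actually_specified():
        # The same coverage, now with the behavior actually checked.
        assert classify(1) == "non-negative"
        assert classify(-1) == "negative"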

Black-box testing

Black-box testing (also known as functional testing) treats the software as a "black box," examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it.[13] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.[5][6][10]

Specification-based testing aims to test the functionality of software according to the applicable requirements.[14] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
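
As a hedged sketch of specification-based testing, assume a requirement that orders of $100 or more receive a 10% discount (an invented specification). The tester checks outputs against the spec without reading the implementation.

    def discounted_total(total):
        return total * 0.9 if total >= 100 else total

    def test_meets_specification():
        # Expected values come straight from the assumed specification.
        assert discounted_total(100) == 90.0   # at the threshold
        assert discounted_total(99) == 99      # just below it
        assert discounted_total(200) == 180.0  # well above it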

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[15]

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[16] Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.

This method of test can be applied to all levels of software testing: unit, integration, system, and acceptance.[7] It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Component interface testing

Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.[17] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[18][19] The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[18] Unusual data values in an interface can help explain unexpected performance in the next unit.
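
A hedged sketch of the idea, assuming a sensor unit handing temperature readings to a display unit: each value crossing the interface is timestamped into a log and range-checked before being passed on. The units, valid range, and log format are invented for illustration.

    import time

    interface_log = []  # stands in for a separate log file of passed data

    def pass_reading(value, consumer):
        """Log and range-check a data item crossing the unit boundary."""
        interface_log.append((time.time(), value))
        if not -40.0 <= value <= 125.0:  # assumed valid sensor range
            raise ValueError(f"out-of-range value at interface: {value}")
        return consumer(value)

    def display_unit(value):
        return f"temperature: {value} C"

    if __name__ == "__main__":
        print(pass_reading(21.5, display_unit))  # normal value passes through
        try:
            pass_reading(999.0, display_unit)    # extreme value is caught
        except ValueError as err:
            print(err)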

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.[20][21]

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication increases drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence she or he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Ad hoc testing and exploratory testing are important methodologies for checking software integrity because they require less preparation time to implement, while the important bugs can be found quickly.[22] In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in a more rigorous examination of defect fixes.[22] However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.[22]


Grey-box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary."[23] Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages.[23] Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.[24]
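
The database workflow described above might look like the following hedged sketch, using Python's built-in sqlite3 module with an in-memory database. The table, seed data, and SQL statements are assumptions for illustration.

    import sqlite3

    def test_order_marked_paid():
        conn = sqlite3.connect(":memory:")  # isolated testing environment
        conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
        conn.execute("INSERT INTO orders VALUES (1, 'new')")  # seed the database

        # Action under test: a hypothetical code path marks the order paid.
        conn.execute("UPDATE orders SET status = 'paid' WHERE id = 1")

        # Query the resulting state, as a grey-box tester would.
        status = conn.execute(
            "SELECT status FROM orders WHERE id = 1").fetchone()[0]
        assert status == "paid"
        conn.close()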

Testing Levels

There are four main levels of software testing:[25]

1. Unit Testing - This checks whether individual software components fulfill their intended functionality. A unit is the smallest testable portion of a system or application that can be compiled, linked, loaded, and executed. This kind of testing lets each module be tested separately, with the aim of verifying each part of the software in isolation. It is typically performed by developers (see the pytest sketch after this list).

2. Integration Testing - This checks the data flow from one module to other modules. In this phase, different software modules are combined and tested as a group to make sure the integrated system is ready for system testing. This kind of testing is typically performed by testers.

3. System Testing - This evaluates both the functional and non-functional requirements of the system. System testing is performed on a complete, integrated system and checks the system's compliance with its requirements. It tests the overall interaction of components and involves load, performance, reliability, and security testing. System testing is most often the final test to verify that the system meets the specification.

4. Acceptance Testing - This checks whether the requirements of a specification or contract are met as per its delivery. Acceptance testing is usually done by the user or customer, although other stakeholders can be involved in the process.
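
As promised above, here is a minimal sketch of a level-1 (unit) test in pytest style; the function under test is hypothetical.

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    def test_celsius_to_fahrenheit():
        # The unit is tested in isolation from the rest of the system.
        assert celsius_to_fahrenheit(0) == 32
        assert celsius_to_fahrenheit(100) == 212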

Testing Techniques and Tactics

There are five important software testing techniques and tactics:[26]

1. Boundary Value Analysis (BVA) - This technique is based on testing at the boundaries between partitions. It covers maximum, minimum, inside and outside boundaries, typical values, and error values. A large share of errors is generally observed at the boundaries of the defined input values rather than at the center. BVA produces a selection of test cases that exercise bounding values, and this black-box technique complements equivalence partitioning. It rests on the principle that if a system works well for these particular values, then it will work well for all values that lie between the two boundary values.

Guidelines for Boundary Value Analysis (a pytest sketch follows the list):

  • If an input condition is restricted to values between x and y, then design test cases with the values x and y as well as values just above and below them.
  • If an input condition accepts a large number of values, develop test cases that exercise the minimum and maximum values, as well as values just above and below those limits.
  • Apply guidelines 1 and 2 to output conditions: design tests that produce the minimum and maximum expected outputs, and that attempt to produce values below and above them.
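
Here is the promised pytest sketch, assuming an input restricted to the range 1..100 (an invented requirement): the boundaries and the values just outside them are tested.

    import pytest

    def accept_quantity(n):
        if not 1 <= n <= 100:
            raise ValueError("quantity out of range")
        return n

    @pytest.mark.parametrize("n", [1, 2, 99, 100])  # on and just inside the boundaries
    def test_valid_boundaries(n):
        assert accept_quantity(n) == n

    @pytest.mark.parametrize("n", [0, 101])         # just outside the boundaries
    def test_invalid_boundaries(n):
        with pytest.raises(ValueError):
            accept_quantity(n)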

2. Equivalence Class Partitioning - This lets you divide a set of test conditions into partitions whose members should be treated the same. The method divides the input domain of a program into classes of data from which test cases can be designed. The concept behind the technique is that a test case using a representative value of a class is equivalent to a test of any other value of the same class. It allows you to identify valid as well as invalid equivalence classes, as the sketch below illustrates.
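
The sketch below partitions ages into three assumed equivalence classes (under 18, 18 to 64, and 65 and over) and tests one representative value per class instead of every possible age; the pricing rules are invented.

    def ticket_price(age):
        if age < 18:
            return 5
        if age < 65:
            return 10
        return 7

    def test_one_representative_per_class():
        assert ticket_price(10) == 5   # child partition
        assert ticket_price(40) == 10  # adult partition
        assert ticket_price(70) == 7   # senior partition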

3. Decision Table Based Testing - Also known as a Cause-Effect table, this technique is used for functions that respond to a combination of inputs or events. For example, a submit button should be enabled only if the user has entered all required fields. The first task is to identify functionalities where the output depends on a combination of inputs. If the set of input combinations is large, divide it into smaller subsets that are easier to manage in a decision table. For every such function, create a table listing all combinations of inputs and their respective outputs. This helps to identify conditions that would otherwise be overlooked by the tester.

Following are the steps to create a decision table (a table-driven test sketch follows the list):

  • Enlist the inputs in rows.
  • Enter all the rules in the columns.
  • Fill the table with the different combinations of inputs.
  • In the last row, note down the output for each input combination.
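
For the submit-button example above, the promised sketch encodes the decision table as data and checks every input combination against its expected output.

    def submit_enabled(username_filled, password_filled):
        return username_filled and password_filled

    DECISION_TABLE = [
        # (username filled, password filled) -> submit button enabled
        ((True,  True),  True),
        ((True,  False), False),
        ((False, True),  False),
        ((False, False), False),
    ]

    def test_decision_table():
        for (user, pwd), expected in DECISION_TABLE:
            assert submit_enabled(user, pwd) == expected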

4. State Transition - In this technique, changes in input conditions change the state of the Application Under Test (AUT). It allows the tester to test the behavior of an AUT by entering various input conditions in a sequence. The testing team provides positive as well as negative input test values to evaluate the system behavior.

Guidelines for State Transition (a sketch follows the list):

  • State transition should be used when the testing team is testing the application for a limited set of input values.
  • The technique should be used when the testing team wants to test the sequence of events that happen in the application under test.
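
Here is the promised sketch: a tiny login lockout state machine (an invented design) is driven through positive and negative input sequences, and the resulting states are checked.

    TRANSITIONS = {
        ("active", "fail"):    "warned",
        ("warned", "fail"):    "locked",
        ("active", "success"): "active",
        ("warned", "success"): "active",
    }

    def next_state(state, event):
        # Undefined transitions keep the current state (locked stays locked).
        return TRANSITIONS.get((state, event), state)

    def test_two_failures_lock_the_account():
        state = "active"
        for event in ["fail", "fail"]:  # negative input sequence
            state = next_state(state, event)
        assert state == "locked"

    def test_success_resets_warning():
        assert next_state("warned", "success") == "active"  # positive path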

5. Error Guessing - This software testing technique is based on guessing the errors that may lurk in the code. It relies heavily on experience: test analysts use their experience to guess the problematic parts of the application under test, so they must be skilled and experienced for better error guessing. The technique builds a list of possible errors or error-prone situations, and the tester then writes test cases to expose them. To design test cases with this technique, the analyst can draw on past experience to identify the conditions (a sketch follows the guidelines below).

Guidelines for Error Guessing:

  • Use previous experience of testing similar applications.
  • Develop an understanding of the system under test.
  • Draw on knowledge of typical implementation errors.
  • Remember previously troubled areas.
  • Evaluate historical data and test results.
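
As a sketch of error guessing in pytest style, the inputs below are the kind an experienced analyst might guess at for a hypothetical name validator: empty strings, whitespace-only input, and a missing value.

    import pytest

    def normalize_name(name):
        if not isinstance(name, str) or not name.strip():
            raise ValueError("invalid name")
        return name.strip().title()

    @pytest.mark.parametrize("bad", ["", "   ", None])  # guessed trouble spots
    def test_guessed_problem_inputs(bad):
        with pytest.raises(ValueError):
            normalize_name(bad)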

Testing Process

Unit Testing

Benefits and Limitations

Here are the benefits or advantages of using unit testing:[27][28]

  • Validate your work - By writing a test, you double-check what you did. Unit testing finds problems early in the development cycle. This includes both bugs in the programmer's implementation and flaws or missing parts of the specification for the unit. The process of writing a thorough set of tests forces the author to think through inputs, outputs, and error conditions, and thus more crisply define the unit's desired behavior.
  • Separate concerns in your code - Unit testing your code requires that you make it testable. Making code testable often means declaring dependencies upfront. This kind of structuring of the code usually leads to a cleaner design and better separation of concerns. With code that is not tested, it's easier to miss implicit dependencies and classes silently taking on multiple responsibilities.
  • Always up-to-date documentation - Documentation gets out of date unless you update it, but tests that run and pass with the code do not. If you write clean tests, they can serve as evergreen documentation for the code. Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit, and how to use it, can look at the unit tests to gain a basic understanding of the unit's interface (API).
  • Fewer regressions - Unit tests do a remarkable job of catching regressions as long as they are in the codebase.
  • A safety net for refactoring - While unit tests can catch regressions for small changes, they shine with large refactoring. On codebases with high test coverage and good tests, refactoring is much higher confidence work. Unit testing allows the programmer to refactor code or upgrade system libraries at a later date, and make sure the module still works correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified. Unit tests detect changes which may break a design contract.

Here are the limitations or disadvantages of using unit testing:[29]

  • With unit testing, the amount of code that must be written increases. You usually have to write one or more unit tests depending on how complex things are; it is often suggested to have at least three so you don't just get a yes and a no that contradict each other. For every line of code written, programmers often need 3 to 5 lines of test code. This obviously takes time, and the investment may not be worth the effort. While the test code should be fairly simple, this testing method is still more work and more code, which means more hours and more cost.
  • Unit testing cannot and will not catch all errors in a program. There is no way it can test every execution path or find integration errors and full system issues. Unit testing should be done in conjunction with other software testing activities, as they can only show the presence or absence of particular errors; they cannot prove a complete absence of errors. To guarantee correct behavior for every execution path and every possible input, and ensure the absence of errors, other techniques are required, namely the application of formal methods to proving that a software component has no unexpected behavior.
  • Unit tests have to be realistic. You want the unit you're testing to act as it will as part of the full system. If this doesn't happen, the test's value and accuracy are compromised.

Applications

Test-driven Development

Test-Driven Development (TDD) uses tests as a way to design code by creating the test first before any actual production code is written. You then try to make the test pass by creating production code that fulfills the test. This is usually a five-step process:

  • Write a test (some also call this a specification).
  • Run the test and show that it fails. (red)
  • Write the smallest amount of production code possible that meets the needs of the test.
  • Run the test until it passes. (green)
  • Refactor.

This process is sometimes called red-green-refactor: red symbolizes a failing test and green a passing one.[30]
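
A minimal sketch of one red-green cycle, using an invented fizzbuzz example: the test is written first and fails (red), then the smallest production code that passes it is added (green), after which the code can be refactored.

    # Steps 1-2: write the test first; it fails while fizzbuzz is missing.
    def test_fizzbuzz():
        assert fizzbuzz(3) == "Fizz"
        assert fizzbuzz(5) == "Buzz"
        assert fizzbuzz(15) == "FizzBuzz"
        assert fizzbuzz(7) == "7"

    # Steps 3-4: the smallest production code that makes the test pass.
    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)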

Benefits and Limitations

Activities

Key Terms

Acceptance Testing - Checks that the requirements of a specification or contract are met as per its delivery.[31]

Alpha Testing - Tests performed near the end of development focused on simulating real user experience.

Assertion - A predicate…connected to a point in the program, that always should evaluate to true at that point in code execution.[32]

Beta Testing - Performed by "real users" of the software application in a "real environment".[33]

Buddy Testing - Two buddies mutually work on identifying defects in the same module; usually one is from the development team and the other from the testing team. Buddy testing helps the testers develop better test cases and lets the development team make design changes early. It usually happens after unit testing is complete.[34]

Coverage - A measure used to describe the degree to which the source code of a program is executed when a particular test suite runs.[35]

Integration Testing - Checks the data flow from one module to other modules.[31]

Manual Testing - Type of software testing in which test cases are executed manually by a tester without using any automated tools.[36]

Pytest - A testing framework that allows users to write test code using the Python programming language.[37]

Pytest Coverage - A coverage plugin for PyTest used for measuring code coverage of Python programs.

Refactoring - To restructure existing code.

Regression Testing - Re-running functional and non-functional tests to ensure that previously developed and tested software still performs after a change.[38]

System Testing - Evaluates both the functional and non-functional requirements of the system.[31]

Unit Testing - Checks whether software components fulfill their intended functionality.[31]

References

  1. Graham, D.; Van Veenendaal, E.; Evans, I. (2008). Foundations of Software Testing. Cengage Learning. pp. 57–58. ISBN 9781844809899.
  2. Oberkampf, W.L.; Roy, C.J. (2010). Verification and Validation in Scientific Computing. Cambridge University Press. pp. 154–5. ISBN 9781139491761.
  3. Lee, D.; Netravali, A.N.; Sabnani, K.K.; Sugla, B.; John, A. (1997). "Passive testing and applications to network management". Proceedings 1997 International Conference on Network Protocols. IEEE Comput. Soc: 113–122. doi:10.1109/icnp.1997.643699. ISBN 081868061X. S2CID 42596126.
  4. Cem Kaner (2008). A Tutorial in Exploratory Testing (PDF).
  5. Limaye, M.G. (2009). Software Testing. Tata McGraw-Hill Education. pp. 108–11. ISBN 9780070139909.
  6. Saleh, K.A. (2009). Software Engineering. J. Ross Publishing. pp. 224–41. ISBN 9781932159943.
  7. Ammann, P.; Offutt, J. (2016). Introduction to Software Testing. Cambridge University Press. p. 26. ISBN 9781316773123.
  8. Everatt, G.D.; McLeod Jr., R. (2007). "Chapter 7: Functional Testing". Software Testing: Testing Across the Entire Software Development Life Cycle. John Wiley & Sons. pp. 99–121. ISBN 9780470146347.
  9. Cornett, Steve (c. 1996). "Code Coverage Analysis". Bullseye Testing Technology. Introduction. Retrieved November 21, 2017.
  10. Black, R. (2011). Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional. John Wiley & Sons. pp. 44–6. ISBN 9781118079386.
  11. As a simple example, the C function int f(int x){return x*x-6*x+8;} consists of only one statement. All tests against a specification f(x)>=0 will succeed, except if x=3 happens to be chosen.
  12. Vera-Pérez, Oscar Luis; Danglot, Benjamin; Monperrus, Martin; Baudry, Benoit (2018). "A comprehensive study of pseudo-tested methods". Empirical Software Engineering. 24 (3): 1195–1225. arXiv:1807.05030. Bibcode:2018arXiv180705030V. doi:10.1007/s10664-018-9653-2. S2CID 49744829.
  13. Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN 978-0672327988.
  14. Laycock, Gilbert T. (1993). The Theory and Practice of Specification Based Software Testing (PDF) (dissertation). Department of Computer Science, University of Sheffield. Retrieved January 2, 2018.
  15. Bach, James (June 1999). "Risk and Requirements-Based Testing" (PDF). Computer. 32 (6): 113–114. Retrieved August 19, 2008.
  16. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7.
  17. Mathur, A.P. (2011). Foundations of Software Testing. Pearson Education India. p. 63. ISBN 9788131759080.
  18. Clapp, Judith A. (1995). Software Quality Control, Error Analysis, and Testing. p. 313. ISBN 978-0815513636. Retrieved January 5, 2018.
  19. Mathur, Aditya P. (2007). Foundations of Software Testing. Pearson Education India. p. 18. ISBN 978-8131716601.
  20. Lönnberg, Jan (October 7, 2003). Visual testing of software (PDF) (MSc). Helsinki University of Technology. Retrieved January 13, 2012.
  21. Chima, Raspal. "Visual testing". TEST Magazine. Archived from the original on July 24, 2012. Retrieved January 13, 2012.
  22. Lewis, W.E. (2016). Software Testing and Continuous Quality Improvement (3rd ed.). CRC Press. pp. 68–73. ISBN 9781439834367.
  23. Ransome, J.; Misra, A. (2013). Core Software Security: Security at the Source. CRC Press. pp. 140–3. ISBN 9781466560956.
  24. "SOA Testing Tools for Black, White and Gray Box" (white paper). Crosscheck Networks. Archived from the original on October 1, 2018. Retrieved December 10, 2012.
  25. https://www.guru99.com/levels-of-testing.html
  26. https://www.guru99.com/software-testing-techniques.html
  27. https://blog.pragmaticengineer.com/unit-testing-benefits-pyramid/
  28. Unit Testing
  29. https://theqalead.com/topics/unit-testing/
  30. https://testguild.com/unit-tdd-and-bdd-testing-whats-the-difference/
  31. Guru99 - Levels of Testing
  32. w:Assertion_(software_development)
  33. Guru99 - Alpha/Beta Testing
  34. Guru99 - Adhoc Testing
  35. w:Code_coverage
  36. Guru99 - Manual testing
  37. Guru99 - PyTest
  38. w:Regression_testing