Security Testing: the process of determining that an information system (IS) protects data and maintains functionality as intended.
The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient. It is by no means the only way of ensuring the security of the information.
Integrity
A measure intended to allow the receiver to determine that the information it receives has not been altered and is correct.
Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
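To make the "algorithmic check" concrete, here is a minimal sketch (mine, not from the original post) using Python's standard hmac module; the shared key and the messages are hypothetical:

import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key known to sender and receiver

def tag(message: bytes) -> str:
    # Additional information (an HMAC tag) sent along with the message.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # Recompute the tag and compare in constant time.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer 100 to account 42"
t = tag(msg)
print(verify(msg, t))                            # True: message intact
print(verify(b"transfer 999 to account 42", t))  # False: message altered

Because the tag depends on a secret shared by the two parties, the same check also gives the receiver some confidence about the origin of the message, which leads to the next concept.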
Authentication
A measure designed to establish the validity of a transmission, message, or originator.
Allows a receiver to have confidence that information it receives originated from a specific known source.
Authorization
The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.
Availability
Assuring information and communications services will be ready for use when expected.
Information must be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place.
In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.
Friday, November 27, 2009
Agile Testing
Agile testing is a software testing practice that follows the principles of the agile manifesto, emphasizing testing from the perspective of customers who will utilize the system. Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective. In other words, the emphasis is shifted from "testers as quality police" to something more like "entire project team working toward demonstrable quality."
Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
Since working increments of the software are released often in agile software development, there is also a need to test often. This is commonly done by using automated acceptance testing to minimize the amount of manual labor involved. Doing only manual testing in agile development may result in either buggy software or slipping schedules because it may not be possible to test the entire build manually before each release.
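As an illustration of such automated acceptance tests (a sketch, not from the original post; the login function stands in for a real system), tests like these can run unattended against every working increment:

# test_acceptance.py -- run with: pytest test_acceptance.py

def login(username: str, password: str) -> bool:
    # Hypothetical stand-in for the real system under test.
    return (username, password) == ("alice", "secret")

def test_valid_user_can_log_in():
    # Acceptance criterion expressed from the customer's perspective.
    assert login("alice", "secret")

def test_wrong_password_is_rejected():
    assert not login("alice", "wrong")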
Thursday, November 26, 2009
Difference between BRS and FRS
The main difference between a BRS and an FRS is that the BRS tells the whole requirement (story), whereas the FRS describes the sequence of operations to be performed by a single process.
The BRS is a document that covers the business aspect of a requirement at a broad level. For example, suppose you want to develop a new website. The BRS would address what business the website is being built for. Say it is a website like eBay that allows people to shop online; that would be the business requirement covered in the BRS.
The FRS would then address each function the website provides in order to make the shopping experience of visitors efficient and easy. It would also address issues such as security that may need to be built into the website.
Both the business and functional requirements can be addressed in the same document; however, this depends on the organization.
Both the BRS and the FRS are prepared by the business analyst (BA), who captures the requirements from the end user. A developer would be involved in writing a technical document addressing the technical design of the website, which the BA may or may not concern himself with.
Difference between SRS and FRS
The Software Requirements Specification (SRS) is built based on the Functional Requirements Specification (FRS).
The Functional Requirements Specification deals with the client's requirements.
The Software Requirements Specification deals with the company's resources.
Functional testing
Functional testing is a methodology under which functionality testing is one type.
Example: GUI testing is a type of functional testing.
Difference between SRS and BRS
SRS stands for Software/System Requirements Specification. It is typically an MS Word document that defines the complete business functionality of the particular application.
The SRS is designed by a system analyst.
BRS stands for Business Requirements Specification. Initially the client gives the requirements in their own format; these are then converted into a standard format that software people can understand.
In a BRS the requirements are defined in a general format, whereas in an SRS the requirements are divided into modules, with each module specifying how many interfaces and screens it contains.
The BRS is developed by a business analyst.
System testing
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. [1]
As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.
Testing the whole system
System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase, where the focus is to have almost a destructive attitude and tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
Types of system testing
The following examples are different types of testing that should be considered during System testing:
1. GUI software testing
2. Usability testing
3. Performance testing
4. Compatibility testing
5. Error handling testing
6. Load testing
7. Volume testing
8. Stress testing
9. User help testing
10. Security testing
11. Scalability testing
12. Capacity testing
13. Sanity testing
14. Smoke testing
15. Exploratory testing
16. Ad hoc testing
17. Regression testing
18. Reliability testing
19. Recovery testing
20. Installation testing
21. Idempotency testing
22. Maintenance testing
23. Recovery testing and failover testing.
24. Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Although different testing organizations may prescribe different tests as part of System testing, this list serves as a general framework or foundation to begin with.
Friday, November 20, 2009
What is Baseline & configuration Management
Baseline: Generally, a baseline may be a single work product, or a set of work products, that can be used as a logical basis for comparison.
Sometimes the baseline is the original requirements or specifications.
Configuration Management:
Configuration management is the process of managing changes in work products, documentation, etc.
Configuration management is the process of managing change in hardware, software, firmware, documentation, measurements, etc. As change requires an initial state and next state, the marking of significant states within a series of several changes becomes important. The identification of significant states within the revision history of a configuration item is the central purpose of baseline identification.[1]
Typically, significant states are those that receive a formal approval status, either explicitly or implicitly (approval statuses may be marked individually, when such a marking has been defined, or signified merely by association to a certain baseline). Nevertheless, this approval status is usually recognized publicly. Thus, a baseline may also mark an approved configuration item, e.g. a project plan that has been signed off for execution. In a similar manner, associating multiple configuration items with such a baseline indicates those items as being approved.
Generally, a baseline may be a single work product, or set of work products, that can be used as a logical basis for comparison. A baseline may also be established (one whose work products meet certain criteria) as the basis for subsequent selected activities. Such activities may be attributed with formal approval.
Conversely, the configuration of a project often includes one or more baselines, the status of the configuration, and any metrics collected. The current configuration refers to the current status, current audit, current metrics, and latest revision of all configuration items. Similarly, but less frequently, a baseline may refer to all items associated with a specific project. This may include all revisions of all items, or only the latest revision of all items in the project, depending upon context, e.g. "the baseline of the project is proceeding as planned."
A baseline may be specialized as a specific type of baseline.[2] Some examples include:
Functional Baseline: initial specifications established; contract, etc.
Allocated Baseline: state of work products once requirements are approved
Developmental Baseline: state of work products amid development
Product Baseline: contains the releasable contents of the project
Monday, November 16, 2009
Software Risks
Are you developing a test plan or test strategy for your project? Have you addressed all risks properly in that test plan or test strategy?
As testing is the last part of the project, it is always under pressure and time constraints. To save time and money you should be able to prioritize your testing work. How will you prioritize testing work? For this you should be able to judge which testing work is more important and which is less. How will you decide which work is more or less important? Here comes the need for risk-based testing.
What is Risk?
“Risks are future uncertain events with a probability of occurrence and a potential for loss.”
Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.
In this article I will cover the types of risks. In the next articles I will try to focus on risk identification, risk management and mitigation.
Risks are identified, classified and managed before actual execution of the program. They are classified into different categories.
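One common way to judge which testing work is more or less important is to rank items by risk exposure, the product of probability of occurrence and potential loss. The sketch below (mine, not from the original article; the component names and numbers are hypothetical) shows the idea in Python:

# Risk exposure = probability of occurrence x potential loss.
risks = [
    {"item": "payment module", "probability": 0.3, "loss": 100_000},
    {"item": "report layout",  "probability": 0.6, "loss": 2_000},
    {"item": "login/security", "probability": 0.2, "loss": 250_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["loss"]

# Test the highest-exposure items first and most thoroughly.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["item"]}: exposure = {r["exposure"]:,.0f}')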
Categories of risks:
1. Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project, and ultimately the company economy, and may lead to project failure.
Schedules often slip due to the following reasons:
Wrong time estimation
Resources (staff, systems, skills of individuals, etc.) are not tracked properly.
Failure to identify complex functionalities and time required to develop those functionalities.
Unexpected project scope expansions.
2. Budget Risk:
Wrong budget estimation.
Cost overruns
Project scope expansion
3. Operational Risks:
Risk of loss due to improper process implementation, failed systems, or external events.
Causes of Operational risks:
Failure to address priority conflicts
Failure to resolve the responsibilities
Insufficient resources
No proper subject training
No resource planning
No communication in the team.
4. Technical risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
Continuously changing requirements
No advanced technology available or the existing technology is in initial stages.
Product is complex to implement.
Difficult integration of project modules.
5. Programmatic Risks:
These are external risks beyond the operational limits; they are uncertain and outside the control of the program.
These external events can be:
Running out of funds.
Market development
Changing customer product strategy and priority
Government rule changes.
These are the common categories into which software project risks can be classified. I will cover “how to identify and manage risks” in detail in the next article.
Tuesday, November 10, 2009
End to End Testing
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Fuzz Testing
Fuzz testing or fuzzing is a software testing technique that provides invalid, unexpected, or random data to the inputs of a program. If the program fails (for example, by crashing or failing built-in code assertions), the defects can be noted.
File formats and network protocols are the most common targets of fuzz testing, but any type of program input can be fuzzed. Interesting inputs include environment variables, keyboard and mouse events, and sequences of API calls. Even items not normally considered "input" can be fuzzed, such as the contents of databases, shared memory, or the precise interleaving of threads.
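A minimal fuzzing sketch in Python (mine, not from the original post); parse_record is a hypothetical program under test, and any unhandled exception on random input is recorded as a potential defect:

import random

def parse_record(data: bytes) -> list:
    # Hypothetical program under test: expects "key=value;key=value" input.
    text = data.decode("utf-8")            # may raise on invalid byte sequences
    return [field.split("=", 1) for field in text.split(";")]

random.seed(0)  # fixed seed so failures are reproducible
failures = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        parse_record(blob)
    except Exception as exc:               # a crash on random input is a finding
        failures.append((blob, exc))

print(f"{len(failures)} inputs caused unhandled exceptions")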
Monkey Testing
In computer science, a monkey test is a unit test that runs with no specific test in mind. The monkey in this case is the producer of any input. For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input or provide garbage files to check for loading routines that have blind faith in their data.
Testers use the term monkey when referring to a fully automated testing tool. This tool doesn’t know how to use any application, so it performs mouse clicks on the screen or keystrokes on the keyboard randomly. The test monkey is technically known to conduct stochastic testing, which is in the category of black-box testing. There are different types of monkey testing.
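As a sketch of stochastic, black-box input generation (mine, not from the original post; the Calculator class is a hypothetical application), a "monkey" can simply fire random keystrokes at the program and see whether it survives:

import random

class Calculator:
    # Hypothetical application driven by keystrokes.
    def __init__(self):
        self.display = "0"
    def press(self, key: str) -> None:
        if key.isdigit():
            self.display = (self.display + key).lstrip("0") or "0"
        elif key == "C":
            self.display = "0"

random.seed(1)
calc = Calculator()
keys = list("0123456789C")
for _ in range(10_000):
    calc.press(random.choice(keys))   # the monkey knows nothing about the app

print("survived 10,000 random keystrokes; display =", calc.display)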
scope of Test Plan
The scope of testing also covers the types of testing one needs to do, such as load testing and stress testing; for load and stress, how many users the test is going to simulate; and also what hardware it is going to use.
If required, one can also provide the schedule, what is eliminated from the testing, how many test cycles there are, what the bug reporting and fixing workflow is, etc.
Monday, November 9, 2009
Defect Density
Defect Density Definition
Defect Density is the number of confirmed defects detected in software/component during a defined period of development/operation divided by the size of the software/component.
Elaboration
The 'defects' are:
confirmed and agreed upon (not just reported).
Dropped defects are not counted.
The ‘period’ might be for one of the following:
for a duration (say, the first month, the quarter, or the year).
for each phase of the software life cycle.
for the whole of the software life cycle.
The ‘size’ is measured in one of the following:
Function Points (FP)
Source Lines of Code
Defect Density Formula
Defect Density = Number of Confirmed Defects / Size of the Software (in KLOC or Function Points)
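As a worked example (mine, not the original author's; the component names and numbers are hypothetical), the formula can be applied per component to spot high-risk ones:

def defect_density(confirmed_defects: int, size_kloc: float) -> float:
    # Defect density = confirmed defects / size of the software (here in KLOC).
    return confirmed_defects / size_kloc

# Hypothetical components: (confirmed defects, size in KLOC)
components = {"billing": (30, 15.0), "reporting": (12, 20.0)}

for name, (defects, kloc) in components.items():
    print(f"{name}: {defect_density(defects, kloc):.2f} defects/KLOC")
# billing: 2.00 defects/KLOC (higher risk); reporting: 0.60 defects/KLOC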
Defect Density Uses
For comparing the relative number of defects in various software components so that high-risk components can be identified and resources focused towards them
For comparing software/products so that ‘quality’ of each software/product can be quantified and resources focused towards those with low quality.
Defect Injection / Defect Seeding
Defect Injection is also known as 'Defect Seeding' or 'Fault Injection'. The process of adding known defects to the existing ones is called defect seeding. The idea is that while detecting the known bugs, unknown bugs might also be detected. The goal is to determine the bug detection rate.
What is Defect Seeding?
Answer
# 1 Defect Seeding: To assess the capability of the test team, one group inserts known defects into the application; these bugs are then to be found by another group.
Example: Suppose the second group finds 650 real bugs in the application, and in the seeded software they find 30 of the 50 seeded bugs. The estimated total number of bugs in the real application is then 50 × 650 / 30 ≈ 1083.
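This is the classic capture-recapture (bug-seeding) estimate. Here is a small sketch of the calculation in Python, using the numbers from the answer above (the function name is mine):

def estimate_total_bugs(seeded: int, seeded_found: int, real_found: int) -> float:
    # If we found seeded_found of the seeded bugs, assume we found the same
    # fraction of the real bugs: total ~= real_found * seeded / seeded_found.
    return real_found * seeded / seeded_found

print(estimate_total_bugs(seeded=50, seeded_found=30, real_found=650))
# ~1083.3: roughly 1083 bugs estimated in the real application

The closer the found fraction of seeded bugs gets to 100%, the closer the estimate converges to the number of real bugs actually found.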
What is Defect Seeding?
Answer
# 2 In this method, the developer/lead intentionally introduces bugs into the product. We don't know in which modules they will occur, so we have to do regression testing to identify those bugs as well as residual bugs (more bugs). The main intention of this is to find more bugs.
Difference between test case and test scenario and test script
Test Scenario:- A test scenario is almost like a story, for example: "a user enters the application from the login window by entering a valid user name and password. After logging in, he clicks on the Payslip module and clicks on the 'latest payslip' feature to view his latest payslip". Any test scenario will contain a specific goal.
Test case:- It is the set of test inputs, execution conditions and expected results developed to test a particular functionality.
Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.
A test case can be derived from a scenario. For the above scenario we can write a test case like:
Test Case # 1:
S.No  Steps                                 Expected Result
1     Open the login window                 Login window is opened
2     Enter valid user name & password      Application should open
3     Click on Payslip                      Features in Payslip should be displayed
4     Click on latest payslip feature       It should open the latest payslip window
Above is a positive test case, and a negative test case can also be prepared. A test case is prepared and executed with the goal of finding hidden defects under different possibilities.
Test Script:- A test script may be executed manually, in which case it is more commonly called a test case, or it may be automated.
An automated test script is a short program written in a programming language, used to test part of the functionality of a software system. Such scripts can be written either using a special automated functional GUI test tool (such as HP QuickTest Professional, Borland SilkTest, or Rational Robot) or in a well-known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Python, or Ruby).
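For instance, the positive test case above could be automated as a short Python test script. This is only a sketch against a hypothetical application class (PayslipApp and its methods are invented for illustration), not a real tool's API:

# test_payslip.py -- a hypothetical automated version of Test Case # 1.

class PayslipApp:
    # Stand-in for the real application under test.
    def __init__(self):
        self.logged_in = False
    def login(self, user: str, pwd: str) -> bool:
        self.logged_in = (user, pwd) == ("emp01", "pwd123")
        return self.logged_in
    def open_latest_payslip(self) -> str:
        if not self.logged_in:
            raise PermissionError("login required")
        return "latest payslip window"

def test_view_latest_payslip():
    app = PayslipApp()
    assert app.login("emp01", "pwd123")                           # steps 1-2
    assert app.open_latest_payslip() == "latest payslip window"   # steps 3-4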
Test Suite:- In software development, a test suite, less commonly known as a validation suite, is a collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviours. A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing.
Occasionally, test suites are used to group similar test cases together. A system might have a smoke test suite that consists only of smoke tests or a test suite for some specific functionality in the system. It may also contain all tests and signify if a test should be used as a smoke test or for some specific functionality.
An executable test suite is a test suite that can be executed by a program. This usually means that a test harness, which is integrated with the suite, exists. The test suite and the test harness together can work on a sufficiently detailed level to correctly communicate with the system under test (SUT).
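As a hedged sketch of grouping test cases into suites, here is how Python's unittest module (a common test harness) can collect cases into a smoke suite and a full suite; the test classes and their placeholder bodies are hypothetical:

import unittest

class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(True)   # placeholder body; real checks go here

class PayslipTests(unittest.TestCase):
    def test_latest_payslip_opens(self):
        self.assertTrue(True)   # placeholder body

# A smoke suite holds only a critical subset of the test cases.
smoke_suite = unittest.TestSuite([LoginTests("test_valid_login")])

# The full suite groups all similar test cases together.
full_suite = unittest.TestSuite()
full_suite.addTests(unittest.TestLoader().loadTestsFromTestCase(LoginTests))
full_suite.addTests(unittest.TestLoader().loadTestsFromTestCase(PayslipTests))

if __name__ == "__main__":
    unittest.TextTestRunner().run(smoke_suite)   # the harness runs the suite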
Use Case: A sequence of transactions in a dialogue between a user and the system with a tangible result.
Test Scenario: It is a document specifying a sequence of actions for the execution of a test.
Test Case: A set of input values, test execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to verify compliance with a specific requirement.