Monday, November 9, 2009

Difference between test case and test scenario and test script

Test Scenario:- A test scenario is almost like a story. For example: "a user enters the application from the login window by entering a valid user name and password; after logging in, he clicks on the Payslip module and then on the latest payslip feature to view his latest payslip." Every test scenario has a specific goal.

Test case:- It is the set of test inputs, execution conditions and expected results developed to test a particular functionality.

Test cases are often referred to as test scripts, particularly when written. Written test cases are usually collected into test suites.

A test case can be derived from a scenario. For the above scenario we can write a test case like this:

Test Case # 1:

S.No | Step                                 | Expected Result
1    | Open the login window                | The login window opens
2    | Enter a valid user name & password   | The application opens
3    | Click on Payslip                     | The features in Payslip are displayed
4    | Click on the latest payslip feature  | The latest payslip window opens

The above is a positive test case; a negative test case can also be prepared. A test case is prepared and executed with the goal of finding hidden defects under different possibilities.

Test Script:- A test script can be manual or automated. Manual test scripts are more commonly called test cases. An automated test script is a short program written in a programming language and used to test part of the functionality of a software system. Such scripts can be written either with a specialized automated functional GUI test tool (such as HP QuickTest Professional, Borland SilkTest, and Rational Robot) or in a well-known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Python, or Ruby).
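For illustration, here is a minimal automated test script using Python's built-in unittest framework. The function under test (validate_login) and its rules are hypothetical, invented only for this sketch:

import unittest

def validate_login(username, password):
    # Hypothetical function under test: accepts one hard-coded account.
    return username == "admin" and password == "secret123"

class LoginTest(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(validate_login("admin", "secret123"))

    def test_invalid_password(self):
        self.assertFalse(validate_login("admin", "wrong"))

    def test_blank_username(self):
        self.assertFalse(validate_login("", "secret123"))

if __name__ == "__main__":
    unittest.main()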

Test Suite:- In software development, a test suite, less commonly known as a validation suite, is a collection of test cases that are intended to be used to test a software program to show that it has some specified set of behaviours. A test suite often contains detailed instructions or goals for each collection of test cases and information on the system configuration to be used during testing.

Occasionally, test suites are used to group similar test cases together. A system might have a smoke test suite that consists only of smoke tests or a test suite for some specific functionality in the system. It may also contain all tests and signify if a test should be used as a smoke test or for some specific functionality.

An executable test suite is a test suite that can be executed by a program. This usually means that a test harness, which is integrated with the suite, exists. The test suite and the test harness together can work on a sufficiently detailed level to correctly communicate with the system under test (SUT).
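As a rough sketch of an executable test suite plus harness, Python's unittest module can play both roles: a TestSuite groups the test cases and a TextTestRunner executes them. The test classes below are placeholders invented for this example:

import unittest

class SmokeTest(unittest.TestCase):
    def test_arithmetic(self):
        self.assertEqual(2 + 2, 4)

class PayslipTest(unittest.TestCase):
    def test_placeholder(self):
        # Stands in for a real payslip check in this toy example.
        self.assertTrue(True)

def build_suite():
    # The suite collects individual test cases into one runnable unit.
    suite = unittest.TestSuite()
    suite.addTest(SmokeTest("test_arithmetic"))
    suite.addTest(PayslipTest("test_placeholder"))
    return suite

if __name__ == "__main__":
    # The runner acts as the harness that executes the suite and reports results.
    unittest.TextTestRunner(verbosity=2).run(build_suite())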


Use Case: A sequence of transactions in a dialogue between a user and the system with a tangible result.

Test Scenario: It is a document specifying a sequence of actions for the execution of a test.

Test Case:
A set of input values, test execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to verify compliance with a specific requirement.

Friday, September 25, 2009

Cookies- Persistent and Temporary Cookies

A cookie is used to store a small piece of information on the client machine. A cookie contains page-specific information that a web server sends to a client along with the page output. Cookies are used for this because HTTP is a stateless protocol and cannot indicate whether a page request comes from the same client or a different one. You can use cookies to keep track of individual users who access a web page across HTTP connections.

Cookies are saved on the client computer.
Cookies are of two types
1. Temporary
2. Persistent

Temporary cookies, also known as session cookies, exist in the memory space of the browser. When the browser is closed, all the session cookies added to the browser are lost.
A persistent cookie is saved as a text file in the file system of the client computer.

Cookies enable you to store information about a client, session, or application. When a browser requests a page, it sends the information in the cookie along with the request information. The web server reads the cookie and extracts its value.
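To make the difference concrete, here is a small sketch using Python's standard http.cookies module to build the Set-Cookie headers a server would send; the cookie names and values are made up for the example:

from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Temporary (session) cookie: no expiry, so it lives only in browser memory.
cookie["session_id"] = "abc123"

# Persistent cookie: an explicit max-age tells the browser to save it to disk.
cookie["user_pref"] = "dark_mode"
cookie["user_pref"]["max-age"] = 60 * 60 * 24 * 30  # 30 days

# Each entry renders as a Set-Cookie header sent along with the page output.
print(cookie.output())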

Tuesday, July 28, 2009

Vocabulary

Chronological=sequential/in order
Intimacy=familiarity/closeness/relationship/understanding
Irked=annoyed/displeased/bothered
Apprehensive = anxious/uneasy/worried
Anxious=uneasy/worried/nervous
Obligation= compulsion/duty/responsibility

Monday, July 6, 2009

Financial terms

Remittance= payment/transfer of funds
Remit = submit/pay/forward/send
Debt= sum unpaid/money owing/debit
Debit= withdrawal/subtraction/deduction
Credit amount = the amount of money in an account after debts have been charged against it
Liability= legal responsibility
Assets = belongings
Owe = have a loan from
Lend= let somebody borrow
Borrow = have a loan of
Outstanding amount = yet to paid amount
At par= Balance/equivalence
Clearing= payment/reimbursement
Compensation= return
Refund= money back/reimbursement
Mortgage= advance/credit/finance
Fringe= extreme/trimming/edging

Tuesday, November 25, 2008

Testing FAQs

What is the difference between test strategy and test plan?

Test strategy and test plan both deal with planning the testing activity.
But a test strategy is an organization-level term, which can be used for all the projects in the organization.
A test plan is a project-level term, used only for that project.


Write a test case on web login for bank application and client server?

a) Enter the URL and check whether the login page opens or not.
b) The default focus should be on the user name field.
c) The password should be displayed in encrypted (masked) format.
d) The password should not be copyable (Ctrl+C, mouse right-click and Edit > Copy should be disabled).
e) Input domain testing on the user name (boundary value analysis).
f) Input domain testing on the password (boundary value analysis).
g) Properties check on the buttons (enabled or disabled).
h) Functionality check on the buttons (whether they work correctly or not).
i) Authorization testing (whether valid and invalid values are allowed or not).
j) Case-sensitivity check on the login page (whether capital and small letters are treated differently).
k) After sign-out and login again, it should come back to the previous page and should not show a "session expired" message.
l) After login, copy the URL, log out, then paste the URL; the same page should not be allowed to open.
m) Check the session time with respect to the client.
n) Check the cookies.
o) Check whether single-user or multi-user access is supported.
p) Check the usability of the login page.
q) Under usability, check the look and feel, spelling mistakes, colours, font sizes and so on.
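A few of these checks can be automated. Below is a rough Selenium sketch in Python covering checks (b), (c) and (i); the URL and element ids are hypothetical placeholders, not a real application:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical login page

    # (b) default focus should be on the user name field
    username = driver.find_element(By.ID, "username")
    assert driver.switch_to.active_element == username

    # (c) password field should mask input (rendered as type="password")
    password = driver.find_element(By.ID, "password")
    assert password.get_attribute("type") == "password"

    # (i) authorization check with valid credentials
    username.send_keys("valid_user")
    password.send_keys("valid_password")
    driver.find_element(By.ID, "login-button").click()
    assert "dashboard" in driver.current_url
finally:
    driver.quit()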

WHO ARE YOU?

I am future Project Manager/CEO of this company.





What is the difference between priority and severity in a bug report?

Does the integration testing include non functional test?

How to Trace a Defect?

Tracing a defect is nothing but tracking a deviation from the customer requirement in the application.

If a customer requirement is not performing its function, or is not available in the application, that is called a defect.

Defect tracking means knowing which test case each defect is related to. You should be able to find which defect is related to which requirement by tracking the defect with respect to test cases. You can maintain the documents with hyperlinks to the relevant documents, for example:

Functional Requirement Specifications document
Use Case document
Master Test Case document
Detailed Test Case document
Defect Profile document

If you maintain your documents with hyperlinks in this way, you can easily trace back which defect is related to which requirement.

Define with examples: 1. high severity and low priority; 2. low severity and high priority; 3. both high; 4. both low.

Diff. between system test cases & UAT test cases?

System testing of software is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

How do u find duplicate test cases?

In the traceability requirements matrix (TRM), we develop the mapping between the functional requirements and the test cases. With the help of the TRM you can find out whether any test case has been missed or any duplicate test case has been created.
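A TRM normally lives in a spreadsheet, but the idea behind the check can be sketched in a few lines of Python; the requirement ids and test case names below are invented for the example:

from collections import Counter

# Hypothetical mapping of each test case to the requirement it covers.
trm = {
    "TC_01": "REQ_LOGIN",
    "TC_02": "REQ_LOGIN",   # second case against the same requirement
    "TC_03": "REQ_PAYSLIP",
}
requirements = {"REQ_LOGIN", "REQ_PAYSLIP", "REQ_LOGOUT"}

# A requirement with no test case mapped to it has been missed.
print("Missed requirements:", requirements - set(trm.values()))

# Requirements hit by several test cases are candidates for a duplicate review.
counts = Counter(trm.values())
print("Review for duplicates:", [req for req, n in counts.items() if n > 1])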


What is the big bang approach in integration testing?
All the modules are integrated first at once, and then testing is performed. The disadvantage of this is that we cannot point out the origin of a bug: if there is a bug, it is difficult for the developer to find out which part of the application is not working, or whether a mistake was made while integrating the modules.

How to write test case for triangle and square?

What is B.V.A with brief example?

What do you know about ERP, INSURANCE DOMAINS?

Difference between XP and Vista?

To be honest, Microsoft didn't do much in the four-year interval between the initial Windows XP and Windows Vista.
But security is one dramatic addition, especially in these days of a growing number of hackers and malware.
We may have already had a taste of this with Windows XP SP2, which has generally the same security features as Windows Vista, for example the firewall, although Windows Vista's firewall can monitor both inbound and outbound traffic.
If you would like more details, be sure to check out the official Windows Vista feature list.

What is suspension & resumption criteria in Test Plan please explain with example?

Suspension/Resumption Criteria in a Software Test Plan :
If any defects are found which seriously impact the test progress, the test lead may choose to suspend testing.
The criteria considered for suspension or resumption are:
[a] hardware/software is not available at the time indicated in the project schedule
[b] the build contains many serious defects which severely prevent or limit testing progress
[c] assigned test resources are not available when needed by the test team

Resumption Criteria :

If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the “FIX” must be verified by the testing team before testing is resumed

What is the difference between test strategy and test data?

Suppose there are three modules A-B-C, where the output of A feeds B and the output of B feeds C. If A and C are not ready but module B is ready, how can you test module B?

What is SQL Index ?

Difference between web application and Client Server application ?

Please give me any example of High severity and Low priority type of bug ?

What is diff. between System Testing and Integration Testing ?

What is the difference between Performance testing and Load testing and stress testing (Negative testing)?

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. For a Web application, testers will use
tools that simulate concurrent users/HTTP connections and measure response times.

Load testing

We have already seen load testing as part of the process of performance testing and tuning. It means constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.

Stress testing

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
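As a minimal sketch of simulating concurrent users and measuring response times with only the Python standard library (the URL and user count are placeholders; real load tests use dedicated tools):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # hypothetical system under test
CONCURRENT_USERS = 10

def timed_request(_):
    # Issue one HTTP request and return its response time in seconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"avg: {sum(times) / len(times):.3f}s, worst: {max(times):.3f}s")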

Why do we need testing?

what is Difference between entry criteria and exit criteria ?

Entry Criteria

Test cases are completed
Resources are identified
Test data is available
Test environment is set up

Exit Criteria

1. All test cases executed
2. All defects reported and retested
3. Test documents updated with results
4. All reports generated:
* Execution report
* Defect report

What is the test driver used in integration testing? Can anybody explain it in detail? Thanks in advance.

What is the difference between a test case and a test scenario?

What is diff between Load testing and Stress testing?

What is sanity testing and when do you start it?

Sanity testing is the initial level of testing: before you accept the build, you check that all the major functionalities are working correctly. It is like a fitness check of the product. After finishing the sanity testing we can do the system testing.

A sanity test determines whether it is reasonable to proceed with further testing (i.e. whether that particular round of testing is worthwhile or not).

What are the entry and exit criteria for each phase in the STLC and SDLC?

What is the manual testing process?


What are the metrics you are following?

Test metrics are a mechanism for measuring the effectiveness of testing quantitatively. They are a feedback mechanism for improving the testing process currently being followed.

What are the software testing methodologies?

1. White box testing
2. Black box testing
3. Gray box testing

Why Should we prepare traceability matrix?

From the TRM, we know how many requirements are covered by test cases.
The TRM gives an easy mapping between requirements and test cases.
It is also very useful at change request time.

If the developer rejects that it’s not a bug means then what’s the next step we follow?

Retest the same test case; if you get the same bug, send it back to the developer. If the developer still doesn't accept it, approach the team lead.

What is Deferred Bug? Explain it who allocates it?

Deferred means that when we report a bug to a particular developer, then because of the low importance of the bug or a lack of time, they are not going to fix that bug at this time. So that bug is assigned the DEFERRED state.

Deferred means postponed to future versions; the deferred status is usually assigned by the program manager.

What is configuration testing?

During this testing the tester validates how well the current project supports different types of hardware, such as different types of printers, network interface cards (NIC), topologies etc. This testing is also called hardware testing or portability testing.

Configuration testing is also called "hardware compatibility testing": the tester checks whether the software build supports different hardware technologies or not, e.g. printers, scanners, topologies etc.

Testing the software against the hardware, or the software against other software, is called configuration testing.


What is the difference between bug, defect, error, failure, fault & mistake?

Let me explain with an example.
If the input is 2+2 and the output is an error message, it is an error (syntax error).
If the input is 2+2 and the output is 5, it is a bug (logical error).
If the program is for addition and subtraction, and only addition is there while subtraction is left out, it is a defect.
Usually the error is found by the developer.
Usually the bug is found by the tester.
Usually the defect is found by the client or user.

There are 100 glasses. A servant has to supply the glasses to a person. For every glass he supplies without damage he gets 3 paisa; otherwise he loses 3 paisa. At the end of supplying 100 glasses, if he gets 270 paisa, how many glasses were supplied safely?

Let x be the number of glasses supplied safely. He earns 3 paisa for each safe glass and loses 3 paisa for each damaged one, so:
3x - 3(100 - x) = 270
6x = 570
x = 95
So the answer is 95 glasses supplied safely (95 x 3 = 285 paisa earned, minus 5 x 3 = 15 paisa lost, leaves 270).
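The result is easy to confirm with a one-line brute-force check in Python:

# Find the number of safe glasses s for which earnings come to 270 paisa.
print(next(s for s in range(101) if 3 * s - 3 * (100 - s) == 270))  # 95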


Can anybody help me what is the diff between CMM, SEI, PCMM and six sigma?

CMM, the Capability Maturity Model, defines the processes to be followed at each level (based on experience) to achieve the desired result.
SEI - the Software Engineering Institute, which certifies companies with the help of its representatives in various countries.
PCMM - People CMM: about 199 people practices followed across companies. Based on the feedback of the people about the company, making the required changes yields better results and a healthy environment for the people working in it.
Six Sigma, in brief, is a standard process for moving toward totally bug-free software, e.g. 99.9999% defect-free. Though it is similar to CMM level 5, it is very rigid in some respects, such as rejecting a defective software component and producing a new one; there is no compromise on quality. Motorola is one such Six Sigma company.

When we enter valid user ID and passwords and unable to login then what do we test more?

Check your database to see whether the stored value is correct, and check whether the value is case-sensitive or not.

Report the bug to the development team, because sometimes the developer has made a mistake, or the valid user ID does not exist in the database table.

If you are unable to log in, report it to the development team, or discuss the particular functionality with the test team lead or the developer.

Bug with High Severity and Low Priority

If a user clicks on a button/link 100 times, the page gets corrupted.

Ledgers in the Ledger Master in Finance and Accounting are not getting created, or show the wrong ledger code, after every 1000 entries in the client Group Master.


Bug with High Priority and Low Severity

A spelling mistake in the company name on a web page
Different logos on different pages (which do not match)


What is the difference between QA and QC?

Difference between QA and QC:

Quality Assurance (QA):
QA is a verification process, like reviews.
• Verification
• Process oriented
• Prevention based

Quality Control (QC):
QC is a validation process, like testing the product.
• Validation
• Product oriented
• Detection based


What is difference between regression testing and re- testing?

Regression Testing: the re-execution of selected test cases on a modified build to ensure that a bug fix works without any side effects.
It means testing whether the changes in the modified build affect other functionalities, i.e. other parts of the application.

Retesting: re-executing a previously failed test case on the modified build to confirm that the defect has actually been fixed. (Repeating the same test with multiple sets of input data is data-driven testing, which is a different thing.)


What is main difference between Testing Techniques and Testing Methodology?

Techniques: black box and white box.
Methodologies: manual and automation.
Types: static and dynamic (do not go into depth initially).

Testing techniques include equivalence class partitioning, boundary value analysis and error guessing for black box testing, and statement coverage and condition coverage for white box testing.
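As a quick illustration of boundary value analysis, suppose a hypothetical field accepts integers from 1 to 100; BVA picks test values at and around each boundary:

import unittest

def accepts(value):
    # Hypothetical validator: the field accepts integers from 1 to 100.
    return 1 <= value <= 100

class BoundaryValueTest(unittest.TestCase):
    def test_boundaries(self):
        self.assertFalse(accepts(0))    # just below the lower boundary
        self.assertTrue(accepts(1))     # lower boundary
        self.assertTrue(accepts(2))     # just above the lower boundary
        self.assertTrue(accepts(99))    # just below the upper boundary
        self.assertTrue(accepts(100))   # upper boundary
        self.assertFalse(accepts(101))  # just above the upper boundary

if __name__ == "__main__":
    unittest.main()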

Test Methodology: it is the way we are going to approach a project; it is nothing but the method we follow in our project.


What are the different types, methodologies, approaches, Methods in software testing?

There are two broad types of testing:
1. Functional testing
2. Non-functional testing


What is fish bone diagram?

Also called an Ishikawa diagram, named after the Japanese Dr. Ishikawa, a fishbone diagram is primarily used for root cause analysis (RCA): first the main branches are identified as problematic areas, and then sub-branches are used to find the causes behind those problem areas.

What is Quizilla Analysis?

What is associated bug and what is Ad-hoc bug?

An associated bug is one which is related to a test case.
An ad-hoc bug is one which is not associated with any test case; it is generally found through the tester's experience, where no documentation is prepared and no planning is done.

What is difference between Interface testing and Integration testing?

Interface Testing – GUI testing/Usability Aspects/Cosmetic Bugs
Integration Testing – Combining all the modules/selected modules

What is the result of the "select 8 from table_name" query?

It will return one row with the value 8 for each record/row in the selected table.
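This is easy to verify with an in-memory SQLite database in Python (the table and rows are made up for the demo):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("c",)])

# Selecting a constant yields one row of value 8 per row in the table.
print(conn.execute("SELECT 8 FROM t").fetchall())  # [(8,), (8,), (8,)]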

What are types of the integration testing?

1. Incremental
Top down
Bottom up
Sandwich

2. Non-incremental
Also called the Big Bang method

What is Defect Density? How the Defect Density is measured?

This is basically a consolidated measure of the number of defects found in each module. It is helpful for locating errors and for concentrating analysis on a particular area.

Defect density (DD) measures the number of defects found in a particular size of code, usually expressed per thousand lines of code (KLOC):

DD = Defects / KLOC
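For example, 30 defects found in a 15,000-line module give a density of 2 defects per KLOC; as a small Python helper:

def defect_density(defects, lines_of_code):
    # Defects per thousand lines of code (KLOC).
    return defects / (lines_of_code / 1000)

print(defect_density(30, 15000))  # 2.0 defects per KLOC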

Wednesday, November 12, 2008

Test Plan

test plan

A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.

Test plans in software development


In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies

Test plan template, based on IEEE 829 format

1. Test Plan Identifier (TPI)
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Entry & Exit Criteria
11. Suspension Criteria and Resumption Requirements
12. Test Deliverables
13. Remaining Test Tasks
14. Environmental Needs
15. Staffing and Training Needs
16. Responsibilities
17. Schedule
18. Planning Risks and Contingencies
19. Approvals
20. Glossary

Test plan identifier

For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0".
References

List all documents that support this test plan.

Documents that are referenced include:
• Project Plan
• System Requirements specifications.
• High Level design document.
• Detail design document.
• Development and Test process standards.
• Methodology

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that contain information relevant to this project/process.
Identify the objective of the plan or scope of the plan in relation to the Software Project plan it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis & reviews), and possibly the process to be used for change control and communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.
This can be controlled by a local Configuration Management (CM) process if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the Client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.

Software risk issues


Identify what software is to be tested and what the critical areas are, such as:
1. Delivery of a third party product.
2. New version of interfacing software.
3. Ability to use and understand a new package/tool, etc.
4. Extremely complex functions.
5. Modifications to components with a past history of failure.
6. Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
1. Safety.
2. Multiple interfaces.
3. Impacts on Client.
4. Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to defining where the risks are is to hold several brainstorming sessions.
• Start with ideas such as "What worries me about this project/application?"

Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USERS view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.
Sections 4 and 6 are very similar, and the only true difference is the point of view. Section 4 is a technical type description including version numbers and other technical information and Section 6 is from the User’s viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.

Features not to be tested

This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested, there can be any number of reasons.
• Not to be included in this release of the Software.
• Low risk, has been used before and was considered stable.
• Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment of grouping of features/components must be tested together.
Other information that may be useful in setting the approach is:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
How will meetings and other organizational processes be handled?

Item pass/fail criteria

Show stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed. Show Stopper severity requires definition within each testing context.

Entry & exit criteria

Specify the criteria to be used to start testing and how you know when to stop the testing process.

Suspension criteria & resumption requirements

Suspension criteria specify the conditions used to suspend all or a portion of the testing activities, while resumption criteria specify when testing can resume after it has been suspended.

Test deliverables

List the documents, reports and charts that will be presented to stakeholders on a regular basis during testing and when testing has been completed.

Remaining test tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.

Environmental needs

Are there any special requirements for this test plan, such as:
• Special hardware such as simulators, static generators etc.
• How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
• How much testing will be done on each component of a multi-part feature?
• Special power requirements.
• Specific versions of other supporting software.
• Restricted use of the system during testing.

Staffing and training needs

Training on the application/system.
Training for any test tools to be used.
The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the testing and training.

Responsibilities

Who is in charge?
This issue includes all areas of the plan. Here are some examples:
• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test plans?

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?
• Lack of personnel resources when testing is to begin.
• Lack of availability of required hardware, software, data or tools.
• Late delivery of the software, hardware or tools.
• Delays in training on the application and/or tools.
• Changes to the original requirements or designs.
• Complexities involved in testing the applications
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the following actions will be taken:
• The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
o These two items could lower the overall quality of the delivered product.
• Resources will be added to the test team.
• The test team will work overtime (this could affect team morale).
• The scope of the plan may be changed.
• There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
• The audience for a unit test level plan is different from that of an integration, system or master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
• Users may have varying levels of business acumen and very little technical skills.
• Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote consistent communications.

Regional differences

There are often localised differences in the use of this term. In some locations, test plan can mean all of the tests that need to be run. Purists would suggest that a collection of tests or test cases is a Test suite.
Some locations would consider what is described above as a test strategy. This usage is generally localised to the Indian market.
Some state that a test strategy creation precedes the test plan creation (ISTQB among others) [1], others suggest that it follows the test plan creation.[2]

Criticism of the overuse of test plans

Cem Kaner, co-author of Testing Computer Software (ISBN 0-471-35846-0), has suggested that test plans are written for two very different purposes. Sometimes the test plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals. In other words, a test plan is like a power tool: you should not use one if you don't know what you're doing with it, or you will waste both time and money.


Test plans in hardware development
IEEE 829-1998:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass/fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.

USB - Universal Serial Bus

Universal Serial Bus (USB) is a serial bus standard to interface devices. USB was designed to allow many peripherals to be connected using a single standardized interface socket and to improve the plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer specific, individual device drivers to be installed.
USB is intended to help retire all legacy varieties of serial and parallel ports. USB can connect computer peripherals such as mice, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles. As of 2008, there are about 2 billion USB devices in the world.[1]
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Agere (now merged with LSI Corporation), Apple Inc., Hewlett-Packard, Intel, NEC, and Microsoft.