Tuesday, November 25, 2008

Testing FAQs

What is the difference between test strategy and test plan?

A test strategy and a test plan both describe the planning of testing activity.
But a test strategy is an organization-level document, which can be used for all the projects in the organization.
A test plan is a project-level document which is used only for that project.


Write a test case on web login for bank application and client server?

a) Enter the URL and check whether the login page opens.
b) Default focus should be on the user name field.
c) Check whether the password is displayed in encrypted (masked) format.
d) The password should not be copyable (Ctrl+C, right-click copy, and Edit > Copy should all be disabled).
e) Input-domain testing on the user name (boundary value analysis).
f) Input-domain testing on the password (boundary value analysis).
g) Check the properties of the buttons (enabled or disabled).
h) Check the functionality of the buttons (whether they work correctly).
i) Authorization testing (whether valid values are accepted and invalid values rejected).
j) Case-sensitivity check on the login page (whether uppercase and lowercase letters are handled correctly).
k) After signing out and logging in again, the application should return to the start page, not show a "session expired" message.
l) After logging in, copy the URL, log out, then paste the URL; the same protected page should not be allowed to open.
m) Check the session timeout with respect to the client.
n) Check the cookies.
o) Check single-user and multi-user access.
p) Check the usability of the login page.
q) Under usability, check look and feel, spelling mistakes, colors, font sizes and so on.
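Items (e) and (f) above can be sketched as a small boundary-value check. This is only a sketch: the 4 to 12 character username limits are assumed for illustration, not taken from any real requirement.

```python
# Boundary value analysis sketch for a username field.
# Assumed (hypothetical) rule: length must be between MIN_LEN and MAX_LEN.
MIN_LEN, MAX_LEN = 4, 12

def is_valid_username(name: str) -> bool:
    """Return True if the username length is within the assumed limits."""
    return MIN_LEN <= len(name) <= MAX_LEN

# BVA picks values at and just around each boundary.
boundary_cases = {
    "a" * (MIN_LEN - 1): False,  # just below the lower bound
    "a" * MIN_LEN: True,         # at the lower bound
    "a" * MAX_LEN: True,         # at the upper bound
    "a" * (MAX_LEN + 1): False,  # just above the upper bound
}
for value, expected in boundary_cases.items():
    assert is_valid_username(value) == expected
print("all boundary cases pass")
```

The same pattern applies to the password field: test at and around each boundary rather than at arbitrary points inside the valid range.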

WHO ARE YOU?

I am the future Project Manager/CEO of this company.





What is the difference between priority and severity in a bug report?

Does integration testing include non-functional tests?

How to Trace a Defect?

Tracing a defect is nothing but tracing a deviation from the customer requirement back through the application.

If a customer requirement is not performing its function, or is not available in the application, that is called a defect.

Defect tracking means ensuring you know which test case each defect relates to. You should be able to find which defect is related to which requirement. To track defects with respect to test cases, you can maintain the documents with hyperlinks to the related documents, e.g.:

Functional Requirement Specifications document
Use Case document
Master Test Case document
Detailed Test Case document
Defect Profile document

If you maintain your documents with hyperlinks in this way, you can easily trace back which defect is related to which requirement.

Define with examples: 1. high severity and low priority; 2. low severity and high priority; 3. both high; 4. both low.

What is the difference between system test cases and UAT test cases?

System testing of software is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

How do you find duplicate test cases?

In a Traceability Requirement Matrix (TRM), we develop a mapping between the functional requirements and the test cases. With the help of the TRM you can find out whether any test case is missing or whether any duplicate test case has been created.
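The TRM idea can be sketched as a simple requirement-to-test-case mapping; gaps and duplicates then fall straight out of the mapping. The requirement and test-case IDs below are made up for illustration.

```python
# Traceability matrix sketch: map each test case to the requirement it
# covers, then look for uncovered requirements and duplicated coverage.
from collections import defaultdict

requirements = ["REQ-1", "REQ-2", "REQ-3"]           # hypothetical IDs
test_cases = {"TC-01": "REQ-1", "TC-02": "REQ-1",    # TC-02 duplicates TC-01
              "TC-03": "REQ-2"}                      # REQ-3 has no test case

coverage = defaultdict(list)
for tc, req in test_cases.items():
    coverage[req].append(tc)

missing = [r for r in requirements if r not in coverage]
duplicates = {r: tcs for r, tcs in coverage.items() if len(tcs) > 1}
print(missing)     # ['REQ-3']
print(duplicates)  # {'REQ-1': ['TC-01', 'TC-02']}
```

A real TRM in a spreadsheet works the same way: an empty requirement row is a gap, and a requirement row with several identical test cases flags a possible duplicate.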


What is the big bang approach in integration testing?
All the modules are integrated at once and then testing is performed. The disadvantage of this is that we cannot pinpoint the origin of a bug: if there is a bug, it is difficult for the developer to find out which part of the application is not working, or whether a mistake was made while integrating the modules.

How to write test case for triangle and square?

What is B.V.A with brief example?

What do you know about the ERP and insurance domains?

Difference between XP and Vista?

To be honest, Microsoft didn't do much in the years between the initial Windows XP and Windows Vista.
But security is one dramatic addition, especially in these days of a growing number of hackers and malware.
We may have already had a taste of this with Windows XP SP2, which has generally the same security features as Windows Vista, for example the firewall, although Windows Vista's firewall can now monitor both inbound and outbound traffic.
If you would like more, be sure to check out the official Windows Vista feature list.

What is suspension & resumption criteria in Test Plan please explain with example?

Suspension/Resumption Criteria in a Software Test Plan :
If any defects are found which seriously impact test progress, the test lead may choose to suspend testing.
The criteria considered for suspension or resumption are:
[a] hardware/software is not available at the time indicated in the project schedule
[b] the build contains serious defects which severely prevent or limit testing progress
[c] assigned test resources are not available when needed by the test team

Resumption Criteria :

If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the “FIX” must be verified by the testing team before testing is resumed

What is the difference between a test strategy and test data?

Suppose there are three modules A, B, and C. The output of A feeds B and the output of B feeds C. If A and C are not ready but module B is ready, how can you test module B?

What is an SQL index?

What is the difference between a web application and a client-server application?

Please give an example of a high-severity, low-priority bug.

What is the difference between System Testing and Integration Testing?

What is the difference between performance testing, load testing, and stress testing (negative testing)?

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. For a Web application, testers will use
tools that simulate concurrent users/HTTP connections and measure response times.

Load testing

We have already seen load testing as part of the process of performance testing and tuning; it means constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.

Stress testing

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
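The idea of simulating concurrent users and measuring response times can be sketched with a thread pool. This is only a sketch: the request itself is stubbed with a short sleep, where a real load test would issue an HTTP call against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Load-test sketch: CONCURRENT_USERS virtual users each issue one request;
# we record each latency and report a simple summary.
def make_request() -> float:
    """Stub for one request; returns its elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(0.05)  # stand-in for network and server processing time
    return time.perf_counter() - start

CONCURRENT_USERS = 10
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = sorted(pool.map(lambda _: make_request(),
                                range(CONCURRENT_USERS)))

print(f"requests: {len(latencies)}, max latency: {latencies[-1]:.3f}s")
```

Raising CONCURRENT_USERS step by step turns this into the load-testing loop described above; raising it until the system misbehaves is the stress-testing variant.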

Why do we need testing?

what is Difference between entry criteria and exit criteria ?

Entry Criteria

1. Test cases completed
2. Resources are identified
3. Test data is available
4. Test environment is set up

Exit Criteria

1. All test cases executed
2. All defects reported and retested
3. Test documents updated with results
4. All reports generated:
* Execution report
* Defect report

What is the Test Driver used in Integration Testing? Can anybody explain in detail? Thanks in advance.

What is the difference between a test case and a test scenario?

What is the difference between load testing and stress testing?

What is sanity testing, and when do you start it?

Sanity testing is the initial level of testing: before you accept a build, you check that all the major functionalities are working correctly. It is like a fitness check of the product. After finishing sanity testing we can do system testing.

A sanity test determines whether it is reasonable to proceed with further testing on the build.

What are the entry and exit criteria for each phase in the STLC and SDLC?

What is the manual testing process?


What are the metrics you follow?

Test metrics are a mechanism to measure the effectiveness of testing quantitatively. They are a feedback mechanism for improving the testing process that is currently followed.

What are the software testing methodologies?

1. White box testing
2. Black box testing
3. Gray box testing

Why should we prepare a traceability matrix?

From the TRM, we know how many requirements are covered by test cases.
Using the TRM we can easily map between requirements and test cases.
It is also very useful at change request time.

If the developer rejects the report, saying it is not a bug, what is the next step we follow?

Retest the same test case; if you get the same bug, send it back to the developer. If the developer still does not accept it, approach the team lead.

What is a deferred bug? Explain who assigns it.

Deferred means that when we report a bug to a particular developer, because of the low importance of the bug or a lack of time, they are not going to fix that bug at that time; so the bug is assigned the DEFERRED state.

Deferred means postponed to a future version; the deferred status is usually assigned by the program manager.

What is configuration testing?

During this testing the tester validates how well the current project supports different types of hardware, such as different types of printers, network interface cards (NIC), topologies, etc. This testing is also called hardware testing or portability testing.

Configuration testing is also called "hardware compatibility testing": the tester checks whether the software build supports different hardware technologies or not, e.g. printers, scanners, topologies, etc.

Testing the software against the hardware, or testing the software against other software, is called configuration testing.


What is the difference between bug, defect, error, failure, fault & mistake?

Let me explain with an example.
If the input is 2+2 and the output is an error message, it is an error (syntax error).
If the input is 2+2 and the output is 5, it is a bug (logical error).
If the program is for addition and subtraction, and only addition is present while subtraction has been left out, it is a defect.
Usually an error is found by the developer.
Usually a bug is found by the tester.
Usually a defect is found by the client or user.
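The 2+2 example can be made concrete with a toy sketch; the buggy function below is invented purely for illustration.

```python
# A logical error (a "bug"): the code runs without any syntax error
# but produces the wrong result.
def add(a, b):
    return a + b + 1  # off-by-one mistake introduced deliberately

result = add(2, 2)
print(result)  # prints 5, not 4: a bug (logical error)
```

By contrast, a syntax error (the "error" case above) would stop the program from running at all, and a missing subtract function entirely would be the "defect" case.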

There are 100 glasses. A servant has to supply the glasses to a person. If he supplies a glass without any damage he gets 3 paisa; otherwise he loses 3 paisa. At the end of supplying 100 glasses he has 270 paisa. How many glasses were supplied safely?

Let x be the number of glasses supplied safely. He earns 3 paisa per safe glass and loses 3 paisa per damaged glass, so:
3x - 3(100 - x) = 270
6x = 570
x = 95
So 95 glasses were supplied safely. (The answer 270/3 = 90 would only follow if damaged glasses carried no penalty, which contradicts the puzzle as stated.)
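The arithmetic can be checked with a short brute-force loop, including the 3-paisa penalty for each damaged glass that the puzzle states. The counts and rates come from the puzzle itself.

```python
# Brute-force check of the glasses puzzle: find the number of safely
# supplied glasses x such that 3*x - 3*(100 - x) equals 270 paisa.
def safe_glasses(total=100, rate=3, earnings=270):
    for x in range(total + 1):
        if rate * x - rate * (total - x) == earnings:
            return x
    return None

print(safe_glasses())  # 95
```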


Can anybody explain the difference between CMM, SEI, PCMM, and Six Sigma?

CMM, the Capability Maturity Model, defines the processes to be followed at each maturity level (based on experience) to achieve the desired result.
SEI is the Software Engineering Institute, which certifies companies with the help of its representatives in various countries.
PCMM is the People CMM: about 199 people-related practices followed across companies. Based on the feedback of the people about the company, required changes are made that yield better results and a healthy environment for the people working in it (e.g. feedback boxes, e-mails, etc.).
Six Sigma, in brief, is a standard process for moving towards totally bug-free software (e.g. 99.9999%). Though it is similar to CMM level 5, it is very rigid in some aspects, such as rejecting a defective software component and producing a new one: no compromise on quality. Motorola is one such Six Sigma company.

When we enter valid user ID and passwords and unable to login then what do we test more?

Check your database to see whether the value is correct, and check whether the value is case-sensitive.

Report the bug to the development team, because sometimes the developer has made a mistake, or the valid user ID does not exist in the database table.

If you are unable to log in, report it to the development team, or discuss the particular functionality with the test lead or the developer.

Bug with High Severity and Low Priority

If a user clicks on a button/link 100 times, the page gets corrupted.

Ledgers in the Ledger Master in Finance and Accounting are not getting created, or show a wrong ledger code, after every 1000 entries in the client Group Master.


Bug with High Priority and Low Severity

A spelling mistake in the company name on a web page.
Different logos on different pages (which do not match).


What is the difference between QA and QC?

Difference between QA and QC:

Quality Assurance (QA):
QA is a verification process, like reviews:
verification
process-oriented
prevention-based

Quality Control (QC):
QC is a validation process, like testing the product:
validation
product-oriented
detection-based


What is difference between regression testing and re- testing?

Regression testing: the re-execution of selected test cases on a modified build to ensure that a bug fix works without any side effects, i.e. testing whether the changes in the modified build affect other functionality in other parts of the application.

Retesting: re-executing a previously failed test case on the new build, with the same inputs and environment, to confirm that the reported defect has actually been fixed.


What is main difference between Testing Techniques and Testing Methodology?

Techniques: black box and white box.
Methodologies: manual and automation.
Types: static and dynamic (do not go into depth initially).

Testing techniques include equivalence class partitioning, boundary value analysis, and error guessing for black box testing, and statement coverage and condition coverage for white box testing.

Test methodology: the way we are going to approach a project; it is nothing but the method we follow in our project.
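The difference between statement coverage and condition coverage (the white-box techniques named above) can be sketched on a tiny example; the `can_vote` function is made up for illustration.

```python
# Statement coverage: every statement executes at least once.
# Condition coverage: every boolean sub-condition evaluates both True and False.
def can_vote(age, registered):
    if age >= 18 and registered:
        return True
    return False

# Statement coverage only needs tests that reach both return statements:
assert can_vote(20, True) is True    # takes the if-branch
assert can_vote(16, True) is False   # takes the fall-through branch
# Condition coverage additionally needs `registered` to be False at least once:
assert can_vote(20, False) is False
print("coverage cases pass")
```

Note that the first two tests already achieve full statement coverage, yet never exercise `registered == False`; that is exactly the gap condition coverage closes.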


What are the different types, methodologies, approaches, Methods in software testing?

There are two broad types of testing:
1. Functional testing
2. Non-functional testing


What is fish bone diagram?

A fishbone chart/diagram is also known as an Ishikawa diagram, named after the Japanese quality expert Dr. Ishikawa. It is primarily used for Root Cause Analysis (RCA): first the main branches identify the problem areas, and then sub-branches are used to find the causes of those problem areas.

What is Quizilla Analysis?

What is associated bug and what is Ad-hoc bug?

An associated bug is one which is related to a test case.
An ad-hoc bug is one which is not associated with any test case; it is generally found based on the tester's experience, where no documentation is provided or prepared and no planning is done.

What is difference between Interface testing and Integration testing?

Interface Testing – GUI testing/Usability Aspects/Cosmetic Bugs
Integration Testing – Combining all the modules/selected modules

What is the result of the query "select 8 from table_name"?

It will return the value 8 once for every row in the selected table, i.e. as many rows as there are records in the table.
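The behaviour can be demonstrated with SQLite; the table name `t` and its three rows are made up for the demonstration.

```python
import sqlite3

# "SELECT 8 FROM t" returns the constant 8 once per row of the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
rows = conn.execute("SELECT 8 FROM t").fetchall()
print(rows)  # [(8,), (8,), (8,)]
```

With three rows in the table, the query returns three rows, each containing just the constant 8.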

What are types of the integration testing?

1. Incremental
Top-down
Bottom-up
Sandwich

2. Non-incremental
Also called the Big Bang method

What is Defect Density? How the Defect Density is measured?

Defect density is basically a consolidated measure (often graphed) of the number of defects found in each module. It is helpful for locating error-prone areas and for concentrating analysis on a particular part of the system.

It is the number of defects found in a particular size of code.

Defect Density (DD) measures the number of defects per unit size of code, where size is measured in (thousands of) lines of code and Defects is the number of defects found in that code:

DD = Defects / KLOC
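The formula can be applied per module as a quick sketch; the module names and counts below are made-up illustrative data.

```python
# Defect density sketch: defects per thousand lines of code (KLOC).
def defect_density(defects, loc):
    """Return defects per KLOC for a module."""
    return defects / (loc / 1000)

# Hypothetical modules: (defects found, lines of code).
modules = {"login": (12, 4000), "reports": (30, 15000)}
for name, (defects, loc) in modules.items():
    print(f"{name}: {defect_density(defects, loc):.1f} defects/KLOC")
```

Here the login module comes out at 3.0 defects/KLOC versus 2.0 for reports, so by this measure login would deserve the closer look.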

Wednesday, November 12, 2008

Test Plan

A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.

Test plans in software development


In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies

Test plan template, based on IEEE 829 format

1. Test Plan Identifier (TPI)
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Entry & Exit Criteria
11. Suspension Criteria and Resumption Requirements
12. Test Deliverables
13. Remaining Test Tasks
14. Environmental Needs
15. Staffing and Training Needs
16. Responsibilities
17. Schedule
18. Planning Risks and Contingencies
19. Approvals
20. Glossary

Test plan identifier

For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0".
References

List all documents that support this test plan.

Documents that are referenced include:
• Project Plan
• System Requirements specifications.
• High Level design document.
• Detail design document.
• Development and Test process standards.
• Methodology

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that contain information relevant to this project/process.
Identify the objective of the plan or the scope of the plan in relation to the Software Project Plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (Analysis & Reviews), and possibly the process to be used for change control and for communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
Test items (functions)
These are things you intend to test within the scope of this test plan. Essentially, something you will test, a list of what is to be tested. This can be developed from the software application inventories as well as other sources of documentation and information.
This can be controlled on a local Configuration Management (CM) process if you have one. This information includes version numbers, configuration requirements where needed, (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the Client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area, for lower levels it may be by program, unit, module or build.

Software risk issues


Identify what software is to be tested and what the critical areas are, such as:
1. Delivery of a third party product.
2. New version of interfacing software.
3. Ability to use and understand a new package/tool, etc.
4. Extremely complex functions.
5. Modifications to components with a past history of failure.
6. Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
1. Safety.
2. Multiple interfaces.
3. Impacts on Client.
4. Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together. If it was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several brainstorming sessions.
• Start with ideas, such as, what worries me about this project/application.

Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USERS view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.
Sections 4 and 6 are very similar, and the only true difference is the point of view. Section 4 is a technical type description including version numbers and other technical information and Section 6 is from the User’s viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.

Features not to be tested

This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested, there can be any number of reasons.
• Not to be included in this release of the Software.
• Low risk, has been used before and was considered stable.
• Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.

Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment of grouping of features/components must be tested together.
Other information that may be useful in setting the approach are:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
How will meetings and other organizational processes be handled?

Item pass/fail criteria

Show stopper issues. Specify the criteria to be used to determine whether each test item has passed or failed. Show Stopper severity requires definition within each testing context.

Entry & exit criteria

Specify the criteria to be used to start testing and how you know when to stop the testing process.

Suspension criteria & resumption requirements

Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities, while resumption criteria specify when testing can resume after it has been suspended.

Test deliverables

List the documents, reports, and charts that will be presented to stakeholders on a regular basis during testing and when testing has been completed.

Remaining test tasks

If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.

Environmental needs

Are there any special requirements for this test plan, such as:
• Special hardware such as simulators, static generators etc.
• How will test data be provided. Are there special collection requirements or specific ranges of data that must be provided?
• How much testing will be done on each component of a multi-part feature?
• Special power requirements.
• Specific versions of other supporting software.
• Restricted use of the system during testing.

Staffing and training needs

Training on the application/system.
Training for any test tools to be used.
The Test Items and Responsibilities sections affect this section. What is to be tested and who is responsible for the testing and training.

Responsibilities

Who is in charge?
This issue includes all areas of the plan. Here are some examples:
• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test plans?

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?
• Lack of personnel resources when testing is to begin.
• Lack of availability of required hardware, software, data or tools.
• Late delivery of the software, hardware or tools.
• Delays in training on the application and/or tools.
• Changes to the original requirements or designs.
• Complexities involved in testing the applications
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the following actions will be taken:
• The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
o These two items could lower the overall quality of the delivered product.
• Resources will be added to the test team.
• The test team will work overtime (this could affect team morale).
• The scope of the plan may be changed.
• There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
• The audience for a unit test level plan is different from that of an integration, system or master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
• Users may have varying levels of business acumen and very little technical skills.
• Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to eliminate confusion and promote consistent communications.

Regional differences

There are often localised differences in the use of this term. In some locations, test plan can mean all of the tests that need to be run. Purists would suggest that a collection of tests or test cases is a Test suite.
Some locations would consider what is described above as a test strategy. This usage is generally localised to the Indian market.
Some state that a test strategy creation precedes the test plan creation (ISTQB among others) [1], others suggest that it follows the test plan creation.[2]

Criticism of the overuse of test plans

Cem Kaner, co-author of Testing Computer Software (ISBN 0-471-35846-0), has suggested that test plans are written for two very different purposes. Sometimes the test plan is a product; sometimes it's a tool. It's too easy, and too expensive, to confuse these goals. In other words, a test plan is like a power tool: you should not use one if you don't know what you're doing with it, or you are wasting both time and money.


Test plans in hardware development
IEEE 829-1998:
a) Test plan identifier;
b) Introduction;
c) Test items;
d) Features to be tested;
e) Features not to be tested;
f) Approach;
g) Item pass/fail criteria;
h) Suspension criteria and resumption requirements;
i) Test deliverables;
j) Testing tasks;
k) Environmental needs;
l) Responsibilities;
m) Staffing and training needs;
n) Schedule;
o) Risks and contingencies;
p) Approvals.

USB - Universal Serial Bus

Universal Serial Bus (USB) is a serial bus standard to interface devices. USB was designed to allow many peripherals to be connected using a single standardized interface socket and to improve the plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer specific, individual device drivers to be installed.
USB is intended to help retire all legacy varieties of serial and parallel ports. USB can connect computer peripherals such as mice, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles. As of 2008, there are about 2 billion USB devices in the world.[1]
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Agere (now merged with LSI Corporation), Apple Inc., Hewlett-Packard, Intel, NEC, and Microsoft.
Learning basics of QTP automation tool and preparation of QTP interview questions

This post is in continuation of the QTP interview questions series. The following questions will help in preparing for an interview as well as in learning QTP basics.
Quick Test Professional: Interview Questions and answers.

1. What are the features and benefits of Quick Test Pro(QTP)?

1. Keyword-driven testing
2. Suitable for both client-server and web-based applications
3. VBScript as the scripting language
4. Better error-handling mechanism
5. Excellent data-driven testing features

2. How to handle the exceptions using recovery scenario manager in QTP?

You can instruct QTP to recover from unexpected events or errors that occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard that guides you through defining a recovery scenario. A recovery scenario has three steps:
1. Triggered Events
2. Recovery steps
3. Post Recovery Test-Run

3. What is the use of Text output value in QTP?

Output values enable you to view the values that the application takes during run time. When parameterized, the values change for each iteration. Thus by creating output values, we can capture the values that the application takes for each run and output them to the data table.

4. How to use the Object spy in QTP 8.0 version?

There are two ways to Spy the objects in QTP
1) Through the File toolbar: in the File toolbar, click on the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: in the Object Repository dialog, click on the button "Object Spy…". In the Object Spy dialog, click on the button showing a hand symbol. The pointer now changes into a hand symbol, and we have to point at the object to spy its state. If the object is not visible, or the window is minimized, hold the Ctrl button, activate the required window, and then release the Ctrl button.

5. What is the file extension of the code file and object repository file in QTP?

File extensions:
Per-test object repository: filename.mtr
Shared object repository: filename.tsr
Code file: script.mts

6. Explain the concept of object repository and how QTP recognizes objects?

Object Repository: displays a tree of all objects in the current component, in the current action, or in the entire test (depending on the object repository mode you selected).
We can view or modify the test object description of any test object in the repository, or add new objects to the repository.
QuickTest learns the default property values and determines in which test object class the object fits. If that is not enough, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

7. What are the properties you would use for identifying a browser and page when using descriptive programming?

“name” would be another property apart from “title” that we can use. Or
we can also use the property “micClass”, e.g.:
Browser("micClass:=browser").Page("micClass:=page")

8. What are the different scripting languages you could use when working with QTP?

QTP test scripts are written in VBScript. (QTP can also work with content and technologies such as XML, JavaScript, Java and HTML, but VBScript is its scripting language.)

9. Tell some commonly used Excel VBA functions.

Common functions are:
Coloring a cell, auto-fitting a cell, setting navigation from a link in one cell to another, and saving the workbook.

10. Explain the keyword createobject with an example.

Creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
11. Explain in brief about the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-on-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements to design your program.
12. How to handle dynamic objects in QTP?
QTP has a unique feature called Smart Object Identification/recognition. QTP generally identifies an object by matching its test object and run-time object properties. QTP may fail to recognize dynamic objects whose properties change during run time. Hence it has an option of enabling Smart Identification, wherein it can identify objects even if their properties change during run time.
Check out this:
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description, and uses the Smart Identification mechanism to try to identify the object.
While the Smart Identification mechanism is more complex, it is more flexible, and thus, if configured logically, a Smart Identification definition can probably help QuickTest identify an object, if it is present, even when the recorded description fails.
The Smart Identification mechanism uses two types of properties:
Base filter properties - the most fundamental properties of a particular test object class; those whose values cannot be changed without changing the essence of the original object. For example, if a Web link's tag was changed from <A> to any other value, you could no longer call it the same object.
Optional filter properties - other properties that can help identify objects of a particular class, as they are unlikely to change on a regular basis, but which can be ignored if they are no longer applicable.
13. What is a Run-Time Data Table? Where can I find and view this table?
In QTP there is a data table, used at run time.
-In QTP, select View -> Data Table.
-This is basically an Excel file, which is stored in the folder of the test created; its name is Default.xls by default.
14. How does Parameterization and Data-Driving relate to each other in QTP?
To data-drive a test we have to parameterize it, i.e. make the constant values parameters, so that in each iteration (cycle) the test takes a value supplied in the run-time data table. Only through parameterization can we drive a transaction (action) with different sets of data. Running the script with the same set of data several times is not recommended, and is also of no use.
15. What is the difference between Call to Action and Copy Action.?
Call to Action: the changes made in a called action will be reflected in the original action (from where the script is called), whereas in Copy Action the changes made in the script will not affect the original script (action).
16. Explain the concept of how QTP identifies object.
During recording QTP looks at the object and stores it as a test object. For each test object QTP learns a set of default properties called mandatory properties, and checks whether these properties are enough to uniquely identify the object. During the test run, QTP searches for the run-time object that matches the test object it learned while recording.
17. Differentiate the two Object Repository Types of QTP.
Object repository is used to store all the objects in the application being tested.
Types of object repository: Per action and shared repository.
In a shared repository there is one centralized repository for all the tests, whereas in per-action mode a separate repository is created for each action.
18. What the differences are and best practical application of Object Repository?
Per Action: For Each Action, one Object Repository is created.
Shared: One Object Repository is used by entire application
19. Explain what the difference between Shared Repository and Per Action Repository
Shared Repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner.
Per Action: for each action, one object repository is created, like the GUI map file per test in WinRunner.
20. Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.
Sample answer (You can tell about modules you worked on. If your answer is Yes then You should expect more questions and should be able to explain those modules in later questions): I Used the functions for Capturing the dynamic data during runtime. Function used for Capturing Desktop, browser and pages.
21. Can you do more than just capture and playback?
Sample answer (Say Yes only if you worked on): I have done Dynamically capturing the objects during runtime in which no recording, no playback and no use of repository is done AT ALL.
-It was done by the windows scripting using the DOM(Document Object Model) of the windows.
22. How to do the scripting. Are there any inbuilt functions in QTP? What is the difference between them? How to handle script issues?
Yes, there is built-in functionality called the “Step Generator” (Insert -> Step -> Step Generator, or F7), which will generate the script as you enter the appropriate steps.
23. What is the difference between check point and output value?
A checkpoint compares a value captured during the test run against an expected value and reports pass or fail. An output value is a value captured during the test run and written to a specified location, e.g. a location in the Data Table (Global sheet / local sheet), for use later in the run.
24. How many types of Actions are there in QTP?
There are three kinds of actions:
Non-reusable action - An action that can be called only in the test with which it is stored, and can be called only once.
Reusable action - An action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
External action - A reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.
25. I want to open a Notepad window without recording a test and I do not want to use System utility Run command as well. How do I do this?
You can still make Notepad open without using recording or the System utility script, just by mentioning the path of Notepad (i.e. where notepad.exe is stored on the system) in the “Windows Applications” tab of the “Record and Run Settings” window.

Important Unix Shell Questions

1. There can be multiple Kernels and shells running on your system. True or False?
2. Why shell is called Command Interpreter?
3. Two UNIX systems may or may not use the same system calls. True or False?
4. To obtain help on any feature of the system, what are the possible help sources available?
5. Why are the directories /bin and /usr/bin usually found first in the output of echo $PATH?
6. If two commands with the same filename exist in two directories in PATH, how can they be executed?
7. How is the current directory indicated in the value of PATH?
8. Use the type command with the following arguments—cd, date, pwd and ls. Which are the internal commands in the list?
9. What is the difference between an argument and an option?
10. If the command ls -all works on your system, which flavor of UNIX could you be using?
11. What does the secondary prompt look like and when does it appear?
12. You located the string crontab in a man page by searching with /crontab [Enter]. How do you find out the other occurrences of this string in the page?
13. What is a pager? Name the two standard pagers used by man.
14. If a command doesn’t seem to complete, which key will you press to interrupt it?
15. Do you need to wait for a command to finish before entering the next one?
16. What do the | and the three dots in the SYNOPSIS section of these man pages indicate as shown below?
/usr/xpg4/bin/tail [ -f | -r ]
/usr/bin/ls [ -aAbcCdfFgilLmnopqrRstux1 ] [file .. ]
17. How do you direct man to use a specific pager, say less?
18. What is a whitespace? Explain the treatment the shell metes out to a command that contains a lot of whitespace.
19. A Program file named foo exists in the current directory, but when we try to execute it by entering foo, we see the message foo: command not found. Explain how that can happen?
20. What do multiprogramming, multiuser and multitasking mean?
21. Why are many UNIX commands designed to perform simple rather than complex tasks?

Tool Support for Testing


6.1 Overview

When people discuss testing tools they invariably think of automated testing tools and in particular capture/replay tools. However, the market changes all the time and this module is intended to give you a flavor of the many different types of testing tool available. There is also a discussion about how to select and implement a testing tool for your organization. Remember the golden rule, if you automate a mess, you'll get automated chaos; choose tools wisely!

6.2 Objectives

After completing this module you will be able to:

» Name up to thirteen different types of testing tools.

» Explain which tools are in common use today and why.

» Understand when test automation tools are appropriate and when they are not.

» Describe in outline a tool selection process.

6.3 Types of CAST tools

There are numerous types of computer-aided software testing (CAST) tool and these are briefly described below.

Requirements testing tools provide automated support for the verification and validation of requirements models, such as consistency checking and animation.

Static analysis tools provide information about the quality of the software by examining the code, rather than by running test cases through the code. Static analysis tools usually give objective measurements of various characteristics of the software, such as cyclomatic complexity and other quality metrics.
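As a toy illustration of what a static analysis tool measures, the sketch below approximates McCabe's cyclomatic complexity by counting decision points in Python source without executing it. The `grade` function and the chosen node list are assumptions for the example, not any real tool's rules.

```python
# Minimal static-analysis sketch (illustrative, not a real CAST tool):
# approximate cyclomatic complexity as 1 + number of decision points.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Parse the source (never run it) and count decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # two if-branches -> complexity 3
```

A real tool would, of course, handle many more constructs and report per-function metrics rather than one number per file.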

Test design tools generate test cases from a specification that must normally be held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.






Test data preparation tools enable data to be selected from existing databases, or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
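One kind of generation such tools automate, boundary-value data, can be sketched in a few lines. The function name and the numeric-range interpretation are assumptions for this illustration.

```python
# Hypothetical test-data generation sketch: boundary-value analysis
# for a numeric input field with valid range [low, high].
def boundary_values(low: int, high: int) -> list[int]:
    """Values just outside, on, and just inside each boundary."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```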

Character-based test running tools provide test capture and replay facilities for dumb-terminal based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.

GUI test running tools provide test capture and replay facilities for WIMP interface based applications. The tools simulate mouse movement, button clicks and keyboard inputs and can recognize GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.

Test harnesses and drivers are used to execute software under test, which may not have a user interface, or to run groups of existing automated test scripts, which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software to cope with nuclear meltdowns).
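A minimal harness of this kind can be sketched as below; `transfer` is a hypothetical stand-in for software under test that has no user interface, and the case format is an assumption.

```python
# Minimal test-harness sketch: drive code under test directly,
# with no user interface involved.
def transfer(balance: int, amount: int) -> int:
    """Stand-in for the software under test (an assumption)."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

def run_harness(cases):
    """Each case is (balance, amount, expected); expected may be
    ValueError to assert that the call must fail."""
    results = []
    for balance, amount, expected in cases:
        try:
            results.append(transfer(balance, amount) == expected)
        except ValueError:
            results.append(expected is ValueError)
    return results

cases = [(100, 30, 70), (100, 0, ValueError), (100, 200, ValueError)]
print(run_harness(cases))  # [True, True, True]
```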

Performance test tools have two main facilities: load generation and test transaction measurement. Load generation is done either by driving the application through its user interface or by test drivers, which simulate the load the application generates on the architecture. Records of the numbers of transactions executed are logged. When driving the application through its user interface, response time measurements are taken for selected transactions and these are logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
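The two facilities can be sketched together as below. `handle_request` is a hypothetical stand-in for an application transaction, and a real tool would generate load concurrently from many virtual users rather than in a simple loop.

```python
# Performance-test sketch: generate load and measure per-transaction
# response times, then summarize, as such tools' reports do.
import time

def handle_request(n: int) -> int:
    """Stand-in for the application transaction (an assumption)."""
    return sum(range(n))

def load_test(transactions: int, work: int) -> dict:
    timings = []
    for _ in range(transactions):          # load generation
        start = time.perf_counter()
        handle_request(work)               # the measured transaction
        timings.append(time.perf_counter() - start)
    return {"count": len(timings),
            "avg_s": sum(timings) / len(timings),
            "max_s": max(timings)}

report = load_test(transactions=50, work=10_000)
print(report["count"])  # 50
```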

Dynamic analysis tools provide run-time information on the state of executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer arithmetic and other errors that are difficult to find 'statically'.
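As a small taste of dynamic analysis, Python's standard tracemalloc module monitors memory allocation at run time; the "leak" below is simulated for the example.

```python
# Dynamic-analysis sketch: observe memory allocation while the
# program runs, using the standard tracemalloc module.
import tracemalloc

tracemalloc.start()
leaky = [bytes(1000) for _ in range(100)]   # simulated retained memory
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(current > 0, peak >= current)  # True True
```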

Debugging tools are mainly used by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement, and to set and examine program variables.





Comparison tools are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
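A masking comparator of the kind described can be sketched as follows; the row layout and the masked timestamp column are assumptions for the example.

```python
# Comparison-tool sketch: diff actual vs expected rows while masking
# a volatile column (here, a timestamp in position 1).
def compare(expected, actual, mask=()):
    """Return indices of rows that differ, ignoring masked columns."""
    diffs = []
    for i, (e_row, a_row) in enumerate(zip(expected, actual)):
        stripped_e = [v for j, v in enumerate(e_row) if j not in mask]
        stripped_a = [v for j, v in enumerate(a_row) if j not in mask]
        if stripped_e != stripped_a:
            diffs.append(i)
    return diffs

expected = [("alice", "2008-01-01", 100), ("bob", "2008-01-01", 200)]
actual   = [("alice", "2008-11-25", 100), ("bob", "2008-11-25", 999)]
print(compare(expected, actual, mask=(1,)))  # [1]
```

With the timestamp column masked, only row 1 is reported, because its amount really changed; without the mask every row would "fail" on the date.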

Test management tools may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications and results. Some tools support the project management aspects of testing, for example the scheduling of tests, the logging of results and the management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and retesting of incidents. Most test management tools provide extensive reporting and analysis facilities.
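The workflow-oriented incident tracking mentioned above amounts to a small state machine; the states and transitions below are illustrative assumptions, not any particular tool's workflow.

```python
# Incident-management sketch: enforce an allocation -> correction ->
# retest workflow as a state machine (states are assumptions).
ALLOWED = {"open": {"assigned"}, "assigned": {"fixed"},
           "fixed": {"retested"}, "retested": {"closed", "assigned"}}

def advance(incident: dict, new_state: str) -> dict:
    """Move an incident to new_state, rejecting illegal transitions."""
    if new_state not in ALLOWED.get(incident["state"], set()):
        raise ValueError(f"cannot go {incident['state']} -> {new_state}")
    incident["state"] = new_state
    return incident

bug = {"id": 101, "state": "open"}
advance(bug, "assigned")
advance(bug, "fixed")
print(bug["state"])  # fixed
```

Note that a failed retest can legally send the incident back to "assigned", which is the retesting loop the text describes.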

Coverage measurement (or analysis) tools provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. Instrumentation code dynamically captures coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analysed and coverage statistics are generated. Most tools provide statistics on the most common coverage measures, such as statement or branch coverage.
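Instrumentation-style coverage logging can be approximated in Python with the standard sys.settrace hook. The sketch records which lines of a toy function actually execute; real tools for compiled code instrument the program before compilation rather than tracing it like this.

```python
# Coverage-measurement sketch: log executed line numbers at run time.
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line":                 # record each executed line
        executed.add(frame.f_lineno)
    return tracer

def classify(x):
    if x > 0:
        return "pos"
    return "non-pos"

sys.settrace(tracer)
classify(5)            # only the "pos" branch runs
sys.settrace(None)
print(len(executed) > 0)  # True
```

Comparing the recorded line numbers against the function's full line range would show that the "non-pos" branch was never covered.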


6.4 Tool selection and implementation

There are many test activities which can be automated, and test execution tools are not necessarily the first or only choice. Identify the test activities where tool support could be of benefit, and prioritize the areas of most importance.

Fit with your test process may be more important than choosing the tool with the most features, both in deciding whether you need a tool at all and in choosing which one. The benefits of tools usually depend on a systematic and disciplined test process; if testing is chaotic, tools may not be useful and may even hinder testing. You must have a good process now, or recognize that your process must improve in parallel with tool implementation. The ease with which CAST tools can be implemented might be called 'CAST readiness'.

Tools may have interesting features but may not be available on your platforms, e.g. 'works on 15 flavors of Unix, but not yours…'. Some tools, e.g. performance testing tools, require their own hardware, so the cost of procuring this hardware should be a consideration in your cost-benefit analysis. If you already have tools, you may need to consider the level and usefulness of integration with them, e.g. you may want a test execution tool to integrate with your existing test management tool (or vice versa). Some vendors offer integrated toolkits, e.g. test execution, test management and performance-testing bundles. Integration between some tools may bring major benefits; in other cases the level of integration is cosmetic only.





Once automation requirements are agreed, selection process has four stages:

1. Create a candidate tool shortlist.
2. Arrange demos.
3. Evaluate the selected tool(s).
4. Review and select the tool.

Before making a commitment to implementing the tool across all projects, a pilot project is usually undertaken to ensure the benefits of using the tool can actually be achieved. The objectives of the pilot are to gain experience in using the tool, identify the changes required in the test process, and assess the actual costs and benefits of implementation. Roll-out of the tool should be based on a successful result from the evaluation of the pilot. Roll-out normally requires strong commitment from tool users and new projects, as there is an initial overhead in using any tool on a new project.

Exercise

Incident management system

List some of your requirements for an incident management system.


6.5 Summary

In module six you have learnt about tool support for testing. In particular you can now:

Understand there are many different types of testing tool to support the test process.

Understand what CAST stands for.

Understand that you must have a mature test process before embarking on test automation.

Know why you must define requirements for a tool prior to purchasing one.

Basic Testing Questions

1. When an application is given for testing, with what initial testing the testing will be started and when are all the different types of testing done following the initial testing?
2. What is difference between test plan and use case?
3. In an application if I enter the delete button it should give an error message “Are you sure you want to delete” but the application gives the message as “Are you sure”. Is it a bug? And if it is how would you rate its severity?
4. Who are the three stake holders in testing?
5. What is meant by bucket testing?
6. What is test case analysis?
7. What is meant by a test environment? What is meant by DB installing, configuring and deploying skills?
8. What is logsheet? And what are the components in it?
9. What is the difference between project and product testing? What differences have you observed while testing client/server applications and web server applications?
10. What are the differences between interface and integration testing? Are system specification and functional specification the same? What are the differences between system and functional testing?
11. What is Multi Unit testing?
12. What are the different types, methodologies, approaches, methods in software testing
13. What is the difference between test techniques and test methodology?
14. What is Red Box testing? What is Yellow Box testing? What is Grey Box testing?
15. The recruiter asked if I have Experience in Pathways. What is this?
16. What is the difference between GUI testing and black box testing
17. What are the main things we have to keep in mind while writing the test cases? Explain with format by giving an example
18. What is business process in software testing?
19. What is the difference between Desktop application testing and Web testing?
20. Find the value of each letter: NOON + SOON + MOON = JUNE.
21. With multiple testers how does one know which test cases are assigned to them? > Folder structure > Test process
22. What kind of things does one need to know before starting an automation project?
23. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a Test Case? What is their order of succession in the STLC?
24. How many functional testing tools are available? What is the easiest scripting language used?
25. Which phase is called the Blackout or Quiet Phase in the SDLC?
26. How we can write functional and integration test cases? Explain with format by giving examples.
27. Explain the water fall model and V- model of software development life cycles with block diagrams.
28. For notepad application can any one write the functional and system test cases?
29. Can you give me the exact answer for Test Bug?
30. What is the difference between Use Case and test case?
31. What is InstallShield in testing?
32. What’s main difference between smoke and sanity testing? When are these performed?
33. What Technical Environments have you worked with?
34. Have you ever converted Test Scenarios into Test Cases?
35. What is one key element of the test case?
36. What are the management tools we have in testing?
37. Can we write Functional test case based on only BRD or only Use case?
38. What is the ONE key element of ‘test case’?
39. What is the ONE key element of a Test Plan?
40. At the start of a project, how does the company come to a conclusion about whether a tool is required for testing or not?
41. Define the Bug Life Cycle. What are metrics?
42. What is a Test procedure?
43. What is SQA testing? Tell us the steps of SQA testing.
44. How do you promote the concept of phase containment and defect prevention?
45. Which Methodology you follow in your test case?
46. What are the test cases prepared by the testing team
47. What is the difference between SYSTEM testing and END-TO-END testing?
48. What is Traceability Matrix? Is there any interchangeable term for Traceability Matrix? Are Traceability Matrix and Test Matrix same or Different?
49. What are test bugs?
50. Define Quality - bug free, Functionality working or both?
51. What is the purpose of software testing - bug removal, the system's functionality working, quality, or all?
52. What is the major difference between Web services & client server environment?
53. What is the difference between an exception and an error?
54. Correct bug tracking process - Reporting, Re-testing, Debugging, …..?
55. What is the difference between bug and defect?
56. How much time is/should be allocated for testing out of total Development time based on industry standards?
57. Is there any tool to calculate how much time should be allocated for testing out of total development?
58. Cost of solving a bug from requirements phase to testing phase - increases slowly, decreases, increases steeply or remains constant?
59. What is scalability testing? What are the phases of the scalability testing?
60. What is the difference between end-to-end testing and system testing?
62. What is Scalability testing? Which tool is used?
63. Define Reliability?
64. Best to solve defects - requirements, plan, design, code / testing phase?
65. What is the difference between a defect and an enhancement?
66. Project is completed. Completed means that UAT testing is going. In that situation as a tester what will you do?
67. Have you worked with data pools and what is your opinion on them? Give me an example as to how a script would handle the data pool.
70. If we found the bug in SRS or FRS, how to categorize that bug?

Traceability Matrix

In a software development process, a traceability matrix is a table that correlates any two baselined documents that require a many to many relationship to determine the completeness of the relationship. It is often used with high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases.
Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships are added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists and that one must be made. Large values imply that the item is too complex and should be simplified.
To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.
Sample traceability matrix
Requirement identifiers (columns): REQ1 UC 1.1, REQ1 UC 1.2, REQ1 UC 1.3, REQ1 UC 2.1, REQ1 UC 2.2, REQ1 UC 2.3.1, REQ1 UC 2.3.2, REQ1 UC 2.3.3, REQ1 UC 2.4, REQ1 UC 3.1, REQ1 UC 3.2, REQ1 TECH 1.1, REQ1 TECH 1.2, REQ1 TECH 1.3
Test cases per requirement (same column order): 3, 2, 3, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1 (321 test cases in total, of which 77 are tested implicitly)
Requirements covered per test case (an "x" marks each intersecting cell in the full matrix):
1.1.1 covers 1; 1.1.2 covers 2; 1.1.3 covers 2; 1.1.4 covers 1; 1.1.5 covers 2; 1.1.6 covers 1; 1.1.7 covers 1
1.2.1 covers 2; 1.2.2 covers 2; 1.2.3 covers 2
1.3.1 through 1.3.5 cover 1 each
etc…
5.6.2 covers 1
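The row and column tallying described above can be sketched as follows; the miniature matrix and its requirement/test-case names are assumptions for the example, not the sample table's real data.

```python
# Traceability-matrix sketch: sum each row and column to find
# untested requirements (column total 0) or overly complex items.
def tally(matrix, requirements, test_cases):
    """matrix[i][j] is True when test_cases[i] covers requirements[j]."""
    per_req = {r: sum(row[j] for row in matrix)
               for j, r in enumerate(requirements)}
    per_case = {t: sum(matrix[i]) for i, t in enumerate(test_cases)}
    return per_req, per_case

requirements = ["UC 1.1", "UC 1.2", "TECH 1.1"]
test_cases = ["1.1.1", "1.1.2"]
matrix = [[True, False, False],   # 1.1.1 covers UC 1.1 only
          [True, True,  False]]   # 1.1.2 covers UC 1.1 and UC 1.2
per_req, per_case = tally(matrix, requirements, test_cases)
print(per_req["TECH 1.1"])  # 0 -> a requirement with no test: a gap
```

A zero column total is exactly the "no relationship exists and one must be made" case from the text, and a very large row or column total flags an item that may be too complex.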

Dictionary - Some useful words and their synonyms

Dwarves = plural of dwarf
Sue = take to court
Discrimination = bias/prejudice
Bitches = female dog/difficult situation
Slang= vernacular/colloquial speech/informal speech
Shit = an offensive term used for something unpleasant
Humility = humbleness/modesty/meekness/shyness/submissiveness/compliance
Temperament = nature/character/personality/disposition/spirit/temper/outlook
Squabble = argue/quarrel/bicker
Redemption = salvation/deliverance/rescue/recovery/revitalization
Symbolism = representation
Inadequacy = insufficiency
Tribute = compliment/honor/praise
Docile = passive/quiet
Blockade = barrier/obstruct/cordon
Regent = one who rules in absence of monarch
Monarch = sovereign
In Lieu of = in place of
Archive = stored
Docketing = pending cases in LAW court
Ounce = little
Quack = the sound a duck makes/a fraudulent doctor
Scarce = of little supply
Flutterby = butterfly
Prick = pierce
Shed = drop/get rid of
Arbitration = adjudication/negotiation
Litigation = court case/proceedings
Meditation = consideration/deliberation/fore thought
Torrent = violent flow/flood/gush/downpour/surge
Blink = flicker/open and close the eyes
Lick = defeat/beat/conquer
Assassination = murder/killing/shooting
Bump = hit/knock/bang
Delivery
Decomposition = putrefaction
Ergonomic = The science of people-machine relationships. An ergonomically designed product implies that the device blends smoothly with a person's body or actions.
Depleted = exhausted
Migraine = severe headache
Adolescence = teen age years/puberty/youth
Headache = annoyance/nuisance
Numbness = lack of sensation
Tingling = itchy/tickly/prickly
Agitation = campaigning/demonstration/protest/confrontation
Statute = act/ law
Agile = lively/alert

Lump sum = Large payment of money received at one time instead of in periodic payments
Annuity = pension/allowance/income
Glossary = vocabulary/lexicon
Explicit = Open/clear/overt/precise
Implicit = ambiguous/hidden/embedded/unspoken




Embarrass = make uncomfortable/humiliate
Indispensable = crucial/vital/important
Judgment = decision/verdict/opinion
Liaison = link/connection/association
Occurrence =
Perseverance = insistence/urgency/firmness
Prerogative = privilege
Supersede = succeed
Aesthetic = artistic/visual
Whopping = enormous/gigantic/monstrous
Consolidate = merge/combine/unite
Forgery and tampering = copy and paste
Repudiation = denial/negation/disclaimer
Debilitated = incapacitated/injured/harmed
Spruce up = tidy/neat/orderly
Brittle = fragile
Incarnation = personification/manifestation
Ritual = custom/habit/service
Weasel = sly or underhanded person
Sly = cunning
Forfeited = lost or given up as a penalty for doing something wrong
Tomb = crypt/burial place/grave
Zoroastrian = follower of the ancient Persian religion founded by Zoroaster
Cite = name/quote/mention/refer to
Equestrianism = horseback riding
Exotic = glamorous/foreign/bizarre/out of ordinary
In accordance with = in keeping/in line/in reference to
Betrothed = engaged
Regent = substitute for monarch
Amour = illicit love affair
Illicit = illegal/unlawful/illegitimate
Purportedly = supposedly/allegedly
Weird = strange/odd/bizarre
Assassinate = kill/shoot/slay
Catastrophic = disastrous/shattering/calamitous
Premonition = forewarning/feeling/intuition/omen/sign/presentiment
Verdict = decision/judgment
Siege = cordon/blockade/barrier
Jolt = jerk/shock
Outfit = group/setup/team/company
Outfits= clothes/garments/attire
Patriarch = head of family/senior
Debacle = devastation/catastrophe/disaster
Unlikely = improbable/implausible/incredible
Disheartening = scary/discouraging/overwhelming/off-putting
Dodgy = corrupt/dishonest/unprincipled
Deceitful = false-hearted/fraudulent/untrustworthy
Incumbency = the period of time during which somebody occupies an official post
Anti-incumbency = sentiment against the current officeholder or ruling party
Tenure = term/occupancy/possession
Suave = smooth/sophisticated
Overwhelm = overpower/devastate/crush/wreck
Ruin = destroy/mess up/damage
Mess = jumble/confusion
Allege = declare/claim/assert
Strained = stressed/tense
Anxious = nervous/worried/restless/apprehensive/uneasy
Comprehensive = complete
Apprehensive = worried/uneasy
Probe = search/investigate/look in/survey
Allegation = claim/charge/accusation
Convinced = persuaded/influenced/no doubt
Confront = to face/meet
Consortium = syndicate/group
Regress = revert/relapse/go back/set back
Accreditation = official approval
Endorsement = backing/support/approval
Distressed = upset/bothered/worried
Elapsed = passed/gone by (of time)
Smear = spread/coat/cover/wipe
Glacier = a large body of continuously accumulating ice and compacted snow, formed in mountain valleys or at the poles, that deforms under its own weight and slowly moves
Curious = nosy/inquisitive
Curiosity = nosiness/interest
Lag = insulate/wrap/cover/delay
Summon = call/call upon/beckon
Upright = decent/honest/erect
Erection = construction/formation/creation
Ritual = rite/ceremony/formal procedure/sacrament
Sacrament = religious rite/ceremony
Reparation = compensation/reimbursement
Molestation = abuse somebody sexually
Violation = infringement/contravention/abuse
Abuse = mistreatment/ill-treatment
Somebody = Someone
Dissident = rebel
Rebel = revolutionary/rise up/revolt
Rebellion = revolt
Eminent = well known/renowned/prominent
Betray = be disloyal to/give up/deceive
Hold on = grasp/grip/clutch
Pensive = meditative/thinking/pondering/thoughtful/brooding
Glitter = sparkle/shine/dazzle/shimmer/glisten


Harness = tie together/wrap/yoke
Pioneer = lead the way/establish

Amenable = willing/agreeable/acquiescent
Bods = people (British informal)
Provoke = incite/make somebody feel angry/indignant
Expats = expatriate
Expatriate = somebody who has moved abroad
Contemporary = modern/up to date/fashionable
Coverage = reporting/exposure/treatment
Motel = a hotel intended to provide short-term lodging for traveling motorists, usually situated close to a highway and having rooms accessible from the parking area
Ritz = to make a show of wealth and extravagance
Rose = past tense of rise; also a flower
Cope = manage/handle/deal with
Alliance = coalition/grouping/agreement
Sneeze = to suddenly, forcefully, and involuntarily expel air through the nose and mouth because of irritation of the nasal passages
Rampage = run riot/run amok/go wild
Imminent = about to happen/looming/forthcoming
Fortnight = a period of 14 days (UK)
Burst = rupture/disintegrate
Indulge = spoil/treat
Sprint = hurry/run
Scrum = formalized contest for possession of the ball during a rugby game
Gross = total
Net = remaining
Repudiation = negation/denial
Integrity = Honesty
Authentication = verification/validation/endorsement
Reputation = status

Tuesday, November 11, 2008

Interview Questions - Very IMP

Testing FAQ’s
1. What is the difference between test strategy and test plan?

Both the test strategy and the test plan describe how the testing activity will be planned and performed.
The test strategy is an organization-level document, which applies to all the projects in the organization.
The test plan is a project-level document, which applies only to that particular project.


2. Write test cases for the web login of a bank application (web and client-server)?

a) Enter the URL and check whether the login page opens.
b) The default focus should be on the user name field.
c) Check whether the password is displayed in masked (encrypted) format.
d) The password should not be copyable (Ctrl+C, right-click Copy, and Edit > Copy should all be disabled).
e) Input domain testing on the user name (boundary value analysis).
f) Input domain testing on the password (boundary value analysis).
g) Property checks on the buttons (enabled or disabled).
h) Functionality checks on the buttons (whether they work correctly).
i) Authorization testing (whether valid values are accepted and invalid values rejected).
j) Case-sensitivity check on the login fields (whether capital and small letters are treated as different).
k) After signing out and logging in again, it should come back to your previous page, not show a "session expired" message.
l) After logging in, copy the URL, log out, and then paste the URL; the logged-in page should not be allowed to open.
m) Check the session timeout with respect to the client.
n) Check the cookies.
o) Check single-user and multi-user access.
p) Check the usability of the login page.
q) Under usability, check the look and feel, spelling mistakes, colors, font sizes, and so on.
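Checks (i) and (j) above can be expressed as executable assertions. A minimal sketch in Python, assuming a hypothetical in-memory credential store (the user name and password are invented):

```python
# Sketch of authorization (i) and case-sensitivity (j) checks against a
# hypothetical in-memory credential store; the values are invented.
CREDENTIALS = {"alice": "S3cret!"}

def login(username: str, password: str) -> bool:
    # Compare both fields case-sensitively, as a real login should.
    return CREDENTIALS.get(username) == password

assert login("alice", "S3cret!") is True       # valid credentials accepted
assert login("alice", "wrong") is False        # invalid password rejected
assert login("bob", "S3cret!") is False        # unknown user rejected
assert login("Alice", "S3cret!") is False      # user name is case-sensitive
assert login("alice", "s3cret!") is False      # password is case-sensitive
```

In a real application the same assertions would run against the login form or API rather than a dictionary.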

3. WHO ARE YOU?
I am the future Project Manager/CEO of this company.

4. What is the difference between priority and severity in a bug report?

5. Does integration testing include non-functional tests?

6. How do you trace a defect?

Tracing a defect means relating a deviation from the customer requirement back to that requirement in the application.

If a customer requirement does not perform its function, or is missing from the application, that is called a defect.

Defect tracking means knowing which test case each defect is related to. You should be able to find which defect relates to which requirement. To track defects with respect to test cases, you can maintain the documents with hyperlinks to the desired documents,

for example:
Functional Requirement Specification document
Use Case document
Master Test Case document
Detailed Test Case document
Defect Profile document

If you maintain your documents with hyperlinks in this way, you can easily trace back which defect is related to which requirement.

7. Define with examples: (1) high severity and low priority, (2) low severity and high priority, (3) both high, (4) both low.

8. What is the difference between system test cases and UAT test cases?

System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. It falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic. UAT (user acceptance testing) test cases, by contrast, are derived from business requirements and real-world usage scenarios, and are executed by or with the end users before the system is accepted.

9. How do you find duplicate test cases?

In a requirements traceability matrix (TRM), we develop the mapping between the functional requirements and the test cases. With the help of the TRM you can find out whether any test case has been missed and whether any duplicate test case has been created.
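The mapping can be checked mechanically. A minimal sketch in Python, with invented requirement and test-case IDs, that flags test cases mapped to more than one requirement (candidates for duplicates, to be reviewed by hand) and requirements with no test case (coverage gaps):

```python
from collections import Counter

# Toy traceability matrix: requirement ID -> list of test case IDs.
# All IDs are invented for illustration.
trm = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-02"],   # TC-02 also appears under REQ-1
    "REQ-3": [],          # no test case at all
}

counts = Counter(tc for tcs in trm.values() for tc in tcs)
duplicates = [tc for tc, n in counts.items() if n > 1]    # review these
uncovered = [req for req, tcs in trm.items() if not tcs]  # missing coverage

assert duplicates == ["TC-02"]
assert uncovered == ["REQ-3"]
```

Note that one test case legitimately covering two requirements is not always a duplicate; the script only surfaces candidates for review.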


10. What is the big bang approach in integration testing?
All the modules are integrated at once and then testing is performed. The disadvantage is that we cannot point out the origin of a bug: if there is a bug, it is difficult for the developer to find out which part of the application is not working, or whether a mistake was made while integrating the modules.

11. How do you write test cases for a triangle and a square?
12. What is B.V.A with brief example?
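As a sketch of the idea in question 12: BVA (boundary value analysis) picks test values at and immediately around each boundary of an input range, where off-by-one defects usually live. Assuming a hypothetical field that accepts integers from 1 to 100 inclusive:

```python
def accepts(value: int) -> bool:
    """Assumed requirement: the field accepts integers from 1 to 100."""
    return 1 <= value <= 100

# Boundary value analysis: test just below, at, and just above each boundary.
for value, expected in [(0, False), (1, True), (2, True),
                        (99, True), (100, True), (101, False)]:
    assert accepts(value) == expected, value
```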

13. What do you know about ERP, INSURANCE DOMAINS?

14. Difference between XP and Vista?

To be honest, Microsoft did not change much in the four-year interval between the initial Windows XP and Windows Vista.
But security is one dramatic addition, especially in these days of a growing number of hackers and malware.
We may already have had a taste of this with Windows XP SP2, which has generally the same security features as Windows Vista, for example the firewall, although Vista's firewall can monitor both inbound and outbound traffic.
If you would like more features, be sure to check out the official Windows Vista feature list.

15. What is suspension & resumption criteria in Test Plan please explain with example?

Suspension/resumption criteria in a software test plan:
If any defects are found which seriously impact test progress, the test lead may choose to suspend testing.
The criteria considered for suspension or resumption are:
[a] Hardware/software is not available at the time indicated in the project schedule.
[b] The build contains many serious defects which seriously prevent or limit testing progress.
[c] Assigned test resources are not available when needed by the test team.

Resumption Criteria :

If testing is suspended, resumption will occur only when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the testing team before testing is resumed.

16. What is the difference between test strategy and test data?

Suppose there are three modules A-B-C, where the output of A feeds B and the output of B feeds C. If A and C are not ready but module B is ready, how can you check module B? (By writing a driver to stand in for A and a stub to stand in for C.)
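To test B in isolation you replace the missing neighbours: a driver stands in for A and feeds B its input, and a stub stands in for C and records B's output. A minimal sketch in Python (module names and behaviour are invented):

```python
# Module B under test: takes A's output and produces input for C.
# Module names and behaviour are invented for illustration.
def module_b(data_from_a):
    return [x * 2 for x in data_from_a]

# Stub standing in for the unfinished module C: records what B sends it.
received_by_c = []
def stub_c(data):
    received_by_c.append(data)

# Driver standing in for the unfinished module A: supplies test input
# to B and passes B's output on to the stub.
def driver():
    stub_c(module_b([1, 2, 3]))

driver()
assert received_by_c == [[2, 4, 6]]
```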

17. What is an SQL index?
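A hedged sketch of the idea using Python's built-in sqlite3 module: an index is a separate lookup structure that lets the database find matching rows without scanning the whole table. The table and column names are invented, and the exact wording of the query plan varies by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, holder TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, f"holder{i}") for i in range(1000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM accounts WHERE holder = 'holder42'"

# Without an index, the plan's detail column reports a full table scan
# (e.g. "SCAN accounts").
before = conn.execute(query).fetchone()[3]

conn.execute("CREATE INDEX idx_holder ON accounts (holder)")

# With the index, the plan reports an index search
# (e.g. "SEARCH accounts USING INDEX idx_holder").
after = conn.execute(query).fetchone()[3]

assert "SCAN" in before and "USING INDEX" in after
```

The trade-off: the index speeds up lookups on the indexed column but consumes extra space and slows down inserts and updates, since the index must be maintained too.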

18. Difference between web application and Client Server application ?

19. Give me any example of High severity and Low priority type of bug ?

20. What is diff. between System Testing and Integration Testing ?

21. What is the difference between Performance testing and Load testing ?

22. Why do we need testing?

23. What is the difference between entry criteria and exit criteria?

Entry criteria:
1. Test cases completed
2. Resources identified
3. Test data available
4. Test environment set up

Exit criteria:
1. All test cases executed
2. All defects reported and retested
3. Test documents updated with results
4. All reports generated:
* Execution report
* Defect report

24. What is the test driver used in integration testing? Can anybody explain it in detail? Thanks in advance.

25. What is the difference between a test case and a test scenario?

26. What is the difference between load testing and stress testing?

27. What is sanity testing, and when do you start it?

Sanity testing is the initial level of testing: before you accept the build, you check that all the major functionalities are working correctly. It is like a fitness check of the product. After finishing sanity testing, we can proceed to system testing.

A sanity test determines whether it is reasonable to proceed with further testing.

28. What are the entry and exit criteria for each phase of the STLC and SDLC?

29. What is the manual testing process?

30. What metrics are you following?

Test metrics are a mechanism to measure the effectiveness of testing quantitatively. They are a feedback mechanism to improve the testing process currently being followed.

31. What are the software testing methodologies?

1. White box testing
2. Black box testing
3. Gray box testing

32. Why should we prepare a traceability matrix?

From the TRM we know how many requirements are covered by test cases; using the TRM we can easily map between requirements and test cases.

It is also very useful at change request time.

33. If the developer rejects a bug as not a bug, what is the next step we follow?

Retest the same test case. If you get the same bug, send it to the developer; if the developer still does not accept it, approach the team lead.

34. What is a deferred bug? Who assigns it?

Deferred means that when we report a bug to a particular developer, because of the bug's low importance or a lack of time, it is not going to be fixed in the current release, so the bug is given the DEFERRED state.

Deferred means postponed to a future version; the deferred status is usually assigned by the program manager.

35. What is configuration testing?

During this testing, the tester validates how well the current project supports different types of hardware, such as different printers, network interface cards (NICs), topologies, etc. This testing is also called hardware compatibility testing or portability testing.

In other words, testing the software build against different hardware and software environments is called configuration testing.


36. What is the difference between bug, defect, error, failure, fault & mistake?

Let me explain with an example.
If the input is 2+2 and the output is an error message, it is an error (syntax error).
If the input is 2+2 and the output is 5, it is a bug (logical error).
If the program is supposed to perform addition and subtraction, but only addition is there and subtraction is left out, it is a defect.
Usually the error is found by the developer, the bug is found by the tester, and the defect is found by the client or user.

37. There are 100 glasses. A servant has to supply them to a person. For each glass supplied without damage he gets 3 paisa; otherwise he loses 3 paisa. If, at the end of supplying all 100 glasses, he gets 270 paisa, how many glasses were supplied safely?

Let x be the number of glasses supplied safely; the other (100 - x) are damaged.
3x - 3(100 - x) = 270
6x - 300 = 270, so 6x = 570 and x = 95.
So 95 glasses were supplied safely (95 × 3 - 5 × 3 = 285 - 15 = 270 paisa). The answer 90 (simply 270/3) would ignore the 3-paisa penalty for each damaged glass.
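The arithmetic can be verified by brute force:

```python
# Brute-force check of the puzzle: +3 paisa for each safe glass,
# -3 paisa for each damaged glass, 100 glasses in total.
solutions = [safe for safe in range(101)
             if 3 * safe - 3 * (100 - safe) == 270]
assert solutions == [95]
```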


38. What is the difference between CMM, SEI, PCMM and Six Sigma?

The CMM (Capability Maturity Model) defines the processes to be followed at each maturity level, based on experience, to achieve the desired result.
SEI is the Software Engineering Institute, which certifies companies through its representatives in various countries.
PCMM is the People CMM: a set of people practices followed across companies. Based on feedback from the people about the company, required changes are made to yield better results and a healthy environment for the people working in it.
Six Sigma, in brief, is a standard process for moving towards virtually bug-free software (e.g. 99.9999% defect-free). Though similar to CMM level 5, it is very rigid in some respects, such as rejecting a defective software component and producing a new one: no compromise on quality. Motorola is one such Six Sigma company.

39. If we enter a valid user ID and password and are unable to log in, what do we test next?

Check the database: verify that the stored value is correct, and check whether the comparison is case-sensitive.

Report the bug to the development team, because sometimes the developer has made a mistake, or the valid user ID does not exist in the database table.

If you are still unable to log in, report it to the development team or discuss the particular functionality with the test lead or the developer.