Thursday, November 26, 2009

System testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. [1]

As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.



Testing the whole system

System testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase in which the focus is almost destructive: it tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
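As a minimal sketch of what a requirement-driven, black-box system check can look like (the order-processing function, the shipping rule and all values are invented for illustration, and the stand-in implementation exists only so the sketch runs), the tests exercise the system purely through its external behaviour:

import unittest

def place_order(items):
    """Stand-in for the deployed system's public entry point; in real system
    testing this call would go through the fully integrated application."""
    if not items:
        raise ValueError("empty order")
    total = sum(item["price"] for item in items)
    shipping = 0.0 if total > 100.0 else 5.0
    return {"total": total, "shipping": shipping}

class OrderSystemTest(unittest.TestCase):
    """Black-box checks against a hypothetical requirement:
    'orders over 100 ship for free'. No knowledge of internal design is used."""

    def test_free_shipping_over_threshold(self):
        receipt = place_order([{"sku": "A1", "price": 120.0}])
        self.assertEqual(receipt["shipping"], 0.0)

    def test_empty_order_is_rejected(self):
        # The specification says an empty order must be refused, not silently accepted.
        with self.assertRaises(ValueError):
            place_order([])

if __name__ == "__main__":
    unittest.main()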

Types of system testing
The following examples are different types of testing that should be considered during System testing:

1. GUI software testing
2. Usability testing
3. Performance testing
4. Compatibility testing
5. Error handling testing
6. Load testing
7. Volume testing
8. Stress testing
9. User help testing
10. Security testing
11. Scalability testing
12. Capacity testing
13. Sanity testing
14. Smoke testing
15. Exploratory testing
16. Ad hoc testing
17. Regression testing
18. Reliability testing
19. Recovery testing
20. Installation testing
21. Idempotency testing
22. Maintenance testing
23. Recovery testing and failover testing.
24. Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Although different testing organizations may prescribe different tests as part of System testing, this list serves as a general framework or foundation to begin with.

Friday, November 20, 2009

What is Baseline & Configuration Management?

Baseline:- Generally, a baseline is a single work product, or a set of work products, that can be used as a logical basis for comparison. Sometimes the baseline is the original requirements or specifications.
Configuration Management:-
Configuration management is the process of managing changes in work products, documentation, and so on.



Configuration management is the process of managing change in hardware, software, firmware, documentation, measurements, etc. As change requires an initial state and next state, the marking of significant states within a series of several changes becomes important. The identification of significant states within the revision history of a configuration item is the central purpose of baseline identification.[1]

Typically, significant states are those that receive a formal approval status, either explicitly or implicitly (approval statuses may be marked individually, when such a marking has been defined, or signified merely by association to a certain baseline). Nevertheless, this approval status is usually recognized publicly. Thus, a baseline may also mark an approved configuration item, e.g. a project plan that has been signed off for execution. In a similar manner, associating multiple configuration items with such a baseline indicates those items as being approved.

Generally, a baseline may be a single work product, or set of work products that can be used as a logical basis for comparison. A baseline may also be established (whose work products meet certain criteria) as the basis for subsequent select activities. Such activities may be attributed with formal approval.
Conversely, the configuration of a project often includes one or more baselines, the status of the configuration, and any metrics collected. The current configuration refers to the current status, current audit, current metrics, and latest revision of all configuration items. Similarly, but less frequently, a baseline may refer to all items associated with a specific project. This may include all revisions of all items, or only the latest revision of all items in the project, depending upon context, e.g. "the baseline of the project is proceeding as planned."
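As a purely illustrative sketch of using a baseline as a "logical basis for comparison" (the configuration item names and revision numbers below are made up), comparing the current configuration against an approved baseline can amount to diffing the recorded revision of each item:

# Hypothetical configuration items mapped to their revision identifiers.
product_baseline = {"requirements.doc": "r12", "design.doc": "r7", "app.exe": "1.4.0"}
current_config   = {"requirements.doc": "r12", "design.doc": "r9", "app.exe": "1.5.0-beta"}

def changed_items(baseline, current):
    """Return the items whose revision differs from the approved baseline."""
    return {name: (baseline.get(name), rev)
            for name, rev in current.items()
            if baseline.get(name) != rev}

print(changed_items(product_baseline, current_config))
# {'design.doc': ('r7', 'r9'), 'app.exe': ('1.4.0', '1.5.0-beta')}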

A baseline may be specialized as a specific type of baseline.[2] Some examples include:

Functional Baseline: initial specifications established; contract, etc.
Allocated Baseline: state of work products once requirements are approved
Developmental Baseline: state of work products amid development
Product Baseline: contains the releasable contents of the project

Monday, November 16, 2009

Software Risks

Are you developing any Test plan or test strategy for your project? Have you addressed all risks properly in your test plan or test strategy?

As testing is the last part of the project, it is always under pressure and time constraints. To save time and money you should be able to prioritize your testing work. How will you prioritize testing work? For this you should be able to judge which testing work is more important and which is less important. How will you decide which work is more or less important? Here comes the need for risk-based testing.


What is Risk?
“Risks are future uncertain events with a probability of occurrence and a potential for loss.”

Risk identification and management are the main concerns in every software project. Effective analysis of software risks will help with effective planning and assignment of work.
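Building on the definition above, one common way to rank testing work is to score each risk as probability of occurrence multiplied by potential loss. The sketch below is only illustrative; the risk names and numbers are invented:

# Hypothetical risks: (name, probability 0-1, impact/loss in arbitrary units)
risks = [
    ("payment gateway timeout", 0.3, 9),
    ("report layout glitch",    0.6, 2),
    ("data loss on upgrade",    0.1, 10),
]

# Risk exposure = probability of occurrence x potential loss.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, impact in ranked:
    print(f"{name}: exposure = {prob * impact:.1f}")
# When time is short, test the highest-exposure areas first.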

In this article I will cover the “types of risks”. In the next articles I will try to focus on risk identification, risk management and mitigation.

Risks are identified, classified and managed before actual execution of the program. These risks are classified into different categories.

Categories of risks:

1. Schedule Risk:
The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and, ultimately, the company's economy, and may lead to project failure.

Schedules often slip due to following reasons:
Wrong time estimation
Resources (staff, systems, skills of individuals, etc.) are not tracked properly.
Failure to identify complex functionalities and time required to develop those functionalities.
Unexpected project scope expansions.

2. Budget Risk:
Wrong budget estimation.
Cost overruns
Project scope expansion

3. Operational Risks:
Risk of loss due to improper process implementation, a failed system, or some external events.

Causes of Operational risks:
Failure to address priority conflicts
Failure to resolve the responsibilities
Insufficient resources
No proper subject training
No resource planning
No communication in the team.

4. Technical risks:
Technical risks generally lead to failure of functionality and performance.

Causes of technical risks are:
Continuous changing requirements
No advanced technology is available, or the existing technology is in its initial stages.
Product is complex to implement.
Difficult integration of project modules.

5. Programmatic Risks:
These are external risks beyond the operational limits. They are uncertain risks that are outside the control of the program.

These external events can be:
Running out of funds.
Market development
Changing customer product strategy and priority
Government rule changes.
These are the common categories into which software project risks can be classified. I will cover “How to identify and manage risks” in detail in the next article.

Tuesday, November 10, 2009

End to End Testing

Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
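As a hedged sketch of what such a test can look like (the URL, database file, table name and expected values are all hypothetical, and a running instance of the application is assumed), an end-to-end check might drive the system through its real HTTP interface and then verify the result directly in its database:

import json
import sqlite3
import urllib.request

BASE_URL = "http://localhost:8000"   # assumed deployed instance of the application

def create_customer(name):
    """Call the system over the network, exactly as a real client would."""
    req = urllib.request.Request(
        BASE_URL + "/customers",
        data=json.dumps({"name": name}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def test_customer_is_persisted():
    created = create_customer("Acme Ltd")
    # Verify the side effect end to end: the record really landed in the database.
    conn = sqlite3.connect("app.db")   # assumed database file used by the application
    row = conn.execute("SELECT name FROM customers WHERE id = ?",
                       (created["id"],)).fetchone()
    conn.close()
    assert row is not None and row[0] == "Acme Ltd"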

Fuzz Testing

Fuzz testing or fuzzing is a software testing technique that provides invalid, unexpected, or random data to the inputs of a program. If the program fails (for example, by crashing or failing built-in code assertions), the defects can be noted.

File formats and network protocols are the most common targets of fuzz testing, but any type of program input can be fuzzed. Interesting inputs include environment variables, keyboard and mouse events, and sequences of API calls. Even items not normally considered "input" can be fuzzed, such as the contents of databases, shared memory, or the precise interleaving of threads.
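A minimal fuzzing sketch, assuming a hypothetical parse_record() function as the target (a toy stand-in is included only so the example runs): feed it random byte strings and record any input that makes it fail in an unexpected way.

import random

def parse_record(data: bytes):
    """Stand-in for the real parser under test."""
    text = data.decode("utf-8")          # may raise UnicodeDecodeError
    name, value = text.split("=", 1)     # may raise ValueError
    return {name: int(value)}            # may raise ValueError

random.seed(0)
crashes = []
for _ in range(1000):
    # Generate invalid, unexpected, random input.
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randrange(1, 20)))
    try:
        parse_record(fuzz_input)
    except (UnicodeDecodeError, ValueError):
        pass                              # documented, expected failure modes
    except Exception:
        crashes.append(fuzz_input)        # anything else is a defect worth noting

print(f"{len(crashes)} crashing inputs found")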

Monkey Testing

In computer science, a monkey test is a unit test that runs with no specific test in mind. The monkey in this case is the producer of any input. For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input or provide garbage files to check for loading routines that have blind faith in their data.

Testers use the term monkey when referring to a fully automated testing tool. This tool doesn’t know how to use any application, so it performs mouse clicks on the screen or keystrokes on the keyboard randomly. The test monkey is technically said to conduct stochastic testing, which falls in the category of black-box testing. There are different types of monkey testing.
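As a rough sketch (the form object and its methods are invented for illustration), a monkey test simply fires random actions at the application with no scenario in mind and only checks that nothing blows up:

import random
import string

class ContactForm:
    """Toy stand-in for a real UI screen, included only to make the sketch self-contained."""
    def __init__(self):
        self.fields = {"name": "", "email": ""}
    def type_text(self, field, text):
        self.fields[field] = text
    def clear(self, field):
        self.fields[field] = ""
    def submit(self):
        return "@" in self.fields["email"]   # crude validation

random.seed(1)
form = ContactForm()
actions = ["type", "clear", "submit"]

for _ in range(500):
    action = random.choice(actions)          # no plan, no scenario: pure monkey input
    field = random.choice(list(form.fields))
    if action == "type":
        junk = "".join(random.choice(string.printable) for _ in range(random.randrange(0, 50)))
        form.type_text(field, junk)
    elif action == "clear":
        form.clear(field)
    else:
        form.submit()                        # any uncaught exception here is a finding

print("monkey run finished without crashing")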

Scope of a Test Plan

The scope section of a test plan covers the types of testing to be done, such as load testing and stress testing, and, for load and stress testing, how many users the test is going to simulate and what hardware it is going to use.
If required, one can also state the schedule, what is excluded from testing, how many test cycles there will be, and what the bug reporting and fixing workflow is, etc.