Monday, August 31, 2009
ISTQB-Sample-Paper-4
NOTE: Only one answer per question
1.We split testing into distinct stages primarily because:
a) Each test stage has a different purpose.
b) It is easier to manage testing in stages.
c) We can run different tests in different environments.
d) The more stages we have, the better the testing.
2.Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
a) Regression testing
b) Integration testing
c) System testing
d) User acceptance testing
3.Which of the following statements is NOT correct?
a) A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch coverage.
b) A minimal test set that achieves 100% path coverage will also achieve 100% statement coverage.
c) A minimal test set that achieves 100% path coverage will generally detect more faults than one that achieves 100% statement coverage.
d) A minimal test set that achieves 100% statement coverage will generally detect more faults
than one that achieves 100% branch coverage.
Note: 100% branch coverage guarantees 100% statement coverage, but 100% statement coverage does not guarantee 100% branch coverage; branch coverage is the stronger criterion.
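To see why branch coverage is the stronger criterion, here is a minimal sketch (Python; the function and figures are purely illustrative): a single test executes every statement of the function, yet the implicit false branch of the decision is never exercised.
Example (Python)::
def apply_discount(price, is_member):
    # One decision with no explicit else branch.
    if is_member:
        price = price * 0.9
    return price

# This single test executes every statement (100% statement coverage) ...
assert apply_discount(100, True) == 90.0
# ... but the False outcome of the decision is never taken, so branch
# coverage is only 50%. A second test would be needed:
# assert apply_discount(100, False) == 100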
4.Which of the following requirements is testable?
a) The system shall be user friendly.
b) The safety-critical parts of the system shall contain 0 faults.
c) The response time shall be less than one second for the specified design load.
d) The system shall be built to be portable.
5.Analyse the following highly simplified procedure:
Ask: “What type of ticket do you require, single or return?”
IF the customer wants ‘return’
Ask: “What rate, Standard or Cheap-day?”
IF the customer replies ‘Cheap-day’
Say: “That will be £11.20”
ELSE
Say: “That will be £19.50”
ENDIF
ELSE
Say: “That will be £9.75”
ENDIF
Now decide the minimum number of tests that are needed to ensure that all
the questions have been asked, all combinations have occurred and all replies given.
a) 3
b) 4
c) 5
d) 6
6.Error guessing:
a) supplements formal test design techniques.
b) can only be used in component, integration and system testing.
c) is only performed in user acceptance testing.
d) is not repeatable and should not be used.
7.Which of the following is NOT true of test coverage criteria?
a) Test coverage criteria can be measured in terms of items exercised by a test suite.
b) A measure of test coverage criteria is the percentage of user requirements covered.
c) A measure of test coverage criteria is the percentage of faults found.
d) Test coverage criteria are often used when specifying test completion criteria.
8.In prioritising what to test, the most important objective is to:
a) find as many faults as possible.
b) test high risk areas.
c) obtain good test coverage.
d) test whatever is easiest to test.
9.Given the following sets of test management terms (v-z), and activity descriptions (1-5), which one of the following best pairs the two sets?
v – test control
w – test monitoring
x – test estimation
y – incident management
z – configuration control
1 – calculation of required test resources
2 – maintenance of record of test results
3 – re-allocation of resources when tests overrun
4 – report on deviation from test plan
5 – tracking of anomalous test results
a) v-3,w-2,x-1,y-5,z-4
b) v-2,w-5,x-1,y-4,z-3
c) v-3,w-4,x-1,y-5,z-2
d) v-2,w-1,x-4,y-3,z-5
10.Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.
11.Which of the following is false?
a) Incidents should always be fixed.
b) An incident occurs when expected and actual results differ.
c) Incidents can be analysed to assist in test process improvement.
d) An incident can be raised against documentation.
12.Enough testing has been performed when:
a) time runs out.
b) the required level of confidence has been achieved.
c) no more faults are found.
d) the users won’t find any serious faults.
13.Which of the following is NOT true of incidents?
a) Incident resolution is the responsibility of the author of the software under test.
b) Incidents may be raised against user requirements.
c) Incidents require investigation and/or correction.
d) Incidents are raised when expected and actual results differ.
14.Which of the following is not described in a unit test standard?
a) syntax testing
b) equivalence partitioning
c) stress testing
d) modified condition/decision coverage
15.Which of the following is false?
a) In a system two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behaviour.
16.Which one of the following statements, about capture-replay tools, is NOT correct?
a) They are used to support multi-user testing.
b) They are used to capture and animate user requirements.
c) They are the most frequently purchased types of CAST tool.
d) They capture aspects of user behavior.
17.How would you estimate the amount of re-testing likely to be required?
a) Metrics from previous similar projects
b) Discussions with the development team
c) Time allocated for regression testing
d) a & b
18.Which of the following is true of the V-model ?
a) It states that modules are tested against user requirements.
b) It only models the testing phase.
c) It specifies the test techniques to be used.
d) It includes the verification of designs.
19.The oracle assumption:
a) is that there is some existing system against which test output may be checked.
b) is that the tester can routinely identify the correct outcome of a test.
c) is that the tester knows everything about the software under test.
d) is that the tests are reviewed by experienced testers.
20.Which of the following characterises the cost of faults?
a) They are cheapest to find in the early development phases and the most expensive to fix in
the latest test phases.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to fix then.
d) Although faults are most expensive to find during early development phases, they are cheapest to fix then.
21.Which of the following should NOT normally be an objective for a test?
a) To find faults in the software.
b) To assess whether the software is ready for release.
c) To demonstrate that the software doesn’t work.
d) To prove that the software is correct.
22 Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing
23 Which of the following would NOT normally form part of a test plan?
a) Features to be tested
b) Incident reports
c) Risks
d) Schedule
24 Which of these activities provides the biggest potential cost saving from the use of CAST?
a) Test management
b) Test design
c) Test execution
d) Test planning
25 Which of the following is NOT a white box technique?
a) Statement testing
b) Path testing
c) Data flow testing
d) State transition testing
26 Data flow analysis studies:
a) possible communications bottlenecks in a program.
b) the rate of change of data values as a program executes.
c) the use of data on paths through the code.
d) the intrinsic complexity of the code.
27 In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%.
The next £28000 is taxed at 22%. Any further amount is taxed at 40%.
To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a) £1500
b) £32001
c) £33501
d) £28000
28 An important benefit of code inspections is that they:
a) Enable the code to be tested before the execution environment is ready.
b) can be performed by the person who wrote the code.
c) can be performed by inexperienced staff.
d) are cheap to perform.
29 Which of the following is the best source of Expected Outcomes for User Acceptance Test scripts?
a) Actual results
b) Program specification
c) User requirements
d) System specification
30 What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained moderator.
31 Which one of the following describes the major benefit of verification early in the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
32 Integration testing in the small:
a) tests the individual components that have been developed.
b) tests interactions between modules or subsystems.
c) only uses components that form part of the live system.
d) tests interfaces to other systems.
33 Static analysis is best described as:
a) the analysis of batch programs.
b) the reviewing of test plans.
c) the analysis of program code.
d) the use of black box testing.
34 Alpha testing is:
a) post-release testing by end user representatives at the developer’s site.
b) the first testing that is performed.
c) pre-release testing by end user representatives at the developer’s site.
d) pre-release testing by end user representatives at their sites.
35 A failure is:
a) found in the software; the result of an error.
b) departure from specified behaviour.
c) an incorrect step, process or data definition in a computer program.
d) a human action that produces an incorrect result.
36 In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%.
The next £28000 is taxed at 22%. Any further amount is taxed at 40%.
Which of these groups of numbers would fall into the same equivalence class?
a) £4800; £14000; £28000
b) £5200; £5500; £28000
c) £28001; £32000; £35000
d) £5800; £28000; £32000
37 The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.
38 Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.
39 Test cases are designed during:
a) test recording.
b) test planning.
c) test configuration.
d) test specification.
40 A configuration management system would NOT normally provide:
a) linkage of customer requirements to version numbers.
b) facilities to compare test results with expected results.
c) the precise differences in versions of software component source code.
d) restricted access to the source code library.
**************************Answers******************************
1.A
2.A
3.D
4.C
5.A
6.A
7.C
8.B
9.C
10.D
11.A
12.B
13.A
14.C
15.B
16.B
17.D
18.D
19.B
20.A
21.D
22.A
23.B
24.C
25.D
26.C
27.C
28.A
29.C
30.D
31.C
32.B
33.C
34.C
35.B
36.D
37.C
38.D
39.D
40.B
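For questions 27 and 36 the tax bands define the equivalence partitions and their boundaries. A minimal sketch (Python; the function name is invented, the bands come from the questions) showing why £33501 is a boundary value and why £5800, £28000 and £32000 fall in the same partition:
Example (Python)::
def tax_due(salary):
    # Bands from the question: first £4000 tax free, next £1500 at 10%,
    # next £28000 at 22%, anything above at 40%.
    tax = max(min(salary, 5500) - 4000, 0) * 0.10       # £4001 - £5500
    tax += max(min(salary, 33500) - 5500, 0) * 0.22     # £5501 - £33500
    tax += max(salary - 33500, 0) * 0.40                # £33501 and above
    return tax

# Partition boundaries: 4000/4001, 5500/5501, 33500/33501.
# £33501 is the first value taxed at 40%, so it is a valid BVA test case (Q27, c).
# £5800, £28000 and £32000 all lie in the 22% band (£5501 - £33500), so they
# belong to the same equivalence class (Q36, d).
print(tax_due(33501))   # approximately 6310.40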
Thursday, August 27, 2009
6.Tool Support For Testing
Tool Support::
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
5.Test Management
Test Management::
Test Independence
Configuration Management
Incident Management
Test Monitoring And Reporting
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
ISEB Foundation Chapter5
4.Test Design Techniques
Example questions based on Decision Tables::
decision-table-questions1
Example questions based on Equivalence Partitioning::
ep-boundary-questions-1
Example questions based on State Transition::
state-transition-examples2
Example questions based on Structure-based Testing::
structure-based-testing
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
ISEB Foundation Chapter4
Wednesday, August 26, 2009
2.Testing Throughout The Life Cycle
V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.
The four levels used in this syllabus are:
Component (unit) Testing.
Integration Testing.
System Testing.
Acceptance Testing.
In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207).
Verification and validation (and early test design) can be carried out during the development of the software work products.
Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles.
Examples are:
Prototyping.
Rapid application development (RAD).
Rational Unified Process (RUP).
Agile development models.
The resulting system produced by an iteration may be tested at several levels as part of its development. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the
first one. Verification and validation can be carried out on each increment.
Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
For every development activity there is a corresponding testing activity.
Each test level has test objectives specific to that level.
The analysis and design of tests for a given test level should begin during the corresponding development activity.
Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a commercial off-the-shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g. integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).
Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
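As a minimal sketch of testing a component in isolation (Python; the module and service names are invented), the component under test normally depends on a real exchange-rate service, which is replaced here by a stub, while the test function acts as the driver:
Example (Python)::
# Component under test: receives its rate provider as a parameter.
def price_in_eur(amount_usd, rate_provider):
    return round(amount_usd * rate_provider.usd_to_eur(), 2)

# Stub: stands in for the real (perhaps not yet available) rate service
# and returns a fixed, predictable value.
class RateServiceStub:
    def usd_to_eur(self):
        return 0.5

# Driver: the code that invokes the component in isolation and checks it.
def test_price_in_eur():
    assert price_in_eur(10, RateServiceStub()) == 5.0

test_price_in_eur()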
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behaviour (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the
component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and, in practice, usually involves the programmer who wrote the code. Defects are typically fixed as soon as they
are found, without formally recording incidents.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
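A minimal sketch of one such test-first cycle (Python's built-in unittest; the leap-year function is only an illustration): the test is written and run first, it fails because the code does not yet exist, and then just enough code is written to make it pass.
Example (Python)::
import unittest

# Step 2: the smallest implementation that makes the test below pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1: the test, written before the implementation existed.
class LeapYearTest(unittest.TestCase):
    def test_typical_and_century_years(self):
        self.assertTrue(leap_year(2004))
        self.assertFalse(leap_year(1900))
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()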
Integration testing
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size.
For example:
1. Component integration testing tests the interactions between software components and is done after component testing;
2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant. The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be
incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing. At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of either module. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, they can be built in the order required for most efficient testing.
System testing
System testing is concerned with the behaviour of a whole system/product as defined by the scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behaviour, interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the system.
Requirements may exist as text and/or models. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested.
For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
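A minimal sketch (Python; the delivery-charge rule is invented for illustration) of turning such a decision table into test cases, one per combination of conditions:
Example (Python)::
# Invented business rule: delivery is free if the order is over 50
# or the customer is a premium member; otherwise it costs 5.
def delivery_charge(order_total, premium_member):
    if order_total > 50 or premium_member:
        return 0
    return 5

# Decision table: each tuple is one column (combination of conditions).
decision_table = [
    # (order_total, premium_member, expected charge)
    (60, True, 0),
    (60, False, 0),
    (40, True, 0),
    (40, False, 5),
]

for total, premium, expected in decision_table:
    assert delivery_charge(total, premium) == expected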
An independent test team often carries out system testing.
Acceptance Testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur as more than just a single test level, for example:
A COTS software product may be acceptance tested when it is installed or integrated.
Acceptance testing of the usability of a component may be done during component testing.
Acceptance testing of a new functional enhancement may come before system testing.
Typical forms of acceptance testing include the following:
User acceptance testing
Typically verifies the fitness for use of the system by business users.
Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
Testing of backup/restore.
Disaster recovery.
User management.
Maintenance tasks.
Periodic checks of security vulnerabilities.
Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such
as governmental, legal or safety regulations.
Alpha and Beta (or field) Testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product. Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.
Testing of function (functional testing)
The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g. tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system. (See Chapter 4.) Functional testing considers the external behaviour of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more
specified components or systems.
Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a
quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO 9126).
Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed and, therefore, increase coverage. Coverage techniques are covered in Chapter 4.
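Expressed as a formula, coverage is simply the proportion of items exercised. A minimal sketch (Python; the figures are illustrative only):
Example (Python)::
# coverage (%) = (items exercised by the test suite / total items) * 100
def coverage_percent(items_exercised, items_total):
    return 100.0 * items_exercised / items_total

# e.g. a suite that executes 45 of the 60 decisions in the code:
print(coverage_percent(45, 60))   # 75.0 - further tests are needed for the rest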
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy. Structural testing approaches can also be applied at system, system integration or acceptance
testing levels (e.g. to business models or menu structures).
Testing related to changes (confirmation testing (retesting) and regression testing)
After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing (re-testing). Debugging (defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is
based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
Maintenance Testing
Maintenance testing is done on an existing operational system and is triggered by modifications, migration or retirement of the software or system. Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes extensive regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change.
Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.
Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do.
Maintenance testing can be difficult if specifications are out of date or missing.
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
ISEB Foundation Chapter2
Tuesday, August 25, 2009
1.Testing Fundamentals
Testing: testing is performed to avoid the effects of defects, which may arise from human mistakes or from environmental conditions.
Testing is necessary::
To avoid the effects of defects.
To avoid failures of the software.
We have defects because humans by nature make mistakes, and certain conditions make humans more likely to make mistakes.
Humans make mistakes because of
Time pressure (deadlines)
Complexity of the Requirement/Technology
Lack of experience/skill
Lack of information.
Frequent changes
The effects of defects are::
Leads to Injury/Death
Leads to loss of Time
Leads to loss of money
Leads to Bad reputation
Environmental factors can also result in defects, for example:
Pollution (e.g. mobile)
Radiation (e.g. electromagnetic radiation)
Error (mistake): a human action that produces an incorrect result; typically detected at the same stage in which it was made.
Bug/Fault/Defect: a flaw in a work product, resulting from an error, usually identified by another person at a later stage.
Failure: a deviation of the software from its expected behaviour, typically observed by the client or end users when the defect is executed.
Testing Principles::
Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even if no
defects are found, it is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial
cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing
efforts.
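As a rough illustration of the combinatorial explosion (Python; the field sizes are invented for the example), even a small screen with three 8-character alphanumeric fields has far too many input combinations to test exhaustively:
Example (Python)::
chars_per_position = 36      # letters a-z plus digits 0-9
positions_per_field = 8
fields = 3

values_per_field = chars_per_position ** positions_per_field   # 36^8, about 2.8e12
total_combinations = values_per_field ** fields                 # about 2.2e37

print(total_combinations)
# Even at a billion tests per second this would take on the order of 1e20 years,
# which is why risk analysis and priorities are used to focus testing instead.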
Principle 3 – Early testing
Testing activities should start as early as possible in the software or system development life
cycle, and should be focused on defined objectives.
Principle 4 – Defect clustering
A small number of modules contain most of the defects discovered during pre-release testing, or
are responsible for the most operational failures.
Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. To overcome this “pesticide paradox”, the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested
differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the
users’ needs and expectations.
Fundamental test process::
The fundamental test process consists of the following main activities:
1)Planning and Control.
2)Analysis and Design.
3)Implementation and Execution.
4)Evaluating exit criteria and Reporting.
5)Test closure activities.
Although logically sequential, the activities in the process may overlap or take place
concurrently.
Test planning and control
Test planning is the activity of verifying the mission of testing, defining the objectives of testing
and the specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting
the status, including deviations from the plan. It involves taking actions necessary to meet the
mission and objectives of the project. In order to control testing, it should be monitored
throughout the project. Test planning takes into account the feedback from monitoring and control
activities.
Test planning and control tasks are defined in Chapter 5 .
Test analysis and design
Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test cases.
Test analysis and design has the following major tasks:
Reviewing the test basis (such as requirements, architecture, design, interfaces).
Evaluating testability of the test basis and test objects.
Identifying and prioritizing test conditions based on analysis of test items, the specification,
behaviour and structure.
Designing and prioritizing test cases.
Identifying necessary test data to support the test conditions and test cases.
Designing the test environment set-up and identifying any required infrastructure and tools.
Test implementation and execution
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
Developing, implementing and prioritizing test cases.
Developing and prioritizing test procedures, creating test data and, optionally, preparing test
harnesses and writing automated test scripts.
Creating test suites from the test procedures for efficient test execution.
Verifying that the test environment has been set up correctly.
Executing test procedures either manually or by using test execution tools, according to the
planned sequence.
Logging the outcome of test execution and recording the identities and versions of the software
under test, test tools and testware.
Comparing actual results with expected results.
Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g. a
defect in the code, in specified test data, in the test document, or a mistake in the way the test
was executed).
Repeating test activities as a result of action taken for each discrepancy.
For example, reexecution of a test that previously failed in order to confirm a fix (confirmation
testing), execution of a corrected test and/or execution of tests in order to ensure that defects
have not been introduced in unchanged areas of the software or that defect fixing did not uncover
other defects (regression testing).
Evaluating exit criteria and reporting
Evaluating exit criteria is the activity where test execution is assessed against the defined
objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
Checking test logs against the exit criteria specified in test planning.
Assessing if more tests are needed or if the exit criteria specified should be changed.
Writing a test summary report for stakeholders.
Test closure activities
Test closure activities collect data from completed test activities to consolidate experience,
testware, facts and numbers. For example, when a software system is released, a test project is
completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.
Test closure activities include the following major tasks:
Checking which planned deliverables have been delivered, the closure of incident reports or raising
of change records for any that remain open, and the documentation of the acceptance of the
system.
Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
Handover of testware to the maintenance organization.
Analyzing lessons learned for future releases and projects, and the improvement of test maturity.
Independent level of Testing
Tests designed by the person(s) who wrote the software under test (low level of independence).
Tests designed by another person(s) (e.g. from the development team).
Tests designed by a person(s) from a different organizational group (e.g. an independent test
team) or test specialists (e.g. usability or performance test specialists).
Tests designed by a person(s) from a different organization or company (i.e. outsourcing or
Certification by an external body)
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
ISEB Foundation Chapter1
Monday, August 24, 2009
ISTQB Certification Details
Hi,
This blog gives clear information about the ISTQB (International Software Testing Qualifications Board).
The ISTQB certification exam is conducted in India by the ITB (Indian Testing Board). The certificate is valid all over the world and does not expire.
It has two levels.
1)Foundation Level
2)Advanced Level
Foundation Level
The Foundation Level exam is a written exam. The paper contains 40 multiple-choice questions, with no negative marking for wrong answers. You are given 90 minutes, and the pass mark is 65%, i.e. 26 correct answers.
Advanced Level
The Advanced Level is aimed at experienced test professionals. This level shows that the person has detailed knowledge of testing and the skills to apply this knowledge in practice. The requirements for this level are available through the detailed syllabus.
The prerequisites for applying for the exam are as follows:
* Clearing the Foundation Level exam of ISTQB
* Two years of experience as a software tester
The exam has three sublevels:
- Technical Tester
- Functional Tester
- Test Manager
A single syllabus exists for all three sublevels. A partitioning file is available that describes how the syllabus is divided across the sub-levels. There is a separate paper for each sub-level, and each paper is 90 minutes long. Sub-levels can be taken in any number and in any order. An individual certificate is granted on successful completion of each sub-level.
You will be informed of your result within two weeks and passing candidates will receive the certificate within eight weeks.
For ISTQB Site::
For Foundation Level Syllabus material ::
For Glossary ::
http://208.116.30.129/glossary-current_v2.pdf
For Exam Dates and Centres ::
For Enrollment Form::
http://www.istqb.in/enrollment_form.php
Enquiry ::
http://www.istqb.in/enquiry.php
************************************************************
Foundation Level Syllabus
There are 6 chapters:
1) Testing Fundamentals: 7 Marks
2) Testing Throughout the Life Cycle: 6 Marks
3) Static Testing: 3 Marks
4) Test Design Techniques: 12 Marks
5) Test Management: 8 Marks
6) Tool Support for Testing: 4 Marks
Thursday, August 20, 2009
3.Static Techniques
A typical formal review has the following main phases:
1. Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more formal review types (e.g. inspection); and selecting which parts of documents to look at.
2. Kick-off: distributing documents; explaining the objectives, process and documents to the participants; and checking entry criteria (for more formal review types).
3. Individual preparation: work done by each of the participants on their own before the review meeting, noting potential defects, questions and comments.
4. Review meeting: discussion or logging, with documented results or minutes (for more formal review types). The meeting participants may simply note defects, make recommendations for handling the defects, or make decisions about the defects.
5. Rework: fixing defects found, typically done by the author.
6. Follow-up: checking that defects have been addressed, gathering metrics and checking on exit criteria (for more formal review types).
Roles and responsibilities
A typical formal review will include the roles below:
Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and follow-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
Author: the writer or person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting. Looking at documents from different perspectives and using checklists can make reviews more effective and efficient, for example, a checklist based on perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems.
Types of review::
A single document may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:
Informal review::
Key characteristics:
- No formal process.
- There may be pair programming or a technical lead reviewing designs and code.
- Optionally may be documented; may vary in usefulness depending on the reviewer.
Main purpose:
- Inexpensive way to get some benefit.
Walkthrough::
Key characteristics:
- Meeting led by author;
- Scenarios, dry runs, peer group;
- Open-ended sessions;
- Optionally a pre-meeting preparation of reviewers, a review report, a list of findings and a scribe (who is not the author);
- May vary in practice from quite informal to very formal.
Main purposes:
- Learning;
- Gaining understanding;
- Defect finding.
Technical review::
Key characteristics:
- Documented, defined defect-detection process that includes peers and technical experts;
- May be performed as a peer review without management participation;
- Ideally led by trained moderator (not the author);
- Pre-meeting preparation;
- Optionally the use of checklists, review report, list of findings and management participation;
- May vary in practice from quite informal to very formal.
Main purposes:
- Discuss;
- Make decisions;
- Evaluate alternatives;
- Find defects;
- Solve technical problems;
- Check conformance to specifications and standards.
Inspection::
Key characteristics:
- Led by trained moderator (not the author)
- Usually peer examination
- Defined roles
- Includes metrics
- Formal process based on rules and checklists with entry and exit criteria
- Pre-Meeting preparation
- Inspection report, list of findings
- Formal follow-up process
- Optionally, process improvement and reader.
Main purpose:
- Find defects.
Walkthroughs, technical reviews and inspections can be performed within a peer group – colleagues at the same organizational level. This type of review is called a “peer review”.
Success factors for reviews::
Success factors for reviews include:
- Each review has a clear predefined objective.
- The right people for the review objectives are involved.
- Defects found are welcomed, and expressed objectively.
- People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
- Review techniques are applied that are suitable to the type and level of software work products and reviewers.
- Checklists or roles are used if appropriate to increase effectiveness of defect identification.
- Training is given in review techniques, especially the more formal techniques, such as Inspection.
- Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
- There is an emphasis on learning and process improvement.
Static analysis by tools::
The value of static analysis is:
- Early detection of defects prior to test execution.
- Early warning about suspicious aspects of the code or design, by the calculation of metrics, such as a high complexity measure.
- Identification of defects not easily found by dynamic testing.
- Detecting dependencies and inconsistencies in software models, such as links.
- Improved maintainability of code and design.
- Prevention of defects, if lessons are learned in development.
Typical defects discovered by static analysis tools include:
- Referencing a variable with an undefined value;
- Inconsistent interface between modules and components;
- Variables that are never used;
- Unreachable (dead) code;
- Programming standards violations;
- Security vulnerabilities;
- Syntax violations of code and software models.
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well managed to allow the most effective use of the tool. Compilers may offer some support for static analysis, including the calculation of metrics.
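A minimal sketch (Python; the fragment is invented) containing the kinds of defect listed above, which a static analysis or lint tool would flag without executing the code:
Example (Python)::
def settle_invoice(amount):
    discount = 0                   # variable assigned but never used
    if amount > 100:
        rate = 0.1
    total = amount * (1 - rate)    # 'rate' may be referenced before assignment
    return total
    print("settled")               # unreachable (dead) code after the return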
ISEB/ISTQB Foundation Documents::
Below are some documents that can be referred to for your ISEB/ISTQB Foundation in software testing. Click on the link to download the file.
ISEB Foundation Chapter3