
Tuesday, March 1, 2011

CMMI Process Areas


CMMI: Capability Maturity Model Integration
P-CMM: People Capability Maturity Model

Level 1 (Initial)
No process areas are defined at Level 1.
Level 2 (Managed)
1. Requirements Management (Engg)
Manage the requirements of the project's products and product components
Identify inconsistencies between those requirements and the project's plans and work products.
2. Supplier Agreement Management (Prj mgnt)
To manage the acquisition of products from suppliers.
3. Project Monitoring and Control (Prj mgnt)
To provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan.
4. Project Planning (Prj mgnt)
To establish and maintain plans that define project activities.
5. Configuration Management (Support)
To establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits.
6. Process & Product QA (Support)
To provide staff and management with objective insight into processes and associated work products.
7. Measurement & Analysis (Support)
To develop and sustain a measurement capability that is used to support management information needs.
Level 3 (Defined)
8. Validation (Engg)
To demonstrate that a product or product component fulfills its intended use when placed in its intended environment.
9. Verification (Engg)
To ensure that selected work products meet their specified requirements.
10. Product Integration (Engg)
To assemble the product from the product components, ensure that the product, as integrated, functions properly, and deliver the product.
11. Technical Solution (Engg)
To design, develop, and implement solutions to requirements. Solutions, designs, and implementations encompass products, product components, and product-related lifecycle processes, either singly or in combination as appropriate.
12. Requirements Development (Engg)
To produce and analyze customer, product, and product-component requirements.
13. Risk Management (Prj mgnt)
To identify potential problems before they occur so that risk-handling activities may be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives.
14. Integrated Project Management + IPPD (Prj mgnt)
To establish and manage the project and the involvement of the relevant stakeholders according to an integrated and defined process that is tailored from the organization's set of standard processes.
15. Organizational Training (ProcessM)
To develop the skills and knowledge of people so they can perform their roles effectively and efficiently.
16. Organizational Process Definition+ IPPD (ProcessM)
To establish and maintain a usable set of organizational process assets.
17. Organizational Process Focus (ProcessM)
To plan and implement organizational process improvement based on a thorough understanding of the current strengths and weaknesses of the organization’s processes and process assets.
18. Decision Analysis & Resolution (Support)
To analyze possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.
Level 4 (Quantitatively Managed)
19. Quantitative Project Management (Prj mgnt)
To quantitatively manage the project’s defined process to achieve the project’s established quality and process-performance objectives.
20. Organizational Process Performance (ProcessM)
To establish and maintain a quantitative understanding of the performance of the organization’s set of standard processes in support of quality and process-performance objectives.
Level 5 (Optimizing)
21. Organizational Innovation & Deployment (ProcessM)
To select and deploy incremental and innovative improvements that measurably improve the organization's processes and technologies.
22. Causal Analysis & Resolution (Support)
To identify causes of defects and other problems and take action to prevent them from occurring in the future.

Data Warehouse Testing


Data Warehouse Testing is Different
Data warehouse population happens mostly through batch runs, so the testing differs from what is done in transaction systems.
Data warehouse testing differs from transaction-system testing on the following counts:
User-triggered vs. system-triggered
Most production/source-system testing covers the processing of individual transactions, which are driven by user input (an application form, a servicing request). Very few test cycles cover system-triggered scenarios (such as billing or valuation).
In a data warehouse, most of the testing is system-triggered, driven by the ETL ('Extraction, Transformation and Loading') scripts, the view-refresh scripts, and so on.
Data warehouse testing is therefore typically divided into two parts: 'back-end' testing, where the source-system data is compared to the end-result data in the loaded area, and 'front-end' testing, where users check the data by comparing their MIS with the data displayed by end-user tools such as OLAP.
Batch vs. online gratification
This is something that makes it a challenge to retain users' interest.
A transaction system gives users instant, or at least overnight, gratification: they enter a transaction and it is processed online or, at most, via an overnight batch. In a data warehouse, most of the action happens at the back-end, and users have to trace individual transactions through to the MIS and the views produced by the OLAP tools. It is the same challenge you face when asking users to test the mammoth month-end reports and financial statements churned out by transaction systems.
Volume of Test Data
The test data in a transaction system is a very small sample of the overall production data. Typically, to keep matters simple, we include just enough test cases to comprehensively cover all possible test scenarios within a limited set of test data.
A data warehouse typically needs a large volume of test data, as one tries to fill up as many combinations and permutations of dimensions and facts as possible.
For example, if you are testing the location dimension, you would want the location-wise sales revenue report to show revenue figures for most of the 100 cities and 44 states. That means you need thousands of sales transaction records at the sales-office level (assuming the sales office is the lowest level of granularity for the location dimension).
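Combinatorial test data of this kind can be generated programmatically rather than keyed in by hand. A minimal Python sketch, where the city and product values are illustrative placeholders rather than real dimension contents:

```python
import itertools
import random

# Hypothetical dimension values; a real test would pull these from the
# location and product dimensions themselves.
cities = ["City%02d" % i for i in range(100)]
products = ["ProdA", "ProdB", "ProdC"]

def generate_sales_rows(cities, products, rows_per_combo=3):
    """Emit fact rows for every (city, product) combination so the
    location-wise revenue report has figures for every city."""
    rows = []
    for city, product in itertools.product(cities, products):
        for _ in range(rows_per_combo):
            rows.append({"city": city, "product": product,
                         "revenue": round(random.uniform(10.0, 1000.0), 2)})
    return rows

rows = generate_sales_rows(cities, products)
```

With 100 cities, 3 products, and 3 rows per combination this yields 900 fact rows, one batch per sales-office-level grain.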
Possible scenarios/ Test Cases
If a transaction system has, say, a hundred different scenarios, the valid and possible combinations of those scenarios are still bounded. In a data warehouse, however, the permutations and combinations one could test are virtually unlimited, because the core objective of a data warehouse is to allow all possible views of the data. In other words, 'You can never fully test a data warehouse.'
Therefore one has to be creative in designing the test scenarios to gain a high level of confidence.
Test Data Preparation
This is linked to the points above on possible test scenarios and volume of data. Given that a data warehouse needs a lot of both, the effort required to prepare test data is much greater.
Programming for testing challenge
In transaction systems, users and business analysts typically test the output of the system. In a data warehouse, since most of the action happens at the back-end, most 'data warehouse data quality' testing and ETL testing is done by running separate stand-alone scripts. These scripts compare, say, pre-transformation and post-transformation aggregates and throw out the pilferage. Users come into play when their help is needed to analyze the discrepancies (if the designers or business analysts are not able to figure them out).
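Such a stand-alone reconciliation script can be as simple as aggregating a measure by key before and after transformation and reporting the differences. A minimal sketch (the column names are illustrative):

```python
def reconcile_aggregates(pre_rows, post_rows, key, measure, tolerance=0.01):
    """Aggregate a measure by key before and after transformation and
    return the keys whose totals differ -- the 'pilferage' report."""
    def totals(rows):
        out = {}
        for r in rows:
            out[r[key]] = out.get(r[key], 0.0) + r[measure]
        return out
    pre, post = totals(pre_rows), totals(post_rows)
    return {k: pre.get(k, 0.0) - post.get(k, 0.0)
            for k in set(pre) | set(post)
            if abs(pre.get(k, 0.0) - post.get(k, 0.0)) > tolerance}

# Illustrative data: Delhi loses 5.0 between source and loaded area.
source = [{"city": "Pune", "amount": 100.0}, {"city": "Pune", "amount": 50.0},
          {"city": "Delhi", "amount": 75.0}]
loaded = [{"city": "Pune", "amount": 150.0}, {"city": "Delhi", "amount": 70.0}]
pilferage = reconcile_aggregates(source, loaded, "city", "amount")
```

In practice the same comparison would run as SQL against the staging and loaded tables; the logic is the point, not the storage.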

Data Warehouse Testing Categories

Data warehouse testing categories cover the different stages of the process. Testing is done both on an individual-stage basis and end to end.
A good part of data warehouse testing can be linked to 'Data Warehouse Quality Assurance'. Data warehouse testing includes the following categories:
Extraction Testing
This testing checks the following:
  • The required fields are being extracted from the source data.
  • The extraction logic for each source system is working.
  • Extraction scripts are granted security access to the source systems.
  • Updating of the extract audit log and time stamping is happening.
  • Source-to-extraction-destination movement is complete and accurate.
  • Extraction is getting completed within the expected window.
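The completeness and window checks above lend themselves to a small audit script. A minimal sketch, where the row counts and the batch window are illustrative inputs that an extract audit log would supply:

```python
from datetime import datetime, timedelta

def check_extraction(source_count, extracted_count, started, finished,
                     window_minutes):
    """Return a list of audit failures: completeness (row counts match)
    and timeliness (the run fit inside the expected batch window)."""
    failures = []
    if extracted_count != source_count:
        failures.append("row count mismatch: source=%d extracted=%d"
                        % (source_count, extracted_count))
    elapsed = (finished - started).total_seconds() / 60.0
    if elapsed > window_minutes:
        failures.append("extraction took %.0f min, window is %d min"
                        % (elapsed, window_minutes))
    return failures

start = datetime(2011, 3, 1, 1, 0)
ok = check_extraction(1000, 1000, start, start + timedelta(minutes=30), 60)
late = check_extraction(1000, 990, start, start + timedelta(minutes=90), 60)
```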
Transformation Testing
  • Transformation scripts are transforming the data as per the expected logic.
  • The one-time transformations for historical snapshots are working.
  • Detailed and aggregated data sets are created and are matching.
  • Updating of the transformation audit log and time stamping is happening.
  • There is no pilferage of data during the transformation process.
  • Transformation is getting completed within the given window.
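As a concrete instance of checking transformation logic, a stand-alone script might verify a derived attribute — here, a bill total derived from its line items; the field names are illustrative:

```python
def check_bill_totals(bills, tolerance=0.005):
    """Verify the derived 'total' attribute equals the sum of its line
    items -- the kind of rule a stand-alone transformation script checks.
    Returns (bill_id, stored_total, expected_total) for each failure."""
    return [(b["bill_id"], b["total"],
             sum(i["amount"] for i in b["items"]))
            for b in bills
            if abs(b["total"] - sum(i["amount"] for i in b["items"]))
            > tolerance]

bills = [
    {"bill_id": 1, "total": 30.0,
     "items": [{"amount": 10.0}, {"amount": 20.0}]},
    {"bill_id": 2, "total": 99.0,  # derived total deliberately wrong
     "items": [{"amount": 50.0}, {"amount": 50.0}]},
]
bad = check_bill_totals(bills)
```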
Loading Testing
  • There is no pilferage during the loading process.
  • Any transformations during the loading process are working.
  • Movement of data sets from staging to the loading destination is working.
  • One-time historical snapshots are working.
  • Both incremental and total refresh are working.
  • Loading is happening within the expected window.
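One way to test that both refresh strategies work is to confirm they land on the same final data set. A minimal sketch, assuming rows carry a natural key (the `id` field is illustrative):

```python
def refresh_strategies_match(incremental_rows, full_refresh_rows, key="id"):
    """An incremental refresh applied on top of the previous load should
    produce the same final target data as a total refresh; compare the
    two result sets keyed by the natural key."""
    inc = {r[key]: r for r in incremental_rows}
    full = {r[key]: r for r in full_refresh_rows}
    return inc == full

# Same rows, different physical order: the strategies agree.
inc = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
full = [{"id": 2, "qty": 7}, {"id": 1, "qty": 5}]
```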
End User Browsing and OLAP Testing
  • The Business views and dashboard are displaying the data as expected.
  • The scheduled reports are accurate and complete.
  • The scheduled reports and other batch operations (like view refresh) are happening in the expected window.
  • 'Analysis Functions' and 'Data Analysis' are working.
  • There is no pilferage of data between the source systems and the views.
Ad-hoc Query Testing
  • Ad-hoc query creation works as per the expected functionality.
  • Ad-hoc query response time is as expected.
Downstream Flow Testing
  • Data is extracted from the data warehouse and updated in the downstream systems/data marts.
  • There is no pilferage.
One Time Population testing
  • The one-time ETL for the production data is working.
  • The production reports and the data warehouse reports are matching.
  • The time taken for one-time processing will be manageable within the conversion weekend.
End-to-End Integrated Testing
  • End-to-end data flow from the source systems to the downstream systems is complete and accurate.
Stress and Volume Testing
This part of testing involves placing maximum volume or failure points on the system to check its robustness and capacity. The level of stress testing depends upon the configuration of the test environment and the level of capacity planning done. Here are some examples from the ideal world:
  • Server shutdown during batch process.
  • Extraction, transformation and loading with two to three times the maximum envisaged data volume (for which the capacity is planned).
  • Having 2 to 3 times more users placing large numbers of ad-hoc queries.
  • Running large number of scheduled reports.
Parallel Testing
In parallel testing, the data warehouse is run on production data just as it would be in real life, and its outputs are compared with the existing set of reports to ensure that they are in sync or that any mismatches are explained.
Security Framework testing
Check all possible aspects of Security Framework.
Data Warehouse Implementation and Deployment
Data warehouse implementation involves checking in the final version, productionizing, installing client applications, establishing service and support mechanisms, user communications, and the weekend conversion of the database.
After the testing phase is complete, the systems are ready for implementation.
List of Tasks for Data Warehouse Implementation and Deployment
Version and freeze the DW implementation scripts and environments
  • All scripts related to Extraction, Transformation and Loading
  • All configurations of staging and Loading, and access environments
  • All Data in the source systems.
Run the DW productionization processing
  • Run the historical snapshots (sometimes they are done beforehand to save time on the implementation weekend).
  • Run the ETL scripts
  • Run the refresh scripts for downstream data marts, systems.
Data Quality and DW implementation Verification
Business analysis teams mainly do this before handing the system over to users for their own verification.
Data Warehouse End User Applications
End-user applications need not be implemented along with the data warehouse; typically, the data warehouse is implemented along with an OLAP tool and some end-user tools.
The same rigor is followed for end-user applications as for the data warehouse, in terms of versioning, production processing, and verification. Clients of the end-user applications are installed and tested.
End User Training
  • The end user training material is ready.
  • A group of power users has already been taken through it (they themselves might be involved in creation of the training material). These power users will act as the trainers for their respective functions.
  • The training material is divided by category of user: power users/designers, ad-hoc users (who typically place ad-hoc queries and use the reports generated by the designers), and recipient users (who only view the reports created by the designers).
End User Support mechanisms and communication
  • A 'support mechanism for users' is established and communicated (help-line numbers, within-department mentors/trainers, an escalation matrix).
  • A support mechanism for technology staff is established (vendor help lines and an escalation matrix).
  • The reporting and communication framework for defects and user feedback is established.
Data Warehouse Test Scenarios
Data warehouse testing includes all typical kinds of testing, such as exception testing, boundary testing, stress testing, etc.
Real Life Simple Scenarios
Simple scenarios are relatively straightforward and can be the first step in understanding the health of the system. Examples are:
  • Extraction– Complete table Extraction from a core system with robust DBMS.
  • Transformation – Creation of simple derived attributes (creating complete bill amount from individual billing items) OR creating aggregates.
  • Loading – Loading a dimension set with fewer attributes and without any transformation during loading.
  • OLAP – Testing using 'Basic Functions'
Real Life Complex Scenarios
  • Extraction – Data extraction from an Excel sheet, filtering out customers that do not match the standard customer code.
  • Transformation – 'De-dup', 'integration'.
  • Loading – Loading dimensions with a large set of attributes.
  • OLAP – Testing the OLAP population.
Boundary Testing
These conditions test the extreme situations the data warehouse could face. For example:
  • Extraction – No data in the source system.
  • Transformation – Creating derived attribute with input figure being very large OR very small. (For example a % sales revenue figure for sales of USD 10 out of the total sales of USD one million)
Negative Testing
Checking on how the system handles the negative conditions:
  • Extraction – Wrong OR unexpected data in the table. (For example you place the wrong customer ID format, character fields in what should be numeric etc.)
  • Transformation – Having negative sales numbers, an age of 200 years, etc. This is important, as the transformation logic should work not only on what it expects but on everything it could face.
  • Loading – Having wrong data sets. For example, the data set for the 'location' dimension has two columns missing, does not exist, or contains null values. There should be some fundamental checks run by the loading system before it goes for bulk loading.
  • OLAP- Users entering wrong formulae.
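The fundamental pre-load checks mentioned under Loading can be sketched as follows (the 'location' schema here is illustrative, not a real table definition):

```python
REQUIRED_COLUMNS = {"location_id", "city", "state"}  # illustrative schema

def validate_dimension_rows(rows):
    """Fundamental pre-load checks to run before bulk loading starts:
    required columns are present and the key column is not null.
    Returns (row_index, reason) pairs for each rejected row."""
    errors = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - set(row)
        if missing:
            errors.append((i, "missing columns: %s" % sorted(missing)))
        elif row["location_id"] in (None, ""):
            errors.append((i, "null location_id"))
    return errors

rows = [
    {"location_id": "L1", "city": "Pune", "state": "MH"},   # good row
    {"location_id": None, "city": "Delhi", "state": "DL"},  # null key
    {"city": "Chennai"},                                    # columns missing
]
errors = validate_dimension_rows(rows)
```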
Full Production Simulation
This can be full-scale parallel testing, but it is something more than that. Whereas parallel testing is done in sync with production, production simulation does not necessarily have to be. One takes a backup of the source systems from an earlier date and runs the complete ETL and 'end user tools' operations to look at the results. This is typically a step before parallel testing is done: production simulation is more of a lab test by the technology team before the system is released to the full user view of parallel testing.

Software Testing interview questions




Explain the PDCA cycle.
PDCA stands for Plan-Do-Check-Act; it is commonly used for quality control.
Plan: Identify the aim and the procedure necessary to deliver the output.
Do: Implement the plan.
Check: Confirm that the result is as per the plan.
Act: Take appropriate action to deliver the expected outcome, which may involve repeating the cycle.
What are white-box, black-box and gray-box testing?
White-box testing: White-box testing involves thorough internal testing of the application. It requires knowledge of the code, and the chosen test cases verify whether the system is implemented as expected. It typically includes checking data flow, how exceptions and errors are handled, and whether the code produces the expected results.
E.g. In electrical appliances the internal circuit testing.
Black-box testing: Black-box testing is done at the outer level of the system. Test cases merely check whether the output is correct for the given input. The tester is not expected to know the internal flow or design of the system.
Gray-box testing: Gray-box testing is a combination of black-box and white-box testing: it involves access to the system, but the testing is done at the outer level. A little knowledge of the system is expected in gray-box testing.
Explain the difference between Latent and Masked Defect.
Latent defects are defects that remain in the system undetected and are identified only later. They can remain in the system for a long time, are likely to be present in various versions of the software, and may be detected after release.
E.g. February normally has 28 days; if the system did not consider leap years, the result is a latent defect.
A masked defect hides other defects in the system. E.g. there is a link to add an employee in the system, and on clicking this link you can also add a task for the employee. Suppose both functionalities have bugs, but the first bug (add an employee) goes unnoticed; because of this, the bug in add task is masked.
What is Big-bang waterfall model?
The waterfall model is also known as the Big-bang model because all modules following the waterfall model go through the cycle independently and are then put together. The model follows a sequence to develop a software application: it moves from one phase to the next, starting from requirement analysis, followed by design, implementation, testing, and finally integration and maintenance.
What is configuration Management?
Configuration management aims to establish consistency in an enterprise. This is attained by continuously updating the organization's processes, maintaining versioning, and handling the organization's entire network, hardware, and software components efficiently.
In software, Software Configuration management deals with controlling and tracking changes made to the software. This is necessary to allow easy accommodation of changes at any time.
What is Boundary value Analysis?
Test cases written for boundary value analysis are to detect errors or bugs which are likely to arise while testing for ranges of values at boundaries. This is to ensure that the application gives the desired output when tested for boundary values.
E.g. a text box can accept a minimum of 6 and a maximum of 50 characters. Boundary value testing will test with 5, 6, 50 and 51 characters.
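That example can be expressed as a small test script. A sketch, where `accepts()` is a hypothetical stand-in for the text box's real validator:

```python
def accepts(text):
    """Hypothetical validator for a text box allowing 6-50 characters."""
    return 6 <= len(text) <= 50

# Boundary value analysis exercises the values on and just outside
# each edge of the valid range: 5 and 51 must fail, 6 and 50 must pass.
expected = {5: False, 6: True, 50: True, 51: False}
results = {n: accepts("x" * n) for n in expected}
```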
What is Equivalence Partitioning?
Equivalence partitioning is a technique used in software testing which aims to reduce the number of test cases and choose the right test cases. This is achieved by identifying the “classes” or “groups” of inputs in such a way that each input value under this class will give the same result.
E.g. a software application designed for an airline has a special-offer functionality: the first two members from every city who book a ticket for a particular route get a discount. Here, one group of inputs could be 'all cities in India'.
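The same offer rule can illustrate how one representative input per equivalence class replaces exhaustive testing. A sketch, with `eligible_for_offer()` as a hypothetical stand-in for the real booking logic:

```python
def eligible_for_offer(booking_position):
    """Hypothetical offer rule: the first two bookings from a city for a
    given route get the discount."""
    return booking_position <= 2

# Two equivalence classes: positions 1-2 (discount) and 3+ (no discount);
# one representative input per class stands in for the whole class.
representatives = {"discount_class": 1, "no_discount_class": 5}
outcomes = {name: eligible_for_offer(pos)
            for name, pos in representatives.items()}
```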

Explain Random testing.
Random testing, as the name suggests, follows no particular approach. It is an ad hoc way of testing in which the tester randomly picks modules to test and feeds them random input values.
E.g. an output is produced by a particular combination of inputs. Hence, different and random inputs are used.
What is Monkey testing?
Monkey testing is a type of random testing with no specific test case written. It has no fixed perspective for testing. E.g. input random and garbage values in an input box.
Explain Software Process.
A software process, or software development process, is a method or structure to be followed for the development of software. Several tasks and activities take place in this process. Different processes, such as waterfall and iterative, exist; in all of them, tasks like analysis, coding, testing, and maintenance play an important role.
What is Maturity level?
The maturity level of a process defines the nature and maturity of the processes present in the organization. These levels help to understand and set a benchmark for the organization.
Five levels that are identified are:
Level 1: Ad hoc or Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimized


What is process area in CMMI?
Process areas in the Capability Maturity Model describe the features of product development. These process areas help to identify the level of maturity an organization has attained. They mainly include:
Project planning and monitoring
Risk Management
Requirements development
Process and Product quality assurance
Product integration
Requirement management
Configuration management
Explain about tailoring.
Tailoring a software process means amending it to meet the needs of the project. It involves altering the processes for different environments and is an ongoing activity. Factors like customer and end-user relationships and business goals must be kept in mind while tailoring, and the degree of tailoring required must be identified.
What are staged and continuous models in CMMI?
Staged models in CMMI focus on process improvement using stages or maturity levels. In the staged representation, each process area has one specific goal, and achieving a goal means improvement in the control and planning of the tasks associated with the process. The staged representation has 5 maturity levels.
The continuous model in CMMI follows a recommended order for approaching process improvement within each specified process area. It allows the user to select the order of improvement that best meets the organization's business objectives. The continuous representation has 6 capability levels.
Explain capability levels in continuous representation.
There are 6 capability levels for Continuous representation:
Level 0: Not performed
Level 1: Performed
Level 2: Managed
Level 3: Defined
Level 4: Quantitatively managed
Level 5: Optimizing
Each level has process areas. Each process area has specific goals to achieve. These processes are continuously improved to achieve the goals in a recommended order.
What is SCAMPI process?
SCAMPI (Standard CMMI Appraisal Method for Process Improvement) provides a benchmark relative to maturity models. It describes the requirements, activities, and processes associated with each process area. SCAMPI appraisals identify the flaws in current processes, give an idea of the areas for improvement, and determine capability and maturity levels.
What is the importance of PII in SCAMPI?
PII stands for Practice Implementation Indicator. As the name suggests, a PII serves as an indicator, or evidence, that a certain practice supporting a goal has been implemented. A PII could be a document that serves as proof.