LoadRunner
- VuGen (Virtual User Generator) – records and scripts virtual users.
- Controller – designs and runs load scenarios.
- Analysis – examines results after a test run.
It is commonly believed that the earlier a defect is found, the cheaper it is to fix.
SilkTest
- SilkTest is a tool designed specifically for regression and functional testing.
- It was developed by Segue Software Inc.
- It offers the flexible and robust 4Test scripting language.
Tracking Defects
- New => Open (or Rejected).
- Open => Fixed.
- Fixed => Closed (or Reopened => Open).
- Closed.
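A minimal sketch of this life cycle as a transition table in Python (the status names come from the list above; the helper function and the walk-through are invented for illustration):

```python
# Hypothetical defect life cycle; statuses and transitions mirror the list above.
ALLOWED = {
    "New": {"Open", "Rejected"},
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},
    "Reopened": {"Open"},
    "Rejected": set(),
    "Closed": set(),
}

def move(status, new_status):
    """Return new_status if the transition is legal, else raise ValueError."""
    if new_status not in ALLOWED[status]:
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

# Walk one defect through: New -> Open -> Fixed -> Reopened -> Open -> Fixed -> Closed
status = "New"
for nxt in ["Open", "Fixed", "Reopened", "Open", "Fixed", "Closed"]:
    status = move(status, nxt)
print(status)  # Closed
```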
Excel Template to Replicate QC
- Requirements (Name, Priority, Type).
- Test Plan (Attachment, Step Name, Description, Expected Results).
- Test Lab (Attachment, Run Name, Status, Host, Duration, Execution Date, Execution Time, Tester).
- Defects (Summary, Category, Detected By, Project, Severity, Reproducible, Subject, Detected on Date, Detected in Version, Status, Regression, Description).
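These sheets can be generated programmatically; below is a minimal sketch using the openpyxl library (the sheet and column names come from the list above; the output file name is arbitrary):

```python
from openpyxl import Workbook

# One worksheet per QC module, with the column headers listed above.
SHEETS = {
    "Requirements": ["Name", "Priority", "Type"],
    "Test Plan": ["Attachment", "Step Name", "Description", "Expected Results"],
    "Test Lab": ["Attachment", "Run Name", "Status", "Host", "Duration",
                 "Execution Date", "Execution Time", "Tester"],
    "Defects": ["Summary", "Category", "Detected By", "Project", "Severity",
                "Reproducible", "Subject", "Detected on Date",
                "Detected in Version", "Status", "Regression", "Description"],
}

wb = Workbook()
wb.remove(wb.active)  # drop the default empty sheet
for title, headers in SHEETS.items():
    ws = wb.create_sheet(title)
    ws.append(headers)  # write the header row
wb.save("qc_template.xlsx")
```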
HP Quality Center
- HP Quality Center (formerly Mercury TestDirector).
- HP Quality Center Starter Edition.
- HP Quality Center Enterprise.
- HP Quality Center Premier Edition.
Link for Downloading QC and QTP from HP
https://h10078.www1.hp.com/cda/hpdc/display/main/register.jsp
Tools
- QC (formerly Mercury TestDirector) is the standard repository; ClearQuest and Bugzilla are also used.
- WinRunner/QTP.
- LoadRunner.
Models
- Collaborative Model.
- IV&V Model (Independent Verification and Validation).
- Test Centre Model.
Is there a Build Verification Test or Build Acceptance Test?
Yes. A Build Verification Test (also called a Build Acceptance Test) is a short smoke-test pass run on each new build to confirm it is stable enough for further testing.
Verification and Validation
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issue lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed.
Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text.
One method of testing this is pseudo-localization; a minimal sketch follows the list below. Salient points:
- Use longer sentences, usually twice the normal length, to see whether layouts expand gracefully – pseudo-localization.
- Use longer words to see whether they get truncated or clipped – pseudo-localization.
- Check that characters of the target language render correctly.
- See whether provision for different currency and date formats is available – requires a code change.
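A minimal pseudo-localization sketch in Python (the accent mapping, padding character, and expansion factor are illustrative choices, not from any standard tool):

```python
# Map ASCII vowels to accented look-alikes so untranslated strings stand out.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudo_localize(text, expansion=2.0):
    """Accent the text, pad it to roughly `expansion` times its length,
    and bracket it so truncation is easy to spot in the UI."""
    accented = text.translate(ACCENTED)
    pad = "~" * max(0, int(len(text) * expansion) - len(text))
    return f"[{accented}{pad}]"

print(pseudo_localize("Save changes"))
# [Sàvé chàngés~~~~~~~~~~~~]
```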
SAS 70 Type II Audit – Data Security
SAS (Statement on Auditing Standards)
Different Types of Testing
- Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
- White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.
- Grey box testing – uses a combination of black box testing and white box testing.
- Unit testing – the most ‘micro’ scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
- Sanity testing or Smoke testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
- Incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
- Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
- Functional testing – black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing).
- System testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
- End-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
- Regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing approaches can be especially useful for this type of testing.
- Acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
- Alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
- Beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
- User acceptance testing – determining if software is satisfactory to an end-user or customer.
- Load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
- Stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
- Performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
- Load Testing => Test under the full expected load of users and transactions.
- Stress Testing => Test beyond the expected load (e.g., double it).
- Spike Testing => Very sudden ramp-up and ramp-down in a matter of minutes.
- Endurance Testing => Test over continuous periods of time, to check for memory leaks.
- Volume Testing => The database or interface file load is increased; the data grows while the number of users stays the same.
- Usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
- Install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
- Recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- Failover testing – typically used interchangeably with ‘recovery testing’.
- Security testing – testing how well the system protects against unauthorized internal or external access, wilful damage, etc.; may require sophisticated testing techniques.
- Application Security (Example: no hard-coded username and password + appropriate levels of user access).
- Physical Security (Example: secure ODC + access card).
- Data Security (Example: separate domain + access-controlled repository (SVN, VSS) + no external data-transfer devices).
- Remote Desktop (not mandatory).
- Information in the client's security document should be considered.
- Compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
- Exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
- Ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
- Context-driven testing – testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different from that for a low-cost computer game.
- Comparison testing – comparing software weaknesses and strengths to competing products.
- Mutation testing – a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine whether the 'bugs' are detected (see the sketch below). Proper implementation requires large computational resources.
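A toy Python illustration of mutation testing (the function, the mutant, and the test cases are all invented for this example; real mutation tools generate many mutants automatically):

```python
def max_of(a, b):
    """Original implementation under test."""
    return a if a >= b else b

def max_of_mutant(a, b):
    """Mutant: the comparison operator has been deliberately flipped."""
    return a if a <= b else b

def run_tests(fn):
    """Return True if every test case passes for the given implementation."""
    cases = [((1, 2), 2), ((5, 3), 5), ((4, 4), 4)]
    return all(fn(*args) == expected for args, expected in cases)

print(run_tests(max_of))         # True  - the original passes
print(run_tests(max_of_mutant))  # False - the test data 'kills' the mutant
```

If a mutant survives (all tests still pass), the test set has a gap; this is why thorough mutation testing is computationally expensive.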