test basics

ARCHIVE FOR THE ‘TESTING BASICS’ CATEGORY


Difference between Re-testing and Regression testing:

Re-Testing:

  • When a tester finds a bug, reports it to the developers, and the developers fix it, then testing only the test case in which the bug was found (with the same or different data) is known as re-testing.
  • Re-testing requires re-running the failed test cases.
  • Re-testing is planned, based on the defect fixes listed in the build notes and documents.

 Regression Testing:

  • After a modification or bug fix, when the tester re-executes the test case in which the bug was found along with all (or a specified set of) the test cases executed earlier, it is known as regression testing. The aim of this testing is to ensure that the bug fix does not affect the test cases that previously passed.
  • It is an important activity performed on modified software to provide confidence that the changes are correct and do not affect other functionality and components.
  • Regression testing is generic and may not be always specific to defect fixes.
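As a rough sketch only (assuming a Python/pytest suite; the login stub below is hypothetical), re-testing corresponds to re-running just the failed case, while regression testing re-runs the wider suite of previously passed cases:

import pytest

def login(user, password):
    # Stand-in for the real application code under test
    return password == "secret"

@pytest.mark.regression
def test_login_with_valid_credentials():
    # Previously passed case, re-run as part of the regression suite
    assert login("alice", "secret") is True

@pytest.mark.regression
def test_login_rejects_wrong_password():
    # The case in which the bug was found; re-testing re-executes only this case
    assert login("alice", "wrong") is False

Running "pytest --lf" repeats only the previously failed cases (re-testing), while "pytest -m regression" runs the whole marked suite to confirm the fix has not broken anything that passed earlier (regression testing).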

Difference between System Testing and System Integration Testing:

1. Integration testing is testing in which individual software modules are combined and tested as a group, while system testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

2. Integration testing tests the interfaces between modules; it can be top-down, bottom-up, or big bang. System testing tests end-to-end business scenarios in an environment similar to the production environment.

3. System testing is conducted at the final level, while integration testing is done each time modules are combined or a new module needs to be integrated with the system.

4. System testing is high-level testing while integration testing is low-level testing. In simple words, system testing starts only after integration testing is complete, not vice versa.

5. Test cases for integration testing are created with the express purpose of exercising the interfaces between components or modules, while test cases for system testing are developed to simulate real-life scenarios.

6. For example, if an application has 8 modules, testing the entire application with all 8 modules combined is system testing. If the application interacts with other applications (external systems) to retrieve or send data, testing it together with those other applications or external systems is integration testing (or system integration testing).
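As a minimal illustration (the pricing and checkout functions below are hypothetical stand-ins, not a real application), an integration test exercises the interface between two modules, while a system-level test exercises a complete business scenario end to end:

def price_items(items):
    # Stand-in "pricing" module: items are (quantity, unit_price) pairs
    return sum(qty * unit for qty, unit in items)

def checkout(cart):
    # Stand-in "checkout" module that depends on the pricing module
    total = price_items(cart["items"])
    return {"total": total, "status": "CONFIRMED" if total > 0 else "REJECTED"}

def test_checkout_uses_pricing_interface():
    # Integration test: verifies the module-to-module interface
    assert checkout({"items": [(2, 10.0)]})["total"] == 20.0

def test_place_order_end_to_end():
    # System-level test: verifies the full business scenario
    order = checkout({"items": [(1, 5.0), (3, 2.0)]})
    assert order["status"] == "CONFIRMED"

In a real project the system test would run against the fully deployed, production-like environment rather than in-process functions; the sketch only shows how the test cases differ in intent.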


Difference between Sanity and Smoke Testing:

Smoke Testing:

  • When a build is received, smoke testing is done to determine whether the build is stable enough for further testing.
  • Smoke testing is a wide approach in which all areas of the software application are tested without going into depth.
  • Test cases for smoke testing can be manual or automated.
  • A smoke test is designed to touch each and every part of the application in a cursory way.
  • Smoke testing is shallow and wide.
  • Smoke testing is conducted to verify that the most crucial functions of a program work, without bothering with finer details.
  • Smoke testing is like a general health check-up.

Sanity Testing:

  • After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected.
  • Sanity testing exercises only a particular component of the entire system.
  • A sanity test is usually unscripted, carried out without formal test scripts or test cases.
  • Sanity testing is narrow and deep.
  • Sanity testing verifies that a particular functionality or bug fix works as intended, checking the selected features in depth rather than breadth-first.
  • Sanity testing is like a specialized health check-up.
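A minimal sketch of how a smoke suite is often tagged and run (assuming pytest; the stubbed functions are hypothetical stand-ins for the real application):

import pytest

def app_starts():
    # Stand-in for launching the application under test
    return True

def homepage_status():
    # Stand-in for requesting the landing page
    return 200

@pytest.mark.smoke
def test_application_starts():
    assert app_starts()

@pytest.mark.smoke
def test_homepage_responds():
    assert homepage_status() == 200

Custom markers such as smoke are normally registered in pytest.ini, and "pytest -m smoke" on every new build then executes only these wide-but-shallow checks; a sanity pass, by contrast, would drill into just the component that changed.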

Test Plan:

A test plan is a high-level document that describes how testing will be performed. It is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will do which test.

The plan typically contains a detailed understanding of what the eventual workflow will be.

Master test plan: A test plan that typically addresses multiple test levels.

Phase test plan: A test plan that typically addresses one test phase.

A test plan template contains the following components:

1. Introduction—

A brief summary of the product being tested. Outline all the functions at a high level.

  • Overview of This New System
  • Purpose of this Document
  • Objectives of System Test

2. Resource Requirements

  • Hardware: List of hardware requirements
  • Software: List of software requirements, such as the primary and secondary OS
  • Test Tools: List of tools that will be used for testing
  • Staffing

 3. Responsibilities

List of QA team members and their responsibilities

4. Scope—

  • In Scope
  • Out of Scope

 5. Training—

List of trainings required

6. References—

List the related documents, with links to them if available, including the following:

  • Project Plan
  • Configuration Management Plan

7. Features To Be Tested / Test Approach—

  • List the features of the software/product to be tested
  • Provide references to the Requirements and/or Design specifications of the features to be tested

8. Features Not to Be Tested—

  • List the features of the software/product which will not be tested.
  • Specify the reasons these features won’t be tested.

9. Test Deliverables—

  • List of the test cases/matrices or their location
  • List of the features to be automated

10. Approach—

  • Mention the overall approach to testing.
  • Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

11. Dependencies—

  • Personnel Dependencies
  • Software Dependencies
  • Hardware Dependencies
  • Test Data & Database

12. Test Environment—

  • Specify the properties of test environment: hardware, software, network etc.
  • List any testing or related tools.

13. Approvals—

  • Specify the names and titles of all persons who must approve this plan.
  • Provide space for signatures and dates.

14. Risks and Risk Management Plans—

  • List the risks that have been identified.
  • Specify the mitigation plan and the contingency plan for each risk.

15. Test Criteria—

  • Entry Criteria
  • Exit Criteria
  • Suspension Criteria

16. Estimate—

  • Size
  • Effort
  • Schedule
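As a loose illustration only (the field names below simply mirror the template sections above and are not a formal standard), the skeleton of a test plan can also be kept as structured data so it can be versioned alongside the test assets:

# Illustrative test plan skeleton; keys mirror the template sections above.
test_plan = {
    "introduction": {"overview": "", "purpose": "", "objectives": ""},
    "resource_requirements": {"hardware": [], "software": [], "test_tools": [], "staffing": []},
    "responsibilities": {},
    "scope": {"in_scope": [], "out_of_scope": []},
    "features_to_be_tested": [],
    "features_not_to_be_tested": [],
    "test_deliverables": [],
    "approach": {"levels": [], "types": [], "methods": []},
    "dependencies": {"personnel": [], "software": [], "hardware": [], "test_data": []},
    "test_environment": {"hardware": [], "software": [], "network": [], "tools": []},
    "risks": [],  # each entry carries a mitigation and a contingency plan
    "test_criteria": {"entry": [], "exit": [], "suspension": []},
    "estimate": {"size": None, "effort": None, "schedule": None},
}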

Difference between Load Testing and Stress Testing:

  • Testing the application with the maximum expected number of users and inputs is defined as load testing, while testing the application beyond the maximum number of users and inputs is defined as stress testing.
  • In load testing we measure the system's performance under a given volume of users, while in stress testing we find the breaking point of the system.
  • Load testing tests the application against given load requirements, which may include any of the following criteria:
    • Total number of users
    • Response time
    • Throughput
    • Parameters indicating the state of the servers/application
  • Stress testing tests the application under unexpected load. It includes:
    • Virtual users (Vusers)
    • Think time

Example:

If an application is built for 500 users, then for load testing we test with up to 500 users, and for stress testing we test with more than 500.
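The following is only a minimal sketch of the idea in Python (the URL is a hypothetical placeholder; real load and stress tests normally use a dedicated tool such as JMeter or LoadRunner): it fires a given number of concurrent virtual users at an endpoint and records response times and errors.

# Minimal load/stress driver sketch (standard library only).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/health"  # hypothetical endpoint of the system under test

def virtual_user(_):
    start = time.perf_counter()
    try:
        with urlopen(URL, timeout=5) as resp:
            ok = (resp.status == 200)
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

def run(users):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(virtual_user, range(users)))
    errors = sum(1 for ok, _ in results if not ok)
    avg = sum(t for _, t in results) / len(results)
    print(f"{users} users: average response {avg:.3f}s, errors {errors}")

run(500)  # load test: up to the designed capacity
run(800)  # stress test: beyond the designed capacity, looking for the breaking point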


Systems Development Life Cycle (SDLC):

The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project. To manage these stages, a number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, etc.

Stages or different phases of SDLC are mentioned below:

  • Project planning, feasibility study
  • Systems analysis, requirements definition
  • Systems design
  • Implementation
  • Integration and testing
  • Acceptance, installation, deployment
  • Maintenance

Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.

Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.

Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudo code and other documentation.

Implementation: The code is written by developers in this phase.

Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.

Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform, and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.


Waterfall Model:

The waterfall model is a sequential software development model in which development is seen as flowing steadily downwards (like a waterfall) through several phases.

The waterfall model is meant to work in a systematic way, taking the production of the software from the basic steps downward toward increasing detail, just like a waterfall that begins at the top of a cliff and flows downward but never backward.

Different Phases of Waterfall Model:

Definition Study / Analysis: During this phase, research is conducted, including brainstorming about the software: what it is going to be and what purpose it is going to fulfil.

Basic Design: Once the first phase has been successfully completed and a well-thought-out plan for the software development has been laid out, the next step is to formulate the basic design of the software on paper.

Technical Design / Detail Design: After the basic design is approved, a more elaborate technical design can be planned. Here the functions of each part are decided and the engineering units are defined, for example modules, programs, etc.

Construction / Implementation: In this phase the source code of the programs is written.

Testing: In this phase, the whole design and its construction are tested to check their functionality. If there are any errors, they will surface at this point of the process.

Integration: In this phase, the system is put into use after it has been successfully tested.

Management and Maintenance: Maintenance and management are needed to ensure that the system continues to perform as desired.

Advantages of Waterfall Model:

  • The waterfall model is simple to implement, and the amount of resources required for it is minimal.
  • This methodology is preferred in projects where quality is more important than schedule or cost.
  • Documentation is produced at every stage of the software's development, which makes the product design procedure simpler to understand.
  • After every major stage of coding, testing is done to check that the code runs correctly.

Disadvantages of Waterfall Model:

  • Real projects rarely follow the sequential flow, and iterations in this model are handled indirectly. These changes can cause confusion as the project proceeds.
  • In this model the software and hardware requirements are frozen. But as technology changes at a rapid pace, such freezing is not advisable, especially in long-term projects.
  • Even a small change in any previous stage can cause big problems for subsequent phases, as all phases are dependent on each other.
  • Going back a phase or two can be a costly affair.

 

Bug Life cycle

Posted: September 3, 2012 in Testing Basics
Tags: bug life cycle, life of bug, various stages of bug

Bug Life cycle:

The bug life cycle is the journey of a defect from its identification to its closure. The life cycle varies from organization to organization.

The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of Various Stages:

  1. New: The tester finds a defect and posts it with the status NEW. This means that the bug is not yet approved.
  2. Open: After the tester has posted a bug, the test lead confirms that the bug is genuine and changes the state to "OPEN".
  3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".
  4. Test: Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to "TEST", which indicates that the bug has been fixed and handed over to the testing team.
  5. Deferred: If it is decided that a valid NEW or ASSIGNED defect will be fixed in an upcoming release instead of the current one, it is DEFERRED. The defect is ASSIGNED again when the time comes.
  6. Rejected: If the reported bug is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.
  7. Duplicate: If the bug is reported twice, or two bugs describe the same underlying issue, the status of one of them is changed to "DUPLICATE".
  8. Verified: If the tester / test lead finds that the defect is indeed fixed and is no longer of any concern, it is VERIFIED.
  9. Reopened: If the tester finds that the 'fixed' bug is in fact not fixed, or only partially fixed, it is reassigned to the developer who 'fixed' it. A REOPENED bug needs to go through the cycle again.
  10. Closed: Once the bug is fixed, it is tested by the tester. If the tester finds that the bug no longer exists in the software, he changes the status of the bug to "CLOSED". This state means that the bug is fixed, tested, and approved.
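As an illustration, the allowed transitions can be sketched as a small state machine. The states mirror the list above, but the exact transition rules below are an assumption for the example; real workflows differ between organizations.

# Illustrative bug state machine; transition rules vary by organization.
ALLOWED_TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "CLOSED":    set(),
}

def move(bug, new_state):
    # Change a bug's state only if the transition is allowed
    if new_state not in ALLOWED_TRANSITIONS[bug["state"]]:
        raise ValueError(f"Cannot move from {bug['state']} to {new_state}")
    bug["state"] = new_state
    return bug

bug = {"id": 101, "state": "NEW"}
for step in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    move(bug, step)
print(bug)  # {'id': 101, 'state': 'CLOSED'}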

 


STLC (Software Test Life Cycle):

The software testing life cycle (STLC) identifies what test activities to carry out; in other words, the process of testing software in a well-planned and systematic way is known as the software testing life cycle. The STLC consists of six different phases.

1. Requirements Analysis:

In this phase testers analyze the customer requirements and work with developers during the design phase to see which requirements are testable.

2. Test Planning:

In this phase all the planning for testing is done: what needs to be tested, how the testing will be done, the test strategy to be followed, what the test environment will be, what test methodologies will be followed, hardware and software availability, resources, risks, etc. During the planning stage, a team of senior-level people produces a high-level outline of the test plan. The test plan describes the following:

  • Scope of testing
  • Identification of resources
  • Identification of test strategies
  • Identification of risks
  • Time schedule

3. Test Case Development:

In this phase, test cases are created by the tester. In the case of automation testing, test scripts are created as well. This phase also involves the following activities:

  • Revision & finalization of the matrix for functional validation
  • Revision & finalization of the testing environment
  • Review and baseline of test cases and scripts

4. Test Environment Setup:

This phase involves the following activities:

  • Understand the required architecture, environment set-up.
  • Prepare hardware and software requirement list
  • Finalize connectivity requirements
  • Setup test Environment and test data
  • Prepare environment setup checklist

5. Test Execution and Bug Reporting:

In this phase test cases are executed and defects are reported in a bug-tracking tool. After the developers fix the bugs reported by the tester, the tester conducts regression testing to ensure that each bug has been fixed and has not affected any other areas of the software.

6. Test Cycle closure:

This phase involves the following activities:

  • Track the defects to closure
  • Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives
  • Prepare test metrics based on the above parameters
  • Prepare the test closure report
  • Analyze test results to find the defect distribution by type and severity (see the sketch below)
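As a small illustration of the last activity (the defect records below are made-up examples, not real project data), the distribution by type and severity can be tallied directly from an exported defect list:

# Tally defect distribution by type and by severity.
from collections import Counter

defects = [
    {"id": 1, "type": "Functional",  "severity": "High"},
    {"id": 2, "type": "UI",          "severity": "Low"},
    {"id": 3, "type": "Functional",  "severity": "Medium"},
    {"id": 4, "type": "Performance", "severity": "High"},
]

by_type = Counter(d["type"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)
print(by_type)      # Counter({'Functional': 2, 'UI': 1, 'Performance': 1})
print(by_severity)  # Counter({'High': 2, 'Low': 1, 'Medium': 1})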

Test Metrics

Posted: August 30, 2012 in Testing Basics
Tags: fault coverage, metrics, tests, variance

Test Metrics:

The objective of test metrics is to capture the planned and actual quantities of effort, time, and resources required to complete all the phases of testing of the software project. They provide a measure of the percentage of the software tested at any point during testing.
Test metrics should cover basically 3 things:
1. Test coverage
2. Time for one test cycle
3. Convergence of testing

There are various types of test metrics, and different organizations use different ones.

Functional test coverage: It can be calculated as:
FC = (Number of test requirements covered by test cases) / (Total number of test requirements)

Schedule Variance: Schedule variance indicates how much ahead of or behind schedule the testing is. It can be calculated as:
SV = (Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date + 1) * 100

A high schedule variance may signify poor estimation. A low schedule variance may signify correct estimation and clear, well-understood requirements.

Effort Variance: Effort may be measured in person-hours, person-days, or person-months. Effort variance is computed for all tasks completed during a period. It can be calculated as:
EV = (Actual Effort - Estimated Effort) / (Estimated Effort) * 100%

A high positive value of effort variance may signify optimistic estimation, changing business processes, a high learning curve, or a new technology and/or functional area.
A high negative value of effort variance may signify pessimistic estimation or excessive buffering, an efficient and skilful project team, a high level of componentization and reusability, or clear plans and schedules.
A low value of effort variance may signify accuracy in estimation, timely availability of resources, and no creeping requirements.

Defect Age (In Time): Defect age is used to calculate the time from the introduction of a defect to its detection, averaged over all defects:
Average Defect Age = Sum of (Phase Detected - Phase Introduced) / Number of Defects

On-Time Delivery: This metric sheds light on the ability to meet customer commitments. On-time delivery may be tracked during the course of the project based on the actual delivery dates and the planned commitments for deliveries due during a period.
OTD = (Number of Deliveries on Time / Total Number of Due Deliveries) * 100

A low value of on-time delivery may signify poor planning and tracking, delays on account of the customer, incorrect estimation, or a project risk having occurred.
A high value of on-time delivery may signify good planning, tracking, and foresight with high responsiveness for immediate corrective action, a receptive customer, high commitment from the team, and good estimation.

Test Cost: It is used to find the resources consumed in testing.
TC = Test Cost / Total System Cost
This metric identifies the proportion of resources used in the testing process.
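A short sketch of how a few of these formulas compute in practice (all dates and figures below are made-up inputs for illustration):

# Worked examples of the schedule variance, effort variance and
# on-time delivery formulas above; all inputs are illustrative.
from datetime import date

def schedule_variance(planned_start, planned_end, actual_end):
    return (actual_end - planned_end).days / ((planned_end - planned_start).days + 1) * 100

def effort_variance(estimated, actual):
    return (actual - estimated) / estimated * 100

def on_time_delivery(on_time, total_due):
    return on_time / total_due * 100

print(schedule_variance(date(2012, 8, 1), date(2012, 8, 31), date(2012, 9, 5)))  # about 16.1 (behind schedule)
print(effort_variance(estimated=100, actual=120))                                # 20.0 (over the estimate)
print(on_time_delivery(on_time=9, total_due=10))                                 # 90.0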
