SOFTWARE TESTING
Mr. Jay Prakash Maurya
OUTLINES
• Introduction to Testing
• Debugging
• Purpose and goal of Testing
• Dichotomies
• Testing and Debugging
• Model for Testing
• Consequences of Bugs
• Taxonomy of Bugs
PREREQUISITE
TESTING
Testing is the process of exercising or evaluating a system or
system components by manual or automated means to verify
that it satisfies specified requirements.
Debugging
• Debugging is the process of finding and fixing errors
or bugs in the source code of any software. When
software does not work as expected, computer
programmers study the code to determine why any
errors occurred. They use debugging tools to run the
software in a controlled environment, check the code
step by step, and analyze and fix the issue.
• MYTH: Good
programmers write
code without bugs.
(It’s wrong!!!)
• History shows that even
well-written programs
still have 1-3 bugs per
hundred statements.
PHASES IN A TESTER'S
MENTAL LIFE:
Phase 0: (Until 1956: Debugging Oriented)
Phase 1: (1957-1978: Demonstration Oriented)
Phase 2: (1979-1982: Destruction Oriented)
Phase 3: (1983-1987: Evaluation Oriented)
Phase 4: (1988-2000: Prevention Oriented)
PURPOSE OF
TESTING
To identify and show that the
program has bugs.
To show that the program/software
works.
To show that the program/software
doesn't work.
Goal of Testing
• Bug Prevention
(Primary Goal)
• Bug Discovery
(Secondary)
• Test Design
A bug manifests itself as a deviation from expected behaviour.
SOME DICHOTOMIES
TESTING VS. DEBUGGING
MODEL FOR TESTING
Environment Model: hardware and software (OS, linkage editor,
loader, compiler, utility routines).
Program Model: simple enough to make testing tractable, yet
complicated enough to exhibit unexpected behavior.
Bug Hypothesis:
 Benign Bug Hypothesis: bugs are nice, tame, and logical
 Bug Locality Hypothesis: a bug discovered within a component affects only that
component's behavior
 Control Bug Dominance: errors lie in the control structures
 Code / Data Separation: bugs respect the separation of code and data
 Lingua Salvator Est: language syntax and semantics eliminate bugs
 Corrections Abide: a corrected bug remains corrected
 Silver Bullets: some language, design method, representation, or environment
grants immunity from bugs
 Sadism Suffices: tough bugs yield to tough methodology and techniques
 Angelic Testers: testers are better at test design than programmers are at code design
TEST
Tests are formal procedures: inputs must be prepared,
outcomes predicted, tests documented, commands
executed, and results observed. All of these activities are
themselves subject to error.
 Unit / Component Testing:
 Integration Testing:
 System Testing:
ROLE OF MODELS:
The art of testing consists of creating, selecting,
exploring, and revising models. Our ability to go through
this process depends on the number of different models
we have at hand and their ability to express a program's
behavior.
TASK-1 [CLASS ACTIVITY]
Focus on types of bugs in the software development process and how to handle these bugs.
https://web.cs.ucdavis.edu/~rubio/includes/ase17.pdf
Software bug prediction using object-oriented metrics (ias.ac.in)
CONSEQUENCE OF BUGS
Damage depends on:
• Frequency
• Correction Cost
• Installation Cost
• Consequences
Importance ($) = Frequency × (Correction cost + Installation cost +
Consequential cost)
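As a sketch, the dollar formula above can be expressed directly. The sample figures below are hypothetical, chosen only to illustrate the calculation:

```python
def bug_importance(frequency, correction_cost, installation_cost,
                   consequential_cost):
    """Dollar importance of a bug class, per the formula above."""
    return frequency * (correction_cost + installation_cost
                        + consequential_cost)

# Hypothetical bug class: occurs 12 times, each occurrence costing
# $500 to correct, $200 to install the fix, and $1000 in consequences.
print(bug_importance(12, 500, 200, 1000))  # 20400
```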
Consequences of bugs:
• Mild
• Moderate
• Annoying
• Disturbing
• Serious
• Very Serious
• Extreme
• Intolerable
• Catastrophic
• Infectious
SOFTWARE TESTING METRICS
Process Metrics
Product Metrics
Project Metrics
Base Metrics
Calculated Metrics
TAXONOMY OF BUGS
There is no universally correct way to categorize bugs. The
taxonomy is not rigid.
A given bug can be put into one or another category depending
on its history and the programmer's state of mind.
The major categories are:
(1) Requirements, Features, and Functionality Bugs
(2) Structural Bugs
(3) Data Bugs
(4) Coding Bugs
(5) Interface, Integration, and System Bugs
(6) Test and Test Design Bugs.
REQUIREMENTS AND
SPECIFICATIONS BUGS:
Requirements, and the specifications developed from them, can be
incomplete, ambiguous, or self-contradictory. They can be
misunderstood or impossible to understand.
Even specifications without flaws may change while the design is in
progress; features are added, modified, and deleted.
Requirements, especially as expressed in specifications, are a major
source of expensive bugs.
The range is from a few percent to more than 50%, depending
on the application and environment.
What hurts most about these bugs is that they are the earliest to
invade the system and the last to leave.
FEATURE BUGS:
Specification problems usually create corresponding
feature problems.
A feature can be wrong, missing, or superfluous (serving
no useful purpose). A missing feature or case is easier to
detect and correct; a wrong feature can have deep
design implications.
Removing a feature might complicate the software,
consume more resources, and foster more bugs.
FEATURE INTERACTION
BUGS:
Providing correct, clear, implementable and testable feature
specifications is not enough.
Features usually come in groups of related features. The features of
each group and the interactions of features within the group are
usually well tested.
The problem is unpredictable interactions between feature groups,
or even between individual features. For example, suppose your
telephone provides both call holding and call forwarding; the
interaction between these two features may have bugs.
Every application has its peculiar set of features and a much bigger
set of unspecified feature-interaction potentials, and therefore a
much bigger potential for feature-interaction bugs.
CONTROL AND SEQUENCE
BUGS:
Control and sequence bugs include paths left out, unreachable
code, improper nesting of loops, incorrect loop-back or
loop-termination criteria, missing process steps, duplicated
processing, unnecessary processing, rampaging GOTOs,
ill-conceived (not properly planned) switches, spaghetti code, and,
worst of all, pachinko code.
Fortunately, this area is amenable to theoretical treatment, and
most control-flow bugs are easily tested for and caught in unit
testing.
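A minimal, hypothetical example of one bug from the list above, an incorrect loop-termination criterion, and exactly the kind of defect a unit test catches:

```python
def sum_first_n_buggy(values, n):
    """Intended to sum the first n items, but the loop bound is wrong."""
    total = 0
    for i in range(n + 1):  # off-by-one: iterates n+1 times, not n
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    """Correct version: slice takes exactly the first n items."""
    return sum(values[:n])

data = [1, 2, 3, 4]
# For n=2 the buggy loop sums 1+2+3 = 6 instead of 1+2 = 3.
```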
LOGIC BUGS:
Logic bugs are bugs in logic, especially those related to
misunderstanding how case statements and logic operators behave
singly and in combination.
They also include the evaluation of boolean expressions in deeply
nested IF-THEN-ELSE constructs.
If the bugs are part of logical (i.e., boolean) processing not related
to control flow, they are characterized as processing bugs.
If the bugs are part of a logical expression used to direct the
control flow (i.e., in a control-flow statement), they are
categorized as control-flow bugs.
PROCESSING BUGS:
Processing bugs include arithmetic bugs and bugs in
algebraic manipulation, mathematical function evaluation,
algorithm selection, and general processing.
Examples of processing bugs include incorrect conversion
from one data representation to another, ignoring
overflow, and improper use of greater-than-or-equal
instead of greater-than.
Although these bugs are frequent (about 12%), they tend
to be caught in good unit testing.
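A hypothetical sketch of the greater-than-or-equal example above; the scenario and limit value are invented for illustration:

```python
KG_LIMIT = 23  # hypothetical baggage limit

def is_overweight_buggy(kg):
    """Spec says: overweight means strictly OVER the limit."""
    return kg >= KG_LIMIT  # improper use of >= : flags exactly-at-limit

def is_overweight_fixed(kg):
    return kg > KG_LIMIT

# A bag weighing exactly 23 kg exposes the boundary bug:
# buggy -> True, fixed -> False.
```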
INITIALIZATION BUGS:
Initialization bugs are common. Initialization can be
improper or superfluous.
Superfluous initialization is generally less harmful but can
affect performance.
Typical initialization bugs include forgetting to initialize
variables before first use, assuming they are initialized
elsewhere, and initializing to the wrong format,
representation, or type.
Explicit declaration of all variables, as in Pascal, can
reduce some initialization problems.
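A small, hypothetical illustration of forgetting to initialize a variable before first use, where the bug only surfaces on one input path:

```python
def average_buggy(values):
    """'total' is only initialized on the non-empty path."""
    if values:
        total = 0
        for v in values:
            total += v
    # For an empty list, 'total' was never initialized:
    # referencing it raises UnboundLocalError.
    return total / len(values)

def average_fixed(values):
    total = 0  # initialized before first use, on every path
    for v in values:
        total += v
    return total / len(values) if values else 0.0
```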
DATA-FLOW BUGS AND
ANOMALIES:
Most initialization bugs are special cases of data-flow
anomalies.
A data-flow anomaly occurs where there is a path along
which we expect to do something unreasonable with
data, such as using an uninitialized variable, attempting
to use a variable before it exists, modifying a value and
then not storing or using the result, or initializing a
variable twice without an intermediate use.
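Two of the anomaly patterns above can be sketched in a few hypothetical lines; both functions "work" yet contain exactly the anomalies a data-flow analysis would flag:

```python
def double_define():
    x = 1   # defined...
    x = 2   # ...redefined with no intermediate use: the first
    return x  # definition is dead (a "dd" anomaly)

def unused_result():
    y = 40 + 2  # defined but never used afterwards: the
    return 0    # computation's result is silently discarded
```

Such anomalies often signal a real bug nearby (a misspelled variable, a missing `return`), which is why static analyzers report them even when the code runs.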
DATA BUGS
Data bugs include all bugs that arise from the specification of
data objects, their formats, the number of such objects, and
their initial values.
Data Bugs are at least as common as bugs in code, but they are
often treated as if they did not exist at all.
Code migrates to data: software is evolving toward programs in
which more and more of the control and processing functions
are stored in tables.
Because of this, there is an increasing awareness that bugs in
code are only half the battle and the data problems should be
given equal attention.
CODING BUGS:
Coding errors of all kinds can create any of the other kinds of
bugs.
Syntax errors are generally not important in the scheme of
things if the source-language translator has adequate syntax
checking.
If a program has many syntax errors, however, we should expect
many logic and coding bugs as well.
Documentation bugs are also considered coding bugs, as they
may mislead maintenance programmers.
INTERFACE, INTEGRATION,
AND SYSTEM BUGS:
External Interface
Internal Interface
Hardware Architecture
O/S Bug
Software Architecture
Control and sequence bugs
Resource management problems
Integration bugs
System bugs
TEST AND TEST DESIGN
BUGS:
Testing: testers have no immunity to bugs. Tests often require
complicated scenarios and databases; they require code (or its
equivalent) to execute, and consequently they can have bugs.
Test criteria: a test's verdict is trustworthy only if the
specification is correct, correctly interpreted and implemented,
and a proper test has been designed; an error in any of these
can make a good program appear to fail, or a bad one to pass.
TEST METRICS
Generating software test metrics is an important responsibility of the
Software Test Lead/Manager.
Test metrics are used to:
1. Decide on the next phase of activities, e.g., estimating the cost and
schedule of future projects.
2. Understand the kind of improvement required for the project to succeed.
3. Decide whether the process or technology needs to be modified, etc.
EXAMPLE OF TEST REPORT
How many test cases have been designed per requirement?
How many test cases are yet to be designed?
How many test cases are executed?
How many test cases are passed/failed/blocked?
How many test cases are not yet executed?
How many defects are identified & what is the severity of those
defects?
How many test cases are failed due to one particular defect? etc.
Example of Software Test Metrics Calculation
S No.  Testing Metric                                         Data retrieved during test case development
1      No. of requirements                                    5
2      Average number of test cases written per requirement   40
3      Total no. of test cases written for all requirements   200
4      Total no. of test cases executed                       164
5      No. of test cases passed                               100
6      No. of test cases failed                               60
7      No. of test cases blocked                              4
8      No. of test cases unexecuted                           36
9      Total no. of defects identified                        20
10     Defects accepted as valid by the dev team              15
11     Defects deferred for future releases                   5
12     Defects fixed                                          12
Percentage test cases executed
Test Case Effectiveness
Failed Test Cases Percentage
Blocked Test Cases Percentage
Fixed Defects Percentage
Accepted Defects Percentage
Defects Deferred Percentage
1. Percentage test cases executed = (No. of test cases executed / Total no. of test cases written) × 100
= (164 / 200) × 100
= 82%
2. Test Case Effectiveness = (Number of defects detected / Number of test cases run) × 100
= (20 / 164) × 100
= 12.2%
3. Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests executed) × 100
= (60 / 164) × 100
= 36.59%
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests executed) × 100
= (4 / 164) × 100
= 2.44%
5. Fixed Defects Percentage = (Total number of defects fixed / Number of defects reported) × 100
= (12 / 20) × 100
= 60%
6. Accepted Defects Percentage = (Defects accepted as valid by dev team / Total defects reported) × 100
= (15 / 20) × 100
= 75%
7. Defects Deferred Percentage = (Defects deferred for future releases / Total defects reported) × 100
= (5 / 20) × 100
= 25%
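The seven calculations above can be reproduced as a short sketch directly from the table's raw counts, which makes them easy to recheck when the counts change:

```python
# Raw counts from the worked example's table.
written, executed = 200, 164
passed, failed, blocked = 100, 60, 4
defects_found, defects_valid, deferred, defects_fixed = 20, 15, 5, 12

pct_executed  = executed / written * 100            # 82.0
effectiveness = defects_found / executed * 100      # ~12.2
pct_failed    = failed / executed * 100             # ~36.59
pct_blocked   = blocked / executed * 100            # ~2.44
pct_fixed     = defects_fixed / defects_found * 100 # 60.0
pct_accepted  = defects_valid / defects_found * 100 # 75.0
pct_deferred  = deferred / defects_found * 100      # 25.0
```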
QUESTION
Calculate all previous parameters.
https://reqtest.com/try-reqtest/
HOW TO ESTIMATE?
Software Testing Estimation
Techniques
•Work Breakdown Structure
•3-Point Software Testing Estimation
Technique
•Wideband Delphi technique
•Function Point/Testing Point Analysis
•Use-Case Point Method
•Percentage distribution
•Ad-hoc method
WBS
Divide the whole project into subtasks
Allocate each task to a team member
Effort Estimation For Tasks
 Functional Point Method
 Three Point Estimation
FUNCTION POINT
METHOD
•Total Effort: the effort to
completely test all the
functions.
•Total Function Points: the total
number of function points
across all modules.
•Estimate defined per
Function Point: the average
effort to complete one
function point. This value
depends on the productivity of
the team member who takes
charge of the task.
TOTAL EFFORT AND COST.
                 Weightage   # of Function Points   Total
Complex              5                3               15
Medium               3                5               15
Simple               1                4                4
Total Function Points                                 34
Estimate defined per point                             5
Total Estimated Effort (Person-Hours)                170
Estimate the cost for the tasks: suppose, on average, your team's
salary is $5 per hour. The time required for the "Create Test Specs"
task is 170 hours, so the cost for the task is 170 × 5 = $850.
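The table and cost step above reduce to a few lines; the weights, counts, and rates are the example's own figures:

```python
# Function-point weights and counts from the worked example.
weights = {"Complex": 5, "Medium": 3, "Simple": 1}
counts  = {"Complex": 3, "Medium": 5, "Simple": 4}

total_points = sum(weights[k] * counts[k] for k in weights)  # 34
effort_per_point = 5                 # person-hours per function point
effort_hours = total_points * effort_per_point               # 170
hourly_rate = 5                      # $/hour, as in the example
cost = effort_hours * hourly_rate                            # $850
```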
THREE POINT
ESTIMATION
The Test Manager provides three
values for each task: an optimistic
estimate (best case), a most likely
estimate, and a pessimistic estimate
(worst case).
From these, the parameter E, known
as the Weighted Average, is computed
as E = (Optimistic + 4 × Most Likely +
Pessimistic) / 6. It is the estimate for
the task, e.g., "Create the test
specification".
Because E is a probable rather than a
certain value, we must also know the
probability that the estimate is correct.
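These are the standard PERT three-point formulas; the sample values below are hypothetical person-hour figures for a "Create test specs" task, not numbers from the slides:

```python
def pert(optimistic, most_likely, pessimistic):
    """PERT weighted average and standard deviation for a task."""
    e = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6  # spread of the estimate
    return e, sd

# Hypothetical example: 120 / 170 / 200 person-hours.
e, sd = pert(120, 170, 200)  # E ≈ 166.7, SD ≈ 13.3
```

The standard deviation is what answers the slide's last point: quoting E ± SD (or E ± 2·SD for higher confidence) expresses how probable the estimate is, rather than presenting a single certain number.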
THANKS

More Related Content

Similar to Software Testing overview jay prakash maurya.pptx

Embedded software static analysis_Polyspace-WhitePaper_final
Embedded software static analysis_Polyspace-WhitePaper_finalEmbedded software static analysis_Polyspace-WhitePaper_final
Embedded software static analysis_Polyspace-WhitePaper_finalTAMILMARAN C
 
Testing In Software Engineering
Testing In Software EngineeringTesting In Software Engineering
Testing In Software Engineeringkiansahafi
 
Industrial Training in Software Testing
Industrial Training in Software TestingIndustrial Training in Software Testing
Industrial Training in Software TestingArcadian Learning
 
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...Shakas Technologies
 
Fundamentals of Testing - Andika Dwi Ary Candra
Fundamentals of Testing - Andika Dwi Ary CandraFundamentals of Testing - Andika Dwi Ary Candra
Fundamentals of Testing - Andika Dwi Ary CandraAnd11ka
 
Basic software-testing-concepts
Basic software-testing-conceptsBasic software-testing-concepts
Basic software-testing-conceptsmedsherb
 
Software techniques
Software techniquesSoftware techniques
Software techniqueshome
 
Software testing
Software testingSoftware testing
Software testingEng Ibrahem
 
Different Methodologies For Testing Web Application Testing
Different Methodologies For Testing Web Application TestingDifferent Methodologies For Testing Web Application Testing
Different Methodologies For Testing Web Application TestingRachel Davis
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts pptRathna Priya
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts pptRathna Priya
 
Developers Border Line: Unit Testing
Developers Border Line: Unit TestingDevelopers Border Line: Unit Testing
Developers Border Line: Unit TestingSikandar Ahmed
 
Defect effort prediction models in software
Defect effort prediction models in softwareDefect effort prediction models in software
Defect effort prediction models in softwareIAEME Publication
 

Similar to Software Testing overview jay prakash maurya.pptx (20)

Embedded software static analysis_Polyspace-WhitePaper_final
Embedded software static analysis_Polyspace-WhitePaper_finalEmbedded software static analysis_Polyspace-WhitePaper_final
Embedded software static analysis_Polyspace-WhitePaper_final
 
Testing In Software Engineering
Testing In Software EngineeringTesting In Software Engineering
Testing In Software Engineering
 
Industrial Training in Software Testing
Industrial Training in Software TestingIndustrial Training in Software Testing
Industrial Training in Software Testing
 
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...
A Novel Approach to Improve Software Defect Prediction Accuracy Using Machine...
 
stm f.pdf
stm f.pdfstm f.pdf
stm f.pdf
 
Fundamentals of Testing - Andika Dwi Ary Candra
Fundamentals of Testing - Andika Dwi Ary CandraFundamentals of Testing - Andika Dwi Ary Candra
Fundamentals of Testing - Andika Dwi Ary Candra
 
Basic software-testing-concepts
Basic software-testing-conceptsBasic software-testing-concepts
Basic software-testing-concepts
 
Software techniques
Software techniquesSoftware techniques
Software techniques
 
Software testing
Software testingSoftware testing
Software testing
 
Different Methodologies For Testing Web Application Testing
Different Methodologies For Testing Web Application TestingDifferent Methodologies For Testing Web Application Testing
Different Methodologies For Testing Web Application Testing
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts ppt
 
Testing concepts ppt
Testing concepts pptTesting concepts ppt
Testing concepts ppt
 
Debbuging
DebbugingDebbuging
Debbuging
 
Taxonomy for bugs
Taxonomy for bugsTaxonomy for bugs
Taxonomy for bugs
 
Software coding and testing
Software coding and testingSoftware coding and testing
Software coding and testing
 
Developers Border Line: Unit Testing
Developers Border Line: Unit TestingDevelopers Border Line: Unit Testing
Developers Border Line: Unit Testing
 
Software testing
Software testingSoftware testing
Software testing
 
Java Code Quality Tools
Java Code Quality ToolsJava Code Quality Tools
Java Code Quality Tools
 
Defect effort prediction models in software
Defect effort prediction models in softwareDefect effort prediction models in software
Defect effort prediction models in software
 
Software Testing
Software TestingSoftware Testing
Software Testing
 

Recently uploaded

Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureDinusha Kumarasiri
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceBrainSell Technologies
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmSujith Sukumaran
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaHanief Utama
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...stazi3110
 
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in NoidaBuds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in Noidabntitsolutionsrishis
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
Xen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfXen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfStefano Stabellini
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Andreas Granig
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based projectAnoyGreter
 
Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Hr365.us smith
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)jennyeacort
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作qr0udbr0
 
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxKnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxTier1 app
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...Christina Lin
 

Recently uploaded (20)

Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. Salesforce
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalm
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief Utama
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
 
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in NoidaBuds n Tech IT Solutions: Top-Notch Web Services in Noida
Buds n Tech IT Solutions: Top-Notch Web Services in Noida
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
 
Xen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdfXen Safety Embedded OSS Summit April 2024 v4.pdf
Xen Safety Embedded OSS Summit April 2024 v4.pdf
 
2.pdf Ejercicios de programación competitiva
2.pdf Ejercicios de programación competitiva2.pdf Ejercicios de programación competitiva
2.pdf Ejercicios de programación competitiva
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New Features
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作
 
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxKnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
 
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 

Software Testing overview jay prakash maurya.pptx

  • 2. OUTLINES • Introduction to Testing • Debugging • Purpose and goal of Testing • Dichotomies • Testing and Debugging • Model for Testing • Consequences of Bugs • Taxonomy of Bugs
  • 4. TESTING Testing is the process of exercising or evaluating a system or system components by manual or automated means to verify that it satisfies specified requirements. Debugging • Debugging is the process of finding and fixing errors or bugs in the source code of any software. When software does not work as expected, computer programmers study the code to determine why any errors occurred. They use debugging tools to run the software in a controlled environment, check the code step by step, and analyze and fix the issue.
  • 5. • MYTH: Good programmers write code without bugs. (It’s wrong!!!) • History says that even well written programs still have 1-3 bugs per hundred statements.
  • 6. PHASES IN A TESTER'S MENTAL LIFE: Phase 0: (Until 1956: Debugging Oriented) Phase 1: (1957-1978: Demonstration Oriented) Phase 2: (1979-1982: Destruction Oriented) Phase 3: (1983-1987: Evaluation Oriented) Phase 4: (1988-2000: Prevention Oriented)
  • 7. PURPOSE OF TESTING To identify and show program has bugs. To show program/ software works. To show program/software doesn’t work. Goal of Testing • Bug Prevention (Primary Goal) • Bug Discovery (Secondary) • Test Design Bug is manifested in deviation from Expected behaviour.
  • 10.
  • 11.
  • 13. Environment Model: Hardware Software (OS, linkage editor, loader, compiler, utility routines) Program Model: In order to simplify the order to test. Complicated enough to test unexpected behavior. Bug Hypothesis:  Benign Bug Hypothesis: bugs are nice, tame and logical  Bug Locality Hypothesis: bug discovered with in a component affects only that component's behavior  Control Bug Dominance: errors in the control structures  Code / Data Separation: bugs respect the separation of code and data  Lingua Salvatore Est: language syntax and semantics eliminate bugs  Corrections Abide: corrected bug remains corrected  Silver Bullets: Language, Design method, representation, environment grants immunity from bugs.  Sadism Suffices: Tough bugs need methodology and techniques.  Angelic Testers: testers are better at test design, Programmer for code design
  • 14. TEST Tests are formal procedures, Inputs must be prepared, Outcomes should predict, tests should be documented, commands need to be executed, and results are to be observed. All these errors are subjected to error.  Unit / Component Testing:  Integration Testing:  System Testing:
  • 15. ROLE OF MODELS: The art of testing consists of creating, selecting, exploring, and revising models. Our ability to go through this process depends on the number of different models we have at hand and their ability to express a program's behavior.
  • 16.
  • 17. TASK-1 [CLASS ACTIVITY] Focus on types of bugs in the software development process and how to handle these bugs. https://web.cs.ucdavis.edu/~rubio/includes/ase17.pdf Software bug prediction using object-oriented metrics (ias.ac.in)
  • 18. CONSEQUENCE OF BUGS Damage Depends on : • Frequency • Correction Cost • Installation Cost • Consequences Importance= ($) = Frequency * (Correction cost + Installation cost + Consequential cost) Consequences of bugs: • Mild • Moderate • Annoying • Disturbing • Serious • Very Serious • Extreme • Intolerable • Catastrophic • Infectious
  • 19. SOFTWARE TESTING METRICS Process Metrics Product Metrics Project Metrics Base Metrics Calculated Metrics
  • 20. TAXONOMY OF BUGS There is no universally correct way to categorize bugs. The taxonomy is not rigid. A given bug can be put into one or another category depending on its history and the programmer's state of mind. The major categories are: (1) Requirements, Features, and Functionality Bugs (2) Structural Bugs (3) Data Bugs (4) Coding Bugs (5) Interface, Integration, and System Bugs (6) Test and Test Design Bugs.
  • 21. REQUIREMENTS AND SPECIFICATIONS BUGS: Requirements and specifications developed from them can be incomplete ambiguous, or self-contradictory. They can be misunderstood or impossible to understand. The specifications that don't have flaws in them may change while the design is in progress. The features are added, modified and deleted. Requirements, especially, as expressed in specifications are a major source of expensive bugs. The range is from a few percentages to more than 50%, depending on the application 10 and environment. What hurts most about the bugs is that they are the earliest to invade the system and the last to leave.
  • 22. FEATURE BUGS: Specification problems usually create corresponding feature problems. A feature can be wrong, missing, or superfluous (serving no useful purpose). A missing feature or case is easier to detect and correct. A wrong feature could have deep design implications. Removing the features might complicate the software, consume more resources, and foster more bugs.
  • 23. FEATURE INTERACTION BUGS: Providing correct, clear, implementable, and testable feature specifications is not enough. Features usually come in groups of related features. The features of each group, and the interactions of features within the group, are usually well tested. The problem is unpredictable interactions between feature groups, or even between individual features. For example, your telephone is provided with call holding and call forwarding; the interaction between these two features may have bugs. Every application has its peculiar set of features and a much bigger set of unspecified feature-interaction potentials, which therefore result in feature interaction bugs.
  • 24. CONTROL AND SEQUENCE BUGS: Control and sequence bugs include paths left out, unreachable code, improper nesting of loops, incorrect loop-back or loop-termination criteria, missing process steps, duplicated processing, unnecessary processing, rampaging GOTOs, ill-conceived (poorly planned) switches, spaghetti code, and worst of all, pachinko code. Control-flow bugs are relatively well understood because this area is amenable (supportive) to theoretical treatment. Most control-flow bugs are easily tested for and caught in unit testing.
  • 25. LOGIC BUGS: Bugs in logic, especially those related to misunderstanding how case statements and logic operators behave singly and in combination. This also includes the evaluation of boolean expressions in deeply nested IF-THEN-ELSE constructs. If a bug is part of logical (i.e., boolean) processing not related to control flow, it is characterized as a processing bug. If it is part of a logical expression (i.e., in a control-flow statement) used to direct the control flow, it is categorized as a control-flow bug.
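A minimal sketch of such a logic bug: in Python, `and` binds tighter than `or`, so an ungrouped boolean expression in a control-flow test can misdirect the flow. The function names and the access rule here are hypothetical, chosen only to illustrate the operator-combination mistake.

```python
def can_access(is_admin, is_owner, is_locked):
    # Intended rule: admins and owners may access, unless the record is locked.
    # Buggy version: 'and' binds tighter than 'or', so admins bypass the lock.
    return is_admin or is_owner and not is_locked

def can_access_fixed(is_admin, is_owner, is_locked):
    # Parentheses make the intended grouping explicit.
    return (is_admin or is_owner) and not is_locked

print(can_access(True, False, True))        # True  -- admin wrongly bypasses the lock
print(can_access_fixed(True, False, True))  # False -- locked record denied as intended
```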
  • 26. PROCESSING BUGS: Processing bugs include arithmetic bugs, algebraic bugs, mathematical function evaluation, algorithm selection, and general processing. Examples include incorrect conversion from one data representation to another, ignoring overflow, improper use of greater-than-or-equal, etc. Although these bugs are frequent (12%), they tend to be caught in good unit testing.
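A sketch of one of the listed processing bugs, a mixed-up comparison operator at a boundary; the pass/fail cutoff rule is an assumption for illustration.

```python
def is_passing_buggy(score, cutoff=40):
    # Improper use of greater-than: a score exactly at the cutoff is rejected.
    return score > cutoff

def is_passing(score, cutoff=40):
    # Assumed specification: a score equal to the cutoff passes.
    return score >= cutoff

print(is_passing_buggy(40))  # False -- boundary value wrongly fails
print(is_passing(40))        # True
```

Boundary values like this are exactly what good unit testing exercises, which is why the slide notes these bugs tend to be caught there.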
  • 27. INITIALIZATION BUGS: Initialization bugs are common. Initialization can be improper or superfluous. Superfluous initialization is generally less harmful but can affect performance. Typical initialization bugs include: forgetting to initialize variables before first use, assuming they are initialized elsewhere, initializing to the wrong format, representation, or type, etc. Explicit declaration of all variables, as in Pascal, can reduce some initialization problems.
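A sketch of the most typical initialization bug named above, forgetting to initialize a variable before first use; the accumulator example is hypothetical.

```python
def total_buggy(values):
    for v in values:
        total += v   # bug: 'total' was never initialized before first use
    return total

def total_fixed(values):
    total = 0        # explicit initialization before first use
    for v in values:
        total += v
    return total

try:
    total_buggy([1, 2, 3])
except NameError as e:          # Python surfaces the bug as UnboundLocalError
    print("bug caught:", e)

print(total_fixed([1, 2, 3]))   # 6
```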
  • 28. DATA-FLOW BUGS AND ANOMALIES: Most initialization bugs are a special case of data-flow anomalies. A data-flow anomaly occurs where there is a path along which we expect to do something unreasonable with data, such as using an uninitialized variable, attempting to use a variable before it exists, modifying data and then not storing or using the result, or initializing twice without an intermediate use.
  • 29. DATA BUGS: Data bugs include all bugs that arise from the specification of data objects, their formats, the number of such objects, and their initial values. Data bugs are at least as common as bugs in code, but they are often treated as if they did not exist at all. Code migrates data: software is evolving towards programs in which more and more of the control and processing functions are stored in tables. Because of this, there is an increasing awareness that bugs in code are only half the battle and that data problems should be given equal attention.
  • 30. CODING BUGS: Coding errors of all kinds can create any of the other kinds of bugs. Syntax errors are generally not important in the scheme of things if the source language translator has adequate syntax checking. If a program has many syntax errors, however, we should expect many logic and coding bugs as well. Documentation bugs are also considered coding bugs, since they may mislead maintenance programmers.
  • 31. INTERFACE, INTEGRATION, AND SYSTEM BUGS: External interface bugs, internal interface bugs, hardware architecture bugs, operating system bugs, software architecture bugs, control and sequence bugs, resource management problems, integration bugs, and system bugs.
  • 32. TEST AND TEST DESIGN BUGS: Testing: testers have no immunity to bugs. Tests require complicated scenarios and databases; they require code or its equivalent to execute, and consequently they can have bugs. Test criteria: even if the specification is correct, correctly interpreted and implemented, and a proper test has been designed, the criterion by which the software's behavior is judged may still be incorrect or impossible to apply.
  • 33. TEST METRICS Generating software test metrics is one of the most important responsibilities of the software test lead/manager. Test metrics are used to: 1. Make decisions for the next phase of activities, such as estimating the cost and schedule of future projects. 2. Understand the kind of improvement required for the project to succeed. 3. Decide on the process or technology to be modified, etc.
  • 34. EXAMPLE OF TEST REPORT How many test cases have been designed per requirement? How many test cases are yet to be designed? How many test cases have been executed? How many test cases passed/failed/were blocked? How many test cases are not yet executed? How many defects were identified, and what is their severity? How many test cases failed due to one particular defect? etc.
  • 35. Example of Software Test Metrics Calculation — data retrieved during test case development:
(1) No. of requirements: 5
(2) Average number of test cases written per requirement: 40
(3) Total no. of test cases written for all requirements: 200
(4) Total no. of test cases executed: 164
(5) No. of test cases passed: 100
(6) No. of test cases failed: 60
(7) No. of test cases blocked: 4
(8) No. of test cases unexecuted: 36
(9) Total no. of defects identified: 20
(10) Defects accepted as valid by the dev team: 15
(11) Defects deferred for future releases: 5
(12) Defects fixed: 12
Metrics computed from this data: percentage test cases executed, test case effectiveness, failed test cases percentage, blocked test cases percentage, fixed defects percentage, accepted defects percentage, defects deferred percentage.
  • 36.
1. Percentage test cases executed = (No. of test cases executed / Total no. of test cases written) x 100 = (164 / 200) x 100 = 82
2. Test case effectiveness = (No. of defects detected / No. of test cases run) x 100 = (20 / 164) x 100 = 12.2
3. Failed test cases percentage = (Total no. of failed test cases / Total no. of tests executed) x 100 = (60 / 164) x 100 = 36.59
4. Blocked test cases percentage = (Total no. of blocked tests / Total no. of tests executed) x 100 = (4 / 164) x 100 = 2.44
5. Fixed defects percentage = (Total no. of defects fixed / No. of defects reported) x 100 = (12 / 20) x 100 = 60
6. Accepted defects percentage = (Defects accepted as valid by dev team / Total defects reported) x 100 = (15 / 20) x 100 = 75
7. Defects deferred percentage = (Defects deferred for future releases / Total defects reported) x 100 = (5 / 20) x 100 = 25
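The seven calculations above can be reproduced in a few lines; the counts are exactly those from the slide's table.

```python
# Raw counts from the example table (slide 35).
data = {
    "written": 200, "executed": 164, "passed": 100, "failed": 60,
    "blocked": 4, "defects": 20, "accepted": 15, "deferred": 5, "fixed": 12,
}

# Each metric is a ratio of two counts, expressed as a percentage.
metrics = {
    "Executed %":         data["executed"] / data["written"] * 100,
    "Effectiveness":      data["defects"] / data["executed"] * 100,
    "Failed %":           data["failed"] / data["executed"] * 100,
    "Blocked %":          data["blocked"] / data["executed"] * 100,
    "Fixed defects %":    data["fixed"] / data["defects"] * 100,
    "Accepted defects %": data["accepted"] / data["defects"] * 100,
    "Deferred %":         data["deferred"] / data["defects"] * 100,
}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```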
  • 39. HOW TO ESTIMATE? Software Testing Estimation Techniques •Work Breakdown Structure •3-Point Software Testing Estimation Technique •Wideband Delphi technique •Function Point/Testing Point Analysis •Use – Case Point Method •Percentage distribution •Ad-hoc method
  • 40. WBS
  • 41. Divide the whole project task into subtasks. Allocate each task to a team member. Effort estimation for tasks:  Function Point Method  Three Point Estimation
  • 42. FUNCTION POINT METHOD •Total Effort: the effort to completely test all the functions. •Total Function Points: the total number of function points across all modules. •Estimate defined per Function Point: the average effort to complete one function point. This value depends on the productivity of the member who takes charge of the task.
  • 43. TOTAL EFFORT AND COST
Function type | Weightage | No. of function points | Total
Complex | 5 | 3 | 15
Medium | 3 | 5 | 15
Simple | 1 | 4 | 4
Total function points: 34
Estimate defined per point: 5
Total estimated effort (person-hours): 170
Estimating the cost of the task: suppose, on average, your team's salary is $5 per hour. The time required for the "Create Test Specs" task is 170 hours. Accordingly, the cost of the task is 5 * 170 = $850.
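The table's arithmetic can be checked in a short sketch; the weights, counts, estimate-per-point, and $5/hour rate are all taken from the slides.

```python
# (weightage, number of function points) per complexity class, from the table.
function_points = {"Complex": (5, 3), "Medium": (3, 5), "Simple": (1, 4)}
hours_per_point = 5   # estimate defined per function point
rate_per_hour = 5     # average team salary in dollars per hour

total_points = sum(weight * count for weight, count in function_points.values())
effort_hours = total_points * hours_per_point
cost = effort_hours * rate_per_hour

print(total_points, effort_hours, cost)  # 34 170 850
```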
  • 44. THREE POINT ESTIMATION The test manager provides three values: what happens in the optimal case (optimistic), what is most likely, and what we think would be the worst-case scenario (pessimistic). From these, the parameter E, known as the weighted average, is computed; it is the estimation of the task "Create the test specification". Since it is a possible, not a certain, value, we must also know the probability that the estimation is correct.
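The weighted average E referred to above is conventionally the PERT formula E = (O + 4M + P) / 6, with standard deviation SD = (P - O) / 6 quantifying the uncertainty; the sample effort values below are assumed for illustration.

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    # PERT weighted average and standard deviation (conventional formulas).
    e = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return e, sd

# Assumed sample effort values (person-hours) for "Create the test specification".
e, sd = three_point_estimate(120, 170, 250)
print(f"E = {e:.1f} +/- {sd:.1f} person-hours")  # E = 175.0 +/- 21.7 person-hours
```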