Paul Gerrard

Welcome to a Masterclass in:

Part 1 – Test Modelling and Coverage

  • Learn how modelling is central to all testing.
  • How to select and use models.
  • How to use models to explain and justify what you do.
  • How coverage derives from models and how tools can support test design.

Paul explains how we as testers can use models to explain the complexity of software testing. He will walk through test design and models and equip us to put them to use, making tests more automated and easier to explain to managers and other interested parties.

Part 2 – Problem Solving for Testers

  • How to collect and analyse evidence to reach reliable conclusions.
  • Be a Sherlock Holmes: when you have eliminated the impossible, whatever remains, however improbable, must be the truth.
  • Learn from others’ mistakes – hear and discuss stories on how and why it is easy to waste time exploring dead ends.

Paul explains and shows how we can be deceived by our findings, and how we can improve our search for the root cause of defects. Among other things, you will learn how to diagnose defects systematically, how to take control of what can be controlled, and how to solve problems together as a team.

This masterclass is for:

– Testers and developers – all levels
– Test leads
– Test managers

Sign up here:

Full course description

“Testing is a process in which we create mental models of the environment, the program, human nature and the tests themselves.” Boris Beizer said that in 1990, but the idea that testers use models is much older than that. For almost everything humans do – that involves complexity – we create models to simplify, to scope, to mechanise, to understand.

Models are an essential part of being human. Taking a few steps requires us to understand the configuration of all of the larger bones and joints of our bodies and the tensions in around 100 muscles. To take a single step, our brains must grasp all this, send an orchestrated set of nerve impulses to all these muscles, and calculate, calibrate and recalculate – not just second by second, but much faster than that.

It takes huge processing power to control mechanical robots that simulate human movement. Humans simply don’t have that power, so we must simplify, through mental models. Modelling is essential, innate and human. Our brain is a superb modelling engine. As developers and testers, let’s use it to our advantage. In this tutorial, Paul explores how we think as testers and how we use models to simplify, scope and explain what we do. Consider how our understanding of a problem is, in effect, a model. Our approaches to testing are models. We explain what we do as testers to stakeholders through meaningful models.

Paul will demonstrate and explain how successful test design and execution automation is based on models too.

Some hard questions
- How much testing is enough?
- Who is responsible for testing?
- When should testing stop?
- What is the value of testing anyway?
- Using models to make your case

Exploration, Modelling and Testing
- Models Pub Quiz
- Where do you use models?
- A New Model for Testing
- Fallible sources of Knowledge
- All testing is exploratory
- Exploration and modelling

Characteristics of models - All models are wrong, some are useful
- Models simplify
- Models hide complexity
- Models and scope
- Models and perspective; relevance to stakeholders
- Value of models

Modelling and Coverage - Some Simple Models
- What does coverage mean?
- ‘Traditional’ test design techniques (see the sketch after this list)
- Diverse half-measures
- How do we choose our models?
- How do we choose our coverage target?
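
To make this concrete, here is a minimal sketch of one ‘traditional’ technique, boundary value analysis, in Python – a fragment for illustration, not taken from Paul’s course material; the age rule and the accepts_age function are invented:

```python
# A minimal sketch of boundary value analysis: the model is a valid
# range, and the coverage target is "every boundary and its neighbours".

def boundary_values(low, high):
    """Derive test inputs from a range model: each boundary,
    plus the values just inside and just outside it."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Invented rule under test: ages 18..65 inclusive are accepted.
def accepts_age(age):
    return 18 <= age <= 65

# Exercising all six boundary values is the coverage measure
# that the range model implies.
for age in boundary_values(18, 65):
    print(age, accepts_age(age))
```

The point is that the range is the model: it generates the test inputs and, at the same time, defines what ‘covered’ means.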

Practicalities
- What models do you currently use?
- What falls through gaps?
- What overlap/duplication exists?
- Problems you face in test selection?
- How do you decide how much testing is enough?
- How do you justify what you decided?
- How do you present your plans, test designs, results?

In some organisations, it is perfectly fine for testers to report failures as they experience them. Capturing the details of the behaviour that does not meet expectations, how to reproduce the problem, and an assessment of severity and/or priority might provide enough information to allow developers to diagnose and debug the problem.

But in many situations, this simply does not work. For example, in a company that builds hardware and writes its own firmware and application software, diagnosing the source of a problem can be a difficult task. Where a device has many, many configurations, or connects to a range of other hardware, firmware or software applications, it might be impossible to reproduce the problem outside the test lab.
In these situations – and they are increasingly common – the task of the tester is to look beyond the visible signs of failure and to investigate further: to narrow down possibilities, to identify and ignore misleading symptoms, and to get to the bottom of the problem.

In this tutorial, Paul explores how we can be deceived by evidence and how we can improve our thinking to be more certain of our conclusions. You’ll practise designing experiments, recognise what you can and cannot control, learn how to systematically diagnose the causes of failure, and work as a team to solve problems more effectively.

Critical Thinking, Problem-Solving and Testing
- A New Model for Testing and Thinking
- How are we deceived?
- What is critical thinking?
- Evidence, reasons and conclusions
- Credibility of sources
- Distinguishing causes and effects
- Systems thinking

Design of Experiments
- Purpose of experiments
- Errors in experiments
- All-Pairs testing (see the sketch below)
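
As a taste of the all-pairs idea, here is a minimal greedy sketch in Python – an illustration only, not a production tool such as PICT, and the browser/os/locale parameters are invented. It picks test cases until every pair of values from two different parameters appears in at least one case:

```python
from itertools import combinations, product

# Invented configuration model: three parameters, a few values each.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "locale":  ["en", "no"],
}
names = list(params)

def pairs_in(case):
    """All (parameter, value) pairs covered by one test case."""
    return set(combinations(zip(names, case), 2))

# Every pair of values from two different parameters must be covered.
uncovered = set()
for case in product(*params.values()):
    uncovered |= pairs_in(case)

# Greedily pick whichever candidate covers the most uncovered pairs.
tests = []
while uncovered:
    best = max(product(*params.values()),
               key=lambda c: len(pairs_in(c) & uncovered))
    tests.append(best)
    uncovered -= pairs_in(best)

print(f"{len(tests)} pairwise tests cover what "
      f"{len(list(product(*params.values())))} exhaustive ones would")
for test in tests:
    print(test)
```

Greedy selection is not optimal, but it shows the shape of the technique: the model is the set of value pairs, and coverage is the fraction of those pairs exercised.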

Debugging Rules (credit to David J Agans)
- Understand the system
- Make it fail
- Quit thinking and look
- Divide and conquer (see the sketch after this list)
- Change one thing at a time
- Keep an audit trail
- Check the plug
- Get a fresh view
- If you didn’t fix it, it isn’t fixed
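
As a small illustration of ‘divide and conquer’ – a sketch in Python in the spirit of git bisect, not something taken from Agans’ book, and first_bad and the fails_at oracle are invented names – a binary search over an ordered history finds the first failing change in a handful of probes:

```python
# A sketch of "divide and conquer": binary-search an ordered list of
# changes for the first one at which the test starts to fail.

def first_bad(versions, fails_at):
    """Return the earliest version for which fails_at(version) is True.
    Assumes versions are ordered and that the failure, once introduced,
    persists in every later version."""
    lo, hi = 0, len(versions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if fails_at(versions[mid]):
            hi = mid          # failure present: look at or before mid
        else:
            lo = mid + 1      # still passing: look after mid
    return versions[lo]

# Invented history: versions 1..100, with the bug introduced in 73.
versions = list(range(1, 101))
print(first_bad(versions, lambda v: v >= 73))   # prints 73 in ~7 probes
```

Each probe changes only one thing – the version under test – which is also rule five in the list above.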

Practicalities and other sundry topics
- Working in Pairs
- Dealing with Bugs from the Support Team
- When you’re in a hole … stop digging
- Psychology
- Idea Generation
- What works for you?

Paul Gerrard is a consultant, teacher, author, webmaster, programmer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them.

Educated at the University of Oxford and Imperial College London, he is a Principal of Gerrard Consulting Limited, the host of the UK Test Management Forum and a business coach for Enterprising Macclesfield. He was the Programme Chair for the 2014 EuroSTAR conference in Dublin and for the 2017 ExpoQA conference in Madrid.

In 2010 he won the EuroSTAR Testing Excellence Award and in 2013 he won the inaugural TESTA Lifetime Achievement Award.

He has been programming since the mid-1970s and loves using the Python programming language.

Time: Tuesday 19 September, 09:00
Venue: Dataforeningen, Møllergata 24, 0179 Oslo

Sign up here
