Iced Tea in Crete 2012:Testing (general)

From WikiEducator
Testing (general)
Convenors: Kerstin, Kon, and Dmitry
Participants:
  • Marc
  • Andrew
  • Rabea
  • Thomas
  • Henri
  • Jesper
  • Sofia
  • Tasos
  • Ant
  • Ignasi
  • James
  • ...
Summary:
  • Why do we write (automated) tests?
    • Verify component design
    • Regression testing
    • Shorten release cycles
    • Enable us to safely modify legacy code that didn't have tests
    • Ensure requirements are met (acceptance testing)
    • Enable us to perform safe refactoring
  • Testing scope
    • Business requirements
    • Low level (Unit) vs. higher level (end-to-end/integration) tests
    • GUI testing
    • User acceptance testing (must be understandable by end user/domain expert not just by the software engineer)
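The point about acceptance tests being readable by a domain expert can be sketched as follows — a hypothetical example (the discount rule and all names are invented for illustration) where the test body reads like the business rule it verifies:

```python
def discount_for(order_total):
    """Business rule under test: orders of 100 or more get a 10% discount."""
    return order_total * 0.10 if order_total >= 100 else 0.0

def test_large_orders_get_ten_percent_discount():
    # Given a customer order worth 150
    order_total = 150.0
    # When the discount is calculated
    discount = discount_for(order_total)
    # Then the customer receives 10% off
    assert discount == 15.0

def test_small_orders_get_no_discount():
    # Orders below the threshold get no discount at all
    assert discount_for(99.0) == 0.0

test_large_orders_get_ten_percent_discount()
test_small_orders_get_no_discount()
```

The given/when/then comments carry the domain vocabulary, so an end user can check the rule without reading the implementation.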
  • Code coverage
    • JaCoCo
    • Problematic if driven by the management only
    • Code coverage shouldn't become a goal in itself
    • Example: 50% coverage may cover all the code executed in the normal case, while the rest of the code (error handling) stays untested
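The 50% example above can be made concrete with a toy sketch (the function is hypothetical): the single happy-path test executes only the normal branch, so a coverage tool would report the function as partially covered while every error-handling line remains unexercised:

```python
def parse_port(value):
    # Normal case: a numeric string in the valid port range.
    try:
        port = int(value)
    except ValueError:
        # Error handling: never executed by the test below.
        raise SystemExit(f"not a number: {value!r}")
    if not 0 < port < 65536:
        # Error handling: never executed by the test below.
        raise SystemExit(f"out of range: {port}")
    return port

def test_happy_path_only():
    # Exercises only the normal case; the two error branches stay uncovered.
    assert parse_port("8080") == 8080

test_happy_path_only()
```

A coverage report driven purely by a management target could declare this function "half done" even though none of its failure behaviour has ever been tested.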
  • Testing priorities
    • "Important" vs. "not so important" code
    • How do we decide which code is important?
  • Updating Tests for new Requirements
    • Test Driven Design -> Test First
    • What if I do not understand existing tests?
    • Ensure any tests are easily understandable/readable
    • Dedicated Testers (with domain knowledge)
    • Generating Unit Tests instead of manually creating them (Agitar, Quick Test)
    • If there is no specification other than the existing program, then the program is the specification. Baseline the behaviour of the program so you don't inadvertently change working features when making changes or adding code; TextTest may help with the baselining.
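The baselining idea can be sketched as a minimal characterization ("golden master") test, in the spirit of the TextTest approach mentioned above — record the legacy program's current output once, then assert that later runs still match it. All names here are hypothetical:

```python
import json
import pathlib
import tempfile

def legacy_report(items):
    # Stand-in for untested legacy code whose current behaviour is the de-facto spec.
    return {"count": len(items), "total": sum(items)}

def check_against_baseline(result, baseline_path):
    # First run: record the baseline. Later runs: compare against it.
    if not baseline_path.exists():
        baseline_path.write_text(json.dumps(result, sort_keys=True))
        return True
    return json.loads(baseline_path.read_text()) == result

baseline = pathlib.Path(tempfile.mkdtemp()) / "baseline.json"
assert check_against_baseline(legacy_report([2, 3, 5]), baseline)  # records the baseline
assert check_against_baseline(legacy_report([2, 3, 5]), baseline)  # behaviour unchanged
```

Once such a baseline is in place, refactoring can proceed: any change to observable behaviour, intended or not, shows up as a baseline mismatch.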
  • Tools
    • DDSteps for Excel driven Unit-Tests
    • [http://www.fitnesse.org/ FitNesse] for table-based User Acceptance Tests
    • Use survey tools for describing and documenting manual regression tests
    • Eclipse Infinitest (commercial) supports running Unit Tests on Save
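The Excel-driven style that DDSteps provides can be approximated in a few lines — a rough sketch (the function and data rows are invented) where one test body is driven by a table of (input, expected) rows, the rows being what a non-programmer would maintain in a spreadsheet:

```python
def shipping_cost(weight_kg):
    # Hypothetical rule: flat 5.00 up to 1 kg, then 2.00 per additional kg.
    return 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0

CASES = [            # in DDSteps these rows would live in an Excel sheet
    (0.5, 5.0),
    (1.0, 5.0),
    (3.0, 9.0),
]

def test_shipping_cost_table():
    # One test body, many data rows; the failing row is reported in the message.
    for weight, expected in CASES:
        assert shipping_cost(weight) == expected, (weight, expected)

test_shipping_cost_table()
```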
  • Size of unit tests:
    • What is the problem with unit tests having 50+ assertions? It may be hard to find out exactly which assertion failed
    • Recommendation: only one assertion per test (hardliners), or only a few (< 3) assertions per test; this makes it easy to pinpoint the failing assertion
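A sketch of that recommendation (the `normalize` function is hypothetical): in the first test, a failure of the first assertion hides the results of the rest; the split version reports each behaviour separately, and the test name alone pinpoints what broke:

```python
def normalize(name):
    # Collapse whitespace and title-case the name.
    return " ".join(name.split()).title()

# Discouraged: if the first assert fails, the remaining ones never run.
def test_normalize_everything():
    assert normalize("  ada   lovelace ") == "Ada Lovelace"
    assert normalize("ALAN TURING") == "Alan Turing"
    assert normalize("") == ""

# Preferred: one behaviour per test.
def test_normalize_collapses_whitespace():
    assert normalize("  ada   lovelace ") == "Ada Lovelace"

def test_normalize_fixes_case():
    assert normalize("ALAN TURING") == "Alan Turing"

def test_normalize_handles_empty_string():
    assert normalize("") == ""

for t in (test_normalize_collapses_whitespace,
          test_normalize_fixes_case,
          test_normalize_handles_empty_string):
    t()
```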
  • Integration tests for persistence layers: Test Setup/Teardown?
    • Use symmetrical read/write methods
    • SQL Script for data setup
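Both points can be combined in a minimal persistence-test sketch using Python's built-in sqlite3 with an in-memory database (schema and table names are hypothetical): an SQL script creates the fixture data in setup, the symmetrical write/read pair is verified as a round trip, and teardown is trivial because closing the connection discards the database:

```python
import sqlite3

SETUP_SQL = """
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
INSERT INTO customers (id, name) VALUES (1, 'existing');
"""

def save_customer(conn, cid, name):          # symmetrical write...
    conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)", (cid, name))

def load_customer(conn, cid):                # ...and read
    row = conn.execute(
        "SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()
    return row[0] if row else None

def test_customer_round_trip():
    conn = sqlite3.connect(":memory:")       # setup: fresh database per test
    conn.executescript(SETUP_SQL)            # SQL script provides the fixture data
    try:
        save_customer(conn, 2, "alice")
        assert load_customer(conn, 2) == "alice"    # read back what was written
        assert load_customer(conn, 1) == "existing" # fixture row is visible too
    finally:
        conn.close()                         # teardown: in-memory DB vanishes

test_customer_round_trip()
```

Using a fresh in-memory database per test keeps tests independent; against a shared server the same shape works with an explicit cleanup script in the teardown step.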
  • Training teams for testing
    • Code Kata: refactoring legacy code
    • Do Code Katas with team members (one person at a time)
    • Respect team members' current knowledge and working style, and pick them up where they are when introducing new tools, workflows, methods, ...
    • Take into account that developers sometimes have very different styles


This recent blog entry reflects some of the aspects discussed in the session.

Recommendations:

Recommendations go here

  • ...
  • ...