Continuous Deployment/Automated Integration Testing



Convenor: Giorgos Saslis, Kerstin Buchacker
Participants:
Summary:
Continuous Deployment/Automated Integration Testing - JCrete2014



The discussion focused mainly on Java-based, relatively homogeneous web systems that did not seem to include many off-the-shelf products. Continuous deployment or integration testing of large heterogeneous systems, such as those found in banks or government agencies, was not addressed.

Continuous deployment is based on the pipeline principle. The pipeline has several stages, and an artifact is moved from one stage to the next only once it has passed the previous stage. Stages may include build and unit testing, staging, pre-production and finally production. Some of the stages can be run in parallel to speed up the build. In one example, the staging and pre-production stages of the pipeline differed in that in staging the database was completely wiped out and repopulated every time, whereas in pre-production the database was only wiped out on request. The final output of the pipeline is some binary; what exactly depends on the product (it could be an RPM package or an EC2 instance).
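
To make the stage-gating idea concrete, here is a minimal Java sketch (the class, stage names and artifact name are illustrative, not from the session): an artifact is promoted only while every stage keeps passing.

    import java.util.List;
    import java.util.function.Predicate;

    // Minimal sketch of a stage-gated pipeline: the same artifact moves from one
    // stage to the next only after the previous stage has passed.
    public class Pipeline {

        // Each stage checks the same artifact (e.g. an RPM path or image id).
        record Stage(String name, Predicate<String> passes) {}

        private final List<Stage> stages;

        public Pipeline(List<Stage> stages) {
            this.stages = stages;
        }

        // Returns true only if the artifact passed every stage in order.
        public boolean run(String artifact) {
            for (Stage stage : stages) {
                if (!stage.passes().test(artifact)) {
                    System.out.println(artifact + " failed at: " + stage.name());
                    return false; // gate: no promotion past a failed stage
                }
                System.out.println(artifact + " passed: " + stage.name());
            }
            return true;
        }

        public static void main(String[] args) {
            Pipeline pipeline = new Pipeline(List.of(
                new Stage("build-and-unit-test", a -> true),
                new Stage("staging", a -> true),        // DB wiped and repopulated
                new Stage("pre-production", a -> true), // DB wiped on request only
                new Stage("production", a -> true)));
            pipeline.run("myapp-1.2.3.rpm");
        }
    }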

  • Pipeline good practices
    • Reproducible builds (it is best to use the same binary all the way through the pipeline rather than recompiling it at individual stages)
    • The pipeline should be constructed in such a way that an artifact cannot be deployed to the next stage unless it has passed all previous stages.
    • Separate the configuration from the application to allow easier installation of the same application into different environments
      • Environment variables (see The 12 Factor App; a small Java sketch follows this list)
      • AWS EC2 user data
    • Traceability is key: know which versions are installed everywhere (a jar-manifest version sketch follows this list)
    • Automate deployment
      • Netflix uses Asgard (see for example [1] and [2])
      • Turning any infrastructure or installation task into a Chef script can speed up both recovery and deployment (if you are not into immutable servers); performing configuration changes by hand is error-prone (humans are involved) and non-repeatable (humans are involved)
      • Chris uses Docker
      • Decoupling of components is important to allow fast deployments and upgrades over time (one component at a time)
  • Rollback and recovery
    • Failures will happen, so it is important that you can fix them quickly, e.g. by rolling back to a previous version
    • This also means that you need to be able to discover failures quickly. The example given was the missing buy button of a web-shop application: near-realtime monitoring showed that buys dropped to zero on the servers where the update had been deployed, whereas numbers were normal on other servers (a monitoring sketch follows this list)
    • The immutable server concept, as described for example in [3] and [4], was briefly discussed
    • According to Chris, Netflix is extremely good at recovering from failures
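
The environment-variable approach mentioned above might look like the following in Java (a minimal sketch in the spirit of The 12 Factor App; the variable names DB_URL and DB_USER are made up):

    // Sketch: read configuration from the environment instead of baking it into
    // the application, so the same binary can be installed unchanged in staging,
    // pre-production and production.
    public class AppConfig {

        static String envOrDefault(String name, String fallback) {
            String value = System.getenv(name);
            return (value != null && !value.isEmpty()) ? value : fallback;
        }

        public static void main(String[] args) {
            String dbUrl  = envOrDefault("DB_URL", "jdbc:postgresql://localhost/dev");
            String dbUser = envOrDefault("DB_USER", "dev");
            System.out.println("Connecting to " + dbUrl + " as " + dbUser);
        }
    }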
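
For traceability, a common Java trick (assuming the build tool writes an Implementation-Version header into the jar manifest) is to report the running version at startup:

    // Sketch: print which version is running, assuming the build wrote an
    // Implementation-Version entry into the jar's MANIFEST.MF.
    public class VersionInfo {
        public static void main(String[] args) {
            Package pkg = VersionInfo.class.getPackage();
            // getImplementationVersion() returns null when run outside a jar
            // or when the manifest header is missing.
            String version = (pkg != null) ? pkg.getImplementationVersion() : null;
            System.out.println("Running version: "
                + (version != null ? version : "unknown (no manifest header)"));
        }
    }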
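
In the spirit of the buy-button example, near-realtime monitoring can be as simple as marking a meter on every purchase and reporting the rate periodically; a rate of zero on freshly updated servers is the alarm signal. This sketch assumes the Dropwizard Metrics library (com.codahale.metrics); the metric name is illustrative:

    import com.codahale.metrics.ConsoleReporter;
    import com.codahale.metrics.Meter;
    import com.codahale.metrics.MetricRegistry;
    import java.util.concurrent.TimeUnit;

    // Sketch: track the purchase rate and report it every 10 seconds.
    public class BuyRateMonitor {
        public static void main(String[] args) throws InterruptedException {
            MetricRegistry registry = new MetricRegistry();
            Meter buys = registry.meter("webshop.buys");

            ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .build();
            reporter.start(10, TimeUnit.SECONDS);

            buys.mark(); // in a real application, call this from the purchase handler
            Thread.sleep(11_000); // keep the JVM alive long enough for one report
        }
    }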


  • Automated (acceptance) testing
    • In certain environments (such as banks and government agencies) it may be difficult to get the customer to go along with automated acceptance testing, whereas in other environments (for example web applications) fast deployment is crucial and automated acceptance testing is a way to reduce time to deployment
    • For web applications this can be achieved with Cucumber and Selenium (a possible approach is described in [5]; a step-definition sketch follows this list); Cucumber can help to clarify requirements and develop ownership of requirements on the customer/user side
    • Leading up to (automated) acceptance testing, other automated testing should be performed; this can be achieved using Spock and JUnitParams (a JUnitParams sketch follows this list)
    • Calabash supports automated acceptance testing for mobile apps
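
A minimal sketch of a Cucumber step definition driving Selenium, loosely based on the buy-button example (the URL and element ids are made up; package names vary between Cucumber versions, and the class would be wired up via a Cucumber runner and a matching .feature file):

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.Assert.assertTrue;

    // Sketch: each annotated method implements one step of a Gherkin scenario.
    public class BuyButtonSteps {

        private final WebDriver driver = new ChromeDriver();

        @Given("a customer is on the product page")
        public void openProductPage() {
            driver.get("https://shop.example.com/product/42");
        }

        @When("the customer clicks the buy button")
        public void clickBuy() {
            driver.findElement(By.id("buy-button")).click();
        }

        @Then("the item appears in the cart")
        public void verifyCart() {
            assertTrue(driver.findElement(By.id("cart-count")).getText().contains("1"));
            driver.quit();
        }
    }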
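
For the lower-level automated tests, a small JUnitParams example (the method under test is hypothetical); each row of the @Parameters annotation becomes one test invocation:

    import junitparams.JUnitParamsRunner;
    import junitparams.Parameters;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    // Sketch: a parameterized unit test with JUnitParams.
    @RunWith(JUnitParamsRunner.class)
    public class DiscountTest {

        // Hypothetical function under test.
        static int discountedPrice(int price, int percentOff) {
            return price - (price * percentOff) / 100;
        }

        @Test
        @Parameters({"100, 10, 90",
                     "200, 50, 100",
                     "80, 0, 80"})
        public void appliesDiscount(int price, int percentOff, int expected) {
            assertEquals(expected, discountedPrice(price, percentOff));
        }
    }
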
Recommendations:

No recommendations provided.