Web service (API) automation

The cost of failure multiplies if defects are present in backend web service code and we don't take it seriously.

Why is it required?

Having good functional test coverage around the backend web service code is really important as the cost of failure/defects is significantly high when there are problems in web service code.

The effort of manually testing backend web services is high compared to manual frontend testing. Manual API testing is inaccurate and lacks repeatability, so we favour 100% automated coverage.


We use an out-of-process approach: the code is deployed on a server and exercised via HTTP requests to the various web service endpoints, and the response messages are verified against expected results. Rather than working with pre-populated data, a better approach is for every test to populate its own data, avoiding leaky scenarios where data left behind by one scenario affects the results of another.
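As a Ruby sketch of the "each test populates its own data" idea (the payload shape and field names are invented for illustration, not taken from our codebase), a step definition can build a payload with a unique suffix so that no two scenarios ever share data:

```ruby
require 'securerandom'

# Build a payload that belongs to this scenario only (illustrative sketch).
# A random suffix guarantees repeated or parallel runs never collide,
# so one scenario cannot leak state into another.
def unique_account_payload
  suffix = SecureRandom.hex(4)
  {
    name:  "test-account-#{suffix}",
    email: "test-#{suffix}@example.com"
  }
end
```

A step definition would POST this payload to the endpoint under test, assert on the response, and remove the account again in teardown.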

Framework and utilities

Our entire framework is orchestrated using Cucumber.

  • BDD test specification — Cucumber (Gherkin), plain-English sentences.
  • Main programming language — Ruby.
  • Interaction with database — black-box approach, but where required we use gems like pg, mysql2 and cql-rb (drivers used to connect and run queries).
  • JSON parser — json, a gem to parse JSON responses from the web service.
  • XML parser — xmlsimple, a gem to parse XML responses from the web service.
  • HTTP client — httparty, an HTTP client gem.
  • Env setup — Vagrant and VirtualBox running Ubuntu.
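To illustrate how these pieces fit together in a step definition (a hedged sketch: the endpoint, field names and canned body below are made up), the response body from an HTTP call is parsed with the json gem and asserted against expectations:

```ruby
require 'json'

# Sketch only: in a real step definition the body would come from an
# HTTParty call, e.g. HTTParty.get("#{base_url}/accounts/42").body
response_body = '{"id": 42, "name": "test-account", "active": true}'

account = JSON.parse(response_body)

# The step definition then verifies the response against expected values.
raise 'unexpected name' unless account['name'] == 'test-account'
```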


To increase repeatability and maintainability we keep various sets of helper methods in the features/support folder. These methods are re-used across step definitions.

  • api_helpers — all API-related verification and setup methods are collected in this file.
  • db_helpers — methods that set up or fetch data from the database, so the same database queries are not repeated across step definitions.
  • request_api_helper — the required parameters for each endpoint are defined as methods in this file. If a web service specification changes, a single update here is reflected across all tests.
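For instance, request_api_helper might centralise endpoint paths and default parameters like this (a sketch; the endpoint, headers and parameter names are invented for illustration):

```ruby
# features/support/request_api_helper.rb (illustrative sketch)

# One method per endpoint: if the service's contract changes,
# only this file needs updating and every step definition that
# calls it picks up the change automatically.
def create_account_request(name:)
  {
    path:    '/v1/accounts',
    headers: { 'Content-Type' => 'application/json' },
    body:    { name: name, source: 'automation' }
  }
end
```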

Setup and teardown

Every test sets up its own data rather than relying on data seeded up-front. This avoids leaky scenarios and dependencies between tests: tests should be independent and self-contained. Setup and teardown are mostly done through Cucumber's built-in hooks, such as Before, After, tagged hooks and at_exit.
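A tagged-hook sketch of this pattern (the tag, helper names and the in-memory store standing in for real db_helpers code are all illustrative):

```ruby
require 'securerandom'

# In-memory stand-in for the database that db_helpers would talk to.
TEST_DB = []

def create_test_account
  account = { id: SecureRandom.uuid }
  TEST_DB << account
  account
end

def delete_test_account(account)
  TEST_DB.delete(account)
end

# Register the hooks only when running under Cucumber, so the helpers
# stay loadable (and testable) outside a Cucumber run.
if defined?(Before)
  Before('@needs_account') { @account = create_test_account }
  After('@needs_account')  { delete_test_account(@account) }
end
```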

Continuous integration

Every test pack contains a config.yaml in the root folder of the project, which lists the specific details of each environment, e.g. server URLs, database IPs, etc. For example:

defaults: &defaults_config
  default_config: testing

demo:
  <<: *defaults_config
  url: https://example.demo.url.net
  client_id: asjfhjasfh980977
  secret_key: afyewoiufy9879070

testing:
  <<: *defaults_config
  url: https://example.testing.url.net
  client_id: asjfhjasfh980977
  secret_key: afyewoiufy9879070

This is used from Jenkins jobs to run tests on various environments. We use these environments on almost every project.

  • Testing — internal-facing testing environment. Code is deployed from the current CI branch, which is usually develop.
  • Demo/Staging/UAT — production-like, external-facing environment used for client demos. Code is deployed from the release branch for the current release.
  • Production — synced with the master branch.

Tests are run with an ENV_CONFIG command line parameter which dictates the test run environment. For example:

$ cucumber -f pretty -f html -o <path>/name-of-test-report.html -f junit -o ./junit_reports ENV_CONFIG='testing'
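Inside the test pack, a support file can then pick the right section of config.yaml from ENV_CONFIG, roughly like this (a sketch; the loading code and key names are assumptions, and the YAML fragment is inlined here purely for illustration):

```ruby
require 'yaml'

# Inline fragment of config.yaml for illustration; the real pack
# would use File.read('config.yaml') instead.
CONFIG_SOURCE = <<~YAML
  defaults: &defaults_config
    default_config: testing

  testing:
    <<: *defaults_config
    url: https://example.testing.url.net
YAML

def load_env_config(env = ENV.fetch('ENV_CONFIG', 'testing'))
  # aliases: true is required because the file uses YAML anchors/merges.
  YAML.safe_load(CONFIG_SOURCE, aliases: true).fetch(env)
end
```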

Every test run publishes JUnit and Cucumber reports.

We set up the CI pipeline as below, and all test runs from Jenkins jobs are configured in a similar fashion.

  1. Build off develop branch
  2. Deploy to testing environment
  3. Run tests against testing environment
  4. Merge to release branch and build
  5. Deploy to demo environment
  6. Run tests against demo environment