The cost of failure multiplies when defects are present in backend web service code, so having good functional test coverage around these services is essential: problems in a web service are significantly more expensive than problems elsewhere.
Manually testing backend web services takes more effort than manual frontend testing, and manual API testing is inaccurate and lacks repeatability, so we favour 100% automated coverage.
We use an out-of-process approach: the code is deployed on a server and exercised by sending different HTTP requests to the various web service endpoints, and the response messages are verified against expected results. Rather than working with pre-populated data, each test populates its own data; this avoids leaky scenarios where data left behind by one scenario affects the results of another.
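The verification step can be sketched as follows. This is an illustrative helper (the helper name, the JSON body, and the expected values are all hypothetical), with the parsing and assertion logic shown in isolation rather than wired to a live HTTP call:

```ruby
require 'json'

# Hypothetical helper: compare a raw JSON response body against an expected
# subset of fields. In the real suite the body would come from an HTTP call
# to the deployed service.
def verify_response(raw_body, expected)
  actual = JSON.parse(raw_body)
  mismatches = expected.reject { |key, value| actual[key] == value }
  mismatches.empty? ? :pass : [:fail, mismatches]
end

body = '{"id": 42, "status": "active", "name": "example"}'
verify_response(body, 'id' => 42, 'status' => 'active')  # => :pass
```

Checking only an expected subset of fields, rather than the whole body, keeps tests from breaking when unrelated fields are added to a response.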
Our entire framework is orchestrated using Cucumber, along with the following gems:

- cql-rb: the Cassandra CQL driver, used to connect to the database and run queries.
- json: parses JSON responses from the web service.
- xmlsimple: parses XML responses from the web service.
- httparty: the HTTP client.
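To illustrate the response-parsing side, here is a small sketch with made-up bodies. It uses the stdlib json library, and stdlib REXML standing in for xmlsimple so the snippet runs without extra gems; in the real suite xmlsimple does the XML parsing:

```ruby
require 'json'
require 'rexml/document'

# Parse a JSON response body.
json_body = '{"status": "ok", "count": 3}'
parsed = JSON.parse(json_body)

# Parse an XML response body. The suite uses xmlsimple; REXML (stdlib) is
# shown here only so the example is self-contained.
xml_body = '<response><status>ok</status></response>'
doc = REXML::Document.new(xml_body)
status = doc.elements['response/status'].text

parsed['status']  # => "ok"
status            # => "ok"
```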
To increase repeatability and maintainability, we keep various sets of helper methods in the features/support folder. These methods are re-used across step definitions.
- api_helpers: all API-related verification and setup methods are collected in this file.
- db_helpers: methods that set up or fetch data from the database, so the same queries are not repeated across step definitions.
- request_api_helper: the required parameters for each endpoint, defined as methods. If the web service specification changes, the update is made here once and is reflected across all tests.
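A sketch of what request_api_helper might contain (the endpoints, parameters, and base path are invented for illustration). Because every test builds its requests through these methods, a contract change is fixed in one place:

```ruby
# Hypothetical features/support/request_api_helper.rb: the required
# parameters for each endpoint live in a single method, so a change to the
# web service specification only has to be made here.
module RequestApiHelper
  BASE_PATH = '/api/v1' # assumed path, for illustration only

  def create_user_request(name:, email:)
    { method: :post,
      path: "#{BASE_PATH}/users",
      body: { name: name, email: email } }
  end

  def get_user_request(user_id)
    { method: :get, path: "#{BASE_PATH}/users/#{user_id}" }
  end
end

include RequestApiHelper
get_user_request(7)[:path]  # => "/api/v1/users/7"
```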
Every test sets up its own set of data rather than seeding data up-front. This is to avoid any leaky scenarios and dependencies from one test to another.
Tests should be independent and self-contained. Setup and teardown are mostly done through Cucumber's built-in hooks, such as Before, After, and tagged hooks.
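The per-scenario data isolation described above can be sketched like this. An in-memory array stands in for the real database, and ScenarioData is a hypothetical name; in the real suite teardown would be invoked from a Cucumber After hook:

```ruby
require 'securerandom'

# Hypothetical sketch of per-scenario data isolation: every scenario tags the
# rows it creates with a unique run id and removes only those rows afterwards,
# so nothing leaks between scenarios.
class ScenarioData
  def initialize(store)
    @store = store               # stand-in for the real database
    @run_id = SecureRandom.uuid  # unique per scenario
  end

  def create_user(name)
    user = { id: SecureRandom.uuid, name: name, run_id: @run_id }
    @store << user
    user
  end

  def teardown!
    @store.reject! { |row| row[:run_id] == @run_id }
  end
end

store = []
scenario_a = ScenarioData.new(store)
scenario_b = ScenarioData.new(store)
scenario_a.create_user('alice')
scenario_b.create_user('bob')
scenario_a.teardown!            # removes only scenario A's rows
store.map { |row| row[:name] }  # => ["bob"]
```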
Every test pack contains a config.yaml in the root folder of the project, which lists the specific details of the various environments (server URLs, database IPs, etc.). For example:

```yaml
defaults: &defaults_config
  default_config: testing

demo:
  <<: *defaults_config
  url: https://example.demo.url.net
  client_id: asjfhjasfh980977
  secret_key: afyewoiufy9879070

testing:
  <<: *defaults_config
  url: https://example.testing.url.net
  client_id: asjfhjasfh980977
  secret_key: afyewoiufy9879070
```
This configuration is used from Jenkins jobs to run the tests against the various environments; we use these environments on almost every project.
Tests are run with an ENV_CONFIG command-line parameter which dictates the environment for the test run. For example:

```shell
$ cucumber -f pretty -f html -o <path>/name-of-test-report.html -f junit -o ./junit_reports ENV_CONFIG='testing'
```
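A sketch of how the ENV_CONFIG parameter might select a section of config.yaml. The YAML is inlined as a string here so the snippet is self-contained; the real suite would read the file from the project root:

```ruby
require 'yaml'

# Inline stand-in for config.yaml; the real suite would use something like
# YAML.safe_load(File.read('config.yaml'), aliases: true).
CONFIG_YAML = <<~YAML
  defaults: &defaults_config
    default_config: testing

  testing:
    <<: *defaults_config
    url: https://example.testing.url.net

  demo:
    <<: *defaults_config
    url: https://example.demo.url.net
YAML

def load_config(env)
  all = YAML.safe_load(CONFIG_YAML, aliases: true)
  all.fetch(env) { raise ArgumentError, "unknown environment: #{env}" }
end

# At runtime the environment name comes from the ENV_CONFIG parameter.
env = ENV.fetch('ENV_CONFIG', 'testing')
config = load_config(env)
```

The `<<: *defaults_config` merge key means every environment inherits the defaults, which is why `aliases: true` is needed when loading the file.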
Every test run publishes JUnit and Cucumber reports.
We set up the CI pipeline as below, and all test runs from Jenkins jobs are configured in a similar fashion.