To fully grasp what we will be covering, you should have a basic knowledge of software testing and understand what the following terms mean:
Stubbing, mocking and service virtualization are three names for the same practice: simulating backend or third-party systems to improve the development and testing of applications. You can use a stubbing, mocking or service virtualization tool to create a simulation of a backend or third-party API that helps you test your application in hypothetical or hard-to-test situations. You test your application or component in isolation, without its dependencies, which allows you to run more test scenarios earlier and with less effort. The simulators you create for services are called stubs, mocks or virtual services.
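To make this concrete, here is a minimal sketch of a virtual service: a tiny HTTP server that returns a canned response in place of a real backend. The `/price` endpoint, the `ACME` symbol and the payload shape are illustrative assumptions, not part of any real API, and Python's standard library stands in for a dedicated service virtualization tool.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal virtual service: answers every GET with a canned JSON body,
# simulating what a real backend API would return.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"symbol": "ACME", "price": "12.01"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test would be pointed at this address
# instead of the real backend.
url = f"http://127.0.0.1:{server.server_port}/price"
response = json.loads(urlopen(url).read())
server.shutdown()
```

In a real setup the application's configuration is changed so that its backend URL points at the stub; the application itself is untouched.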
Imagine testing a new model of a jet engine. After engineers build a prototype, they test it in a lab without mounting it on an aircraft. The jet engine test lab has many specialised devices that simulate a real environment and measure parameters of the engine and its surroundings during these initial tests. Engineers can measure fuel consumption, thrust and other characteristics without having to run expensive and unreliable field tests on a real plane. We can apply the same principle to software by using real-environment simulators as our test labs.
Sometimes it is hard to develop and test an application because it depends on components that are unavailable in the development and test environments. Even when those components are available, you often need better control over them to test hypothetical situations.
Stubbing, mocking and service virtualization are all names for a family of solutions called test doubles. A test double is a simulator used in software development and testing in place of a real system.
Developers and testers use other names as well, like over-the-wire test doubles, API simulation and API mocks.
It is important to understand that the name you use does not matter. What matters is that you will be developing and testing applications and software components in isolation by simulating backend and third party APIs and systems.
If you are interested in more details on the name differences have a look at this article for an in-depth introduction to the subject.
Virtual services can be created in several different ways, depending on which service virtualization tool you use and the architecture of your test environment. You can, for example, craft them manually, generate them automatically from a specification, or record real requests and responses.
Next, for the service virtualization tool to record any requests and responses, we want to make the Finance Application send a request to the Market Data API. We can do this by running a test case manually, or by running an automated test case if one is available. This causes a request to be sent to the Market Data API, which the service virtualization tool records. This way we create a virtual service by observing what the application sends to the API, rather than crafting virtual services manually based on documentation.
After we have recorded the request and response pair, we can turn off the recorder and start playing back the recording. If we run through the test plan now, a request is sent to the virtual service, which replies with what was recorded. This allows us to test the application in isolation; it no longer depends on the Market Data API for setting up test data.
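The record-and-playback cycle can be sketched in a few lines of plain Python. This is a conceptual sketch only: the `real_market_data_api` function, the `/price/ACME` path and the payload are stand-ins for the real API, and real tools record over the wire rather than in memory.

```python
def real_market_data_api(path):
    # Stand-in for the real Market Data API (hypothetical data).
    return {"/price/ACME": '{"price": "12.01"}'}[path]

class VirtualService:
    """Sits between the application and the real API."""

    def __init__(self):
        self.recordings = {}
        self.mode = "record"

    def handle(self, path):
        if self.mode == "record":
            response = real_market_data_api(path)   # forward to the real API
            self.recordings[path] = response        # store the request/response pair
            return response
        return self.recordings[path]                # playback: no real API needed

vs = VirtualService()
recorded = vs.handle("/price/ACME")   # test run with the recorder on
vs.mode = "playback"                  # turn the recorder off
replayed = vs.handle("/price/ACME")   # same request, now served from the recording
```

During playback the real API can be switched off entirely; the application cannot tell the difference.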
You should use virtual services for a backend or third-party system when it is very hard or impossible to set up test data. Third-party systems typically return production data, which makes it harder to alter and to use for testing your application in hypothetical situations without a virtual service. It is also quite common for large companies to have old backend systems where setting up test data takes a lot of effort (days or weeks). In both of these situations, virtual services let you configure the data the way you need it, on demand.
For example, you can use a virtual service to simulate different test data scenarios. Instead of returning “£12.01” as the stock price from the Market Data API, you can set up a virtual service to return a negative price “-£7.00”, an empty price “ ”, a very large price “£10000000000000000.00”, a price in a different currency “$12.01”, or unrecognized characters “🏳0🌈️” (from the list of naughty strings), and test what the application does in each of these situations.
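A tiny sketch of driving such scenarios from one virtual service, using a plain Python mapping in place of a real tool. The scenario names and the response shape are illustrative assumptions; the prices come from the examples above.

```python
# Each named scenario maps to the stock price the virtual service
# should return instead of the real "£12.01".
scenarios = {
    "normal":         "£12.01",
    "negative":       "-£7.00",
    "empty":          " ",
    "huge":           "£10000000000000000.00",
    "wrong-currency": "$12.01",
    "naughty":        "🏳0🌈️",   # from the list of naughty strings
}

def virtual_service_response(scenario):
    # A test case selects the scenario it needs on demand.
    return {"price": scenarios[scenario]}

negative = virtual_service_response("negative")
```

Each test case simply selects the scenario it needs before exercising the application, with no backend data setup at all.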
Often, API documentation lists many different types of error messages. The API will not return those errors unless something actually goes wrong, so there is no way to test what your application does when it encounters a given error message. You can use virtual services to simulate those error messages.
For example, let us assume the documentation of the Market Data HTTP API says that when an unknown server error occurs, the HTTP response status code will be 503. There is no way to trigger that server error on demand using the real API. What you can do in that situation is set up a virtual service that pretends to be the API and make it return a 503 status code.
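Here is a minimal sketch of that idea, again using Python's standard library as a stand-in for a real service virtualization tool; the `/price` path is an assumption.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.error import HTTPError

# A virtual service that pretends to be the Market Data API and always
# answers 503, so the application's error handling can be exercised.
class ErrorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ErrorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urlopen(f"http://127.0.0.1:{server.server_port}/price")
    status = 200
except HTTPError as e:
    status = e.code   # the client sees the simulated server error
server.shutdown()
```

With the application pointed at this stub, you can verify that it degrades gracefully rather than crashing on the 503 response.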
Every communication protocol has its typical ways of failing. Teams should observe production environments and replicate problematic situations in test environments. Testers often use this option when investigating bugs that require a protocol failure to be reproduced. For example, connections to HTTP APIs can be reset or dropped. If you see those kinds of errors on production servers often enough, you might want to test your application in those scenarios as well. One way of simulating a dropped connection is to unplug the network cable from the socket, but it is much more convenient to use a virtual service to simulate those situations.
One way you can do this is by designing a test case where you set up the Market Data API virtual service to drop the connections. You can then assert that the Finance Application recovers from the error and presents a meaningful message to the user.
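A dropped connection can be sketched with a fake server that accepts the TCP connection and immediately hangs up without sending any HTTP response. This is a bare-bones illustration of what a service virtualization tool would do for you; the `/price` path is an assumption.

```python
import socket
import threading
from urllib.request import urlopen

# Listen on a free local port, but never speak HTTP.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def drop_one_connection():
    conn, _ = listener.accept()
    conn.close()              # hang up before any HTTP response is written

threading.Thread(target=drop_one_connection, daemon=True).start()

# The application's HTTP client now experiences a dropped connection:
try:
    urlopen(f"http://127.0.0.1:{port}/price", timeout=5)
    connection_dropped = False
except OSError:               # reset / remote end closed without response
    connection_dropped = True
listener.close()
```

The test can then assert that the application surfaces a meaningful error to the user instead of an unhandled exception.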
You can monitor the performance of backend and third-party systems in real time, but another option is to create a virtual service that mimics their slow responses without involving production services.
For example, the Finance Application sometimes does not serve client requests for unknown reasons. You investigate the production logs and see a lot of connection timeouts to the Market Data API. This leads to discovering that the API behaves very slowly every once in a while. You raise this issue with the Market Data API team so they can investigate the performance problems. To test for weak points in the Finance Application, you create virtual services that simulate slow responses from the Market Data API. This gives developers a test case that reproduces the situation in their environment, so they can improve the application to keep working when the Market Data API is slow for certain types of requests.
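A slow response is simple to sketch: the stub sleeps before replying, and a client with a shorter timeout experiences exactly the production symptom. The one-second delay, the client's 0.2-second timeout and the `/price` path are all assumed values; real tools let you configure response times per virtual service.

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

RESPONSE_DELAY_SECONDS = 1.0   # assumed delay; real tools make this configurable

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(RESPONSE_DELAY_SECONDS)   # simulate the slow Market Data API
        try:
            self.send_response(200)
            self.end_headers()
        except (BrokenPipeError, ConnectionResetError):
            pass   # the client already gave up and closed the connection

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client whose timeout is shorter than the stub's delay, like the
# Finance Application's HTTP client, now times out:
try:
    urlopen(f"http://127.0.0.1:{server.server_port}/price", timeout=0.2)
    timed_out = False
except OSError:   # timeout surfaces as an OSError subclass
    timed_out = True
server.shutdown()
```

Because the delay is under your control, the same test can be rerun with different response times to find the point at which the application starts failing.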
Once the application works according to your expectations in isolation, you can proceed to integration testing with the real Market Data API.
This approach is very common in many organisations for both manual and automated testing.
Automated testing can be very challenging if you do not use any virtual services or stubs (or mocks). The tests without stubs or mocks could become flaky very fast, run longer than expected, and could become harder to write as the real API changes and other applications use it as well. Manual testing without virtual services is possible but can be more time consuming and often frustrating. These are common problems in a microservices architecture.
Here are several of the more typical problems testers experience while testing web applications, mobile applications, legacy systems and APIs, and how service virtualization can help solve them.
This is a common problem and service virtualization alone will not fix it, but it is an essential part of the solution. If you can utilize service virtualization in your builds and pipelines you can have more confidence in your release cycle. The focus changes from trying to test everything to testing what could be at risk of having issues. These kinds of checks can give you a quick health status of the application, or a piece of it, without dedicating a lot of testing time. Quick feedback means quick wins overall.
In this case, you can use virtual services to simulate the non-existing API. If there are business requirements or documentation, you can create virtual services based on them to use until the real service is available to test. After you verify the system works in isolation according to your expectations, you can wait for the dependent API to be ready and do integration testing using the same tests. This is an example of using TDD for integration testing.
Test data and complex interactions with backend and third-party systems can make it hard to reproduce bugs found in production. Setting up test data in multiple systems simultaneously, simulating error messages, or simulating protocol issues are difficult tasks that get in the way of recreating the environment where the bug occurred. Fortunately, complex environments like these can be simulated with virtual services, allowing for more flexibility and peace of mind when reproducing bugs.
Often when you test old systems, you have to wait for test data to be created; sometimes it is even impossible to create the test data you need (especially in third-party systems). Backend and third-party systems can also be hard to set up to return error responses, or to throttle network responses, on demand. The solution is to use virtual services which are under your control, so you can set up any type of test data on demand.
Large banks with old mainframe systems especially experience this issue: test environments are costly to create, so a limited number of them is shared across many testing teams and projects. Those old APIs almost never change, yet they are included in many test plans, so you have to schedule time on the shared environments to run your tests. You can simulate an API that never changes with virtual services and test your application under test more often, without waiting for the environment to become available. It also makes you less likely to impact other teams by burning through their test data and causing further delays.
In most cases, those paid APIs are very simple in nature and very rarely change. The cost of accessing third party APIs can complicate performance testing efforts as well. Simulating them using virtual services can reduce the third party transaction costs.
When you performance test many components simultaneously, it is often challenging to pinpoint the code change that caused a performance issue. The feedback loop is too large, and you are testing too many moving parts at one time. Ideally, you want to test small code releases in isolation, with a limited number of components. That way, if you see a degradation in performance, you can quickly point to the code changeset that was the likely cause. To run performance tests in isolation, you need virtual services. On top of test data capabilities, most service virtualization tools let you define response times for your virtual services, so you can create a production-like environment.
When you select a tool to use for service virtualization, keep the problem you are trying to solve in mind. Ask yourself a few questions about the problem at hand and what service virtualization can do to solve it.
Once you have solved your problems with smart or dynamic virtual services, you probably have introduced a bit more complexity to the delivery lifecycle. The next stage is to reduce that complexity of the architecture and APIs without re-introducing the problems you have just resolved. Architects and developers can focus on the long-term solution of simplifying the design to be able to simplify the virtual services as well. This is called architectural refactoring. Tackling complexity with more complexity is often a necessary short-term solution, but you have to keep the 5-10 year road ahead of you in mind as well. You do not want to end up in a place where there is just too much complexity for the job you are trying to do.
Service virtualization tools can be useful when you are doing a lot of extensive exploratory testing of a microservice. Microservices can have many backend or third-party dependencies and you want to test what happens when one or more of those systems return unexpected or unusual responses. Examples of unusual or unexpected responses might be: empty book title, non-integer cart item count, negative stock quantity, slow response, cut connections, etc.
A service virtualization tool with a graphical user interface designed specifically for exploratory testing could allow a user to set up testing scenarios fairly quickly, without delving into much of the service code base. Once a virtual service is created, the tester can focus on calls from the front end of the application, or from the service to the database or another service. Used correctly, service virtualization makes dependencies a non-issue when testing a workflow.
There are many tools for over-the-wire stubbing, mocking or service virtualization available on the market. For a comparison see next generation service virtualization tools comparison.
Service virtualization tools are essential in every tester's toolbox. You can use them to unblock your team when it is waiting for a new API to test, reduce the number of bugs, decrease the time it takes to reproduce bugs, or solve issues with setting up test data. There are a number of tools available on the market that you can use, depending on your system architecture and budget. Almost any project can benefit from applying the practice of service virtualization.