David Sedláček

Test Manager

Cross-platform testing: choosing configurations

Testing on multiple configurations is relevant especially in the testing phases when the product's UI is stable. To ensure the app's maximum functionality for as many end users as possible, it is necessary to cover not only the use cases as thoroughly as possible but also the conditions in which the app will be used.

Given the number of variables and the pace at which new system versions and devices appear, we can never cover all cases. That is why we focus on the crucial parameters:

  • Device type
  • Maker
  • Operating system
  • Browser
  • Screen resolution

These are the parameters we use to propose configurations for testing. On the way from proposal to implementation we often run into various problems. Let us look at how we can, and should, approach them.

If we wanted to be more thorough, we could always add further parameters, such as system architecture, display scaling, touch-control support, system language, the file system in use, the level of user permissions, and more.

1. Common problems with planning and choosing the configuration

1.1. We do not know what is important

The solution is both simple and complicated. There are a number of paid and free services and portals that provide information about the popularity of browsers, operating systems, screen resolutions, and other indicators. Companies also often collect these statistics from their own users, for example with Google Analytics.

In testing we should cover the statistically most significant configurations of these indicators. When analyzing the data, it is necessary to be aware of its context and of what goes into the measured statistics. That is why we follow these rules (a small aggregation sketch follows the list):

  1. Our own data is better than market data.
  2. We use data at the most specific level available (country before continent, continent before worldwide).
  3. We analyze data separately for PCs, tablets, and phones.
  4. We aggregate all versions of Chrome, Safari, Firefox, and IE into one variable each.
  5. We further divide operating systems into:
    • New (riding the trend wave)
    • Maintained (modern, frequently updated)
    • Supported (often popular, with a large market share)
    • Unsupported (often still holding a considerable market share)
    • Discontinued (not necessary to test)
  6. We analyze the screen resolution of mobile devices in two contexts:
    • By aspect ratio
    • By exact resolution (modern devices often have a unique resolution; we usually test those at the closest standard resolution)
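To make rules 3 and 4 concrete, here is a minimal aggregation sketch in Python. The input rows and field names are hypothetical (they do not correspond to any real Google Analytics export schema); the point is only to show collapsing browser versions into one variable and keeping device classes separate.

```python
from collections import Counter

# Hypothetical analytics export: one row per (device class, browser version).
rows = [
    {"device": "desktop", "browser": "Chrome 118", "share": 0.21},
    {"device": "desktop", "browser": "Chrome 119", "share": 0.18},
    {"device": "phone",   "browser": "Safari 17",  "share": 0.30},
    # ... more rows from our own data
]

def browser_family(name):
    """Collapse 'Chrome 118', 'Chrome 119', ... into one 'Chrome' bucket (rule 4)."""
    return name.split()[0]

# Aggregate separately for PCs, tablets, and phones (rule 3).
shares = Counter()
for row in rows:
    shares[(row["device"], browser_family(row["browser"]))] += row["share"]

for (device, browser), share in shares.most_common():
    print(f"{device:8} {browser:8} {share:.2f}")
```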

1.2. We do not have the necessary devices

Generally, we either want to own physical devices or help ourselves with virtualization. Testing on physical devices is always more reliable; virtual devices do not necessarily behave the same way as real ones.

If investing in real devices is not possible, we consider sharing with other teams or projects. That uses less of our project's budget, but it adds worries about device availability.

Alternatively, we can maintain a pool of company testing devices, much like company laptops. After the initial growing pains of starting with a minimal number of identical devices for individual configurations, this solution works well, with a relatively low cost of expanding and adding devices.

We can also use a service that manages these devices, and possibly their virtualization, for us. The market offers solutions such as Sauce Labs, which increasingly support test automation as well. We are still in the early phases of using such devices instead of physical ones, which is why we expect their speed and stability to keep improving in the coming years.

1.3. Insufficient resources

Human resources, time, and finances together define the amount of work that can be done on a project. If all of these are fixed, there is very little we can do about them. If one resource is so important that we are willing to sacrifice another to it, there are a few options.

If we have fixed resources, we can:

  • Lower the testing coverage
  • Increase the testers' qualifications
  • Eliminate downtime and increase effectiveness
    • A better work environment, a secondary monitor
    • Process management and regular optimization
    • Creating work procedures
  • Revise the approach to automation
    • Avoid introducing new automation
    • Support automation maintenance
    • Support scalable automation

If we have flexible resources, we can:

  • Move the due date
  • Add more testers to the team
  • Add automation professionals for:
    • Functionality testing
    • Design testing
    • Performance testing

2. Choosing configurations

Generally, we first determine which configurations are most relevant to us and which we would ideally want to cover. Then we narrow that set down by removing:

  1. Those that cannot occur
  2. Those that are not supported by our app
  3. Those that we can merge with others
  4. Those that are statistically irrelevant
  5. Those that do not fit into the project's budget

Let's say that we're choosing testing configurations on a project, and the analysis tells us we should test on the Windows 10, Mac OS, and Linux operating systems, with the Chrome, Firefox, IE, and Safari browsers, in resolutions of 480p, 720p, 1080p, and 1440p. From that we can create 32 valid and 16 irrelevant configurations (Safari runs only on Mac OS and IE only on Windows, which rules out 16 of the 48 combinations).
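As a sanity check on those numbers, a short enumeration sketch follows. It assumes, as in the example above, that Safari runs only on Mac OS and IE only on Windows 10; everything else is a straight Cartesian product.

```python
from itertools import product

oses = ["Windows 10", "Mac OS", "Linux"]
browsers = ["Chrome", "Firefox", "IE", "Safari"]
resolutions = ["480p", "720p", "1080p", "1440p"]

def is_valid(os_, browser):
    """Filter out OS-browser combinations that cannot occur."""
    if browser == "Safari":
        return os_ == "Mac OS"
    if browser == "IE":
        return os_ == "Windows 10"
    return True

all_configs = list(product(oses, browsers, resolutions))
valid = [c for c in all_configs if is_valid(c[0], c[1])]

print(len(valid), "valid,", len(all_configs) - len(valid), "irrelevant")
# -> 32 valid, 16 irrelevant
```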

Running every test on all 32 configurations is unrealistic for most projects, so we need to lower that number. Using the steps mentioned above, we can lower the number of configurations to 16, which is still too many. What are our options?

  • Further lower the number of configurations
  • Test only some configurations in each run, so that by the end of testing all configurations have been covered (see the rotation sketch after this list)
  • Cover the lower-priority tests in fewer configurations
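A hedged sketch of the rotation idea from the second option: slice the configuration list across several runs so that every configuration is exercised by the end. The run count and configuration names here are made up for illustration.

```python
configs = ["Win-Chrome", "Win-Firefox", "Win-IE", "Mac-Safari",
           "Mac-Chrome", "Mac-Firefox", "Linux-Chrome", "Linux-Firefox"]
runs = 4  # e.g. four regression runs before a release
per_run = -(-len(configs) // runs)  # ceiling division

for run in range(runs):
    # Each run covers a different slice; together they cover everything.
    batch = configs[run * per_run:(run + 1) * per_run]
    print(f"run {run + 1}: {', '.join(batch)}")
```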

2.1. Configurations using the Pairwise method

This is an approach where we reduce the number of configurations while still covering every combination of two parameter values. Suppose that, in the situation described above, we reduced the resolutions to 720p and 1080p, which leaves us with 16 rational configurations (Table 1: Rational configurations).

Table 1: Rational configurations

Configuration | Operating System | Browser | Resolution
--------------|------------------|---------|-----------
1             | Windows          | Chrome  | 720p
2             | Windows          | Chrome  | 1080p
3             | Windows          | Firefox | 720p
4             | Windows          | Firefox | 1080p
5             | Windows          | IE      | 720p
6             | Windows          | IE      | 1080p
7             | Mac OS           | Safari  | 720p
8             | Mac OS           | Safari  | 1080p
9             | Mac OS           | Chrome  | 720p
10            | Mac OS           | Chrome  | 1080p
11            | Mac OS           | Firefox | 720p
12            | Mac OS           | Firefox | 1080p
13            | Linux            | Chrome  | 720p
14            | Linux            | Chrome  | 1080p
15            | Linux            | Firefox | 720p
16            | Linux            | Firefox | 1080p

We optimize the table by removing repeating pairs of parameter values. At the end of the process, each pair, such as Windows – Chrome or Chrome – 1080p, will ideally appear in exactly one configuration.

In practice it is unlikely that we can create an ideal set of configurations without any repetition. The resulting reference set is shown in Table 2: Pairwise configuration combinations.

Table 2: Pairwise configuration combinations

Configuration | Operating System | Browser | Resolution
--------------|------------------|---------|-----------
1             | Windows          | Chrome  | 720p
2             | Windows          | Firefox | 1080p
3             | Windows          | IE      | 720p
4             | Mac OS           | Safari  | 1080p
5             | Mac OS           | Chrome  | 1080p
6             | Mac OS           | Firefox | 720p
7             | Linux            | Chrome  | 720p
8             | Linux            | Firefox | 1080p

The Pairwise method is somewhat more complicated than this simple example shows, and its real value lies in practical application to specific cases. That is why we refer interested readers to the Practical Test Analysis course, where we cover this method among others.
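For the curious, the core pair-covering idea can be sketched with a simple greedy loop: repeatedly pick the configuration that covers the most parameter pairs not yet seen. This is only an illustration, not the full method from the course, and it may select a slightly different (fully pair-covering) set than Table 2.

```python
from itertools import combinations

OSES = ["Windows", "Mac OS", "Linux"]
BROWSERS = {
    "Windows": ["Chrome", "Firefox", "IE"],
    "Mac OS": ["Chrome", "Firefox", "Safari"],
    "Linux": ["Chrome", "Firefox"],
}
RESOLUTIONS = ["720p", "1080p"]

# The 16 rational configurations from Table 1.
configs = [(os_, br, res)
           for os_ in OSES
           for br in BROWSERS[os_]
           for res in RESOLUTIONS]

def pairs(config):
    """All two-parameter value pairs contained in one configuration."""
    return set(combinations(config, 2))

uncovered = set().union(*(pairs(c) for c in configs))
chosen = []
while uncovered:
    # Greedily take the configuration covering the most uncovered pairs.
    best = max(configs, key=lambda c: len(pairs(c) & uncovered))
    chosen.append(best)
    uncovered -= pairs(best)

for i, (os_, br, res) in enumerate(chosen, 1):
    print(i, os_, br, res)
```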

2.2. Assigning configurations to tests

We usually assign configurations to tests only once we know how much time we have for testing and how long one test takes to complete. When we know that we want to run 800 tests but only have time for 300 of them, we must start eliminating. Here, however, we are not eliminating the configurations themselves but their assignments to individual tests.

Generally, we use two approaches: assigning based on priority, or based on passing through the application. Assigning based on priority connects the most important tests with the most important configurations and focuses primarily on those. The remaining tests, in descending order of priority, are tested in fewer configurations. The tests with the lowest priority may be tested in only a single configuration or eliminated completely.

Assigning by priority, however, often focuses too heavily on a small portion of the application. We pass through the same screen many times in all configurations, while other screens may not be visited even once.

That is why we often choose a smaller set of tests that passes through all the screens of the app, and we run that set in all configurations. If necessary, similarly to the Pairwise method, we split the configurations across the alternative paths through given screens. That way we ensure that every screen has been tested in the chosen configurations, and we gain more time for thorough testing of important functionality and processes.
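A minimal sketch of the priority-based assignment described above. The test names, priorities, and configuration names are invented for illustration; the only point is the rule that lower priorities get fewer configurations.

```python
configs = ["Win-Chrome-720p", "Mac-Safari-1080p", "Linux-Firefox-1080p"]

tests = [
    ("login",        1),  # priority 1 = highest
    ("checkout",     1),
    ("search",       2),
    ("profile edit", 3),
]

def configs_for(priority):
    """Higher-priority tests run on more configurations;
    priority 3 and below run on a single configuration."""
    count = {1: len(configs), 2: 2}.get(priority, 1)
    return configs[:count]

for name, prio in tests:
    for cfg in configs_for(prio):
        print(f"run '{name}' on {cfg}")
```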
