Tomáš Hák
Test automation specialist
A significant number of companies have recently followed the trend and shifted their development from a waterfall model to agile methodologies, most often Scrum or Kanban. In this article we won't be evaluating whether this decision is actually good or whether something else might work better. There are plenty of articles and videos on that topic, but I'd like to point to one interesting recording of a meeting of testers from Prague, where an imaginary duel between the two approaches took place and even our own Marcel Veselka from Tesena put on a jersey for team waterfall.
Tesena has been providing testing services for many years, and during this time I have experienced many projects with a lot of different approaches to testing in an agile world. I am more than happy to share them with you in the following lines. In general, I would divide my experience into three distinct areas, which I will describe in the following chapters: the personnel approach, the technological approach and the planning approach.
Before we discuss the 3 points mentioned in the introduction in detail, I would like to mention the basic principles of agile development, specifically the Agile Manifesto.
The agile SCRUM methodology defines a set of clear roles. In short, they can be described as follows:
- Product Owner - owns the product vision and prioritizes the backlog
- Scrum Master - facilitates the process and removes impediments
- Development Team - the developers, testers and analysts who build and verify the product
These roles are formed into a self-organizing team (sometimes known as a 'company within a company') that develops the product in short iterations, typically two-week sprints. During the sprint there are certain meetings, called ceremonies in the agile world, which you have probably already heard of:
- sprint planning
- daily stand-up
- backlog refinement (grooming)
- sprint review
- sprint retrospective
Previously, development took place differently than is common today. First of all, releases happened only a few times a year. All phases were scheduled sequentially: the assignment was first used to create an analysis of the entire solution, including all its impacts. Documentation was then prepared while the test team put together test cases. Only once the developers were finishing the application did the testers begin testing and reporting bugs. Individual roles were represented by whole teams of people, which means that, for example, testing was always performed by several people at the same time, led by a test manager who divided the work and communicated with the other teams.
With the transition to agile development, everything has changed. The time from assignment to release of a finished function has been shortened by an order of magnitude. Deployments to production happen several times a week in small increments, and it is impossible to wait a week for manual regression testing. This drives a greater effort to automate at least some of the tests and, ideally, to include them in the continuous integration / continuous deployment (CI/CD) processes. Importantly, dedicated teams of developers, testers and analysts have ceased to exist. Instead, separate cross-functional teams ('companies within the company') were formed, typically with 3-6 developers, one analyst and one or two testers. Each team takes responsibility for its part of the application rather than, as before, for a specific phase of development.

Extensive documentation and up-front analysis have been abandoned. The product owner comes up with an idea for a function he would like to have in the application. The analyst identifies possible impacts on other applications or functions, as well as the data fields that are sent to the backend and any other information needed for development. By the time of grooming, all of this is known and captured in the ticket for the task. Developers add their own information, and the tester must learn as much as possible about the task in order to test it as well as s/he can. A task should not be larger in scope than one sprint's worth of work; if the work is more laborious, it must be split into smaller pieces. It is also recommended to divide the whole story into tasks containing small, separate subtasks.
As already mentioned, it was necessary to think about all phases during planning and grooming, and testing was always part of the workload of the task. The tester had to ask actively during grooming to uncover possible future complications that would, for example, make it impossible to finish all the functionality within the sprint. On our projects it proved very useful to add test subtasks to the main task, e.g. preparation of test data/users, preparation of test cases, repair of automated tests, and finally the manual verification itself. Part of this could be prepared in advance, so that by the time the developers handed the task over to testing, part of the process was already done.
During planning itself, there was the problem of determining the difficulty of a task, i.e. how to score it. On some projects, testing was estimated separately and then added to the developers' estimate, or the two numbers were shared. For me, though, the approach of estimating together with the developers worked best. I considered the task as a whole; over time we also created sample stories worth 1, 2, 3, 5… points, so the tester could determine the difficulty in the same way as a developer. Only if I knew that the testing would involve a lot of work would we, as a team, increase the final number, for example from 5 to 8, to reflect the complexity of testing. On some projects a parallel sprint and backlog just for testing was introduced, but this usually didn't work out well because it made the whole process very confusing.
In an agile world, there are smaller self-organizing teams, and as a result a handful of developers usually share a single tester. There is no large team of testers led by a test manager to collaborate on tasks. The first thing that strikes everyone is certainly the reduced options for substitution: the stand-in is usually an analyst or one of the developers. It is therefore necessary for the tester to maintain quality documentation, for example an overview of test data, that other team members can use in his/her absence. The whole team should regularly review this data and the procedures for testing the application.
If the tester is alone in the team in this way, s/he must be the one who passes on the basic ideas about testing within the team. It is also necessary to set up the sprint process so as to avoid overloading the tester, for example at the end of the sprint. This can be achieved by dividing larger tasks into smaller separate units that the tester can handle gradually. If it is appropriate to add an automated test for a new function, its implementation can be postponed to the beginning of the next sprint, when there is usually less testing work in general.
If it is known in advance that the tester will be absent for 2-3 weeks, it pays off to account for it during planning, so that the tasks in the sprint reflect the reduced testing capacity in a way the developers can handle among themselves. The tester can also attach a checklist of recommended cases to each task, so that nothing is omitted.
On most agile projects there is an effort to automate tests, and there is a large number of tools and frameworks for this in virtually every programming language used today. As testers, we choose them according to the purpose and type of application we want to test. The choice of tool(s) is therefore driven primarily by whether we want to handle only API tests or some combination with a web application, and whether we need only functional tests or would like to test performance as well. In many of the projects where my colleagues from Tesena and I worked together, we also had to take into account which tools and programming language the developers were using. It is important to remember that practically everyone in the team will touch the automation, because if the only tester in the team falls ill or is on vacation, someone must be able to substitute. So if there are only PHP developers in the team, we can successfully use the Codeception framework, or pure PHP with Selenium libraries. We can similarly adapt to Java, JavaScript or Python programmers, as there are many options for each.
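To make the API-versus-web distinction concrete, here is a minimal sketch of a functional API test in Python. The `/api/health` endpoint and its payload are hypothetical, and a tiny stdlib HTTP server stands in for the backend so the sketch is self-contained; in a real project the test would point at the team's test environment.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in backend so the sketch runs anywhere; a real suite would
# target the deployed test environment instead.
class FakeBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *_):  # silence per-request logging
        pass

def api_health_test(base_url: str) -> str:
    """Functional API test: the service answers and reports a healthy status."""
    with urllib.request.urlopen(base_url + "/api/health") as resp:
        assert resp.status == 200
        return json.load(resp)["status"]

# Wire it together against the stand-in backend on a random free port.
server = HTTPServer(("127.0.0.1", 0), FakeBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = api_health_test(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
```

The same check written against the real backend would typically live next to the web-UI tests in whatever framework matches the team's language, as discussed above.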
From my own experience, I can say that developers often have no idea what automated tests do, or only a vague one. When we prepared an hour-long demonstration for them, including walking through the source code, running the tests live and checking the results, in most cases the developers were thrilled and offered to help with writing the tests. This has several advantages. First of all, the tester's hands are partially freed, because the developer can write the basic tests for the developed function themselves. Other scenarios, especially negative test cases, still remain in the tester's hands; the tester should know all the scenarios that may occur and, as far as possible, ensure that everything is covered by the prepared tests. Another advantage is that the tester and the developer can review each other's automated test code. The tester thus has the opportunity to improve code quality, avoid common problems and generally write his or her code in a more readable way. Once developers and testers have managed to work together in this way, the developers themselves will usually start suggesting, for example, how to modify the web application to make it easier to automate. In practice this means things like adding IDs to the elements on the page, or adding JavaScript that signals when the page has finished loading. They can even expose supporting services in the background for automated testing. The tests can then use, for example, a code list of individual elements in different languages, create data/users via the backend API, or verify results directly in the DB. The tests thus become more comprehensive and, to some extent, more stable.
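The "add IDs to elements" suggestion can even be checked mechanically. Below is a small sketch, using only Python's standard library, of the kind of lint a tester might run during code review: it parses an HTML fragment and flags interactive elements that lack an `id` and would therefore need fragile XPath/CSS locators. The page snippet and the set of "interactive" tags are illustrative assumptions.

```python
from html.parser import HTMLParser

# Tags we consider worth locating by a stable id in UI automation.
INTERACTIVE = {"input", "button", "select", "textarea", "a"}

class MissingIdChecker(HTMLParser):
    """Collect interactive elements that have no id attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE and "id" not in dict(attrs):
            self.missing.append(tag)

# Hypothetical fragment of the page under test.
page = """
<form>
  <input id="email" type="text">
  <input type="password">
  <button>Sign in</button>
</form>
"""

checker = MissingIdChecker()
checker.feed(page)
# The password input and the button lack ids; flag them for the developers.
flagged = checker.missing
```

Running this over templates in CI would give developers immediate feedback when a new element arrives without a stable locator.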
Now consider a situation where developers help with writing the automated tests. The tester is still the person who should check their results, typically those of the nightly run, and in case of failures analyze what happened and whether the tests need to be fixed or a new bug reported. The first question is where the tests run. In most cases this is Jenkins (or another CI/CD tool), often a separate instance purely for running tests, with the necessary libraries installed on its nodes. The tester is used to checking the reports there, but for developers it can be just another system to visit and need access to. The solution is to run the tests in the same place where applications are built or deployed to the test environment: deploying a new version can then start the tests automatically. Alternatively, we can split off a set of smoke tests that takes only a short time to run and trigger it directly from the deployment pipeline.
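The smoke-suite split can be sketched as follows. In practice the same idea is usually expressed with test-framework markers (for example pytest's `@pytest.mark.smoke` selected via `pytest -m smoke` in the pipeline); this stand-alone version, with hypothetical test names and placeholder assertions, just illustrates the selection mechanism.

```python
# Registry of (test function, is_smoke) pairs.
SUITE = []

def test(*, smoke=False):
    """Decorator that registers a test and tags it as smoke or full-regression."""
    def register(fn):
        SUITE.append((fn, smoke))
        return fn
    return register

@test(smoke=True)
def test_login_page_loads():
    assert True  # placeholder for a real check

@test(smoke=True)
def test_api_health():
    assert True  # placeholder for a real check

@test()  # full-regression only; too slow to run on every deployment
def test_full_order_workflow():
    assert True  # placeholder for a real check

def run(smoke_only=False):
    """Run either the fast smoke subset or the whole suite; return what ran."""
    selected = [fn for fn, is_smoke in SUITE if is_smoke or not smoke_only]
    for fn in selected:
        fn()
    return [fn.__name__ for fn in selected]

# A deployment pipeline would call run(smoke_only=True) right after deploy,
# while the nightly job runs the full suite with run().
smoke_run = run(smoke_only=True)
```

The key design point is that smoke tests are a tagged subset of one suite, not a separately maintained copy, so they never drift out of sync with the full regression tests.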
No less important than how the tests are run is the format of the results. Automation tools usually offer their own way of representing the result of a run, but such a format is not always suitable, and it is worth thinking about how to present the results so that other team members get an immediate overview of what is happening and what stopped working. I recently published an article on this topic.
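As a minimal sketch of what "bringing the results closer to the team" can mean, the snippet below reshapes raw pass/fail results into a short summary that could, for instance, be posted to the team chat after the nightly run. The result structure and test names are hypothetical.

```python
def summarize(results: dict) -> str:
    """Turn {test_name: passed} into a short, scannable team summary."""
    failed = [name for name, ok in results.items() if not ok]
    total = len(results)
    if not failed:
        return f"Nightly run: all {total} tests passed"
    lines = [f"Nightly run: {len(failed)}/{total} tests FAILED"]
    lines += [f"  - {name}" for name in sorted(failed)]
    return "\n".join(lines)

# Example nightly result set (hypothetical).
summary = summarize({
    "test_login": True,
    "test_checkout": False,
    "test_search": True,
})
```

Whoever glances at the channel sees immediately that checkout broke overnight, without opening the CI tool at all.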
The agile approach in SW development did not just accelerate development itself; testing changed overall as well. Small changes in the application are tested, but at the same time it is necessary to test regressively that a change didn't break another part of the application. For this reason, more emphasis is placed on automation and CI/CD. The differences within the team between developers and testers are shrinking. Everyone still has their own scope, but the new approach allows for mutual cooperation, review and assistance. For the tester, this means adapting to the team, because s/he is usually alone in the team for testing, and will therefore often choose tools that suit the developers as well.
We have already dealt with all of these challenges across our projects at Tesena. If you would like to know more, or have more detailed questions, please feel free to let us know, and we will be delighted to discuss them over a warm cup of coffee.