How to rescue an inherited product after a shipwreck
An inherited project is always a challenge. An inherited project with complex business logic and no documentation is double the trouble. Now imagine that there is nobody on the customer's side who is familiar with the business logic and the needs of the end users.
Our project was exactly like that. Obviously, not everything had gone smoothly: within a few months all the responsible people, and then the entire development team, had changed. That is how we got this app. We found ourselves in the position of Robinson Crusoe on an uninhabited island after a shipwreck.
We set ourselves the following tasks:
- to get into the implementation details and business logic as soon as possible;
- to start developing qualitatively new features and fixing old issues;
- to earn the customer's loyalty.
Introduction
Robinson started by examining the island and collecting the objects thrown ashore.
We did the same thing and found out that:
- We got a fairly large project with complex logic;
- The project was connected with people's life and health, which meant mistakes could cost a lot;
- The customer at that time could provide only general information, and eliciting the details took a long time;
- We couldn't delay getting involved in the project; a delay in such conditions would have been disastrous for the team.
What we managed to gather on the shore:
The project's source code, though not all of it: part of it, apparently, went to the bottom of the ocean during the storm, along with all the documentation. And a couple of test servers provided by the customer.
And ... actually, nothing else.
Robinson, after spending a couple of days on the beach, built himself a hut. We also started with the infrastructure: we brought the tracker, the wiki and the build server (Jira + Confluence + Bamboo) back up, and on the build server we set up deployment to staging. And then we ran into the first problem.
"We are not used to it"
The customer wasn't used to working with task trackers. Tasks were set via Skype and email, and the details were discussed the same way. This wasn't very convenient and could lead to information loss. What's more, agreements reached in chats weren't always written down.
We diligently tried to transfer everything to Jira, but that clearly wasn't enough. We had to teach the customer's representatives to use the tool: we constantly asked and reminded our client to create tasks in the task tracker and attach full specifications. To be honest, we couldn't have done it without our customer's understanding and willingness to build up the processes.
Documentation restoration
It isn't easy, but it is doable. We followed four principles to make it happen:
Exploring the app's logic the way Robinson explored the island: by free search. You won't get all the details, but you will learn the basic features.
Documenting new features is mandatory. Each new feature got at least three documents: a functionality description, an article from the developer with implementation details, and test cases.
Communication with developers within the team. They helped restore the details of the logic by analyzing the code, and it is best to set this work up as tasks in the tracker. Developers don't like being distracted from the current task, but a separate analysis task can be done between the main ones. The results of the research must be documented; it is also convenient because when the customer asks how some piece of logic works, we already have a ready answer.
Communication with the customer: demonstrating results and identifying needs. Although the customer at that time was not familiar with the application, they were still the only ones who knew the needs of the end users. It was a valuable data source, though not always a fast one. Sometimes we even requested video recordings of end users at work.
Infrastructure recovery
When the documentation problem was gradually being brought under control, the test infrastructure problem arose.
The items Robinson found after the shipwreck were of little use. The staging and build servers turned out to be insufficient: a build took about an hour, a deploy took 15-30 minutes, and both often failed due to infrastructure problems. Half of a QA engineer's working time was spent preparing the test environment. What's more, the customer often used the second staging server for his own tests and demonstrations, which effectively meant one tester fewer in the team.

We couldn't increase the number of staging servers or scale up the existing capacity. So we settled on the only workable solution at the time: deploying the project locally for each QA engineer. This sped up the QA team's work greatly, and the staging servers were used only for specific cases, such as testing the app's installation.
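A per-QA local deployment only pays off if setting up an environment is a one-command affair. Below is a minimal, hypothetical sketch of such a helper in Python; the commands, script names and port are invented for illustration, not taken from our project. It composes the shell steps and supports a dry run, so the sequence stays reviewable:

```python
# Hypothetical local-deploy helper: builds the ordered list of shell
# commands a QA engineer would run to stand up a local test environment.
# Script names, paths and the port are illustrative.
import subprocess

STEPS = [
    ["git", "pull", "--ff-only"],                  # update sources
    ["./build.sh", "--configuration", "Release"],  # build the app
    ["./db/migrate.sh"],                           # apply DB migrations
    ["./run.sh", "--port", "8080"],                # start the app locally
]

def deploy(dry_run: bool = True) -> list[str]:
    """Return the commands that were (or would be) executed, in order."""
    executed = []
    for cmd in STEPS:
        executed.append(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # stop on the first failure
    return executed

if __name__ == "__main__":
    for line in deploy(dry_run=True):
        print(line)
```

Keeping the step list as data rather than a monolithic script makes it easy to review in a pull request and to reuse the same sequence on a staging server.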
Documentation storage
Having solved the housing issue, Robinson started to store up provisions. Some time later, the problem of storing those provisions appeared. We were in the same position.
The amount of test documentation kept growing. The problem wasn't evident while we had only a few test cases: we kept them in Google Sheets, as our customer used to do. Over time, however, they became really difficult to maintain, and regression testing turned into a disaster. Moving the documentation into a dedicated tool became a must.
We chose Zephyr (a plug-in for Jira) because of its simplicity and close integration with our task tracker. The tool wasn't perfect, but it met our needs: we could now conveniently store test documentation, statistics and regression-test reports, and it became easy to demonstrate test coverage to the customer.
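Before the test cases could live in Zephyr, they had to be pulled out of the spreadsheets in a structured form. A hedged sketch of that intermediate step is below; the column names are invented, and the actual import into Zephyr was a separate step:

```python
# Hypothetical converter: reads test cases exported from Google Sheets
# as CSV and groups the step rows into structured test-case records,
# ready for import into a test-management tool. Column names are invented.
import csv
import io
from collections import OrderedDict

def parse_test_cases(csv_text: str) -> list[dict]:
    """Group rows by 'Case ID'; each case keeps its steps in order."""
    cases = OrderedDict()
    for row in csv.DictReader(io.StringIO(csv_text)):
        case = cases.setdefault(row["Case ID"], {
            "id": row["Case ID"],
            "title": row["Title"],
            "steps": [],
        })
        case["steps"].append({
            "action": row["Step"],
            "expected": row["Expected result"],
        })
    return list(cases.values())

SAMPLE = """Case ID,Title,Step,Expected result
TC-1,Login,Open login page,Login form is shown
TC-1,Login,Submit valid credentials,User lands on the dashboard
TC-2,Logout,Click Logout,User is signed out
"""

if __name__ == "__main__":
    for case in parse_test_cases(SAMPLE):
        print(case["id"], len(case["steps"]))
```

A one-off script like this also doubles as a sanity check: any row with a missing case ID or expected result fails loudly before it pollutes the new storage.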
Meeting customer’s expectations
Robinson learned this from his own experience. Our interpretations of the requirements weren't always right, which is why new functionality didn't always fully meet the users' needs.
We found a way out: regular demonstrations of new features to the customer before the release. This allowed us to get feedback while development was still in progress and to quickly refine the required functionality. The customer, in turn, saw which direction development was taking and could be sure the implementation would be exactly as intended.
Automating the routine
Years went by, and Robinson settled down on the island completely: he took up gardening and cattle farming, and started to feel short of time for all the household chores. So Robinson was pleased to find Friday.
Our project grew, and with it the test coverage and the time needed for regression testing. We ran two-week sprints, with a release and regression testing at the end of each. At some point the regression testing time became excessively long, and the need for automation became obvious.
The inherited code was not suitable for unit test coverage, and integration tests couldn't cover all the cases either. Most of the functionality had to be automated through GUI testing. Our application had both web and desktop parts, so Selenium didn't meet our needs; we had to find a universal tool.
Ranorex became our Friday. It proved to be the optimal solution for us and the customer in terms of price, functionality and integration with C# code. Today 30% of our regression tests are covered by autotests, and we plan to increase coverage to 70%.
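With only part of the suite automated at any given time, the order in which cases get automated matters. One simple heuristic, sketched below in Python purely for illustration (it is not a Ranorex feature, and the numbers are made up), is to rank cases by manual minutes saved per sprint relative to the estimated automation effort:

```python
# Illustrative prioritization heuristic (not taken from Ranorex):
# rank regression cases by manual minutes saved per sprint divided by
# the estimated hours needed to automate them. All figures are invented.

def automation_priority(cases: list[dict]) -> list[dict]:
    """Sort descending by (manual_minutes * runs_per_sprint) / effort_hours."""
    def score(c: dict) -> float:
        return c["manual_minutes"] * c["runs_per_sprint"] / c["effort_hours"]
    return sorted(cases, key=score, reverse=True)

CASES = [
    {"name": "login smoke",     "manual_minutes": 5,  "runs_per_sprint": 10, "effort_hours": 2},
    {"name": "report export",   "manual_minutes": 30, "runs_per_sprint": 2,  "effort_hours": 8},
    {"name": "patient profile", "manual_minutes": 15, "runs_per_sprint": 6,  "effort_hours": 5},
]

if __name__ == "__main__":
    for case in automation_priority(CASES):
        print(case["name"])
```

Ranking like this keeps the backlog honest: the cases that eat the most manual time each sprint, yet are cheap to automate, get covered first.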
Conclusion
Now our project is almost four years old. We are constantly improving the processes and the interaction both within the team and with the customer, and we receive positive feedback from the customer and their users alike. Don't despair even in a difficult situation: calm down, analyze the product, identify the urgent issues and solve them one by one. Progress and improvement won't take long to follow.