Codacy had already succeeded in bringing value to developers by integrating the experience into the tools they use every day. This meant developers did not need to log in to Codacy to extract value from the platform.
Engineering managers had shown interest in consuming the data that Codacy provided to developers, and we wanted to explore this.
Winning over managers as a use case would benefit both our acquisition and retention strategies, as more people in the same organization would experience the platform's ongoing value.
I was leading design, from discovery to the visual design and QA. Throughout the process, I collaborated with a product manager and engineers to shape the solution.
Engineering managers were typically the decision-makers when it came to adopting Codacy. We wanted to reach these folks but, at the time, we didn't have role data associated with each customer. So we launched a survey to the whole user base to learn:
→ What's your role and team size?
→ What do you want to accomplish with Codacy?
→ What questions are you looking to answer every morning, every week, and every month?
We got answers from engineering managers and individual contributors across all kinds of contexts. This case study focuses on the managers, but we later used the additional data to improve other features as well.
Based on the insights, we gathered with the squad to brainstorm assumptions and potential ideas.
Assumption 1
Aggregating data can help managers gain visibility across different repositories and teams, which supports their quality-standardization use case.
Assumption 2
Creating attention points can guide managers and developers to what they should focus on next, which supports their technical-debt use case.
We wireframed these assumptions into multiple potential solutions and settled on a couple of variants, testing them with five engineering managers who had answered our initial survey.
The goal of this test was to increase our confidence level in the solution: Does this solve your pain points? How often would you visit Codacy to extract data? What sort of insights and actions would you take?
With the feedback collected from the prototype testing sessions, we decided to dive deeper into one of the options.
We wanted to create a view where managers could consume the quality data they need and give them attention points that would guide them to what needs to be tackled now.
We wanted to unblock discussions in daily meetings and help managers prioritize quality work in their sprints.
One of the main pain points was being unable to compare projects and understand which ones fell below quality standards.
By aggregating data and giving visibility into the health of projects, we were able to empower managers to prioritize and understand where to focus. With Logs, managers could have visibility into the changes that affect each project analysis.
To prevent unhealthy code from affecting their product, we created a list of the most problematic pull requests about to be merged into production, allowing managers to catch problems before they happen or to help a developer who might be struggling.
We also knew that every product has technical debt, but it's hard to gain visibility into it and know where to start cleaning it up. Hotspots were a way to flag quick wins and things that need attention across projects.
After release, we kept paying attention to qualitative and quantitative data. We learned that these use cases were valid to more customers and potential customers.
Project comparison and the pull request list were the most useful features for helping managers navigate the information and take action.
During the development phase, we realized that some of the Hotspots and Logs functionality we had ideated could not be built in a reasonable timeline, so we shipped an MVP instead.
We kept monitoring usage and learned that the technical execution on Hotspots and Logs was harming us: The dashboards were slow to load and the information available was limited.
For managers, Hotspots were still too low-level; they were interested in information we had not built.
When we decided to reduce the scope of both these features, we should have gone back and validated with users what we were going to ship.
We did technical research on how to overcome our constraints and provide the data our users needed. We quickly realized there was no easy solution, and we needed a fix.
Later, we did an iteration to:
→ Fix load times. As we acquire bigger teams, this dashboard must handle data coming from many projects.
→ Remove Hotspots and Logs. The information was limited, the technical solution allowed neither iteration nor scalability, and users were mostly relying on the chart and the pull request list to cover the standardization and technical-debt use cases.
With this, we were able to acquire bigger teams. It became easier to justify the value of Codacy because decision-makers could experience that value themselves.
We kept collecting feedback on the motivations and needs of engineering managers, which ultimately fed our vision: Codacy is now building a standalone product that is tailored for this use case.