Technical Research and Preparation
I wrote previously about starting on a big engineering problem by identifying the problems through talking and listening to people. That people-based research phase was about improving the odds of a successful outcome by identifying the right problem; the technical research phase expands on that discovery work and moves into identifying the right solution.
The people-based discovery work should yield insights into the problem(s) to be addressed, areas to focus on, and some initial ideas for possible approaches to a solution. Equipped with that information, I find the next useful phase is technical research. By ‘technical research’ I mean exploring or writing code in preparation for developing new work.
My experience of doing technical research has typically fallen into three categories:
- What is the problem we’re trying to solve?
- How could we approach solving it?
- How can we prepare for minimal negative consequences?
What is the problem we’re trying to solve?
Technical research can help gain detailed insight into what the problem is that we’re trying to solve. In my first few months at Fastly, in order to get an understanding of the nature and scope of the problems on the technical side of the API, I set out to analyze the code and documentation. This was no easy feat, as the code spanned many applications in multiple frameworks. I wanted to get a measure of known problems and gain an understanding of unknown ones.
I started with writing some scripts that parsed the application code, detected¹ the methods I was most interested in (API endpoints), and gathered data on certain patterns I was curious to learn more about - partly from what I’d heard when I’d interviewed my colleagues, and partly based on my own experience of tricky areas in API codebases. My scripts identified and collated things like what objects and messages were returned from various endpoints, who was authorized to perform different actions, and if/how those patterns varied within and between the apps.
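As a minimal sketch of what such a script can look like, here is the shape of the idea using whitequark/parser. The file path, and the notion of simply collecting every method definition to filter later, are hypothetical stand-ins, not the actual Fastly code:

```ruby
require 'parser/current'

# Sketch: parse a Ruby source file and collect every method definition.
# In practice you'd filter these down to the endpoint patterns you care
# about and record data on each one.
source = File.read('app/api/services_endpoint.rb') # hypothetical path
ast = Parser::CurrentRuby.parse(source)

method_names = []

walk = lambda do |node|
  next unless node.is_a?(Parser::AST::Node)

  # :def nodes are method definitions; children are [name, args, body]
  method_names << node.children[0] if node.type == :def
  node.children.each { |child| walk.call(child) }
end

walk.call(ast)
puts method_names
```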
I extracted code examples from the documentation and analyzed those too. I wrote a script that made calls to the API from a fake customer account I’d set up, so that I could programmatically compare the coded and documented endpoints. That gave me a measure of the differences between those contexts: responses that varied, inconsistencies in variables, routes, and so on.
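A sketch of that kind of comparison, with a hypothetical host, endpoint, and documented fields standing in for the real ones:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical host and token - stand-ins for the real API and the
# fake customer account.
BASE_URL = 'https://api.example.com'
TOKEN = ENV.fetch('TEST_ACCOUNT_TOKEN')

# Endpoint path => fields shown in the documentation's example response.
documented = {
  '/service' => %w[id name version],
}

documented.each do |path, documented_keys|
  uri = URI("#{BASE_URL}#{path}")
  request = Net::HTTP::Get.new(uri)
  request['Authorization'] = "Bearer #{TOKEN}"

  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    http.request(request)
  end

  # Assumes the endpoint returns a JSON object; arrays would need a tweak.
  actual_keys = JSON.parse(response.body).keys
  undocumented = actual_keys - documented_keys # returned, but not in the docs
  absent       = documented_keys - actual_keys # documented, but not returned

  if undocumented.any? || absent.any?
    puts "#{path}: undocumented #{undocumented.inspect}, missing #{absent.inspect}"
  end
end
```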
My approach to this analysis was fairly hacky, but that’s OK! This isn’t code that goes into production; it’s code to find and present information that I can learn from. It might lay the groundwork for something more long-term and for others to use, like a GitHub App that automatically reads new code, performs some analysis and provides feedback to the author. But to start, when writing code to find your way through code, it’s OK to be messy.
This research yielded concrete information that described the current system and the scope and scale of the problems that needed to be addressed. Having quantitative information here was particularly helpful: it confirmed the problems mentioned in personal anecdotes from that earlier research with people; it indicated the scale of the work ahead; and it helped convince people across the organization that the problems existed and needed to be addressed.
How could we approach solving it?
Now that we have a detailed understanding of the problem we’re trying to solve, we need to figure out how we might go about solving it.
When I’m planning to propose building a new system or functionality, I’ll typically make a spike or two (or three or four) to demonstrate the concept. By ‘spike’ I mean writing just enough code to check how something could work. The purpose is to gain insight, and there can be different motivations. One typical scenario is to learn more about a desired approach and how it would need to be built - so you might enable part of a flow, or produce an example request/response for a new API, and generally identify what the technical challenges would be if the idea progressed. Another useful scenario for a spike is to gain insight into why not to take a particular approach. I’ve done this in the past when a manager suggested a direction but my gut feeling was that it wasn’t the best way to go; I explored the idea through a spike, and with the clearer understanding gained I was able to articulate exactly why it wouldn’t suit. This can be more useful than debating possible approaches on theory alone - having something tangible de-personalizes the arguments. So spikes can be used to prove to yourself that something is a bad idea, as well as a good idea.
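To make this concrete, a whole spike might be little more than the following sketch (a hypothetical endpoint, using the webrick gem, which may need installing on newer Rubies): hardcoded data, no auth, no error handling, just enough to poke at the proposed response shape with curl.

```ruby
require 'json'
require 'webrick'

# Spike: fake out a proposed endpoint just enough to see the shape of
# its request/response cycle. Deliberately throwaway: hardcoded data,
# happy path only.
server = WEBrick::HTTPServer.new(Port: 9292)

server.mount_proc '/v1/widgets' do |_req, res|
  res['Content-Type'] = 'application/json'
  res.body = JSON.generate(widgets: [{ id: 1, name: 'example widget' }])
end

trap('INT') { server.shutdown }
server.start
```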
Another scenario is to look at several approaches, trying to identify the problems with each, so that you gain a better understanding of the trade-offs. Here I’ve found it particularly useful to explore three or more ideas - there’s a conceptual liberation that comes when you move beyond comparing two, which can naturally start biased as the contender and the decoy. When you have multiple options that all seem reasonable at first, exploring each of them via a spike can very quickly uncover challenges and previously unknown trade-offs, in a more palpable way than any theoretical or opinion-based discussion can. This work takes time, but it is a very efficient way to discover what you’re wrong about.
I find spikes very creatively satisfying and fun! The fact that you know from the outset that this code will not go near production is liberating. There might be tests, or not, depending on what exactly feels useful for exploring how something could work. There typically isn’t error handling or implementation for the ‘unhappy’ path.
On the downside, approaching the start of technical research through spikes can also feel scary and intimidating. This is partly because, at this stage, the work ahead may be ambiguous and the direction may feel amorphous. I believe acknowledging and embracing that as part of the job is helpful: get comfortable with the discomfort of not knowing a lot. My belief system around this is influenced by being an endurance athlete. From my athletic world I have developed a trust in process, with a focus on consistent and thorough preparation. The consolidated work on all the controllable parts can then align to produce the successful end goal.
This work can also feel scary in the wider context of your engineering group. There is a risk that someone sees what you’re working on and stomps all over your spike PRs with all the reasons why they think this approach is a terrible idea. Whether this actually happens or not, it is a real fear, experienced even by some of the best engineers I know. Nevertheless, please do share your work. It’s hugely beneficial, not just to a particular project but to the healthy culture of an engineering org, when work is discoverable and people share their journey and workings. Colleagues you don’t even know yet can learn from this. To help mitigate the risk of unhelpful input, you can set boundaries around your spikes: share the work publicly in a Pull Request, but be explicit about what the purpose is, and whether you want any feedback at this stage. For example, a spike PR description might open with a note like this:
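> Spike: exploring whether approach X could work for this problem. This code is throwaway and won’t be merged. I’m not looking for review feedback at this stage, but questions are welcome.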
Once you know just enough, throw the code away. No really, throw it away. Spikes should not go into production. You may need to resist temptation or pressure on this, but spikes aren’t a first version that you’ll iterate on; they deliberately serve a different, specific purpose. Spikes are a temporary means to an end: simply stringing some parts together enough to learn something about how it could work. They work best when the scope is small and focused, for a part of a system rather than the entire system itself. Much like a sculptor might build a study, or sketch, to understand the tension and weight in a particular part of a larger piece, then build the next sketch imbued with an understanding of those forces, having literally touched them.
[Image: a small selection of my spikes made during early technical research for GitHub Apps. Note these PRs were all closed, not merged.]
Partly I do spikes for my own knowledge, to validate or invalidate each approach, and for confidence in how to approach a problem. I have learned that they can also be highly valuable tools for persuading others of the viability of a project. While at GitHub, when I was researching how we might integrate better with third-party data, I knew I had to make a persuasive pitch to get buy-in to proceed. To do that, I wrote a detailed strategy document and backed up my words with code illustrations from spikes that demonstrated how the various dots could be connected. Those spikes turned out to be critically valuable in getting buy-in from company leadership to proceed with the project.
How can we prepare for minimal negative consequences?
Then, assuming we want to proceed with implementing our preferred direction, we might do more technical research to gain detailed knowledge of the possible negative side effects and consequences, and how to mitigate or address them.
At GitHub, way before we started writing code for what became GitHub Apps, we knew we would need to introduce a new type of actor to the system. Up until this point, the codebase had been built around the existence of two types of actor.
We needed to get a better understanding of the impact of introducing a third type of actor, so my teammate Jason and I set about the arduous task of finding out. First, we catalogued every occurrence in the web and API codebases where a method checked the type of actor. We added code to the codebase to keep measuring these callsites², noting if any were added or removed, so we always had an accurate list to work from. Then we analyzed every single callsite to determine two things: what would be the impact here if the assumption of two types of actor changed? And what was the real need here, what did the code actually need to know? Where possible, one by one, we replaced any call where the assumptions could be problematic, or where we could replace a generic question with a new, more intention-revealing method. For example, instead of asking `actor.is_a_human?` we might ask `actor.can_walk_on_two_legs?`.
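A sketch of what that kind of replacement can look like (class names hypothetical), including the sort of side-by-side measurement github/scientist enables: run the old check as the control and the new intention-revealing method as the candidate, so mismatches surface before the new behavior is relied upon.

```ruby
require 'scientist'

# Each actor type answers the intention-revealing question for itself,
# so a third type of actor can be added without touching call sites.
class User
  def can_walk_on_two_legs?
    true
  end
end

class Bot
  def can_walk_on_two_legs?
    false
  end
end

class ActorPolicy
  include Scientist

  def initialize(actor)
    @actor = actor
  end

  # scientist runs both blocks, compares the results, and returns the
  # control value, so production behavior is unchanged while we measure.
  def bipedal?
    science 'actor-bipedal-check' do |experiment|
      experiment.use { !@actor.is_a?(Bot) }           # old two-actor assumption
      experiment.try { @actor.can_walk_on_two_legs? } # new question
    end
  end
end

puts ActorPolicy.new(User.new).bipedal? # => true
```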
That work took several months and was fairly monotonous. But our meticulous technical research gave us the confidence to add new functionality cleanly, on a solidly prepared work surface, which doubtless saved us and our colleagues from being paged for exceptions later.
There are two main layers to this work: the business or product problem you are considering, and then the actual changes to get there. The first two phases described here, carrying out technical research to explore what the problem is and how you might approach it, give you the knowledge and understanding to plan and guide development. Careful and systematic technical research, as described in the third phase, creates the space and confidence to implement the solution successfully.
This work is time-consuming, it can be tiresome, and it is likely to involve spreadsheets. There’s something comforting in the fact that there’s nothing magical here, just solid preparation. However, it is invaluable in helping build the most robust and appropriate solutions, and it saves a great deal of time and effort by steering our work in a good direction early on. The cost of change is at its lowest during the research phase.
For me, the bonus of this work has also been in building credibility and gaining meaningful influence within an organization, which enables me to do more of the work that I love.
Thanks to Andy, Jason and Katrina for reading a draft of this.
1. For the Ruby applications, I used whitequark/parser for this - to parse code and traverse the AST.
2. We used github/scientist during this work, a Ruby library for carefully refactoring critical paths.