This post was originally published in my blog smartpuffin.com.
I sometimes have this feeling: The Hunch™. It's when I sense there is something wrong with the project, with the requirement, or with the feature, even when everyone else is sure all is well.
When I sense this, I feel like a dog that has picked up a scent. I follow it by asking questions until I get my answer.
Here's how it happens.
Dataset Extension
Our Dev team asked the Data team to collect some new data to extend our existing dataset. We had 1,000 items, and we wanted 5,000 more. We told them how we wanted it prioritised, we agreed on the timeline, and we left reassured.
The first deliverable was ready. They sent us a huge shared spreadsheet with 1,000 rows. We were happy: soon we would have twice as much data as before!
I wrote a script to upload all this new data to our database. I found some small problems in the data, such as columns mixed up, or string values in numerical columns, and the Data team fixed them quickly. Nothing serious.
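Those checks are simple to automate. A minimal sketch of such a pre-upload validation in Python (the column names and rules here are hypothetical, not our actual schema):

```python
import csv

# Hypothetical schema, for illustration only.
EXPECTED_COLUMNS = ["name", "category", "price", "quantity"]
NUMERIC_COLUMNS = {"price", "quantity"}

def validate_rows(path):
    """Return a list of (row_number, problem) found in a spreadsheet export."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        # Catch mixed-up or missing columns before looking at any values.
        if reader.fieldnames != EXPECTED_COLUMNS:
            problems.append((1, f"unexpected columns: {reader.fieldnames}"))
            return problems
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            for col in NUMERIC_COLUMNS:
                try:
                    float(row[col])
                except ValueError:
                    problems.append((i, f"non-numeric value {row[col]!r} in {col}"))
    return problems
```

Running something like this on each new deliverable and sending the problem list back is much faster than discovering the same issues after the upload.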
All was ready. I was (literally!) holding my finger above the Enter key, one press away from uploading the data to the live servers.
But I hesitated. I had The Hunch™. And so I didn't press enter.
I asked the Data team the very last (or so I thought) question:
"Do I understand it right: all of this is new data? When we upload it, we'll have 2,000 items, correct?"
No, they told me. They had decided to run a clean-up project alongside collecting the new data for us. 600 rows in the spreadsheet were updates to existing items; only 400 were new.
It took me a minute to gather my thoughts. The clean-up had never been mentioned to us before. How happy I was that I had asked! Had I uploaded the data, we would have gotten 600 duplicated items.
How, I asked, do I tell from the spreadsheet which items are new and which are existing? Surely you have an ID column for the updates?
No, they told me. There was no such column, and no other way to tell. They simply hadn't realised we would need to know exactly which items had to be updated - until I asked.
Long story short, after much discussion, they added the ID column for us, marked the updated items, and we uploaded all the data successfully.
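With the ID column in place, the upload script could finally branch on it: update the marked rows, create the rest. A minimal sketch of that upsert logic, with an invented dict-backed store standing in for the real database:

```python
def upsert(rows, db):
    """Update rows that carry an existing ID; insert the rest as new items.

    `db` is a dict-like store mapping item ID to its data; `rows` are dicts
    from the spreadsheet, with an empty "id" field for new items.
    (Both structures are illustrative, not our real schema.)
    """
    created, updated = 0, 0
    next_id = max(db, default=0) + 1
    for row in rows:
        if row.get("id"):            # marked as existing: update in place
            db[int(row["id"])] = row
            updated += 1
        else:                        # new item: assign a fresh ID
            db[next_id] = row
            next_id += 1
            created += 1
    return created, updated
```

Without the ID column, every one of the 600 "updates" would have gone down the insert branch - exactly the duplicates we narrowly avoided.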
But had I not had The Hunch™, had I not asked the question, we would have uploaded the bad data, and we would have learnt about it much, much later.
Square Miles Conversion
I saw a feature on our website, where some geo-math was involved. I immediately had The Hunch™: there must be a bug.
It turned out I was right: there was an amusing bug that had been live on our website for 10 years without anyone noticing.
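I won't go into the details of that particular bug. But to give a flavour of how such geo-math bugs look (this snippet is an illustration, not the actual code from our website): areas scale with the square of the linear conversion factor, and it is remarkably easy to forget that.

```python
MILES_TO_KM = 1.609344  # exact by definition

def sq_miles_to_sq_km_buggy(area_sq_miles):
    # Classic mistake: applying the linear factor to an area.
    return area_sq_miles * MILES_TO_KM

def sq_miles_to_sq_km(area_sq_miles):
    # Areas scale with the square of the linear factor.
    return area_sq_miles * MILES_TO_KM ** 2
```

For 100 square miles, the buggy version returns about 161 sq km instead of about 259 - the kind of error that can sit unnoticed for a decade, because both numbers look plausible.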
New API idea
Our team was rebuilding a product. The original product was built some 15 years ago, and evolved chaotically.
My colleagues were in love with a particular idea: the new API was supposed to distinguish between 3 use cases that the old product treated as one.
I was very new to the team, and I wasn't completely sold on the idea. I felt like I didn't understand it enough. I thought it might be too complex for the end users. I estimated that a lot of work was needed to support the split, including categorizing the existing data and supporting backwards compatibility. In short, my first days in the new team were full of The Hunch™.
But I decided that my colleagues, knowing the domain better than me, had thought about all this. I assumed they had estimated the amount of work and accepted it - before I joined them.
They assured me that yes, some work was needed to categorise the existing data into these 3 use cases, but it was rather straightforward. A person from my team would do that, asking 2 more data specialists for help. The results would be ready very soon. And meanwhile, why not go ahead with the implementation?
I voiced my concerns, but started to implement the new API.
A week after, we met to see how the data split was going.
All 3 data specialists who were categorizing the data disagreed on every single data entry.
I asked them some questions, and it became clear that they understood the use cases differently. Moreover, all my own team members - including myself - understood them differently as well.
Seeing what was happening, my team members agreed to postpone the feature until there was more clarity on the use cases.
Had I not seen that the use cases were not thought through well enough, we would have offered a half-baked product to our users, and we wouldn't have been able to explain to them why the use cases were split.
Collaboration between two teams
I supervised implementing a cross-team task.
The problem was that many places used different APIs to fetch the same data. The APIs worked slightly differently and returned different results. The users saw different values and complained.
The task was to clean it all up. Its purpose was twofold:
1) Use one single API in all places;
2) Unify what the users see.
Developer1 implemented the new API calls in three places. For some reason, in two of them they overrode the results and displayed something different based on some condition.
So we still had the same inconsistency as before. This solution sort of complies with purpose 1), but completely defeats 2).
Developer2 replaced the API call in the one place I had pointed out as an example. It turned out the results were overridden further down in the code, based on yet another condition.
Moreover, there was one more place with the old API in Developer2's area of responsibility. They didn't notice it and left it unchanged.
Again, this solution half-complies with 1), but is out of line with 2).
I found out about all this only when I replaced the API call in yet another place myself. While pushing my code, I felt The Hunch™. Because the task was so complex and distributed between several people, I decided to test all the changes at once.
What I saw was that they all still differed from each other.
After all these efforts, customers were still confused by different data on different pages, and the developers' work had to be done twice. And again, we would have noticed much later, had I not decided to double-check.
When I failed
A long time ago, we were implementing a new GIS system. Our old system stored coordinates in a UTM projection, which meant you could work with only a relatively small piece of the map at a time.
In the new system, we wanted to have the map of the whole world.
We made a decision quickly: let's store coordinates in latitude and longitude, and always work with them like that. When we want to display them on the screen, we will project them as needed.
Our next decision was to split the map into tiles to display on the screen. Here, too, we decided to use latitude+longitude. It seemed only logical.
I spent almost a month implementing tiles in latitude+longitude coordinates. Doing math on the sphere is hard, and I made a lot of mistakes. It was taking forever; I just couldn't seem to get it right.
Seeing me struggle, a more senior developer proposed to scrap all that and switch to tiles in flat projected coordinates, using the projection our customers used most.
I agreed reluctantly, since I still thought having "real" spherical coordinates was "right". I deleted the complex code and wrote the new version in just 3 days. It's been working perfectly ever since.
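To show what made the flat-tile approach so much simpler: once coordinates are projected to a plane, finding the tile a point falls into is plain arithmetic. As an illustration (our projection was the one our customers used most, not necessarily this one), here is the standard Web Mercator "slippy map" tile formula:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Tile indices in the standard Web Mercator ("slippy map") scheme.

    Shown purely as an illustration; our system used the projection our
    customers worked with, which wasn't necessarily Web Mercator.
    """
    n = 2 ** zoom                       # tiles per side at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

A handful of lines instead of a month of spherical trigonometry - that was the whole trade-off.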
I didn't have The Hunch™ back then - and, as you can see, the team lost a month because I hadn't thought well enough about the consequences of the decision: how complex the math would be to implement.
How to do it yourself
I see many people not thinking through all aspects of their tasks and decisions - technical, product, business, or data collection ones.
A task, even a small and simple one, often involves much more than it seems. For developers, I wrote an article to help them think about corner cases in advance, while coding. Similar advice applies to all other sorts of tasks.
Try to think about the problem from all angles. Which other people does it involve? Does it involve changing existing stuff, be it code, processes, or data? Does it align smoothly with everything else? Is there a logical flaw?
It is like testing. You list positive cases, negative cases, and corner cases; you also test other classes your change could touch; you ask your users for help; and so on.
It will almost certainly be cheaper and faster to spend some time investigating first than to rush into implementation and fail later.
Knowing what is actually involved will help you with prioritization. If the task is large, you might decide to do a smaller one instead.
Sometimes, after looking at all aspects, you might decide to do the task, flaw and all. You might decide not to care about a certain problem, or to make a trade-off. Still, you will have made an informed decision, not simply gone with the flow.