The wall of technical debt

Arpit Mohan - Feb 25 '20 - Dev Community

TL;DR notes from articles I read today.

The wall of technical debt

A method to make technical debt visible and negotiable:

  • Use a physical wall to visualize your tech debt on sticky notes. It is easy to start and maintain, yet it can usefully influence decisions to add, pay back, or ignore technical debt. Pick a central location with high visibility. Make the display dramatic.
  • Decide on some sort of tally mark to represent costs (time or money), so the debt is not merely a matter of opinion. Conversely, if something has no cost and only looks awkward, don't log it as debt.
  • Build a working habit of logging debt as you go, and stay honest about it.
  • Keep your notes short but easy to understand: what made the code difficult to understand, what slowed you down, why a bug was hard to find, what should have been better documented or tested. Estimate the opportunity cost as well as the time to fix each issue. If your team uses an issue tracker, add the ticket ID to the sticky note.
  • Negotiate tradeoffs based on this visualization. Whenever someone needs to add a note or a tally mark, discuss as a team whether it would be faster or cheaper to fix the debt right away. If it would, fix it then and there; if not, add it to the wall. Hand control to managers to decide what to focus on.
  • Beware of starting with a complete debt audit - it can become its own bottleneck as it calls for buy-in and tends to get put off indefinitely.


Full post here, 7 mins read


Scaling to 100k users

  • When you first build an application, the API, DB, and client may all reside on one machine or server. As you scale up, you can split the DB layer out into a managed service.
  • Consider the client as a separate entity from the API as you grow further and build for multiple platforms: web, mobile web, Android, iOS, desktop apps, third-party services, etc.
  • As you grow to about 1000 users, you might add a load balancer in front of the API to allow for horizontal scaling.
  • As serving and uploading static resources starts to overload your servers, at say 10,000 users, move static content to a CDN backed by a cloud storage service, so the API no longer needs to handle that load (see the upload sketch after this list).
  • At around 100,000 users, you might need to scale out the data layer itself, typically a relational database such as PostgreSQL or MySQL.
  • You might also add a cache layer using an in-memory key-value store like Redis or Memcached, so repeated hits to the DB can be served from cached data; cache services are also easier to scale out than DBs themselves (see the cache-aside sketch after this list).
  • Finally, you might split out services to scale them independently - say, a load balancer exclusively for the WebSocket service - partition and shard the DB depending on your workload, and set up monitoring.
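
A minimal sketch of the static-content step, assuming an S3-style bucket and a CDN domain (both names here are hypothetical placeholders) with the CDN configured to use the bucket as its origin:

```python
import mimetypes
import boto3  # pip install boto3

# Hypothetical names: swap in your own bucket and CDN domain.
BUCKET = "my-app-static-assets"
CDN_DOMAIN = "cdn.example.com"

s3 = boto3.client("s3")


def upload_asset(local_path: str, key: str) -> str:
    """Upload a static file to object storage and return its CDN URL.

    The CDN is configured (outside this snippet) to use the bucket as its
    origin, so the API servers never store or serve the file themselves.
    """
    content_type, _ = mimetypes.guess_type(local_path)
    s3.upload_file(
        local_path,
        BUCKET,
        key,
        ExtraArgs={"ContentType": content_type or "application/octet-stream"},
    )
    return f"https://{CDN_DOMAIN}/{key}"
```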
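
And a minimal cache-aside sketch for the cache layer, assuming a local Redis instance and a hypothetical fetch_from_db callable that queries the primary database:

```python
import json
import redis  # pip install redis

# Assumes a Redis server on localhost:6379; adjust for your setup.
cache = redis.Redis(host="localhost", port=6379, db=0)

CACHE_TTL_SECONDS = 300  # expire entries so stale data eventually refreshes


def get_user(user_id, fetch_from_db):
    """Cache-aside read: try Redis first, fall back to the DB on a miss."""
    key = f"user:{user_id}"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no DB round-trip

    user = fetch_from_db(user_id)  # cache miss: hit the DB once
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))
    return user
```

Because entries carry a TTL, stale data ages out on its own, and the cache nodes can be scaled out or swapped (Redis for Memcached) without touching the primary database.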

Full post here, 8 mins read


Get these notes directly in your inbox every weekday by signing up for my newsletter, in.snippets().
