Just like modern product development, the best practices for your team also come from learning, iterating and adapting to changing requirements. There are no "best practices" that can be applied to all teams and the way you work will likely be unique to your team's requirements and outcomes.
Adopting a software tool like Jira or GitHub PRs locks a team into the tool's practices. The team's practices become tightly coupled to the tool's prescription.
The danger here is that the tool accidentally becomes "the way we do it here". The original reasons the tool was chosen are forgotten and it's more difficult to experiment with better practices when your team is artificially bounded by a mandatory tool.
Flexibility and change
There might be some months where work is more suited to upfront design and a waterfall-style build, and other months where the work will be iterative because you don't know the right answer at the start. Some months you might want lots of tests, other months none, and so on.
This change is normal and a small cross-functional development team should be regularly adjusting its practices. Flexibility is key. Your culture, engineering principles, team maturity and capabilities all have a part to play in this, but consider that tools might also be restricting potential improvements.
You should encourage regular critical analysis and evaluation of your tool-driven practices.
Jira for every team, every product and every piece of work
Your organisation might use an issue tracker for software product development work. The issue tracker is often Jira these days.
Modern collaborative product development is not issue tracking. Iteration in a cross-functional team doesn't fit a paradigm of "assign issue to developer, developer works on issue, assign issue to next-in-line, developer grabs new issue".
Jira is a fantastic tool for tracking issues in specific scenarios and great for providing structure to new teams. But it shouldn't be a mandatory tool for every team, every product and every piece of work or task.
One of the problems is that Jira is explicit about having a single person assigned to a task at one time. This directly contradicts encouraging a team to be responsible for an outcome rather than an individual. If you're trying to encourage healthy, collaborative engineering this restriction might be providing silent friction for every task your team works on.
A counter-example is GitHub Issues, where you can have multiple assignees on an issue. You can wrangle Jira to do this through groups, but it's fundamentally not designed for it.
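For illustration, GitHub's REST API models multiple assignees as a first-class concept on a single issue. A sketch of the request (the owner, repo, issue number and usernames here are placeholders, not from a real project):

```http
POST /repos/acme/widgets/issues/42/assignees
Accept: application/vnd.github+json

{ "assignees": ["alice", "bob"] }
```

The data model reflects a philosophy: GitHub assumes shared ownership of a piece of work is normal, while Jira's single-assignee field assumes it isn't.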
Next up, most issue trackers are configured to have a task in a discrete state like "In Design", "In Test" or "In Development". These discrete states don't capture the complexity of developing a feature nowadays. A team might learn something new while coding and need some fresh design, or find a bug during acceptance testing and need a bugfix from a developer. Is the issue "in test" or "in development" now?
It's a mess trying to fit this complexity into discrete states. These situations are normal and happen all the time. The team probably already has ways to work around this restriction on task state. It might be chats, Slack threads or post-it notes somewhere.
If reality doesn't match the tool's UI, can you change the UI to match how work is really being done? Should you have that UI at all if it's not helpful? Some issue trackers only provide "To do", "In Progress", "Done" for this reason. "In Progress" just means the team as a whole is working on it.
If you do use an issue tracker for product development then regularly check if the defined process or other UI in the issue tracker is the best solution for the team. Make sure it's not locking the team into unsuitable patterns.
Using a free-form solution like a whiteboard for as long as possible encourages continuing iteration in practices as the team evolves.
GitHub Pull Requests
The GitHub pull request combines two things:
- Peer feedback - GitHub provides a diff tool that a developer can use to provide peer feedback on lines of code
- Merge gate - A gate for the developer to easily mark their code as ready to deploy to production (the merge button)
These days GitHub PRs usually also include CI checks with GitHub Actions.
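Such a check is typically just a small workflow file in the repository. A minimal sketch, assuming a project with a `make test` target (the file name and steps are illustrative):

```yaml
# .github/workflows/ci.yml - hypothetical example
name: CI
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test   # assumed test entry point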
The CI checks and diff tool provided by GitHub are awesome, but a team shouldn't assume that the asynchronous GitHub PR is the best solution for their code changes.
The PR was developed for dispersed open-source code contributions. A product team might have other engineers, designers, domain experts or QA experts sitting in the same room or a quick zoom call away for code feedback.
Consider all the other potential peer reviews that are possible when you have direct access to the expertise in your team.
- If you work alongside the potential reviewer, would it be better to screen share, walk through the change together in an IDE and run the code?
- Discuss the change's risk with a QA expert and work together through all the automated tests required to verify the change and prevent future regressions.
- If you have a UI change, show the product manager, the customer and a designer and incorporate feedback before merging.
- Consult DevOps if you're running a gnarly DB migration before applying it to a shared environment.
These are mostly synchronous but all of them should reduce the risk of rework and mistakes post-merge.
The asynchronous PR might be the best tool for the team, but it's OK to be critical about it. The person merging the code change should be responsible for deciding on the peer review necessary for their change, because they are deploying the code to production and hopefully running it there too.
Make sure that the GitHub PR tool isn't determining the team's pre-merge practices.
Critically assess your tools regularly
You have to be vigilant that your software tools don't simply become your practices and restrict improvements.
In product development teams we usually have fantastic systems for iterating to improve product outcomes. We should apply those same principles to our process improvement!
- Identify your desired outcomes
- Forget about the existing tools and practices
- Work with the team on how to get to those outcomes
- Iterate and keep learning
For example:
- Have you tried to stop using a tool for a few weeks and see what processes or practices people invent to get stuff done? Is it different to the tool's prescription? Are the outcomes better?
- Have you tried to use a tool with a different methodology?
- Can you reconfigure the current tool to suit changes like team size, team composition or product direction?
Wrapping up
Your practices should adjust as your teams evolve and desired outcomes change. You should encourage and enable that adaptation.
Make sure a team knows they can critically analyse their use of software tools as long as the outcomes achieved improve. That permission isn't always obvious: when everyone else is using a tool, it's easy to assume it's mandatory.
Software tools are obviously helpful. But you should regularly question if these tools are artificially limiting better outcomes for your team.