Feeling safe in your environment is a prerequisite for a happy and productive team. I’ll use a software team as an example, but this applies to any team.
Maslow’s hierarchy of needs is a structured pyramid of human needs.
Notice that personal security lies within the safety needs, just above the physiological needs. If we don’t satisfy those lower-level needs, how can we unleash the ones above, where creativity, innovation, and productivity can happen? Studies show that psychological safety allows for moderate risk-taking, speaking your mind, creativity, and sticking your neck out without fear of having it cut off.
According to Wikipedia, “psychological safety is being able to show and employ one’s self without fear of negative consequences of self-image, status or career”. If you’re living in fear, whether of breaking a system or of being judged, you’ll refrain from challenging the status quo or even from participating. You may even resort to finding ways to hide those fears.
If you don’t understand something but don’t say so out of fear, how unproductive is that? Ideally, you want to work at a place where it’s fine to show your weaknesses. Saying “I don’t know” should be a normal thing.
Fear and anxiety are always counterproductive. Taking risks is a requirement for evolving, and making errors along the way is natural. If errors are part of life and work, we’d better learn to embrace them.
We learn from failure, not from success. Bram Stoker
The team should be mature enough to see mistakes as part of the learning process. You should feel safe to make mistakes (no guilt), regardless of your seniority level. Accepting errors is mostly about culture. It boils down to the people who make up the company and the team. With that in place, there’s autonomy to implement practices within the team’s ways of working and to use technology in strategic ways (e.g. automation). Let’s explore these topics further.
Here, we include culture, practices, and methodology topics. There are too many topics to cover, so I’ll just give some relevant examples:
- Proper welcome: you don’t need to throw a party when a new team member joins, but make sure you prepare a few onboarding sessions. Pair programming can be a powerful welcome tool, providing a sense of security and belonging.
- Focus: productivity improves and risk decreases when you’re not constantly switching contexts. Ideally, you should work on a single project/product at a time for a given period. Stress and errors increase with heavy context switching.
- No finger-pointing and gatekeeping: a team should be a cohesive group of people who help each other accomplish a common goal. Competition is the opposite of that. Don’t make someone feel bad for not knowing something, regardless of their experience. Use pair programming as a tool to foster learning and narrow know-how gaps.
- Don’t be a lone wolf. Resort to practices such as code reviews and pair programming to increase quality and know your colleagues. These practices reduce the fear of the unknown because we get the team’s feedback about us and our work.
- Feedback culture: consider regular retrospectives, speedbacks, team health checks, and ad-hoc 1-on-1 feedback sessions so the team can increase trust in each other and continuously improve.
- Experiments: one way to reduce resistance to change is to run small experiments inspired by the scientific method: based on a hypothesis, propose an experiment, analyze the results, and act on them. This applies to anything, from tweaking the technology to adapting the methodology. It reduces the fear of commitment because we can try things out first. It’s also a way to make big changes through small steps, adapting along the way.
- Get to know each other: in my experience, trust is the number one factor for a functional team. One shortcut for that is doing some get-together activities.
- Lean approach: do the minimum needed to have something useful; get early and frequent user feedback.
- Gradual approach: reduce risk by delivering small user-driven stories; do not create long-standing branches with big changes; make frequent (and green) commits.
- Test-driven development: coupled with a CI/CD system, in my case, it provided lots of emotional safety because I knew it was hard to break things since they had tests and those tests were run very frequently. Also, I almost stopped using the debugger because the development was much more gradual with fewer surprises.
- Continuous delivery: consider swapping from releases to continuous delivery for a constant value stream and to reduce the risks inherent to big bang (i.e. waterfall) releases.
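The all-or-nothing idea behind continuous delivery can be sketched as a tiny pipeline script. This is a minimal sketch: `run_tests` and `deploy` are placeholders for your project’s real commands, not a specific CI system’s API.

```shell
#!/bin/sh
# Minimal all-or-nothing deployment gate (sketch).
# run_tests and deploy are placeholders for your project's real commands.

run_tests() {
  true  # e.g. ./test.sh or `make test`, exiting non-zero on any failure
}

deploy() {
  echo "deploying green build"
}

if run_tests; then
  deploy
else
  echo "build is red: not deploying" >&2
  exit 1
fi
```

The point is that deployment is never a separate, optional decision; a red build simply cannot reach production.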
Since we’re talking about software development, technology is surely fundamental in fear and risk management. Jakob Nielsen proposed the 10 Usability Heuristics for User Interface Design in 1994 — they’re a landmark in usability. Some of them serve as good mnemonics when adapted to the use of technology in a software team.
Visibility of system status
The design should always keep users informed about what is going on, through appropriate feedback within a reasonable amount of time.
Knowing the current system status is essential for dealing with anxiety. How can you manage a system if you don’t know its status?
A CI/CD dashboard can easily summarize the current build status. Some teams have it always displayed on a TV set in the team space.
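As a toy illustration, a function that turns a CI server’s build payload into a green/red summary suitable for such a dashboard. The JSON shape (a Jenkins-style `result` field) and the endpoint in the comment are illustrative assumptions; adapt them to your CI server.

```shell
# Summarize a CI build payload as green/red for a dashboard (sketch).
# The "result" field is Jenkins-style; adjust to your CI server's payload.
build_status() {
  # reads a JSON payload on stdin and prints "green" or "red"
  if grep -q '"result" *: *"SUCCESS"'; then
    echo green
  else
    echo red
  fi
}

# Usage (endpoint is illustrative):
#   curl -s "$CI_SERVER/job/myapp/lastBuild/api/json" | build_status
```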
When a problem occurs in production, you also want to know what’s happening. For that, there’s the concept of observability, which means that one can determine the behavior of the entire system from its outputs.
Observability is a superset of monitoring. It provides not only high-level overviews of the system’s health but also highly granular insights into the implicit failure modes of the system. In addition, an observable system furnishes ample context about its inner workings, unlocking the ability to uncover deeper, systemic issues. Distributed Systems Observability
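As a toy illustration of determining behavior from outputs, a shell function that emits one structured JSON log line per event, which downstream tooling can then aggregate. The field names are my own choice, and this naive `printf` does not escape quotes in messages.

```shell
# Emit one structured JSON log line per event (sketch).
# Field names are illustrative; this naive version doesn't escape quotes.
log_event() {
  level="$1"
  message="$2"
  printf '{"ts":"%s","level":"%s","msg":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$level" "$message"
}

log_event info "payment accepted"
log_event error "payment gateway timeout"
```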
Error prevention
Good error messages are important, but the best designs carefully prevent problems from occurring in the first place. Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action.
If we apply this rule to software development, we’re talking about building safety nets to prevent problems in the first place. We should accept that we, as humans, make mistakes. Automating repetitive and error-prone tasks is essential because it lets machines do what they’re good at. Let’s go through the most common examples:
- Define an adequate testing strategy for your needs. From unit testing to end-to-end testing, all have their role in the creation of safety nets — one of the goals of automated testing.
- Run the test suite locally often, at least before pushing code to source control. The local testing environment should be as close as possible to a realistic one (Docker can help a lot with that).
- Fast feedback loop and automated build system: configure the CI/CD pipeline with care. It should trigger a build for every push, running the tests first; if a single test fails, the system is not deployed to production; if all the tests pass, you have a green build and the system is deployed. It’s all or nothing.
- Stop messing with the production database. Instead, create command-line and/or graphical tools for the support team and the developers. Besides being much safer, these guarantee you always run your domain logic when updating live data.
- Create automated scripts for frequent developer tasks (e.g. test.sh). This reduces the likelihood of forgetting certain agreed-upon steps. Consider git hooks for some of them.
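The test.sh idea pairs naturally with a git hook. Below is a sketch of a pre-push hook (installed as .git/hooks/pre-push and made executable); it assumes the repository has a ./test.sh that exits non-zero on failure, and it lets the push through when no such script exists.

```shell
#!/bin/sh
# Sketch of a git pre-push hook (install as .git/hooks/pre-push, executable).
# Assumes the project has a ./test.sh that exits non-zero on failure;
# when no test.sh exists, the hook lets the push through.
pre_push() {
  if [ -x ./test.sh ] && ! ./test.sh; then
    echo "pre-push: tests failed, push aborted" >&2
    return 1
  fi
  return 0
}

pre_push || exit 1
```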
As a rule of thumb, consider automation for any repetitive or error-prone task. Still, be aware of the cost/benefit of the dev tools you create: the cost of implementation and maintenance versus the benefit they will bring in the long run.
Help users recognize, diagnose, and recover from errors
Error messages should be expressed in plain language (no error codes), precisely indicate the problem, and constructively suggest a solution.
If you can’t prevent the error in the first place, it should be easy to recover from it. The obvious example here is that it should be easy to revert a change that generated an issue: if something goes wrong, going back should be as simple as reverting the commit, pushing it, and waiting for another automated build (hopefully fast).
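The revert-and-push step can live in a small helper so nobody has to remember it under pressure. The git commands are standard; the `rollback` wrapper name is my own.

```shell
#!/bin/sh
# rollback: revert a bad commit and push, letting the normal automated
# build redeploy the previous behavior. The wrapper name is illustrative.
rollback() {
  sha="$1"
  if [ -z "$sha" ]; then
    echo "usage: rollback <commit-sha>" >&2
    return 1
  fi
  git revert --no-edit "$sha" &&  # new commit that undoes the change
    git push                      # triggers another automated build
}
```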
As with error prevention, developers should build tools to help them in these situations, namely CLIs and GUIs to further inspect the problem.
If a serious problem happens, talk openly about it and apply post-mortem practices. Keep in mind that if it happened, it’s because there were no mechanisms in place to prevent it or recover from it; don’t blame people or make them feel bad about it. Learn from errors and act on them. However, be careful not to act on the symptoms: find the root cause and work on that.