If you’ve ever managed or been part of a software project, chances are you’ve had your fair share of stakeholders to manage and dependencies on other teams and on your tech stack. Maybe limitations in your deployment process introduced delays and inefficiencies for your team, or you needed to get team buy-in on your roadmap.
Abuna Demoz, a seasoned Senior Software Engineering Manager, has seen it all - from building and growing startups to a stint at AWS to now leading three engineering teams. These experiences have shaped how he manages projects and stakeholders, and how he grows an engineering team.
In this week's installment of our expert series, Software Development 2.0, Abuna shares his insights on managing project stakeholders and reducing dependencies, along with some pearls of wisdom on hiring for team fit.
We’re all located in one office, since we want to avoid having teams spread out across different sites. We feel that remote distribution adds a lot of friction and overhead, increases churn within the team, and makes communication more difficult.
For our regular program updates, we usually depend on newsletters. These newsletters are typically a combination of automation to generate some of the data and time spent actually crafting the narrative and the message.
It normally goes out to all the stakeholders, and then at the beginning we tend to identify who we want to add, and do that for different projects. Usually at the end of every planning cycle, all of the stakeholders who are involved in that planning cycle are added to the announce list.
A lot of it is on the manual side. We’ll have two PMs generate program reports and then send them out to the announce list every few weeks to keep everyone updated.
I encourage everyone on my team to try and reduce the number of dependencies, just for the sake of reducing complexity, thus increasing the chance of success and on-time delivery. But, just by nature, it’s really hard to avoid. It’s sort of a tax that we have to live with that slows down engineering velocity.
What I always try and teach my team is that when you take on a new dependency, treat that as a big decision that should go through design review and something you should get feedback on. Dependencies tend to only increase and it’s hard to remove them. They slow our velocity and decrease our reliability.
I like to remind people that your SLA is the product of all of your dependencies’ SLAs. If every one of your dependencies is at three 9s, your SLA is going to be much lower than that. You also have to balance that against not reinventing the wheel and wanting to increase launch velocity. You’re always managing a lot of tradeoffs.
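A quick back-of-the-envelope calculation makes the compounding concrete. The ten-dependency count below is a made-up example, not a figure from the interview:

```python
# Availability compounds multiplicatively across serial dependencies.
# Hypothetical: a service that calls 10 dependencies, each at 99.9% ("three 9s").
per_dependency_sla = 0.999
num_dependencies = 10

# If any one dependency failing fails the request, overall availability
# is the product of the individual availabilities.
effective_sla = per_dependency_sla ** num_dependencies
print(f"Effective availability: {effective_sla:.4f}")  # about 0.9900, i.e. ~99%
```

Ten three-9s dependencies already drag a service down to roughly two 9s, which is why each new dependency deserves design-review scrutiny.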
We sort of manage dates where, for example, if you promise something will get done by the end of Q2, and it’s not going to happen, then you delay it to Q3 and let everyone know. What matters for us is the operational metrics, so once you release something you can see how well it’s running in production, how much scale it can handle, what the error rates are, and so on.
Those are the things that you have to get right, and you’re expected as an engineer to put them at the top of your priority list. If a schedule slips because you’re getting reliability and system health to be very good, that’s a tradeoff you’re encouraged to make. But it sort of depends on your organization, your product, where you’re at in the product life cycle, and what’s critical to your team at the time.
When I was at my startup 10 years ago, we didn’t have to engineer as much for reliability, because we didn’t know if what we built would be used by a lot of people. We didn’t have to build for scale - it was much more important to get our MVP and feature set out the door as fast as we could and in front of customers so we could collect feedback.
Our review process goes through what we call “readability,” which is making sure that when people transfer teams, somebody new can ramp up on the code really easily and it’s not super convoluted or hard to read. We have these ‘readability’ reviewers, and their job is to be the second review on a check-in.
The first review is making sure that the code is functionally correct, and the second review is making sure it’s easy to read and maintainable by somebody who’s not on the team and doesn’t have subject matter expertise. Your readability reviewer can be on your team or on another team, and there’s a process you have to go through in order to become a readability reviewer yourself. It’s a way to double-check how maintainable our code base is.
The ideal state is that your deployment systems are set up so that if you deploy a canary and any of your metrics exceed the allowed error bars, the deployment automatically rolls back. If they don’t, your deployment automatically gets promoted to production after a certain amount of time.
I’ve had teams in the past who’ve been there, so it sort of depends on where you’re at in terms of improving your operational excellence. I think that’s one of those principles that is great for every development team. You should be working toward being able to have automated deployments with automated rollbacks based on what your metrics are telling you. That will ensure health more consistently and more quickly than any manual human process ever could.
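A minimal sketch of that canary gate might look like the following. The thresholds, metric names, and the `run_canary_stage` helper are all hypothetical, chosen only to illustrate the promote-or-rollback decision:

```python
# Sketch of an automated canary gate: compare canary metrics against
# allowed error bars, then either roll back or promote automatically.
# All names and thresholds here are hypothetical.

ERROR_RATE_THRESHOLD = 0.01   # assumed allowed error bar: 1% of requests
LATENCY_P99_MS = 250          # assumed p99 latency budget in milliseconds

def evaluate_canary(metrics: dict) -> bool:
    """Return True if the canary's metrics are inside the allowed error bars."""
    return (metrics["error_rate"] <= ERROR_RATE_THRESHOLD
            and metrics["latency_p99_ms"] <= LATENCY_P99_MS)

def run_canary_stage(metrics: dict) -> str:
    """Decide the canary's fate: promote to production or roll back."""
    if evaluate_canary(metrics):
        return "promote"   # healthy: auto-promote after the soak period
    return "rollback"      # metrics exceeded the error bars: auto-rollback

print(run_canary_stage({"error_rate": 0.002, "latency_p99_ms": 180}))  # promote
print(run_canary_stage({"error_rate": 0.050, "latency_p99_ms": 180}))  # rollback
```

In a real pipeline the metrics would come from your monitoring system and the promote/rollback actions would drive your deployment tooling; the point is that the decision is mechanical, not a human judgment call.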
The first step is test automation - you can break that down into automated unit tests first, then automated integration tests. Then, for every bug you fix, you should add a regression test for that bug. That slowly builds up the framework you need to automate your workflow. Once your testing is at a good point, you can automate builds all the way up to staging or canary.
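The regression-test habit looks like this in practice. Here is a hypothetical example (the pagination bug and function are invented for illustration): a fix for a bug that dropped the last partial page ships together with a test that pins the corrected behavior:

```python
# Hypothetical example: a bug report said pagination dropped the last
# partial page. The fix ships with a regression test so the bug can't
# silently come back in a future change.

def paginate(items, page_size):
    """Split items into pages of at most page_size items each.
    (The original bug stopped short of len(items), dropping the tail.)"""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_paginate_includes_last_partial_page():
    # Regression test for the "dropped last item" bug.
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

def test_paginate_empty_input():
    assert paginate([], 3) == []

test_paginate_includes_last_partial_page()
test_paginate_empty_input()
print("regression tests passed")
```

Each fixed bug adds one more test like this, and over time that accumulated suite is what makes automated promotion to staging or canary trustworthy.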
If some parts are still manual, you could have a human go look at the test results, perform some manual checks, and then click a button to activate the last step of automation. Once you’ve done that for a while, are comfortable with the process, and have added enough test coverage, you can flip the last step: taking the manual review out and fully automating your deployments.
Investing in test automation pays off really quickly - the ROI makes it almost a no-brainer - but you have to make short-term tradeoffs in order to justify doing it.
I think you’re touching on something that’s an absolute truism for software development, which is it’s always better to do lots of small deployments than it is to do one large deployment. With lots of small deployments, you drastically reduce the risk of something going wrong and you drastically increase how easy it is to find a bug when there is an issue with your deployment.
If you have a giant deployment, it’s really hard to reason through all the things that are changing at once, and find out which change is the root cause of the bug.
I’ve been doing this for 19 years, and I can’t think of any situation where it’s better to have one large deployment, than multiple smaller ones.
It’s a combination of a bunch of different things - some of it is top-down and some is bottom-up. There’s what we call horizontals: priorities that we communicate to the customers, have to treat as our highest priority, and then figure out how to get done. Those are top-down. Then we have our own customer-driven roadmap, where we’re talking to customers, figuring out which features are most important, prioritizing those, and putting them into a roadmap. That roadmap feeds into a certain percentage of our allocated time every quarter.
Then the third category is our internal priorities: all of the improvements around efficiency and velocity. These are the platform improvements, the things we’re building that will benefit everyone who uses our platform.
We’re pretty good at sticking to it, but depending on impact, sometimes really important requests come in that don’t fit into your framework, and you need to make adjustments. You’re making a judgment call on what’s the right thing to do for the business.
My planning philosophy is to set priorities and principles that let everybody on the team think the same way about the process. Every once in a while there will be exceptions, and we have to get together to make judgment calls.
Since I'm managing three engineering teams now, I look at my role as a manager in hiring as helping a potential candidate figure out whether my team is the best place for them to advance their career goals. I’ve gone through a change: I used to think my role was to convince somebody that my team was the best place. It’s an important distinction, because one is just a sales job, while the other is seeing whether there’s cultural and values alignment, building trust, and thinking long term. People often pick a job based on where they want to be in two years, but in general, most managers look at the candidates they’re hiring in terms of the job they need done today - the hole that needs to be filled.
There’s usually a mismatch there, and I try to shift my thinking and be more empathetic to what the candidate is looking for. I look at all the relationships with the people on my team as something that will outlast the team, project and maybe even the company.
Optimizing for the short term is bad for everybody, and that’s what I see as my value-add as a manager. I’m transparent with the people I talk to, tell them the good and the bad, and try to give them the most objective assessment possible of whether or not my team is a good place for them to advance their goals and values.
I actually found that it works for landing the best candidates, even though most people assume that a sales job will help you land the best candidates. I started doing this because it aligned with my values, and found that the outcomes are better.
Authenticity is important: act according to your own values, not what you think you should do or what everybody else thinks you should do. Be values-driven, and good things happen, because people can see that authenticity and it will often resonate with them.
In Software Development 2.0, experts in the field of software development share their insights and best practices with the community.
Interested in sharing your experiences? Give us a shout at firstname.lastname@example.org.