Agency Risk

Coordinating a team is difficult enough when everyone on the team has a single Goal. But, people have their own goals, too. Sometimes, the goals harmlessly co-exist with the team's goal, but other times they don't.
This is Agency Risk. This term comes from finance and refers to the situation where you (the "principal") entrust your money to someone (the "agent") in order to invest it, but they don't necessarily have your best interests at heart. They may instead elect to invest the money in ways that help them, or outright steal it.
"This dilemma exists in circumstances where agents are motivated to act in their own best interests, which are contrary to those of their principals, and is an example of moral hazard." - Principal-Agent Problem, Wikipedia
The less visibility you have of the agent's activities, the bigger the risk. However, the whole point of giving the money to the agent was that you would have to spend less time and effort managing it.

Agency Risk clearly includes the behaviour of Bad Actors, but that is a very narrow definition. In software development we're not lending each other money; rather, we are being paid by the project sponsor, so it is the sponsor who assumes Agency Risk by employing us.
As we saw in the previous section on Process Risk, Agency Risk doesn't just apply to people: it can apply to running software or whole teams.
Let's look at some examples of borderline Agency Risk situations, in order to sketch out where the domain of this risk lies.
We can't (shouldn't) expect people on a project to sacrifice their personal lives for the success of the project, right? Except that "Crunch Time" is exactly how some software companies work:
"Game development... requires long working hours and dedication from their employees. Some video game developers (such as Electronic Arts) have been accused of the excessive invocation of "crunch time". "Crunch time" is the point at which the team is thought to be failing to achieve milestones needed to launch a game on schedule. " - Crunch Time, Wikipedia
People taking time off, going to funerals, looking after sick relatives and so on are all forms of Agency Risk, but they should be accepted on the project: they are a necessary Attendant Risk of having staff rather than slaves.
"The one who stays later than the others is a hero. " - Hero Culture, Ward's Wiki
Conversely, Heroes put in more hours and try to rescue projects single-handedly, often cutting corners on things like team communication and process in order to get there.
Sometimes, projects don't get done without heroes. But at other times, the hero has an agenda beyond simply getting the project done:
- A need for control, and for their own vision.
- A preference to work alone.
- A desire for recognition and acclaim from colleagues.
- The job security of being a Key Man.
A team can make use of heroism, but it's a double-edged sword: the hero can become a bottleneck to work getting done, and because they want to solve all the problems themselves, they under-communicate.
When you work with an external consultancy, there is always more Agency Risk than with a direct employee. This is because, as well as your goals and the individual consultant's goals, there are also the consultancy's own goals to consider.
This is a good argument for not using consultancies, but sometimes the technical expertise they bring can outweigh this risk.
Also, try to look for hungry consultancies: if having you as a happy client is valuable to them, they will effectively work at a discount (cheaper, harder, longer or more carefully) as a result.
CV Building is when someone decides that the project needs a dose of "Some Technology X" when, in actual fact, it is either completely unhelpful to the project (incurring large amounts of Complexity Risk) or merely less useful than an alternative.
It's very easy to spot CV Building: look for choices of technology that are incongruently complex compared to the problem they solve, and then challenge them by suggesting a simpler alternative.
Heroes can be useful, but underused project members are a nightmare. The problem is that people who are not fully occupied begin to worry that the team might be better off without them, and wonder whether their jobs are at risk.
Their solution to this is often "busy-work": finding tasks that, at first sight, look useful, and then delivering them in an over-elaborate way (Gold Plating) that keeps them occupied. This leaves you with more Complexity Risk than you had in the first place.
Even if they don't worry about their jobs, people sometimes do this simply to stave off boredom.
"A project, activity or goal pursued as a personal favourite, rather than because it is generally accepted as necessary or important." - Pet Project, Wiktionary
Sometimes, budget-holders have projects they value more than others, without reference to the value placed on them by the business. Perhaps the project has a goal that aligns closely with the budget holder's passions, or it's related to work they were previously responsible for.
Working on a pet project usually means you get lots of attention (and more than enough budget), but due to Map and Territory Risk, it can fall apart very quickly under scrutiny.
"Morale, also known as Esprit de Corps, is the capacity of a group's members to retain belief in an institution or goal, particularly in the face of opposition or hardship." - Morale, Wikipedia
Sometimes, the morale of the team or of individuals within it dips, leading to a lack of motivation. Morale Risk is a kind of Agency Risk because it really means that a team member (or the whole team) isn't committed to the Goal and may decide their efforts are best spent elsewhere. Morale Risk might be caused by:
- External factors: perhaps an employee's dog has died, or they're simply tired of the industry, or not feeling challenged.
- An unachievable goal: if the team don't believe the goal is achievable, they won't commit their full effort to it. This might be due to a difference between the team members and the leader in how they evaluate the risks on the project.
- An unworthy goal: if the goal isn't considered sufficiently worthy, or the team isn't sufficiently valued, motivation will suffer.
- Poor supply: in military science, a second meaning of morale is how well supplied and equipped a unit is. This is a useful reference point for IT projects too: teams that are under-staffed or under-equipped will also lose motivation.
It might seem strange that humans are over-confident: you would have thought that evolution would drive out this trait. Apparently not:
"Now, new computer simulations show that a false sense of optimism, whether when deciding to go to war or investing in a new stock, can often improve your chances of winning." - Evolution of Narcissism, National Geographic
In any case, humans have lots of self-destructive tendencies that haven't been evolved away, and we get by.
Development is a craft, and ideally we'd like developers to take pride in their work. Too little pride means a lack of care, but too much pride is hubris: the belief that you are better than you really are. Who does hubris benefit? Certainly not the team, and not the goal, because hubris blinds the team to hidden risks that they really should have seen.
Although over-confidence might be a useful trait when bargaining with other humans, the thesis of everything so far is that Meeting Reality will punish your over-confidence again and again.
Perhaps it's a little unfair to draw out one human characteristic for attention. After all, we are riddled with biases. There is probably an interesting article to be written about the effects of different biases on the software development and project management processes. (This task is left as an exercise for the reader.)
Agency Risk doesn't just refer to people - it refers to anything which has agency over its own actions.
"Agency is the capacity of an actor to act in a given environment... Agency may either be classified as unconscious, involuntary behaviour, or purposeful, goal directed activity (intentional action). " - Agency, Wikipedia
There is significant Agency Risk in running software at all. Since computer systems follow rules we set for them, we shouldn't be surprised when those rules have exceptions that lead to disaster. For example:
- A process continually writing log files until the disks fill up, crashing the system.
- Bugs causing data to get corrupted, causing financial loss.
- Malware infecting a system, and sending your passwords and data to undesirables.
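To make the first of these concrete, here is a minimal Python sketch of such a runaway process. It is illustrative only: the file name, the failing worker function and the timings are invented for the example, not taken from the text above.

```python
import logging
import time

# Hypothetical worker: file name, behaviour and timings are illustrative.
logging.basicConfig(filename="worker.log", level=logging.DEBUG)

def process_next_item():
    """Stand-in unit of work that fails while a dependency is down."""
    raise ConnectionError("downstream service unavailable")

while True:
    try:
        process_next_item()
    except Exception:
        # "Log every failure and retry" is a sensible rule in the normal case.
        # During a long outage, though, nothing rotates or caps worker.log, so
        # the process keeps writing until the disk fills and takes the whole
        # host down with it - not just this process.
        logging.exception("processing failed, retrying")
        time.sleep(0.1)
```

The software is following its rules to the letter; the rules just have an exception nobody planned for. One common mitigation is to bound the log itself, for example with Python's logging.handlers.RotatingFileHandler and sensible maxBytes / backupCount settings, so that this particular rule can't consume the whole disk.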
Agency Risk covers whole teams, too. It's perfectly possible that a team within an organisation develops Goals that don't align with those of the overall organisation. For example:
- A team introduces excessive Bureaucracy in order to avoid work it doesn't like.
- A team gets obsessed with a particular technology, or their own internal process improvement, at the expense of delivering business value.
- A marginalised team forces their services on other teams in the name of "consistency". (This can happen a lot with "Architecture", "Branding" and "Testing" teams, sometimes for the better, sometimes for the worse.)
Intersecting both the internal and external environments are security concerns.
Interestingly, security is handled in very similar ways at all sorts of levels:
- Walls: defences around the complex system, to protect its parts from the external environment.
- Doors: ways to get in and out of the complex system, possibly with locks.
- Guards: to make sure only the right things go in and out. (i.e. to try and keep out Bad Actors).
- Police: to defend from within the system, against Agency Risk and invaders.
- Subterfuge: hiding, camouflage, disguises, or pretending to be something else.
These work at various levels in our own bodies: our cells have cell walls around them, and cell membranes that act as the guards to allow things in and out. Our bodies have skin to keep the world out, and we have mouths, eyes, pores and so on to allow things in and out. We have an immune system to act as the police.
Our societies work in similar ways: in medieval times, a city would have walls, guards and doors to keep out intruders. Nowadays, we have customs control, borders and passports.
We're waking up to the realisation that our software systems need to work the same way: we have Firewalls to protect our organisations, we lock down ports on servers to ensure there are the minimum number of doors to guard and we police the servers ourselves with monitoring tools and anti-virus software.
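As a rough sketch of the "doors" and "police" parts of that analogy, the following Python example checks which TCP ports a machine is actually listening on and flags any doors you didn't intend to leave open. The host, the expected ports and the port range are assumptions made for the example, not a recommendation.

```python
import socket

EXPECTED_OPEN = {22, 443}          # the doors we intend to guard (illustrative)
CANDIDATE_PORTS = range(1, 1025)   # well-known ports to police

def open_ports(host="127.0.0.1", timeout=0.2):
    """Return the set of TCP ports on `host` that accept a connection."""
    found = set()
    for port in CANDIDATE_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.add(port)
    return found

unexpected = open_ports() - EXPECTED_OPEN
if unexpected:
    print(f"Unexpected doors open: {sorted(unexpected)}")
```

Real monitoring tools do far more than this, but the shape is the same: enumerate the doors, compare them against the ones you meant to guard, and treat anything else as a police matter.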
Related topics to explore from here include:
- Security Risk
- Hacking
- Denial Of Service
- Security, Trust and Complexity
- OWASP
It's also worth considering how much compilers now do for you here: they prevent many kinds of security error before the code even runs, and well-designed libraries do too.

We've looked here at some illustrative examples of Agency Risk. But as we stated at the beginning, Agency Risk at any level comes down to differences of Goals between the different agents, whether they are people, teams or software.
So, having looked at agents individually, it's time to look more closely at Goals, and the Attendant Risks when aligning them amongst multiple agents.
On to Coordination Risk...

