Joseph Pelrine

At the intersection of psychology, social complexity, and agility

Category: Agile Adages

More Thoughts on Power

In my last blog post on the different types of power (here), I noted that it is more advantageous to focus on increasing expert and referent power than the other power bases. I’d like to look at this in a bit more detail here.

Power bases may be divided into two categories:

– positional, i.e. situations where a person is in a position to have power
– personal, i.e. situations where a person's power is intrinsic in nature

Reward, coercive, and legitimate power are all positional in nature. To have them, one must be in a position to reward or coerce, or to have legitimate authority. Raven’s later addition, information power, is also positional in nature (Raven, 1965). Personal power bases are, in a way, easier to increase, as the effort required is purely personal and does not always require outside help. That being said, which of the two personal power bases should one focus on increasing?

Benfari et al. (1986) define power quite simply as “the capacity to influence the behaviour of others”. For them, power is value-neutral. Power is also interpersonal; it requires (at least) two individuals, whose power interactions may be reciprocal or one-sided. For individuals, power exists in two aspects: as a motive and as a behaviour, and either aspect may be disturbing to the person(s) on the receiving end.

The need for power exists in all of us to a greater or lesser degree. Simply having control over one’s own life is a form of power. Problems start when the urge for power increases, when the desire for power becomes a motive or driving force for someone’s actions, up to the point where it becomes pathological (e.g., as seen in too many politicians). When power is exercised and manifested as influence, it becomes a behaviour in itself. Even though power may be value-neutral, the response to power actions may be positive (P+) or negative (P-) for the recipient, and will thus change the recipient’s view of the wielder of the power. For example, consider expert power. We have all profited at some time from the advice of an expert, but how often have we experienced someone flaunting his expertise as a know-it-all?

The recipient views the power as negative if he feels a sense of being exploited or manipulated. If the recipient views the power as positive, i.e. when the recipient benefits from the power, he feels support, increased motivation, and an ego boost.

Different types of power have different positive and negative aspects. Reward power is only positive, coercive power only negative. Benfari et al. list 7 types of power – in addition to the 5 classic types described by French and Raven, they include Raven’s information power, and add what they call “affiliation power”. This type of power is similar to the concept of centrality in social network analysis (Bonacich, 1987), and describes the power a person has by virtue of their access to other persons of power, by being affiliated with them. (n.b. Benfari et al. also mention the power of groups. Since this is not an individual power base, we’ll look at it in a later post, but not here).

Power base: explanation, and how it is perceived (P+ positive, P- negative)

Reward: Positive strokes, remuneration, awards, compliments, other symbolic gestures of praise (P+)
Coercion: Physical or psychological injury, verbal and non-verbal put-downs, slights, symbolic gestures of disdain, physical attack, demotion, unwanted transfer, withholding of needed resources (P-)
Legitimate: Management right to control, obligation of others to obey, playing ‘the boss’ and abusing authority (P-). Exercise of leadership based on authority in times of crisis or need (P+)
Referent: Identification based on personal characteristics, sometimes on perception of charisma; or reciprocal identification based on friendship, association, sharing personal information, providing something of value to the other, and on common interests, values, viewpoints and preferences; creation of reciprocal ‘IOUs’ (P+)
Expert: Possession of specialized knowledge valued by others, used to help others, given freely when solicited (P+). Unsolicited expertise, seen as unwarranted intrusion; continual use can create barriers; expertise offered in a condescending manner can be seen as coercive; withholding expertise in times of need (P-)
Information: Access to information that is not public knowledge, because of position or connections; can exist at all levels in the organization, not just at the top; those at the top may know less about what is going on; secretaries and personal assistants to senior executives often have information power, and can often control information flows to and from superiors (P-)
Affiliation: ‘Borrowed’ from an authority source with whom one is associated – executive secretaries and staff assistants act as surrogates for their superiors; acting on the wishes of the superior (P+). Acting in their own self-interest; using negative affiliation power by applying accounting and personnel policies rigidly (P-)

Source: Based on Benfari et al. (1986)

As can easily be seen from the overview above, the only types of power that are considered purely positive are reward and referent power, and of these, only referent power is personal. This provides a good argument for focusing on increasing referent power if you want to be able to help your clients’ teams and co-workers. In my last blog post, I mentioned a number of core competencies to concentrate on for increasing referent power. Here are some other tips from Benfari et al.

  • Get to know the motives, preferences, values and interests of colleagues.
  • Build relationships using shared motives, goals and interests.
  • Build large networks of people and information – make connections between individuals and between individuals and different stakeholders.
  • Respect differences in interests and points of view – don’t attack; invite reciprocity.
  • Give positive strokes, use reward power, confirm others’ competence.
  • Share information and expertise with others.
  • Minimise concerns with status.
  • Develop communication skills – assertiveness, meta-communication, question asking, clarity and rapport.
  • Manage your impression – dress, body language, facial expression, voice tone, etc.
  • Develop understanding of how people tick – e.g. body language, use of language, ‘trigger points’.
  • Develop an understanding of systemic deep dynamics and implicit information channels – know what you are sitting in at any given moment.
  • Demonstrate congruence between espoused values and behaviours.
  • Develop ability to take risks and lead on issues – even if it is lobbying for an idea within a meeting.

Sounds like good advice for anyone wanting to become a better ScrumMaster, doesn’t it?

References

Benfari, R.C. et al. (1986) The Effective Use of Power. Business Horizons, 29, 12.
Bonacich, P. (1987) Power and Centrality: A Family of Measures. The American Journal of Sociology, Vol. 92, No. 5, 1170-1182.
Raven, B.H. (1965) Social influence and power. In Steiner, I.D. & Fishbein, M. (Eds.), Current studies in social psychology. New York: Holt, Rinehart & Winston, pp. 371-381.

Authority and Power

One of the classic sayings in Scrum is that the ScrumMaster has no authority. He cannot tell his team members what to do, or what not to do. In a way, this makes sense. If the ScrumMaster had the authority to tell people what to do, he would take away their opportunity to take responsibility for their actions, to become committed and not just involved. Looking at it differently, by telling team members what to do, he would give them the chance to refuse responsibility for their actions. “It’s not my problem, he told me to do it!” Even though the ScrumMaster has no authority, this does not mean that he has no power.

Power cannot be taken, it can only be given. A person only has power over you if you give them this power over you. Being aware of the ways you give people power over you will help you avoid doing it unintentionally or inadvertently. So, what types of power may a person possess, and which types of power should one best focus on increasing?

In 1959, John French and Bertram Raven wrote a seminal paper on the different types of power (French & Raven, 1959). According to French and Raven, power is defined as the potential ability of one person to influence another person. Thus, power is potential influence, while influence is kinetic power.

In their paper, French and Raven define five different types of power, all of which may vary in their domain, strength and range.

Reward power. You give a person reward power over you when you believe that they can do something good for you or take away something bad. Obviously, if the person actually does do something good for you, his or her reward power over you increases.

Coercive power. You give a person coercive power over you if you believe that the person can do something bad to you, for example, cause you harm or pain. If the person actually does something bad to you, their power over you increases. The mere awareness or threat of coercive power is often enough to enforce compliance. Imagine if you were driving a car and you saw a police officer standing on the corner. Even if he was not looking directly at you, you would tend to drive slower.

Legitimate power. You give a person legitimate power over you if you believe that they have the right to have this power. This right often comes as the result of an implicit or explicit social contract. An example of an implicit social contract would be the contract that parents have with their children (although as every parent knows, this contract must be renegotiated regularly). An explicit social contract would be your work contract, which gives your boss power over you. As one can see, legitimate power contains both reward and coercive components. If you do something good at your job, you may get a bonus. If you do something bad at your job, you may get fired.

Expert power. You give a person expert power over you if you believe they have superior knowledge relevant to the situation and to the task at hand. This power rarely extends outside of the domain of expertise, but the implied transference of expert power into other domains is a technique often used in advertising.

Referent power. Referent power is the most difficult type of power to describe. It is best understood as a type of power that comes from personal integrity and/or from charisma. This is the type of power that Gandhi, Nelson Mandela, or Martin Luther King had. Over time, though, the power these men had became legitimate power, as they were voted into political office.

In his later works, Raven added a sixth type of power, which he termed informational power. Having access to information, and the ability to use this information, can give a person power. As an example, think of Edward Snowden.

Looking at the different types of power, one sees that they fall into two groups – positional power, which one receives as the result of being in a position to have the power, and personal power, which is not dependent upon position but solely upon the person. If you want to increase your power base in order to better help people, which type(s) of power should you focus on increasing?

Being in a position to give a reward or to coerce, or being in a position of having the right to legitimate power, puts one in the position to command others. This is not the type of power a ScrumMaster should use. This is the reason why no one who is in a management position can be a ScrumMaster for his team. It is better to focus on increasing your personal power than on increasing your positional power.

How can you start increasing your personal power? You can increase your expert power by concentrating on your continued professional development. Reading, keeping up to date on new developments in your field, attending trainings, etc., are all ways of increasing your expert power.

Increasing your referent power can be done by focusing on your continued personal development. This is a noble task whether or not you work as a ScrumMaster, since furthering your personal competencies will increase your feeling of well-being. Awartani et al. (2008) define well-being as the realisation of one’s physical, emotional, mental, social and spiritual potential.

Mental or rational refers to that part of life which is primarily related to thinking and cognition, and to the processes of the rational mind, e.g. planning, understanding, focusing, envisioning, abstraction, reflection, evaluation.

Emotional refers to the intrapersonal or inward-looking awareness and processing of feelings, of understanding your feelings, their triggers, and your reaction patterns, of having your emotions under control, and not being “hijacked” by them.

Social refers to the interpersonal or outward-looking awareness and processing of feelings, of understanding how they influence our interactions with others.

Physical refers to those aspects of life related to the physical senses and to sensory experience, to our bodies, and to the material and natural environments. The actions and functions of doing, building, taking apart, detailing, producing, acting, and making practical are included.

Spiritual is not necessarily a religious or esoteric concept, but refers to life, to its meaning and purpose, to beliefs and what one believes in. You are believable for others when they feel that you believe deeply and strongly in something.

Although each of these aspects plays a role, well-being represents a pervasive feeling about oneself, one’s life, and one’s environment. You can start right now and take a first step towards your own well-being. Take a few moments to think about these five areas, about where your strengths and weaknesses are, and about which areas you want to focus on strengthening.

References

Awartani, M. et al. (2008) Developing Instruments to Capture Young People’s Perceptions of how School as a Learning Environment Affects their Well-Being. European Journal of Education, 43, 51-70.
French, J.R.P.Jr. & Raven, B. (1959) The Bases of Social Power. In Studies in Social Power, Ann Arbor, Michigan: Institute for Social Research, University of Michigan, pp. 150-167.

The Volcano Principle

(n.b. Back in 2011, I was asked to write an article for a German magazine for their special issue celebrating the 10 year anniversary of the Agile Manifesto. I wasn’t too happy with the direction their editors pushed the article in, and since a lot has happened since then, here’s a new, English version).

Responding to change

On 14 April 2010, a volcano in Iceland erupted. As we struggled to pronounce “Eyjafjallajökull”, air traffic throughout Europe was paralyzed by clouds of ash. I was lucky enough to be trapped at home, much to my client’s frustration, but others had it worse. Friends of mine had to travel for 48 hours by train to get home from a client’s site, and I even heard of a stranded Swiss Air pilot who missed his own wedding.

I call it the volcano principle. Time and again, things happen that are beyond our control, but that affect us. Things that we have to respond to. We do not know when something will happen; we do not know what will happen. Some of us do not want to admit that such things happen, and are then taken by surprise by the events. Others are Agile.

We value: responding to change over following a plan

As Harrison Owen says, there are no closed systems (see [Owe08]). We never know when and where volcanoes, which we may have never previously encountered, will erupt. It may be that our customers have new needs and wishes. It may be – and it is often the case – that our competitors bring something new onto the market. Or it may be that politicians adopt new laws or regulations that we have to consider. Volcanoes lurk everywhere. Either we react to them, or we are out of the game.

How much are you prepared to throw away?

When I worked with Kent Beck in the early days of the Agile movement, we spent time trying to explain to people what the basic and fundamental ideas behind eXtreme Programming (XP) (see [Bec99]) were. Kent often used the metaphor of driving a car. That was entertaining, not only because Kent could tell some horror stories of his first driving lessons with his father, but also because driving was suitable as a metaphor for many XP ideas, principles, and techniques. Driving is an example of test-driven development – the tests are your eyes. It is an example of cost estimation – how far is it to the destination, a question often answered in time rather than distance. It is also a good example of quickly adapting to the current circumstances – when driving, you are not merely pressing on the accelerator, but always correcting: a little faster or slower, a little to the left, a little to the right. The driving analogy also shows up the issues that arise when you stubbornly follow a plan, or directions. Once there is a detour, a traffic jam or an accident, you’re stuck if all you have are directions. The phrase “responding to change over following a plan” reminds us – indeed, demands of us – that we really focus on what’s important: not that we stubbornly follow a plan, but that we reach our goal. Agility requires of us that we take the following question to heart: what do we want – the implementation of a first specification, or a successful project? The two will never be the same.

We can even go further. One of Kent Beck’s favourite sayings was: “How much are you prepared to throw away?” If we think about driving a car: when we have invested time in developing a plan or getting directions to a far-away place, we will often cling to the plan – even when it is no longer useful or the road is not passable – because otherwise the time we invested would feel wasted. The Agile approach, however, reminds us that there will always be construction sites or volcanoes, so we shouldn’t treat them as exceptions to be processed via change requests. Instead, we should Embrace Change: we should expect from the beginning that changes will take place, and should build our process accordingly.

Where are we now?

But where are we today, ten years after the publication of the Agile Manifesto? Do we really live those values, or do we just go through the motions? My friend Dave Snowden describes the development of new methods in his inimitable way, as follows (see [Sno05]):

• An academic group studies a range of organizations to identify causal linkages between things those organizations do and results that they achieve or fail to achieve, from which they derive a hypothesis that forms a definition of best practice. A popular management book then follows and a new “fad” is born.
• Consultants and IT providers produce industrial-strength recipes based on the new idea, which ideally involve a consultancy process, followed by a technology implementation, and some form of organizational change or cultural alignment with the programme to orient employees to the new goals.
• Managers go through a process based on the recipe to determine a desired end state defined in terms of economic performance, behaviour characteristics, etc. They then determine the current state, identify a series of process steps to achieve that goal, and roll out the programme, promising substantial improvements to their stakeholders, many of whom in the “employee” category are already suffering from substantial ‘initiative fatigue’.
• Some years after the fad has run its course in industry, and its limitations are apparent, the consultants find a lucrative secondary market in applying “industrial best practice” to government clients.

Back when Kent Beck first published his ideas about a way we could develop better software, he was often laughed at. Many people thought that something like this could never work. In this sense, the first edition of “Extreme Programming Explained” was perhaps not the great management book, but it was a call to battle against old methods of software development.

XP is not easy. At the time we joked, for example, that we do pair programming because no one has the discipline to stick to all the XP practices alone. The first consultants and IT providers who worked with XP, like Ken Auer and Joshua Kerievsky in the USA, or Steve Freeman, Tim Mackinnon, and the other founders of London’s eXtreme Tuesday Club, had to go on a journey of discovery, and they allowed their clients to share their experiences rather than selling them recipes or methods.

Then came Scrum

Some causal linkages between the things organizations do and the results that they achieve or fail to achieve were described in a 1998 paper on Scrum (see [Bee00]). The patterns described there refer in turn to an article published in 1986 in the Harvard Business Review (see [Tak86]). The authors have, however, discreetly circumnavigated the problem described in Snowden’s statement, “Patterns have a predictive capacity only within clearly defined ontological boundaries” (see [Sno03]). The domain of patterns as “Best Practice”, that can be applied repeatedly, in different situations, to obtain reproducible results, is in simple and complicated systems, which are inherently ordered. But in complex systems, of which software development is one, a retrospectively coherent causality exists (see [Pel09]). Viewed like this, it should come as no surprise that those “best practices” don’t always work.

The requisite management book came from Ken Schwaber (see [Sch01]), who also supplied the associated branding and the money machine: the “Certified ScrumMaster” Program. Whereas XP was “extreme” also in the sense of being hard to do, Scrum offered a lower barrier to entry. You didn’t need an understanding of software development, let alone to be able to develop software yourself. “It’s only management,” thought the many management consultants who jumped on the Scrum bandwagon. This mixture of a number of “best practices,” a management book, and a low entry threshold, meant that the way was open for the consultants and IT service providers. They duly arrived with their “Scrum Checklists”, but their primary goal was often to market their own tools and methods as Scrum (compatible) in order to milk a new cash cow. Is it any wonder that Ken Schwaber says, “75% of companies that say they’re doing Scrum will never achieve the full benefits”?

Just as the star of Scrum was starting to fade, the next Silver Bullet arrived: Kanban. The same consultants who told us a few years ago that XP was too heavy, and it would be easier to get started with Scrum, now say that Scrum is too heavy and requires a reorganization of the company. Just put up a Kanban Board, limit the work-in-progress – and magically everything becomes visible, and you’ll be fine. The approach is good, but it is not enough. It is not enough merely to show people where their weaknesses are, as some are remarkably resistant to change (see [Keg09]). This transparency is a necessary, but not a sufficient, condition for change. Just as Scrum – used properly – showed that the technical practices of an XP approach are required to deliver high quality software, Kanban – done correctly – will show that certain practices from Scrum and/or XP make sense, or are necessary if one wants to continuously improve. (n.b. the question of whether Kanban can or should be classified as a fad, according to Snowden’s criteria, is left as an exercise to the reader).

Sure, there are world-class companies such as Joshua Kerievsky’s Industrial Logic, who work without effort estimation, planning, iterations, and so on. Joshua, though, was one of the first pioneers of XP – someone who has gone through the whole learning process from the ground up – and his team have worked together for years. Such a world-class team will be successful regardless of the method. Unfortunately, most companies that want to throw everything overboard are not nearly as far advanced or as good as Industrial Logic. “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change” (Charles Darwin).

What about us? Are the players in the Agile Scene (coaches, trainers, Scrum Master, Scrum Alliance, change agents, etc.) Agile themselves, in the sense that they embrace change? Should they? How should the players themselves deal with this demand, this challenge?

Scrum, for example, is a very good, simple framework for managing work in complex environments. Scrum is Agile, adaptable, and anything but prescriptive. This is precisely why it requires an understanding of the underlying theories and principles to get it right and to apply it in a sustainable manner. The work around this basic understanding has been rather slow, and only a few researchers are concerned with the deeper reasons why Agile processes actually work (see [Pel10]). Many ignore the fact that science has made big jumps since the publication of the HBR article (see [Tak86]) and the Scrum pattern paper (see [Bee00]), possibly also because the latest research critically questions some basic assumptions of these papers (see e.g. [Gou06] and [Nor06]). Instead, there is now an open struggle going on between the Scrum Alliance on the one hand and Scrum.org on the other about who “owns” Scrum, and about what Scrum “really” is. These attempts to nail down the definition of Scrum, together with efforts to develop certifications based on these definitions – the battle for the “one true Scrum” – are leading to the problem that Scrum is being defined more rigidly and prescriptively, and is itself becoming less Agile.

Where people are Agile is in finding ways to make money from the Agile methods. Although we all have to earn a living, we must never forget the first principle of the Agile Manifesto: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” Maybe the manifesto is missing a sentence such as: “We value helping our customers over taking their money.”

Outlook

What do I think will happen in the second decade of the Agile Manifesto? I dare venture only a few predictions: research will surely continue, the search for better ways to develop software will too, as will the search for Silver Bullets and new sources of earning money from Agile. I am confident that new and better methods will appear. But even if they do not, and if only the last point of Snowden’s description holds true, and the current “industry best practices” take hold in government, they would constitute a step forward.

I close with a guiding principle of the first Agile philosopher, Heraclitus: “The only constant in the universe is change.”

Literature & Links
[Bec99] K. Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley Professional, 1999
[Bee00] M. Beedle et al, Scrum. A Pattern Language for Hyperproductive Software Development, Pattern Languages of Program Design 4, Addison-Wesley, 2000
[Gou06] S. Gourlay, Conceptualizing Knowledge Creation: A Critique of Nonaka’s Theory, Journal of Management Studies, Volume 43 Issue 7, 2006
[Keg09] R. Kegan, L. Lahey, Immunity to Change: How to Overcome it and Unlock the Potential in Yourself and Your Organization, McGraw-Hill Professional 2009
[Nor06] D. Nordberg, Knowledge creation: revisiting the ‘ba’ humbug, 2006, see http://ssrn.com/abstract=891068
[Owe08] H. Owen et al, Leadership for high performance in a self-organizing world, Berrett-Koehler, 2008
[Pel09] J. Pelrine, On retrospective coherence, 2009, see http://bit.ly/95Pqqt
[Pel10] J. Pelrine, on understanding software agility – from a social complexity point of view, 1st Int. Workshop on Complexity and Real-World Applications, Southampton, UK 2010
[Sch01] K. Schwaber and M. Beedle, Agile Software Development with Scrum, Prentice Hall 2001
[Sno05] D. Snowden, multi-ontological sense-making, Management Today Yearbook 2005
[Sno03] D. Snowden, Managing for Serendipity: why we should lay off “best practice” in KM, ARK Knowledge Management, Vol 6 Issue 8, 2003
[Tak86] H. Takeuchi and I. Nonaka, The New New Product Development Game, Harvard Business Review, Jan-February 1986

Thoughts on priority poker

Many years ago, I took part in one of the first Certified Scrum Product Owner courses ever held by Ken Schwaber and Mike Cohn. One of my favourite exercises from that course, and one I’ve often used since, was priority poker. Priority poker is a method for assigning relative priority, or weight, to items, and is based on methods recommended in the multi-attribute utility analysis literature [1]. In this exercise, you spread your user story cards out on a table and allocate a number of poker chips to each story in order of its importance. Priority poker is a great learning tool. It’s easy to learn, it’s tactile, and it’s fun. For me, the most important learning experience of this exercise is the “aha” effect that comes from realising that resources are finite, and that prioritising a particular story higher comes at the cost of prioritising another story lower.


Figure 1: Priority poker

A question popped up recently in one of my product owner courses which made me re-think a few premises about priority poker. Two groups came up with a different number of stories for an exercise we were doing, but I gave them both the same number of poker chips. Looking at the results of the groups’ prioritisation exercise, I realised that there must be some connection between the number of stories and the number of chips. What would be the optimum relationship, or proportion, between chips and possible stories to ensure the best possible prioritisation? With too few chips, there’s not enough flexibility in choosing. With too many chips, things can become equally important, and one loses the nuances of such an exercise [2]. Is there a mathematical formula to calculate this? Empirical data? Rules of thumb?

I’m lucky to have many friends who are a lot smarter than I am, and who don’t mind me asking them questions like these. In this case, my victim was my friend Dr. Natasha Zharanova, who works in the finance department at eBay in Amsterdam, and who did her PhD in economics at Princeton. Natasha once attended a product owner training I gave while working at eBay, where we did priority poker, and she mentioned to me that she had done her master’s research on a similar topic, cumulative voting. Her answers to my questions gave me a thoughtful and sleepless night, tossing and turning in bed thinking about this question, until I finally gave up and started hacking Smalltalk at 3:30 AM to run some simulations.

Natasha pointed out that one of the goals of cumulative voting is to ensure adequate minority representation. Indeed, when we did the exercise together, we had given various stakeholders a few chips, and their strategic placement of those chips influenced the final prioritisation in interesting and unexpected ways. (Note: you probably know cumulative voting as “dot voting”, where each member of a working group gets, say, 3 dots to place on their favourite topics to be discussed.)

Some points which Natasha pointed out to me:

– the total number of votes allotted to a voter is equal to the number of winning candidates.

– This works when all voters are seen as equal. When it comes to shareholders, the number of votes one gets is usually also proportional to his/her number of shares (i.e., a shareholder with 10% of the shares gets one-ninth as many votes as one with 90%), but the total sum is still related to the number of winning candidates (see the sketch after this list).

– Of course, in most examples where this is used (e.g., board of directors elections), the number of open positions is fixed. In my example, though, the number of stories selected is not known in advance; but perhaps this is where another assumption or rule of thumb could be introduced? Something along the lines of “there is no way that more than 5 stories can be top priority simultaneously”? Then everybody gets 5 chips…
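
To make the mechanics concrete, here is a minimal Python sketch of cumulative voting along the lines Natasha describes; the cumulative_vote helper, the candidate names, and the share weights are hypothetical, chosen purely for illustration.

```python
from collections import Counter

def cumulative_vote(ballots, seats):
    """Tally cumulative votes and return the `seats` candidates with the most votes.

    ballots: list of (weight, {candidate: votes}) pairs. Each voter's total
             budget is seats * weight, and they may spread or concentrate it.
    """
    tally = Counter()
    for weight, allocation in ballots:
        assert sum(allocation.values()) <= seats * weight, "voter is over budget"
        tally.update(allocation)
    return [candidate for candidate, _ in tally.most_common(seats)]

# Hypothetical example: electing 3 board seats. A 30% shareholder (weight 3,
# i.e. 9 votes) concentrates everything on one candidate; the 70% shareholder
# (weight 7, i.e. 21 votes) spreads theirs evenly.
ballots = [
    (3, {"A": 9}),
    (7, {"B": 7, "C": 7, "D": 7}),
]
print(cumulative_vote(ballots, seats=3))  # ['A', 'B', 'C'] - the minority secures a seat
```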

With a product owner (in a strict sense), the number of voters is 1. This simplifies things a bit, as it forces the focus onto the relationship between votes (chips) and options (user stories). Natasha’s question of “how many projects can be top priority simultaneously?” got me thinking, and I ended up reversing it, asking “how many projects or stories can we afford to ignore?”

I think the optimum number of votes would be the minimum number that ensures that each option can receive a unique, non-zero number of votes. For 1 item, it would be 1 chip. For 2 items, it would be 2+1, or 3, chips, for 3 items 3+2+1, or 6 chips, etc. The number of votes would be a number in the sequence 1, 3, 6, 10, 15, 21, 28 … or essentially, for n items:

n + (n − 1) + (n − 2) + … + (n − (n − 1))

or simply

n(n + 1) / 2

This would be the smallest number of chips that would allow all options to be represented, while still forcing an increase in value of one option at the cost of another.

Looking at the first formula, the (n − 1) in the last term becomes interesting. It is what pushes the smallest allocation down to our lower bound of 1 chip – in other words, it says that we cannot afford to ignore any option at all. If we relax that bound, and say that we can afford to ignore more items by letting them receive 0 chips, then the number of votes necessary to ensure prioritisation decreases. For example, with six stories and none ignorable you need 6 + 5 + 4 + 3 + 2 + 1 = 21 chips; if you can afford to ignore two of them, the remaining four need only 4 + 3 + 2 + 1 = 10. For me, this bound is simply the number of stories that we can allow to have 0 chips.

I couldn’t get that number sequence 1, 3, 6, 10, 15, 21, 28 … out of my head. I was sure I’d seen it somewhere before, but couldn’t remember the name. Factorial? Sort of, but adding, not multiplying. Fibonacci? Sort of, but summing up all previous numbers, not just the last two.

In cases like these, I normally call on my math wizards, Tatiana Pastukhova and Wiggert Loonstra. They both gave me the answer: triangular numbers. Think billiard balls here.

Figure 2: Triangle numbers

A triangular number is the number of balls in an equilateral triangle completely filled with balls. Looking at the Wikipedia page on triangular numbers, I found the most efficient way to calculate them:

T(n) = n(n + 1) / 2

So, to figure out the number of chips you need for prioritising your backlog using priority poker, first figure out how many stories you can afford to do without (this is a good reality check in any case). Then let

n = (the number of stories in your backlog) − (the number of stories you can do without)

and the number of chips needed will be

T(n) = n(n + 1) / 2

Simple, right?
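
For the arithmetically inclined, here is a small Python sketch of that recipe; the chips_needed helper and the example numbers are mine, purely for illustration.

```python
def chips_needed(total_stories, ignorable_stories=0):
    """Minimum number of poker chips so that every story you still care about
    can receive a unique, non-zero number of chips: the triangular number of
    (total_stories - ignorable_stories)."""
    n = total_stories - ignorable_stories
    return n * (n + 1) // 2

# Hypothetical backlog of 10 stories:
print(chips_needed(10))     # 55 chips if every story must get at least one chip
print(chips_needed(10, 4))  # 21 chips if you can afford to ignore 4 of them
```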

Notes:

1. See Emil J. Posavac and Raymond G. Carey, Program Evaluation: Methods and Case Studies, Prentice-Hall, 1989; W. Edwards and J.R. Newman, “Multiattribute Evaluation”, in H.R. Arkes and K.R. Hammond (eds.), Judgment and Decision Making, Cambridge University Press, Cambridge, England, 1986, 13-37; and C. Kirkwood, Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets, Duxbury Press, 1997, 53-61.
2. See Barry Schwartz, The Paradox of Choice, Harper Collins, 2005.

The relationship between XP and Scrum project variables

I’m on the road right now, and don’t have time to write a long post, but I don’t want to withhold this interesting insight from you.

Last week, I attended the first Swiss Lean/Agile/Scrum conference. As usual, the Swiss take a long time to catch up with new ideas and technology, but when there’s money to be had, they’re right up at the front of the line. Anyway, Ken Schwaber gave a very interesting presentation on the concept of “Done”. He showed the canonical burndown chart, and then took the individual vectors apart.

Seeing the variables separated like that got me to thinking, and I had the chance to run my thoughts past Karl Scotland and Keith Braithwaite, both of whom went out for a beer with me afterwards.

Back in the early days of XP, we defined a set of project variables: Time, Resources, Quality, and Scope.

The rule went, “Time, Resources, Quality, Scope – choose three”. Whichever three you chose, the fourth variable would be a function of the other three. Which three were actually chosen was the customer’s decision. Some customers (and managers) don’t understand this rule, and try to grab control of all four variables. When they do, the first variable to be dropped is quality, followed by time.

So, how do these sets of variables map to each other? The first two are easy:

  • time = time
  • backlog = scope

The other two are a bit more difficult:

  • V = f(R)

Velocity (burndown rate) is a function of the available resources. Regardless of what you have to do, having more resources will normally allow you to go faster. Unfortunately, this function is neither simple nor linear – as Fred Brooks rightly says, “adding manpower to a late software project makes it later” – but in the simple case it is a more or less linear function.

Quality is even more complicated:

  • q = ∂V/∂t

As Dan Rawsthorne says, “Quality is the first derivative of the burndown”. The quality of a product is directly related to the development velocity/burndown rate.
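
To make that concrete, here is a purely illustrative Python sketch with made-up numbers (not Dan’s or anyone else’s data): the burndown is the remaining work per sprint, its first difference is the velocity, and in this reading a sagging velocity is the cue to start asking questions about quality.

```python
# Hypothetical remaining story points at the end of each sprint (the burndown).
remaining = [100, 88, 78, 70, 64, 60]

# Velocity / burndown rate: the first (negated) difference of the burndown.
velocity = [before - after for before, after in zip(remaining, remaining[1:])]
print(velocity)  # [12, 10, 8, 6, 4] - the burndown rate is steadily falling

# In this reading, a falling burndown rate is the cue to look at quality:
# corners cut in early sprints tend to show up as slower progress later on.
```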

So much for that thought. Keith spun the thought even further, proposing in his recent blog post that you can increase velocity by increasing quality. I agree with him, and I recommend you read the post.