Usage of MS Project in a Scrum/Kanban Setup

We use MS Project Server for general resource and sprint planning:

It contains the sprint milestones, delivery/go-live dates, and the resources
allocated to the teams, including vacation information.

By doing so, we can also calculate sprint capacities, plan resources that are
not team members, and show dependencies on other (non-agile) projects.
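As a sketch of such a capacity calculation (team names, dates, and the focus factor below are made up for illustration, not taken from our actual setup):

```python
from datetime import date, timedelta

def sprint_capacity(start, end, members, vacations, focus_factor=0.7):
    """Rough sprint capacity in person-days.

    members: list of names; vacations: {name: set of vacation dates}.
    focus_factor discounts meetings and interruptions (assumed value).
    """
    workdays = []
    d = start
    while d <= end:
        if d.weekday() < 5:  # Monday..Friday
            workdays.append(d)
        d += timedelta(days=1)
    total = 0.0
    for m in members:
        off = vacations.get(m, set())
        total += sum(1 for day in workdays if day not in off)
    return total * focus_factor

# Hypothetical two-week sprint with one team member on vacation for two days
cap = sprint_capacity(date(2013, 3, 4), date(2013, 3, 15),
                      ["Anna", "Ben", "Chris"],
                      {"Ben": {date(2013, 3, 11), date(2013, 3, 12)}})
```

The vacation calendar maintained in MS Project would feed the `vacations` mapping; the focus factor is a team-specific tuning knob.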




Communities of Practice in Scrum

If you have Scrum on a larger scale in place, you will end up with several cross-functional (multi-skilled) teams:

E.g. you will have developers, business analysts, and QA resources in one Scrum team. After a while you’ll need to think about know-how sharing and skill enhancement. I propose implementing communities of practice (CoP), one for each skill.

A representative of each skill (horizontal community) is sent to the Scrum of Scrums.



Being Agile at Organizational Scale

There are many books and posts about agile teams. Doing Scrum or Kanban is state of the art in software development. But how does it scale in big organizations? How can agile methods cross department borders?

I’ve found some answers on the web. In upcoming blog posts I’ll go into more detail.

Here’s my list of the methods with links to further reading:

Please give me some feedback if I missed any relevant approaches.

Mirko Blüming



How the Agile Manifesto translates into Scrum

“Agile” has evolved into a best practice for software development and is characterized by the values of the Agile Manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

In the following blog post I’ll describe how these values translate into the Scrum artifacts.

Individuals and interactions over processes and tools

In Scrum, team size and sprint length are limited such that the team can manage its tasks on its own. From a project management point of view, the team works as a whole.

All necessary organization and rules originate from the interactions within the team. The team decides about responsibilities and manages (external) disturbances collaboratively. The Scrum Master helps the team resolve impediments and ensures the (self-given) rules are obeyed.

Interactions are amplified by reducing the use of tools. That’s why Scrum teams prefer Scrum boards with paper cards for task planning and tracking over electronic tools.

Working software over comprehensive documentation

The second value reminds you that the first goal is to create working software that brings value to the customer. Since the subject of the documentation often changes faster than comprehensive documentation can be written, too much documentation is just a waste of resources. Agile teams favor delivering software early and often (ideally using continuous integration) to ensure requirements can be validated.

However, documentation is important for further development and maintenance – the challenge is to find the right balance between the necessary documentation and the creation of paper that is never read again. The retrospective after each sprint helps to also consider long-term consequences.

Customer collaboration over contract negotiation

The customer is represented by the Product Owner, a role defined in Scrum. The Product Owner understands the customer and has profound business expertise, and is available to the team at any time, so that interaction is the main source of alignment.

To avoid any kind of negotiation, the backlog (= open task list) is sorted by the Product Owner according to customer priorities, so the customer always gets first what they need most.

However, this requires a thoughtful and reliable product owner.

Responding to change over following a plan

In Scrum, development is done in sprints, i.e. time-boxed. Each sprint is a kind of small project with a planning, implementation, and review phase. After each sprint the team is re-aligned to the customer’s needs when they take the topmost items from the backlog.

As most project plans are outdated as soon as they are published, it’s only consistent to abolish traditional project plans – only the duration of the next sprint is planned. This runs counter to traditional project management goals of controlling change and keeping to a plan.

The burn-down chart visualizes development speed and progress. In doing so, unnecessary work is avoided and the risk of failure is controlled.


Planning is not estimation is not analysis

After a fruitful discussion at the last meeting of the Limited WIP Society Cologne, I’d like to summarize some terms:

Often planning, estimation, and analysis are used interchangeably, but this leads to confusion and stress:

Estimation is the act of determining values based on uncertain data. Usually it is thought of as predicting some future result.

Analysis is the act of breaking something into parts to get a better understanding of it. If you ask a software architect for an estimate, he’ll most likely start an analysis and provide some complexity measure (e.g. in function/story points). If you force him to convert the result into person-days, he’ll estimate a factor to apply to the result. It’s a better idea not to ask him to convert into person-days; rather, derive the factor from historical data.
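A minimal sketch of deriving such a factor from historical data rather than gut feeling (the sprint history numbers are invented for illustration):

```python
def person_days(story_points, history):
    """Convert story points to person-days using a factor derived
    from historical data.

    history: list of (points_done, person_days_spent) per past sprint.
    """
    total_points = sum(p for p, _ in history)
    total_days = sum(d for _, d in history)
    factor = total_days / total_points  # person-days per story point
    return story_points * factor

# Hypothetical history of three past sprints
history = [(20, 30), (25, 35), (15, 25)]
estimate = person_days(40, history)  # convert a 40-point backlog slice
```

The analysis result (story points) stays untouched; only the conversion factor comes from measured data.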

Planning is the act of arranging tasks to fulfill a management target. A plan is never wrong, but it has only some probability of hitting the target. To determine that probability you need an estimation.
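One way to make the relationship between plan and estimation concrete is a small Monte Carlo simulation (the task ranges and the choice of a triangular distribution are assumptions for illustration):

```python
import random

def hit_probability(task_ranges, target, trials=100_000, seed=42):
    """Monte Carlo: probability that the summed effort stays within
    the target.  Each task is (min, most_likely, max); draws use a
    triangular distribution, a simple common choice."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        total = sum(random.triangular(lo, hi, ml)
                    for lo, ml, hi in task_ranges)
        if total <= target:
            hits += 1
    return hits / trials

# Hypothetical tasks with (min, most likely, max) effort in person-days
tasks = [(2, 3, 6), (1, 2, 4), (4, 5, 9)]
p = hit_probability(tasks, target=12)
```

The plan (target = 12 person-days) is neither right nor wrong; the estimation ranges give it a probability of success.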


Effort Estimation Techniques

My starting point for agile estimation was the book “Aufwandschätzung bei Softwareprojekten” (Steve McConnell, 2006, Microsoft Press). In the end, however, I found the classification by Boehm more useful.

In my blog I’ll follow the classification of estimation methods from Boehm 1981 (Barry W. Boehm, Software Engineering Economics, Englewood Cliffs, NJ: Prentice-Hall, 1981):

  • Algorithmic cost modeling
    • Parametric Models (e.g. COCOMO)
    • Function Points / Lines of Code
    • Proxy Based
    • Process Simulation
  • Estimation by analogy
    • Story Points
    • T-Shirt Sizes
  • Expert judgment
    • Single experts
    • Group of experts (Wideband Delphi, Planning Poker)
  • Variances and subsidiary techniques
    • Top-down / Bottom-up estimation
    • Combinations of techniques
    • PERT / Fuzzy estimation
    • Parkinson’s Law


Let’s start with some definitions:

An estimation is an approximation based on input data that may be incomplete or uncertain (Dictionary, Wikipedia).

If the estimation is given as a single value (“one-point estimation”), it’s assumed that there is a 50% probability that the real value is higher and a 50% probability that it is lower than estimated.

It is more accurate to provide an estimation range with a min, max, and a confidence level. However, this requires more mathematics.
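The classic PERT three-point formula is one such range-based approach; here is a small sketch (the input values are hypothetical):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point estimate: weighted mean plus a rough
    standard deviation derived from the range."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std = (pessimistic - optimistic) / 6
    return mean, std

# Hypothetical task: 4 to 14 person-days, most likely 6
mean, std = pert_estimate(4, 6, 14)
# A rough ~68% confidence interval is one standard deviation wide
low, high = mean - std, mean + std
```

The weighting toward the most likely value keeps a single outlier from dominating the estimate, while the range still feeds the uncertainty.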

Note: An estimate is different from a project plan: the project plan is designed to hit a target, which is a statement of a desirable business objective. A commitment is a promise to hit the target.

Analysis is the act of breaking something into parts to get a better understanding of it.

Law of Large Numbers (LLN): The average of the results obtained from a large number of trials is close to the expected value. Therefore you gain more accuracy if you involve a group of experts rather than one expert.

Economy of scale: Refers to the cost advantages that an enterprise obtains due to expansion.

Diseconomy of scale: Estimations of smaller projects/demands do not scale to bigger projects/demands, due to e.g. communication/management costs and duplication of effort.

Closer to the end of a project, uncertainty becomes smaller. This can be visualized as a “cone of uncertainty”.

Now I’ll present an overview of the estimation methods:

One big class of estimation techniques are algorithmic methods: they use mathematical relations/formulas for the estimation. The formulas are based on research and historical data and use inputs you get from analysis, such as Lines of Code (LOC), number of functions to perform, defects, etc. The advantage of these methods is that they are precise and easy to apply. The limiting factor is the availability of the input data: it might not be available or of poor quality, and the formulas are unable to handle exceptional conditions.
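The best-known example of a parametric model is Boehm’s basic COCOMO; a minimal sketch (the 32 KLOC input is made up, the coefficients are the published basic-model values):

```python
def cocomo_basic(kloc, mode="organic"):
    """Basic COCOMO effort estimate in person-months.

    Coefficients are the basic-model values from Boehm 1981:
    effort = a * KLOC ** b
    """
    coeff = {
        "organic":       (2.4, 1.05),  # small teams, familiar problems
        "semi-detached": (3.0, 1.12),  # in between
        "embedded":      (3.6, 1.20),  # tight constraints
    }
    a, b = coeff[mode]
    return a * kloc ** b

effort = cocomo_basic(32, "organic")  # hypothetical 32 KLOC project
```

Note that b > 1 in all modes, i.e. the model builds the diseconomy of scale directly into the exponent.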

Dynamic process simulations are algorithmic methods, too: they use a dynamic model with assumptions about the project and organization, e.g. their velocities and error rates (which need to be calibrated with real data).

Another class of estimation techniques is estimation by analogy: new tasks/projects are compared with tasks/projects already known, to derive the estimated effort from historical data. One needs to find areas that can be counted (e.g. number of tables, screens, use cases, etc.). The best-known techniques are Story Points and T-Shirt Sizes:

Story Points

  • Assign numbers to the categories that are related to the complexity
  • Typical categories are
    • Powers of 2: 1, 2, 4, 8, 16, …
    • Fibonacci: 1, 2, 3, 5, 8, 13, …
  • Story points are relative to a defined anchor to compare to

T-Shirt Sizes (S, M, L, XL, …)

  • Generalize the story point categories (maybe “8” story points do not exactly relate to twice the effort of “4”, e.g. due to diseconomy of scale)
  • The average size of a category is determined from historical data
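A sketch of the t-shirt approach (the per-size averages below are invented; in practice they come from your historical data):

```python
# Hypothetical averages from past sprints, in person-days per size
SIZE_DAYS = {"S": 1.5, "M": 4.0, "L": 9.0, "XL": 20.0}

def estimate_backlog(items):
    """Sum the historical average effort for t-shirt-sized items."""
    return sum(SIZE_DAYS[size] for size in items)

total = estimate_backlog(["S", "M", "M", "L"])
```

Because each size maps to a measured average rather than an arithmetic relation between categories, the scheme tolerates diseconomies of scale.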

The comparison is usually done by expert judgment: the judgment can rely on individual experts (developers, architects, etc.) who are asked about the expected effort as a one-point estimation, ranges (min/max), or clusters. It can also rely on a group of experts, e.g. in Planning Poker (Scrum poker) or a poker party.

Then there is a not-so-seriously-meant estimation technique, Parkinson’s Law: “Work expands so as to fill the time available for its completion” (Cyril Northcote Parkinson, published in The Economist in 1955). Here the cost is determined by available resources rather than by objective assessment. The estimated effort depends on the customer’s budget and not on the software functionality – e.g. if the software has to be delivered in 12 months and 5 people are available, the effort is estimated to be 60 person-months.

For more variants of Parkinson’s Law, see Wikipedia.

Last but not least, I’ll present some variances and subsidiary techniques.

Top-down approach

Split requirements (epics) into smaller elements (stories) and assign some relative measure like story points or percentages. Split some (at least one) element further until you achieve a good estimation.

Bottom-up judgment

Break the work down into tasks and ask experts (developers, architects, etc.) about the expected effort.

You get the best results if tasks are smaller than 2 days (otherwise details will be overlooked).

Sum up to get the total effort. This yields highly accurate estimates due to the Law of Large Numbers.
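A sketch of the bottom-up sum with expert ranges (the task names and numbers are hypothetical):

```python
# Hypothetical task breakdown with expert (min, max) ranges in days;
# each task is deliberately smaller than 2 days
tasks = {
    "design login form":       (0.5, 1.0),
    "implement backend auth":  (1.0, 2.0),
    "write integration tests": (0.5, 1.5),
    "update documentation":    (0.25, 0.5),
}

low = sum(lo for lo, _ in tasks.values())
high = sum(hi for _, hi in tasks.values())
```

The many small, independent ranges average out individual misjudgments, which is exactly where the Law of Large Numbers helps.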

Fuzzy Effort Estimation

Fuzzy numbers represent the physical world more realistically than single-valued numbers (“Optimization Criteria for Effort Estimation using Fuzzy Technique”, Harish Mittal / Pradeep Bhatia, 2007, CLEI Electronic Journal).
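As a minimal sketch of the idea with triangular fuzzy numbers (the effort values are made up; real fuzzy-set libraries offer richer machinery):

```python
def add_fuzzy(x, y):
    """Sum of two triangular fuzzy numbers (min, peak, max),
    added component-wise."""
    return tuple(p + q for p, q in zip(x, y))

def defuzzify_centroid(a, m, b):
    """Centroid of a triangular fuzzy number: the crisp value
    representing 'roughly m, somewhere between a and b'."""
    return (a + m + b) / 3

# Two hypothetical task efforts in person-days, as (min, peak, max)
total = add_fuzzy((2, 3, 6), (1, 2, 4))   # -> (3, 5, 10)
crisp = defuzzify_centroid(*total)        # -> 6.0
```

Unlike a one-point estimate, the fuzzy result carries its own uncertainty (the 3-to-10 spread) all the way through the calculation.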

Some further reading I suggest: “The Comparison of the Software Cost Estimating Methods” by Liming Wu.

