Confused about sizing MMR (and Little's Law)

Hi everyone,

I’m a little afraid of asking a stupid question here. I’m confused about sizing an MMR as described in Chapter 20, Improving While in the Flow, of TameFlow, possibly because I don’t grasp Little’s Law as well as I should.

I’m running a small experiment at the moment, trying to determine the MMR of a project consisting of 15 Product Backlog Items (PBIs), all in the same class of service (roughly the same size). Looking at the history of a past project, I was able to determine that the average Flow Time of our PBIs is 15 work days (excluding weekends). By Flow Time I mean the time from when a developer started work to when the PBI was Done.
Then I divided the 15 days by two and rounded down to 7 days as the buffer.
If I aggregate these numbers I get:

  • MMR Flow Time = 15 PBIs × 15 days = 225 days
  • MMR Buffer = 15 PBIs × 7 days = 105 days

My confusion lies with Little’s Law: Flow Time = WIP / Throughput.

225 days to complete 15 PBIs? When I look back at the previous project (coincidentally it was also 15 PBIs), work on the first PBI started on 12/02/2019 and the last one was finished on 28/06/2019 (that is, Throughput = 15 PBIs / 99 work days = 0.1515 PBIs/day).

So the previous project’s FT = 15 / 0.1515 = 99 days.

I was expecting the MMR Flow Time to be similar to our past performance, but what I get is 225 days vs 99 days. That looks like a big discrepancy to me.
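To make the discrepancy concrete, here is a quick Python sketch of both calculations (just the numbers from above; the variable names are mine):

```python
pbis = 15
avg_flow_time = 15                        # average Flow Time per PBI, in work days

# My aggregation: multiply the per-item flow time and buffer by the item count.
mmr_flow_time = pbis * avg_flow_time      # 15 * 15 = 225 days
mmr_buffer = pbis * (avg_flow_time // 2)  # 15 * 7 = 105 days

# Little's Law applied to the previous project (15 PBIs in 99 work days).
throughput = 15 / 99                      # ~0.1515 PBIs per work day
previous_ft = 15 / throughput             # = 99.0 days

print(mmr_flow_time, previous_ft)         # 225 vs 99.0 -- the discrepancy
```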

Are my calculations correct?
What am I missing?


Hi Dhdez,

You are doing well… Here’s the thing with Little’s Law. It’s very easy.

First, Little’s Law is based on averages. Big work items and small work items all cohabit nicely in that space.

Now, if you want a nice Flow Time distribution, upon which you base your decisions, your Pull Policy should be First In, First Served.

If not, you are not following the spirit of Little’s Law, and the Flow Time distribution upon which you base your estimates will be severely weakened.

So it’s time for you to watch Daniel Vacanti’s video that explains all of this, from 11:51 to 57:48.

That video is obviously an example that uses CYCLE time instead of FLOW time, so make the distinction as you absorb the material.


Cool, thanks a lot Daniel :pray:. I’ll watch the video as I keep studying the subject further

I would start from this…

The important piece of information is

Throughput = 0.1515 PBI/DAY

Then, suppose your next project has 20 PBIs (just to make it different from 15…). We can get the expected approximate average FT for the whole project as:

FT = 20/0.1515 = 132 DAYS

Note well that this is the approximate average FT by virtue of Little’s Law (assuming all of its conditions of applicability hold). You then add the 50% buffer: 132 / 2 = 66 days.

You can thus expect the new 20 PBI project to be delivered between 132 and (132+66=)198 days.
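In code, the forecast above is just (a sketch; the throughput value is the one measured earlier in the thread):

```python
# Little's Law forecast for the hypothetical 20-PBI project.
throughput = 0.1515          # PBIs per work day, measured on the past project
pbis = 20

avg_ft = pbis / throughput   # expected average FT for the whole project
buffer = avg_ft / 2          # 50% buffer

print(round(avg_ft), round(avg_ft + buffer))  # ~132 and ~198 days
```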


That makes more sense to me now. I think where I’m getting hung up is how sizing the MMR is described in the book (Chapter 16, Improving While in the Flow):
MMR and buffer average flow time = aggregation of all work item average flow times and buffers

Please let me illustrate this with an example: 3 items take 10 days each to complete; they all started on the same day and finished on the same day. That gives an average FT = 10 days. The buffer is 10 / 2 = 5 days.

Next project I have 5 PBIs. So, if I do that aggregation I get:
FT = 10 days × 5 PBIs = 50 days
Buffer = 5 days × 5 PBIs = 25 days
MMR and buffer size = 50 + 25 = 75 days.

If I took the Throughput into account, the calculation would be more like:
T = WIP / FT = 3 / 10 = 0.3
For the upcoming project, then, I just do: FT = 5 / 0.3 = 16.7
The buffer will be 16.7 / 2 = 8.33
MMR and buffer size = 25.03 days.
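Putting my two versions side by side in a short Python sketch (same numbers as above):

```python
wip, ft = 3, 10        # 3 items in progress, 10 days each
next_pbis = 5

# Aggregation as I read it in the book: sum the per-item FTs and buffers.
aggregated = next_pbis * ft + next_pbis * (ft / 2)   # 50 + 25 = 75 days

# Throughput-based, via Little's Law.
throughput = wip / ft                                # 0.3 items per day
ll_ft = next_pbis / throughput                       # ~16.7 days
ll_total = ll_ft + ll_ft / 2                         # ~25 days

print(aggregated, round(ll_total, 2))                # 75.0 vs 25.0
```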

While I understand what you have suggested, I still think I’m missing something here. These are definitely two different ways of calculating the MMR size. Am I misunderstanding the book? (Please bear with me; English is not my mother tongue.)

@dhdez Yes, the second approach (working with throughput) is how it should be done.

Sorry if the wording in the first book was misleading. I am also a non-native English speaker! :slight_smile: Furthermore, the first book unfortunately did not undergo a proper technical review.

What we want to do in these instances is to come up with an estimate/forecast of the “average” flow time we expect our next package of work (MMI, MMR, MOVE…) to take. We need this so that we can “place” and “size” the buffer. If you have already collected flow metrics, then you have the average Throughput so you can do the calculation as suggested.

With the initial data provided, you could also say that the average time per item is 10 Days / 3 Items = 3.33 Days (this is the inverse of the Throughput), so if the same conditions still hold, we can expect five items to take 5 Items × 3.33 Days = 16.7 Days, and we get the same result.
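As a quick check, the two routes give the same number (a sketch with the numbers from this example):

```python
# Per-item rate route vs Throughput route -- same expected FT.
days_per_item = 10 / 3               # ~3.33 days per item (inverse of Throughput)
via_rate = 5 * days_per_item         # ~16.7 days

throughput = 3 / 10                  # 0.3 items per day
via_littles_law = 5 / throughput     # ~16.7 days

print(round(via_rate, 1), round(via_littles_law, 1))
```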

Note that when I used the expression “aggregation” it was not meant as an operational way to calculate the size and placement of the buffer; it was a conceptual explanation. In theory we don’t know how long an arbitrary item will take, so we reason in terms of an expected average flow time and an associated buffer (for every single item). If we have an arbitrary number of arbitrary items, their overall average flow time can be thought of as an “aggregation” of their respective single average flow times; but we cannot say it is their sum, because generally speaking the sum of the averages is not equal to the average of the sum (except in special cases).

In the case you present, you do not have the average flow time for each item; you have a precise, historically measured flow time for every item. So to get back to averages, you have to calculate them, as in the example above.

Likewise, if you calculate the Throughput, i.e. the 3 / 10 you did in the second case, then that is already the approximate average Throughput, and you can use it accordingly.

Hope this helps!