Demand analysis

The Japanese pioneered the idea of delivering products according to client demand. This is less obvious in software development organizations, and this post aims to clarify the subject of demand analysis. I assume all organizations want to maximize their Throughput to meet and exceed client demand. The first step towards this goal is simple: understand client demand and identify its patterns. We should then build our software capability to address those patterns so that we can satisfy the demand.
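As a minimal sketch of what "arriving at patterns" might look like in practice, the snippet below groups a hypothetical request log by category and counts arrivals per week. All field names and data are assumptions for illustration.

```python
from collections import Counter
from datetime import date

# Hypothetical request log: (submission date, request category).
requests = [
    (date(2011, 3, 1), "defect fix"),
    (date(2011, 3, 2), "new feature"),
    (date(2011, 3, 3), "defect fix"),
    (date(2011, 3, 9), "configuration change"),
    (date(2011, 3, 10), "defect fix"),
]

# Pattern 1: which kinds of requests dominate the demand?
by_category = Counter(category for _, category in requests)
print(by_category.most_common())  # [('defect fix', 3), ...]

# Pattern 2: how does demand arrive over time (per ISO week)?
by_week = Counter(submitted.isocalendar()[1] for submitted, _ in requests)
print(sorted(by_week.items()))    # [(9, 3), (10, 2)]
```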

Estimation value add

Lead Time is the average time from when a client submits a request until the related software is delivered. Cycle Time is the average time between two successive releases from the system. The lower the Lead Time, the higher the Throughput, which is the number of client-valued features released per time interval. Ultimately this improves the bottom line through a lower cost per feature.
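A minimal sketch of these three measures, assuming we only have the submission and release dates of each request (all data are illustrative):

```python
from datetime import date

# (submitted, released) per client request -- illustrative data.
records = [
    (date(2011, 1, 3), date(2011, 1, 14)),
    (date(2011, 1, 5), date(2011, 1, 21)),
    (date(2011, 1, 10), date(2011, 1, 28)),
]

# Lead Time: average of (release - submission) per request.
lead_time = sum((done - start).days for start, done in records) / len(records)

# Cycle Time: average gap between two successive releases.
releases = sorted(done for _, done in records)
gaps = [(b - a).days for a, b in zip(releases, releases[1:])]
cycle_time = sum(gaps) / len(gaps)

# Throughput: features released per time interval (here, per week).
span_weeks = (releases[-1] - releases[0]).days / 7
throughput = len(releases) / span_weeks

print(f"Lead Time: {lead_time:.1f} days, Cycle Time: {cycle_time:.1f} days, "
      f"Throughput: {throughput:.1f} features/week")
```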

Perspective on maturity

Going back to plain basics, maturity is delivering what we promise. I heard this definition from a successful sales executive at a product development company. For me, this summarizes everything!

Meeting commitments is arguably the most important criterion for success. I use the percentage of requests whose deviation from the target date exceeds “x” days. This measure helps to quantify our maturity as an organization, and it represents the Voice of the Customer (VoC).
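A minimal sketch of this measure, assuming each request carries a target date and an actual delivery date (the data and the threshold are illustrative):

```python
from datetime import date

X_DAYS = 3  # the "x" threshold -- an illustrative policy choice

# (target date, actual delivery date) per request -- illustrative data.
requests = [
    (date(2011, 2, 1), date(2011, 2, 2)),
    (date(2011, 2, 7), date(2011, 2, 15)),
    (date(2011, 2, 14), date(2011, 2, 14)),
    (date(2011, 2, 21), date(2011, 3, 1)),
]

late = sum(1 for target, actual in requests if (actual - target).days > X_DAYS)
voc_measure = 100.0 * late / len(requests)
print(f"{voc_measure:.0f}% of requests missed their target by more than {X_DAYS} days")
```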

This measure can be organization-wide, and it can be used to drive the whole improvement initiative.

The following chart gives a high-level analysis of the causes of immaturity as suggested by this measure.

The foundational cause is people's inability to communicate their issues in a timely manner. I have worked with developers who had month-long tasks and kept reporting that things were going according to plan, only to report failure on the very last day! I carry part of the responsibility for not fostering an environment of trust that encourages them to talk and share their concerns.

Salespeople making commitments without consulting engineers is a well-known issue, and it can be solved if they are educated about capacity measures. These capacity measures can directly improve the VoC measure above. They include the Average Lead Time for each Class of Service, which allows salespeople to provide informed estimates in the very narrow window they might have to secure a deal. They can add a percentage of uncertainty based on the inherent risks; I suggest a maximum of 20%, with a promise of reducing it as we proceed into the project.
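A minimal sketch of how a salesperson could use these capacity measures, assuming a history of lead times per Class of Service (all names and figures are illustrative):

```python
# Historical lead times in days, grouped by Class of Service -- illustrative.
lead_times = {
    "standard": [20, 25, 22, 28],
    "expedite": [5, 7, 6],
}

UNCERTAINTY = 0.20  # the suggested 20% maximum risk buffer

def quote(class_of_service: str) -> float:
    """Informed estimate: average historical lead time plus the uncertainty buffer."""
    history = lead_times[class_of_service]
    average = sum(history) / len(history)
    return average * (1 + UNCERTAINTY)

print(f"standard quote: {quote('standard'):.1f} days")  # 23.75 * 1.2 = 28.5
print(f"expedite quote: {quote('expedite'):.1f} days")  # 6.0 * 1.2 = 7.2
```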

Finally, a key cause of failing to meet our commitments is the poor definition of a customer-valued request. Delivering on time requests that are not meaningful actually invalidates our improvement effort.

Cost of Poor Quality (CoPQ)

As a consultant, your first action is to ask the client about the immediate problem. The discussion reveals that there is a performance measure (Y) to be improved in order to help solve the existing issues. Y is the percentage of features for which the deviation between the target and actual delivery dates exceeds “3” days.

The derivation of Y was based on analysis such as:

  • Understand the Voice of the Customer.
  • Identify the Critical to Quality variables that impact the VoC.
  • Collect data to support the above.

Now you want to further analyze which input factors affect Y, as shown next:

Let us assume we analyzed the existing data based on the above factors and found that the client requests contributing to the main delays in the system have the following profile (a sketch of this analysis appears after the list):

– Technology = Perl,

– Existing module design = based on previous architecture,

– Change impact = database and services modules, and

– Software configurability = N/A.
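As a minimal sketch, the analysis could count which factor profiles dominate the delayed requests; the field names and data below are assumptions for illustration.

```python
from collections import Counter

# Delayed client requests with their input factors -- illustrative data.
delayed = [
    {"technology": "Perl", "design": "previous architecture",
     "impact": "database and services", "configurable": "N/A"},
    {"technology": "Perl", "design": "previous architecture",
     "impact": "database and services", "configurable": "N/A"},
    {"technology": "Java", "design": "current architecture",
     "impact": "UI", "configurable": "yes"},
]

# Count identical factor profiles among the delayed requests.
profiles = Counter(tuple(sorted(request.items())) for request in delayed)
worst_profile, count = profiles.most_common(1)[0]
print(f"{count} of {len(delayed)} delayed requests share this profile:")
for factor, value in worst_profile:
    print(f"  {factor} = {value}")
```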

In a previous post here I suggested that the WIP limit tends not to be followed by people who have scarce skills; in our example, these are the Perl developers.

What does this mean?

It means that the empirical observations of exceeding the WIP limit at a certain stage, and of resistance from the people who work at that stage, are backed by data that directly impact Y.

With a shortage of these scarce skills, Y will increase, which will in turn increase CoPQ. Therefore, we are not economical in our delivery of maintenance requests.

Instead, I suggest adding a separate class of service for Perl-based requests in the above example, with its own policies. The class of service helps to identify (a sketch follows the list):

– Prioritization scheme

– Request hand-off from one stage to the other

– Expected SLA performance

– Any specific workflow rules
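As a minimal sketch, such a class of service could be captured as an explicit policy record; all the names and SLA figures here are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ClassOfService:
    name: str
    prioritization: str        # prioritization scheme for this class
    handoff_rule: str          # how requests move from one stage to the other
    sla_days: int              # expected SLA performance
    workflow_rules: list[str] = field(default_factory=list)

perl_requests = ClassOfService(
    name="Perl-based requests",
    prioritization="pull the oldest request first; no expediting",
    handoff_rule="peer review by a second Perl developer before test",
    sla_days=15,
    workflow_rules=["separate WIP limit for the Perl development stage"],
)
print(perl_requests)
```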

Increase velocity or widen the gap

Sizing is focused on answering, “If we assume no identified requirement risk, what is the size?” This is useful, but it can be harmful if it is used for estimation while potential risks are ignored. If that happens, we might achieve a high team velocity while the overall system throughput stays low. It can also widen the execution gap between the team and the organization, as the team seeks isolation in order to achieve its velocity. I have found that teams devoted to measuring their velocity almost always operate in isolation from the rest of the organization. Once they begin to align with the organization, velocity becomes less relevant.

For me, the above point is especially critical. “The team increases productivity” is the stated driver, while the reality is that the team increases the velocity of its divergence from the organization.
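A minimal sketch of how a high team velocity can coexist with a low system throughput, assuming the team's output waits in a downstream release queue (all figures are illustrative):

```python
# Illustrative figures: the team "completes" 30 story points per sprint,
# but releases are gated by a downstream stage that can absorb only 10.
TEAM_VELOCITY = 30          # points/sprint the team finishes locally
DOWNSTREAM_CAPACITY = 10    # points/sprint the organization can release

queue = 0
for sprint in range(1, 5):
    queue += TEAM_VELOCITY                      # work pushed downstream
    released = min(queue, DOWNSTREAM_CAPACITY)  # actual system throughput
    queue -= released
    print(f"sprint {sprint}: velocity={TEAM_VELOCITY}, "
          f"released={released}, waiting={queue}")
# Velocity stays at 30 while throughput stays at 10, and the gap widens.
```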

Measuring people's performance based on a number has always been warned against, as it:

– Reduces team initiative, because the team focuses on achieving its velocity target even if that means ignoring all the aspects important to operating in harmony with the organization.

– Leads to meaningless comparisons between different teams in the organization based on the velocity of each.

– Misguides the organization, because velocity is, after all, a virtual figure that has no bearing on what brings value to the client.

There is a lot of literature on performance management and team performance that does not relate to performance in the engineering aspects.

We should focus on measuring (a sketch follows the list):
– the business-valued features which the client appreciates,
– the cost per feature delivered to the client, and
– Lead Time.
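A minimal sketch of these three measures over a delivery period; all figures are illustrative assumptions.

```python
# Illustrative quarter: what was delivered and what it cost.
features_delivered = 24             # client-valued features released
total_cost = 180_000.0              # cost of the delivery organization
lead_times_days = [12, 20, 15, 18]  # sampled per-feature lead times

cost_per_feature = total_cost / features_delivered
average_lead_time = sum(lead_times_days) / len(lead_times_days)

print(f"features delivered: {features_delivered}")
print(f"cost per feature:   {cost_per_feature:,.0f}")
print(f"average lead time:  {average_lead_time:.1f} days")
```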

Risk management in Scrum – Insights

I am writing this post after exchanging tweets with @flowchainsensi.

The Nokia test provides criteria for measuring the adequacy of a Scrum implementation. If we have ScrumBut (aka inadequate Scrum), then we should not expect the promised accomplishments in terms of higher velocity, better quality, and increased customer value.

Even while we are implementing ScrumBut, we should strive to show some value from using Scrum. This value will allow us to remove the handicaps to Scrum implementation and therefore improve our score on the Nokia test.

ScrumBut introduces risks to the project. Such risks should be managed with the rigor of a Risk Management (RskM) process; PMBOK® and CMMI® address RskM in depth. Traditionally, software development is driven by risks. There are two drivers that can help us start brainstorming for risk identification:

  1. Risks originating from ScrumBut.
  2. Risks originating from not achieving the value which management expects as a result of using Scrum.
I suggest implementing weekly RskM as a 30-minute meeting to:
  1. Monitor the risks
  2. Update risk statuses
  3. Identify new risks
  4. Create actions to address risks

Such meetings and their outcome tasks are planned in the sprint backlog.
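A minimal sketch of the risk register such a meeting could maintain; the fields, statuses, and example risks are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    origin: str                 # "ScrumBut" or "expected value not achieved"
    status: str = "open"        # open / mitigating / closed
    actions: list[str] = field(default_factory=list)

register = [
    Risk("No real Product Owner; priorities arrive by email", origin="ScrumBut"),
    Risk("Management expects velocity gains within one sprint",
         origin="expected value not achieved"),
]

# Weekly 30-minute pass over the register: monitor, update, identify, act.
for risk in register:
    if risk.status == "open":
        risk.actions.append("add a mitigation task to the sprint backlog")
        risk.status = "mitigating"
    print(f"[{risk.status}] {risk.description} ({risk.origin})")
```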