Organizational (Re)Assessment



Assessments (a.k.a. health checks) are an effective way to monitor organizational progress through an agile transformation.  Properly identified and carefully monitored agile maturity metrics (AMMs) can be an effective “lever” to steer an organization towards success.  However, collecting and applying inappropriate metrics, and then performing unskilled assessments based on misleading numbers, can create serious problems. It is not uncommon for an organization to focus on metrics and other locally collected numerical attributes and miss the much bigger picture.

Some organizations, with cultures driven by KPIs, scorecards and individual performance assessments, over-stress the meaning of single numbers (“checks and balances”) instead of trying to understand the underlying causes of dysfunction (e.g. organizational design, team dynamics and structure).   Such organizations zoom in on trivial elements and overlook key ones: they focus too much on local indicators of success and miss global factors (local optimization at the expense of system optimization).

Before identifying and collecting metrics to perform an assessment, organizations must answer the following questions:

  1. What decisions will the organization make based on the metrics collected?
  2. Does the organization measure the right thing, or merely the easiest thing to collect?
  3. What is the intention behind collecting a given metric?
  4. Who is collecting the metrics, and why?
  5. Who is analyzing the metrics? Is the analysis done skillfully, by someone with experience?
  6. What happens to metrics as they travel up through multiple organizational layers?
  7. What unintended consequences could collecting and/or misinterpreting a given metric have?
  8. Can a given metric be measured directly, or must a proxy variable be measured instead?
  9. How could metrics be “played” or gamed?
  10. What bad behaviors (e.g. “cooking the books”/manipulating metrics) could be expected?

(More information about inappropriate use of metrics can be found in this post.)

Metrics can be collected at different organizational levels and be indicative of conditions at each level. Here are some examples:

  • Team-level assessment – indicative of conditions at the single-team level
  • Multi-team assessment – indicative of conditions across multiple teams
  • Executive-level (management) assessment – typically assesses multiple organizational units and departments

“Higher-level” metrics (e.g. multi-team, executive) could potentially be derived by rolling up lower-level (team) metrics. However, caution must be exercised not to mix incompatible units of measure: not all metrics are comparable unless appropriate normalization is applied.  Some metrics are qualifiable but not quantifiable and cannot be assessed “by a machine”; only an experienced person can give an objective interpretation to such a metric.
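As a minimal illustration of the normalization point, the Python sketch below (with hypothetical metric names, ranges and values, not drawn from any particular assessment framework) rescales each team-level metric to a common 0–1 scale, where 1 is always “better”, before averaging across teams; the raw values mix incompatible units and could not be meaningfully averaged as-is.

```python
# Minimal sketch (hypothetical metric names, ranges and values): normalize
# team-level metrics to a common 0-1 scale before rolling them up to a
# multi-team view. The raw values mix incompatible units (survey points,
# percentages), so averaging them directly would be meaningless.

# Each metric declares the raw range it is measured on and whether "higher is better".
METRIC_SCALES = {
    "happiness_survey":  {"lo": 1, "hi": 5,   "higher_is_better": True},   # 1-5 survey score
    "attrition_percent": {"lo": 0, "hi": 30,  "higher_is_better": False},  # % attrition per year
    "sprint_goal_hit":   {"lo": 0, "hi": 100, "higher_is_better": True},   # % of sprints meeting goal
}

def normalize(metric: str, raw: float) -> float:
    """Rescale a raw metric value to 0..1, where 1 is always 'better'."""
    s = METRIC_SCALES[metric]
    clamped = min(max(raw, s["lo"]), s["hi"])
    scaled = (clamped - s["lo"]) / (s["hi"] - s["lo"])
    return scaled if s["higher_is_better"] else 1.0 - scaled

def roll_up(teams: dict) -> dict:
    """Average *normalized* values per metric across all teams."""
    rolled = {}
    for metric in METRIC_SCALES:
        values = [normalize(metric, team[metric]) for team in teams.values()]
        rolled[metric] = sum(values) / len(values)
    return rolled

teams = {
    "team_a": {"happiness_survey": 4.2, "attrition_percent": 5,  "sprint_goal_hit": 80},
    "team_b": {"happiness_survey": 3.1, "attrition_percent": 18, "sprint_goal_hit": 55},
}
print(roll_up(teams))  # e.g. {'happiness_survey': 0.66..., 'attrition_percent': 0.61..., 'sprint_goal_hit': 0.67...}
```

A rolled-up number of this kind still says nothing about underlying causes; the only point of the sketch is that unit-incompatible values need rescaling before any comparison or aggregation.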

Below are some examples of team-level metrics that are frequently used to assess the maturity of a single Scrum team or of multiple Scrum teams. Some of them are parametric (numerical), others binary (yes/no).   As time goes by, the weights given to various metrics are expected to change (a sketch of how such mixed metrics might be combined into a single score appears after the checklists below).

Scrum Team Dynamics and Structure:

  • What is the team’s “happiness factor”? What is the team’s overall morale and motivation?
  • Is being a team member viewed as an opportunity and a career path? Or is it rather perceived as a constraint, a limitation and a burden?
  • Are internal relationships among team members healthy?
  • Team size – is the number of individuals on the team optimal?
  • Is team-member “churn” (attrition rate) high?
  • Are there hierarchical relationships on a team that prevent good teaming and jeopardize individual safety?
  • Is a team cross-functional: does it have all subject matter expertise necessary to perform work?
  • T-shaped individuals: are cross-functional team members present on the team?
  • Are key roles well understood and supported by senior management (e.g. ScrumMaster, Product Owner, Team member)?
  • Dedicated resources: are team members shared with other teams or distracted from sprint work?
  • Are team members co-located? If distributed – how?
  • Is the team self-organizing, or does it require management from outside?
  • If multiple teams are involved, is there an effective multi-team synchronization?
  • When impediments are discovered, how effectively are they being removed?
  • Can a team effectively limit WIP (work in progress)?


Product Delivery by Scrum Team:

  • Has the Product Owner produced a clear product vision and/or a strategic product roadmap?
  • Is a PSI (potentially shippable increment) of the product produced at the end of each sprint/iteration?
  • Is each PBI (product backlog item) properly sliced (vertically), sized and “INVEST-able”?
  • Are the DoR (Definition of Ready) and DoD (Definition of Done) clearly defined by the team and the Product Owner?


Work Cadence and Logistics of Scrum Team:

  • Are Daily Scrums (Daily Stand-ups) effective?
  • Do Product Backlog Refinement sessions happen regularly and are they effective?
  • Do Sprint Showcases produce valuable customer feedback?
  • Do Team Retrospectives lead to continuous improvement?
  • Is a team focused only on Scrum work?
  • Is sprint work feature-centric (does it produce customer value)?
  • Has the team developed reliable estimation techniques, and does it have its own ways to track progress?


Agile Engineering Practices of Scrum Team:

  • Is the architecture flexible enough to accommodate potential future changes?
  • Does a team use TDD or BDD?
  • Is Continuous Integration in place?
  • Is there full test coverage of the code-base? Is code refactoring practiced regularly?
  • Are unit tests present? (A minimal illustration follows this list.)
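For readers less familiar with the engineering side of this checklist, here is a small, purely hypothetical example of what “unit tests are present” looks like in practice (pytest style; the function and test names are illustrative and not taken from any particular project):

```python
# Hypothetical example of a unit-tested function (pytest style). The function
# and test names are illustrative only.

def split_invoice_total(total_cents: int, parts: int) -> list[int]:
    """Split an amount into near-equal parts without losing any cents."""
    base, remainder = divmod(total_cents, parts)
    return [base + 1 if i < remainder else base for i in range(parts)]

def test_split_preserves_total():
    # The parts must always add back up to the original amount.
    assert sum(split_invoice_total(1001, 3)) == 1001

def test_split_parts_differ_by_at_most_one_cent():
    # Rounding is distributed, never dumped into a single part.
    parts = split_invoice_total(1001, 3)
    assert max(parts) - min(parts) <= 1
```

The presence and upkeep of such tests is essentially a binary signal; coverage percentage and refactoring frequency are its parametric counterparts.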

(For single Scrum Team dynamics, please also refer to Henrik Kniberg’s Scrum Check List.)
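To make the “parametric vs. binary” distinction and the changing weights from the checklists above concrete, here is a hypothetical sketch (item names, scales and weights are illustrative only, not a prescribed scoring model): binary answers are treated as 0/1, parametric answers are normalized to 0–1, and each item carries a weight that an experienced assessor can adjust as the team matures.

```python
# Hypothetical sketch of combining binary (yes/no) and parametric (numeric)
# assessment items into a single team maturity score. Item names, scales and
# weights are illustrative only; in practice they should be chosen and
# re-weighted over time by an experienced assessor, not by a script.

ASSESSMENT_ITEMS = {
    # item_id:             (kind,         scale,    weight)
    "ci_in_place":          ("binary",     None,     2.0),  # Is Continuous Integration in place?
    "dod_defined":          ("binary",     None,     1.5),  # Are DoR/DoD clearly defined?
    "happiness":            ("parametric", (1, 5),   1.0),  # Team "happiness factor", 1-5 survey
    "wip_limit_respected":  ("parametric", (0, 100), 1.0),  # % of time WIP limits are respected
}

def item_score(kind: str, scale, answer) -> float:
    """Map a single answer onto the 0..1 range."""
    if kind == "binary":
        return 1.0 if answer else 0.0
    lo, hi = scale
    return (min(max(answer, lo), hi) - lo) / (hi - lo)

def maturity_score(answers: dict) -> float:
    """Weighted average of all item scores, between 0 and 1."""
    total, weight_sum = 0.0, 0.0
    for item, (kind, scale, weight) in ASSESSMENT_ITEMS.items():
        total += weight * item_score(kind, scale, answers[item])
        weight_sum += weight
    return total / weight_sum

answers = {"ci_in_place": True, "dod_defined": False, "happiness": 3.8, "wip_limit_respected": 70}
print(round(maturity_score(answers), 2))  # 0.62 with the illustrative weights above
```

A single number produced this way is exactly the kind of figure that can be misused in the ways described earlier; it should serve as a conversation starter for a skilled assessor, not as a “check and balance” to manage by.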

Every metric that is used for an initial assessment or for subsequent maturity assessments must be clearly understood and thoughtfully applied by experienced individuals.  In cases of agile scaling, additional care must be taken to collect and “roll up” metrics from multiple organizational verticals and product development areas in ways that do not cause confusion or misinterpretation of data.

If you would like to understand some of the most commonly seen mistakes made with agile maturity metrics, and deepen your personal, system-level understanding of observed team dynamics, please visit this page.

A good way to start an assessment is to conduct an informal Lunch & Learn session.
