Prioritising the scientific way
If everyone knows how to prioritise, why is it still one of the most difficult tasks to master? How can we avoid opinions and biases seeping into decision making? What balancing or corrective actions can reliably push you to become a better decision maker?
When decision makers do not prioritise with adequate knowledge supported by a strong framework, you will notice a strange situation where simple things get complicated and deprioritised while complex things get oversimplified and prioritised!
When leading analytics and data science practices, I often deal with these questions on prioritisation and on making the right decisions that have long-lasting impacts. Almost everyone sincerely believes that they know the answer to prioritisation, and yet prioritisation is not done adequately and consistently. I started writing this article for data science specifically, but, at the suggestion of peers, I am making it generic for broader application.
In this article, I propose a simple mental model and a scientific prioritisation framework: first principles approach and second order thinking. The overall components of this framework are not new. The main objective is to provide a simplified mental model that can make you approach a problem like a scientist with a curious and skeptical mindset and a philosophical view of what is required.
Whether it is product features to build or a data science project to take up, choosing the best thing to work on is always the most important problem to tackle. The problem is exacerbated when stakeholders cannot comprehend the complexity of a problem, and it quickly becomes a nerd fest of technically complex items picked up because they are cool to do. Particularly in data science, I have seen many times how simple things get complicated and deprioritised while complex things get oversimplified and prioritised.
In the field of AI/ML, the last decade was full of exploratory activities, with many organisations trying out AI/ML concepts out of a huge fear of missing out. More than a handful of times, when I asked what exactly they do in data science, the stakeholders could not clearly explain the project scope or the benefit expected from it.
But now, most organisations realise that putting data to use is not as easy as it seems, and similarly, building a cool product does not translate to customer acquisition. The emergence of user research is evidence that organisations recognise the complexities and put value propositions before technical solutions. Likewise, strong leaders in the data field have a close business sense and authority over the overall strategy, so that they can prioritise the right projects the right way.
There are already a ton of prioritisation frameworks, mostly centred on cost-benefit analysis. These are easy to use in theory. However, in practice, the process of estimating business benefit and complexity / cost is not straightforward, and it often ends up with opinions and unconscious biases determining the outcome. We will try to consciously avoid that through a simple and straightforward way of prioritising that covers everything from the problem statement to solution conceptualisation.
Stage 1: First Principles Approach for Problem Definition
We all know the stereotypical kid asking "why" all the time while the adults get frustrated. Children, by nature, follow a first-principles approach: they have come into a new world and need to know how things operate. Simply put, a first-principles approach is to address a problem as if we have no clue about anything. As we become experienced, our biases start to creep in and our judgements get clouded. The best way to fight this is to break a problem down into more fundamental components and justifications, moving away from abstract notions and implicit biases. That is applying the first-principles approach to anything we do.
The simplest way to apply first principles is to ask at least 5 whys. It requires very little preparation and anyone can start with the first question of “Why do you want to do X?”. The key part is for the question to lead towards another set of questions, all targeted to break the hidden barriers and remove any assumptions. This is the best way to find the ultimate root of the problem that we are trying to address instead of starting with the solution first.
As we ask five or more questions, we will find the root causes of the problem and the related hypotheses. This forms the foundational problem definition and root cause analysis. Most problems that seem significant or abstract will now turn out to have precise and manageable root causes that most people can understand.
The key items that need to be documented in this process are:
- Overall problem statement
- The 5 whys and their responses (no need to be religious about exactly five)
- Identified root causes for the problem statement
- Related hypotheses that need to be validated for the problem statement to hold true
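To make this documentation step concrete, here is a minimal sketch of what such a record could capture. This is purely illustrative; the structure, field names and the churn example are my own assumptions, not a prescribed template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProblemDefinition:
    """Outcome of the first-principles stage: problem, whys, root causes, hypotheses."""
    problem_statement: str
    whys: List[str] = field(default_factory=list)         # each entry holds one why-question and its answer
    root_causes: List[str] = field(default_factory=list)  # fundamental causes uncovered by the whys
    hypotheses: List[str] = field(default_factory=list)   # assumptions that must hold for the problem to be real

# Purely hypothetical example of a filled-in record
definition = ProblemDefinition(
    problem_statement="Monthly customer churn has doubled over the last two quarters",
    whys=[
        "Why are customers leaving? -> Support tickets take too long to resolve",
        "Why do tickets take long? -> Tier-1 agents lack product and account context",
        # ...continue until a fundamental, actionable cause is reached
    ],
    root_causes=["No product/account context is available to tier-1 support"],
    hypotheses=["Slow ticket resolution is the dominant driver of the churn increase"],
)
```

Whatever form it takes, the point is that the whys, root causes and hypotheses are written down in one place and can be revisited later.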
Stage 2: Solution conceptualisation
For the problem at hand, with each root cause identified, the organisation needs to respond to counter each of these root causes. These responses are called "counter measures". For each fundamental root cause, we propose a counter measure, assuming for now that it can somehow be solved. At this stage, it is better not to overcomplicate things with technical feasibility, as that leads to analysis paralysis too early. Counter measure proposals are functional in nature and often do not involve technological components.
After that, an action plan needs to be put together describing how these counter measures can be enacted. This is the actual solutioning phase, where the technologists take the lead and work with business stakeholders to provide a rough cut of what the solution would look like.
At the end of this stage, we should have a good grasp of the problem definition and of the counter measures and action plans that are within reach of the organisation.
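Continuing the illustrative sketch above (again, the names and structure are my own assumptions, not a prescribed schema), each root cause can be paired with a counter measure and a rough action plan:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CounterMeasure:
    root_cause: str         # the root cause this measure responds to
    counter_measure: str    # functional response, deliberately technology-agnostic at this stage
    action_plan: List[str]  # rough-cut steps put together by technologists and business stakeholders

counter_measures = [
    CounterMeasure(
        root_cause="No product/account context is available to tier-1 support",
        counter_measure="Surface relevant product and account context next to every ticket",
        action_plan=[
            "Workshop with support leads to define the minimum useful context",
            "Rough solution outline from the tech team covering effort and dependencies",
        ],
    ),
]
```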
Second order thinking
Second order thinking is to understand the effect of effects, or the consequences of consequences. Depending on the granularity, it could even be third or fourth order thinking. Second order thinking will mostly be applied in terms of business impacts, but it can also be used to uncover hidden complexities and effects such as team burnout.
Higher order Impacts
For example, when we build a data science algorithm, the first order thinking is to just say “I am building this so that the product can use them when required”. But, if you go one step further with second order thinking, you would then say “I am building this for products so that they can use it for helping the customer with XYZ”. The latter part of helping the customer is the second order. A third order thinking would continue as “so that we can engage the customer more” and a fourth order thinking would be “so that we can scale our offerings and revenue effectively”.
Ultimately, for a profitable business, any impact should translate into monetary terms and should help in evaluating the ROI of the project at hand.
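As a toy illustration, the orders of impact can be written out explicitly and, where possible, the final order can be tied to a rough monetary estimate. Both the chain and the numbers below are entirely hypothetical assumptions, only there to show the shape of the exercise.

```python
# Hypothetical impact chain for a recommendation model, written out order by order.
impact_chain = [
    "1st order: the product can call the model when required",
    "2nd order: customers receive more relevant suggestions",
    "3rd order: customer engagement increases",
    "4th order: offerings and revenue scale more effectively",
]

# Crude ROI sanity check with assumed numbers, only to show the idea of
# translating the final order of impact into monetary terms.
assumed_conversion_uplift = 0.01   # +1 percentage point, an assumption to be validated
monthly_active_users = 100_000     # hypothetical
revenue_per_conversion = 20.0      # hypothetical
estimated_monthly_impact = assumed_conversion_uplift * monthly_active_users * revenue_per_conversion
print(f"Estimated monthly impact: {estimated_monthly_impact:,.0f}")  # 20,000
```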
Complexity
Complexity in a project can have many dimensions. Most of these lead to uncertainties in execution, which then require more effort to ensure certainty and team buy-in. As explained in the article here, the four major types of complexities are as follows:
- Structural Complexity — This type of complexity refers to difficulty in managing interconnected activities. Examples include data dependencies or team dependencies for infrastructure.
- Technical Complexity — This type of project complexity refers to challenges in project design and technical details. The complexity is associated with new projects about which sufficient technical details are not available.
- Temporal Complexity — Temporal complexity refers to projects that operate in an uncertain environment. The uncertain factors include unexpected legislative changes, environmental impacts, seasonalities, trend changes, etc.
- Directional Complexity — This type of complexity refers to challenges in determining project goals and objectives. Goals are often communicated alongside hidden agendas and vague project requirements.
If the first stage of problem definition is done adequately, these complexities can be uncovered and addressed much earlier. There are numerous articles just on complexity definition and estimation, but for the sake of brevity we will not delve too deep into those (please leave comments if you would like me to address it).
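One lightweight way to make the four complexity types comparable across projects is to score each type on a simple scale and keep the scores next to the problem definition. This is my own sketch, not something taken from the cited article; the project names, scores and thresholds are all hypothetical.

```python
# Hypothetical 1-3 scores for the four complexity types, kept alongside the problem definition.
COMPLEXITY_TYPES = ("structural", "technical", "temporal", "directional")

projects = {
    "support-context-tooling": {"structural": 2, "technical": 1, "temporal": 1, "directional": 1},
    "realtime-recommendations": {"structural": 3, "technical": 3, "temporal": 2, "directional": 2},
}

def complexity_bucket(scores: dict) -> str:
    """Map the average score to the low/medium/high buckets used in the 3x3 matrix later on."""
    avg = sum(scores[t] for t in COMPLEXITY_TYPES) / len(COMPLEXITY_TYPES)
    return "low" if avg < 1.5 else "medium" if avg < 2.5 else "high"

for name, scores in projects.items():
    print(name, complexity_bucket(scores))  # support-context-tooling: low, realtime-recommendations: high
```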
Complications vs complexities
Many times a project is complicated due to inadequate attention to scoping, or because it is too ambitious. This should not be confused with complexity, as the two are materially different. The best explanation is from Dombkins below:
“The differences between complicated and complex projects are not readily understood by many. Complicated projects are relatively common and are usually delivered by decomposing the project into subprojects, and then resolving inter-dependencies (integration) between subproject boundaries. To many, complicated projects will seem complex. Complicated projects, although usually very large, are able to have their scope defined to a high degree of accuracy at project inception and throughout the design phase. This is in stark contrast to complex projects where it is very often impossible to undertake accurate detailed long term planning” (Dombkins, 2008).
As the quote explains, a clear and contained scope, with a focus on immediate deliverables and agile principles, is the key to reducing complications in projects.
Scientific Prioritisation
Before we discuss prioritisation, let us understand the key aspect of a scientific process:
A scientific process is the process of observing, asking questions, and seeking answers through tests, experiments and factual evidence.
The main part is to leave a trail of evidence so that we can look back at it and improve our prioritisation methodology with the newly acquired knowledge. So, documentation is key here. Tools like Confluence, Jira, etc. can significantly help in setting up templates and checklists to ensure all the required documentation is done.
Now, with all the data in place, we can proceed to prioritisation. I have used a more elaborate 3x3 prioritisation matrix instead of the conventional 2x2. The infographic above is largely self-explanatory, but there are some noteworthy callouts.
Most "good" projects that are worthy of investment are either medium or high complexity with relatively high impact potential. These are often picked up and acted upon by the team once the low-hanging fruits have been exhausted.
There are low-hanging fruits with low complexity and high impact; these are easy to handle and are usually picked up early on. Not all low-complexity items have high impact, though: the low-impact, low-complexity tasks are typically used to fill idle time.
The tricky ones are of medium to high complexity with medium impact. These often end up being the projects an organisation has to undertake to future-proof its business model. An example could be long-tail solutions that require large investments for modest gains in customer value proposition, to prevent competitors from encroaching on that space.
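If it helps, the 3x3 matrix can also be written down as a simple lookup so that every prioritisation decision leaves a trace. The bucket labels below are my own reading of the matrix, so treat them as an assumption and calibrate them to your own version.

```python
# (impact, complexity) -> suggested treatment; labels are illustrative and should be
# calibrated to your own 3x3 matrix.
PRIORITY_MATRIX = {
    ("high", "low"):      "do now (low-hanging fruit)",
    ("high", "medium"):   "prioritise (worthy investment)",
    ("high", "high"):     "prioritise with strong commitment",
    ("medium", "low"):    "do now",
    ("medium", "medium"): "evaluate (possible future proofing)",
    ("medium", "high"):   "evaluate carefully (future proofing)",
    ("low", "low"):       "use idle time",
    ("low", "medium"):    "deprioritise",
    ("low", "high"):      "avoid",
}

def suggested_priority(impact: str, complexity: str) -> str:
    return PRIORITY_MATRIX[(impact, complexity)]

print(suggested_priority("high", "medium"))  # prioritise (worthy investment)
```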
With different levels of priority, size, complexity, and so on, the decision maker has a choice about what kind of resources to deploy and what level of commitment is expected. The important part is to ensure the deliverables start rolling out and to revisit the decision-making process to see how it can be improved.
Consistency is key to allowing positive effects to compound.
I hope you had a good read and are able to apply this overall approach to prioritisation in your work. It is worth repeating that documenting both the prioritisation process and the decisions and feedback (on how prioritisation helped) is the key to getting the process adopted in your area of work. With documentation, you can transfer the accountability for decision making and prioritisation from yourself as a decision maker to the process. And that's one sure way to have a more peaceful mind :)
Author info
Shyam currently heads the Data team for Yara SmallHolder solutions, where his team is responsible for Data Management & Engineering, BI, Product Analytics, Marketing Analytics, Market Intelligence, Strategic Insights and Data Science. He has set up fully functional teams in both the technology and data domains several times. He is also a practising data scientist himself and has experience in data science strategy consulting for large corporates.
Please feel free to connect and reach out for a chat on LinkedIn here.