Small probabilities can distort our judgment about the significance of change
Look carefully to avoid an unpleasant surprise later!
We have all fallen for the boiling frog[1] syndrome at some point in our lives! Our judgment about the significance of change can be distorted by our perception relative to a baseline level. Generally, it doesn’t matter too much. But sometimes, we end up regretting our delayed response to signals that we should have picked up much earlier.
When it comes to risk management, we are constantly dealing with uncertainty. We hope to reduce the level of uncertainty over time and take timely actions to minimize the consequences.
But how do we take timely action when we don’t correctly judge the significance of changes that appear too small and insignificant?
Let us say we are monitoring changes in the rate of occurrence of a certain harm on a regular basis. In period 1, we estimate the probability of occurrence to be 0.001 based on the analysis of complaints data. In period 2, the same calculation yields a value of 0.00182. We notice that the probability has increased by 0.00082 over the last period. This feels like a very small change, right? Maybe nothing to worry about. Let us keep monitoring.
How about if I presented the same information as an 82% increase in the probability of occurrence of harm in period 2 compared to period 1?
When I asked this question to my colleagues in a LinkedIn poll, nearly 90% of them “felt” the 82% increase over the prior period was “significant”. Only a few noticed that both options represented the same level of increase.
This, by no means, is a scientific poll. But it demonstrates how most people “feel” about the significance of change. Normally, we don’t do fancy statistical analysis before making judgments. It is not good or bad; just human nature. It is not easy to do mental math, even for highly analytical people. But it is much easier to judge the significance of change when the information is presented in a relative way that can help us make a quick comparison.
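The arithmetic behind the poll is easy to check. Here is a minimal sketch in Python using the figures from the example above; the variable names are mine, chosen for illustration:

```python
# Hypothetical complaint-rate figures from the example above.
p1 = 0.001    # period 1: 1 harm per 1,000 units
p2 = 0.00182  # period 2: 1.82 harms per 1,000 units

absolute_change = p2 - p1               # 0.00082: looks negligible
relative_change = (p2 - p1) / p1 * 100  # 82%: looks significant

print(f"Absolute change: {absolute_change:.5f}")
print(f"Relative change: {relative_change:.0f}%")
```

The two numbers describe exactly the same change; only the framing differs.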
How we present data matters when evaluating changes in risk
A common practice in the medical device industry is to use the rate of occurrence of harm as a proxy for probability. But they are not necessarily the same.
A probability value is always between 0 and 1. It is calculated by dividing the number of events of interest by the total number of opportunities. The probability of occurrence reflects a chance, not necessarily a rate of occurrence you will observe in real life. The theoretical probability of a head or a tail in a coin toss is 0.5, but that does not mean you will see exactly 50 heads in 100 tosses of a fair coin. It reflects an expectation of likelihood, not a predicted rate.
In general, it is not feasible to estimate a theoretical probability of occurrence of harm. In practice, the observed rate of occurrence in a given time period is used as a proxy estimate for the probability of occurrence. But it is only an estimate. It should be updated in each time period and it should not be used to make any predictions.
Rate of occurrence can be expressed in different units such as percent, per million, or per 100,000, depending on how the data is normalized. The denominator in this calculation generally represents the number of units sold or the number of procedures performed in a given time period.
In the example above, the probability value of 0.001 in period 1 corresponds to a rate of 1 in 1,000, or 0.1%. Similarly, the probability value of 0.00182 in period 2 corresponds to approximately 1.82 in 1,000, or 0.182%.
When we try to compare 0.00182 with 0.001, an absolute change of 0.00082, we don’t have a good comparator to judge the significance of this change. But if we compare 1 in 1,000 with nearly 2 in 1,000, we can quickly see that nearly twice as many patients experienced harm in period 2 compared to period 1. It may still be completely acceptable in the context of the benefits of our medical device. But we will certainly notice the change and ask questions.
Relative change is more important than absolute change
An absolute change of 0.00082 in the example above does not mean much. However, an increase of 82% over the prior period is a much more noticeable relative change over the baseline. We are likely to be more attentive to a relative change, especially when it appears sufficiently high compared to a reference. In our normal experience, anything close to 100% would seem sufficiently high.
A common practice in the medical device industry is to present the monthly rate data based on the number of units sold or number of procedures performed. Generally speaking, the denominator in the rate calculation is a large number, which makes the absolute value of rate very small. As noted above, it is not easy to appreciate small changes in these numbers without looking at the relative change. Quite often, this practice leads to an underestimation of risk and delays in detecting potential safety signals that require timely action.
It is more difficult to discern changes in occurrence of extremely rare and serious adverse events
Generally speaking, even high-risk devices approved by the FDA are considered to be safe and effective. Serious and life-threatening injuries, and even deaths, do occur when these devices are used, but they are generally infrequent.
When trying to discern a change in the occurrence of these rare, infrequent events, it is not appropriate to use a rate calculation. The question is not whether there is a change in the rate of occurrence; rather, we need to assess whether the observed frequency is disproportionately higher than expected, or higher than that of other therapies. Disproportionality methods such as the Proportional Reporting Ratio[2] (PRR) are more suitable.
Key takeaways
Risk management of medical devices requires ongoing surveillance throughout their lifecycle. It is generally not feasible to accurately estimate the probability of occurrence of harm, especially when only a limited amount of information is available at the time of market launch. That is why it is important to be vigilant and continually monitor any changes in the occurrence of harm from expected levels.
Our ability to discern significant changes can be greatly affected by how we analyze and present market data. When the rate(s) of occurrence are very low, it is hard to discern any meaningful changes without sophisticated statistical analysis. It is important to be mindful of a false sense of security our data analysis methods can create, especially when the baseline rate of occurrence is low. We should pay more attention to a relative change and use other more appropriate methods such as the proportional reporting ratio for extremely rare events.
[1] Boiling frog: Wikipedia entry - If a frog is put suddenly into boiling water, it will jump out, but if the frog is put in tepid water which is then brought to a boil slowly, it will not perceive the danger and will be cooked to death.
[2] See FDA white paper: Data mining at FDA
Thank you, very interesting discussion ... I guess we should also take into account another parameter, which could be the "duration" of the trend (sorry for my English). It allows us to stay vigilant while not taking decisions too quickly?
I agree with your point. I think SPC can help. If a company is collecting this data, an XmR chart is an easy way to look for rate changes beyond what is normally expected. There are also methods to look at very rare events (1 or 2 per year) and charting them. I don't mean to say only use statistics, but I like using SPC and my gut instinct. Those interested can see Understanding Variation by Donald Wheeler.