All cognition is flawed

Clinicians, like researchers, can fall prey to cognitive bias (Kleinmuntz 1990). It lurks within our minds without our awareness and can surface in everyday life as a stereotype or an assumption. For clinicians, though, cognitive biases affect two core tasks: diagnosing a problem, i.e. ‘what we think it is’, and providing treatment, i.e. ‘how to fix it’ (Croskerry 2013).

When identifying the problem a patient has come to see us about, clinicians can draw on two modes of thinking: heuristic and analytical (Croskerry 2003). Heuristics can be thought of as the shortcuts our brain uses to save energy by solving a problem quickly; they might also be called ‘rules of thumb’ or ‘intuitive judgements’. While useful when time is short and resources (mental and physical) are low, heuristic thinking can lead to trouble, because it increases the chance that cognitive bias will affect the reasoning process and produce diagnostic errors (Kleinmuntz 1990). Over 100 different cognitive biases have been described (Croskerry 2003), substantially more than one blog post can cover. This post will identify just a few biases and provide examples of how they may affect clinical practice.

Let’s look at a typical presentation of a patient to a clinician. A patient presents with a particular problem. The clinician listens and gathers some routine information, such as medication use and lifestyle details. The clinician then has to gather further information about the presenting problem, and herein lies the potential for bias: if the clinician arrives at a hypothesis (‘what we think it is’) too quickly and fails to adjust it when new information comes to light (known as anchoring), the chance of a diagnostic error rises and the treatment plan (‘how to fix it’) may be built on the wrong foundation (Graber, Gordon & Franklin 2002). Anchoring can be especially severe when combined with confirmation bias, in which the clinician searches for information that will prove the hypothesis correct rather than information that might prove it wrong (Rabin & Schrag 1999).

For example, a patient presents to the emergency department with multiple stab wounds to the chest, head and arms. The patient is intoxicated but calm and cooperative. There are no signs of lung problems and all physical findings (apart from the multiple stab wounds) are normal. The first concern is the chest wound, which could damage vital organs. After chest scans and a physical examination of the chest wound, the danger is ruled out and, following treatment, the patient is discharged. The patient returns four days later with blurred vision, vomiting and trouble concentrating. A CT scan of the head shows a knife wound that had penetrated the brain. In this example you can see both biases at work: the clinician ‘anchored’ onto the chest injury and then used scans and a physical examination of the chest to confirm the initial hypothesis, failing to search for other possibilities or other injuries. The example is dramatic, but it shows how badly things can go when something is missed, even when the clinician appears to have done an in-depth examination (example adapted from Croskerry 2013).
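For quantitatively minded readers, the interplay between anchoring and confirmation bias can also be sketched in code. Below is a minimal illustrative simulation, written loosely in the spirit of the formal confirmatory-bias model of Rabin and Schrag (1999); the two-hypothesis setup, the parameter values and the `simulate` function are assumptions made for illustration, not taken from that paper or from clinical data. It compares an unbiased Bayesian, who updates on each clinical finding as it actually is, with an ‘anchored’ clinician who, while still favouring the initial hypothesis, tends to misread disconfirming findings as confirming ones.

```python
import random

def simulate(n_findings=20, p_correct=0.7, q_misread=0.8, seed=1):
    """Toy comparison of an unbiased updater and a confirmation-biased one.

    Two hypotheses: the clinician's anchor (H) and the true diagnosis (not-H).
    Each clinical finding points towards the truth with probability p_correct.
    Once the biased clinician favours the anchor, she misreads a disconfirming
    finding as a confirming one with probability q_misread.
    """
    random.seed(seed)

    # Both clinicians start leaning towards the anchor after a hasty
    # first impression: odds of 2:1 in favour of the (wrong) hypothesis H.
    odds_unbiased = odds_biased = 2.0

    # Likelihood ratio contributed by a single confirming finding.
    lr = p_correct / (1 - p_correct)

    for _ in range(n_findings):
        # The anchor is wrong, so most findings genuinely disconfirm it.
        confirms_anchor = random.random() > p_correct

        # The unbiased clinician updates on the finding as it actually is.
        odds_unbiased *= lr if confirms_anchor else 1 / lr

        # The biased clinician sometimes perceives disconfirming evidence
        # as confirming -- but only while she still favours the anchor.
        perceived = confirms_anchor
        if not confirms_anchor and odds_biased > 1 and random.random() < q_misread:
            perceived = True
        odds_biased *= lr if perceived else 1 / lr

    def to_prob(odds):
        return odds / (1 + odds)

    print(f"P(anchor) unbiased clinician: {to_prob(odds_unbiased):.3f}")
    print(f"P(anchor) biased clinician:   {to_prob(odds_biased):.3f}")

simulate()
```

Typically the unbiased updater’s belief in the wrong anchor collapses towards zero, while the biased updater stays close to certain of it: even a modest tendency to misread conflicting findings is enough to turn a hasty first impression into a ‘certainty’, which is precisely the failure mode in the stab-wound example above.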

Other biases can come into play when receiving previously gathered information (referrals or handovers) from other clinicians. Often the previous clinician will inform you of what they think it is and, like a snowball rolling down a hill, this thought gathers momentum; what started out as a possibility evolves into a “certainty” (known as diagnosis momentum) (Croskerry 2003). Often accompanying this bias is the way the previous or referring clinician ‘frames’ the information, which might promote a particular view of the problem and shut out other possibilities (Croskerry 2003).

For example, a patient is sent to a psychiatric facility by her general doctor for severe symptoms of anxiety and depression. She has been having trouble breathing and has fainted several times. The psychiatrist wants to rule out a chest infection and sends her on to the hospital for a chest x-ray. At the hospital the patient is assessed and it is noted that she is a smoker, is overweight and has asthma. Her chest is examined and the films are normal. The hospital doctor concludes that the breathing problems are due to the anxiety. As the patient is leaving, she faints; she cannot be resuscitated and the monitor shows that her heart has stopped. The autopsy reveals multiple pelvic vein blood clots extending from the femoral vein, and clots in both lungs, which would have caused the breathing problems. Diagnosing the clots would have required further tests. Once again a dramatic example, but here one can clearly see the effects of framing and diagnosis momentum. The possibility that the breathing problems were due to anxiety gradually gathered momentum until it stuck, despite the hospital clinician noting conflicting evidence (the smoking and the weight). Complicating this further, the information had been ‘framed’ as a possible ‘chest infection’, shaping the thinking of each subsequent clinician (example adapted from Croskerry 2013).

The problem with cognitive bias is that it accounts for a large proportion of incorrect diagnoses, some of which have a huge negative impact on the patient (Croskerry 2013). What makes it so tricky is that clinicians are required to make decisions based on the information given to us by patients, other clinicians and so on, while funding, resources and time limit our ability to run multiple tests or spend long hours with a patient (Graber 2003). However, research has suggested several ways to minimise cognitive bias. One is metacognition, or ‘thinking about your thinking’ (Croskerry 2002). For a clinician, this might mean switching to the analytical mode of thinking and double-checking a hypothesis before accepting it as true, or re-thinking a previous hypothesis when conflicting information appears (Croskerry 2002). Another is deliberately asking oneself ‘what else could this be?’ and searching for evidence that might disprove the first hypothesis (‘what I think this is’). Lastly, practice scenarios can be used to highlight cognitive biases and identify ways to reduce them (Croskerry 2002). Clinicians can start using these techniques to minimise the effect of cognitive bias in clinical practice.

This is the second in a three part series of posts looking at what cognitive bias is, and how cognitive bias influences our clinical practice and research.

About Kerwin Talbot

I completed my degree in Podiatry (that’s right, feet) with honours in 2011. After working clinically for 2 years I returned to the research world. After seeing several perplexing patients during my clinical years, I decided (rather naively) to start a PhD in pain and neuroscience, and had the amazing fortune of being made part of the BiM research team. I also do some clinical and academic teaching in Podiatry at the University of South Australia. My research looks at pain and classical conditioning. With the rest of my free time I try to balance training for triathlons and watching devastatingly bad television.

References:

Croskerry, P 2002, ‘Achieving quality in clinical decision making: cognitive strategies and detection of bias’, Academic Emergency Medicine, vol. 9, no. 11, pp. 1184-1204.

Croskerry, P 2003, ‘The importance of cognitive errors in diagnosis and strategies to minimize them’, Academic Medicine, vol. 78, no. 8, pp. 775-780.

Croskerry, P 2013, ‘From mindless to mindful practice—cognitive bias and clinical decision making’, New England Journal of Medicine, vol. 368, no. 26, pp. 2445-2448.

Graber, M, Gordon, R & Franklin, N 2002, ‘Reducing diagnostic errors in medicine: what’s the goal?’, Academic Medicine, vol. 77, no. 10, pp. 981-992.

Graber, M 2003, ‘Metacognitive training to reduce diagnostic errors: ready for prime time?’, Academic Medicine, vol. 78, no. 8, p. 781.

Kleinmuntz, B 1990, ‘Why we still use our heads instead of formulas: Toward an integrative approach’, Psychological Bulletin, vol. 107, no. 3, p. 296.

Rabin, M & Schrag, JL 1999, ‘First impressions matter: a model of confirmatory bias’, Quarterly Journal of Economics, vol. 114, no. 1, pp. 37-82.

Comments

  1. This looks like a debate about two opposing sides: how we feel about how we think, and how we think about how we feel. There are potential biases on both sides, because we’re blind to our own blindness. But we have an advantage over thoughts and feelings: our science and technology. There’s no problem here about bias unless we get in the way with our thoughts and feelings. JQ is right: the higher the bias, the higher the probability of incorrect diagnosis.

    Some PTs might be better placed to be psychologists than PTs; this would explain why some aren’t interested in their patients’ thoughts, and, because they see feelings, would explain why some feel a need to be unorthodox. Nobody knows how pain works; as Lorimer says, you might as well know how consciousness works. Just as “unusual and meaningful” are biases.

    Advice on clinical observations and research is a red herring; we all know observations are heavily biased (it takes science and hard work to observe without prejudice and use information intelligently), and research/textbooks are a journey, not a destination. Only those that use feelings think research is a fact/destination rather than a stream.

    We do what we know; when we know better, we do better. That’s the “great” thing about intuition: one doesn’t have to do the work, just tap into your inner knowledge with emotive mirroring and, just like “let there be light”, the diagnosis appears. The problem of course with this dangerous path is that you set yourself up to be a “god”. There is a reason why, at the end of the film The Devil’s Advocate, the devil says, “Vanity, my favorite sin.” Show me the human that has no vanity and we see the best of what we can be. Interesting that all the deadly sins are feelings, and probably a reason why “what the hell you’re talking about” is possibly apt.

    The same goes with “Presence”: one doesn’t necessarily have to do the hard work from the past, just be in the moment and let the process unfold in front of you (you can worry about Pandora when it happens). Presence is important, but not without the hard work. Wisdom without knowledge is dangerous. Thinking that being in the Present is how to get rid of bias is impossible; actually it’s the other way around, one can’t fully be Present with any biases. JQ has shown you’re biased, EG; you said you are, even in your clinic.

    It’s not clearly defined because it can’t be, Alison, but it is acknowledged. That’s one of the points of BiM. And the problem with belief, Andrew, is that it requires no proof whatsoever; that’s why it’s called the leap of faith, the leap into the unknown. Current researchers don’t think the senses are unreliable (my favorite is: there is no such thing as an optical illusion, because the optic nerve can’t be fooled; it’s a processing illusion). There’s no such thing as Bad Science; if it’s bad, it’s not science, so don’t blame science for bad academics. Wittgenstein is nearly right: it’s not what is hidden from us that is important, it’s those things we believe to be true that are false that matter more. There’s no problem here about bias unless we get in the way with our thoughts and feelings.

    “Brains and neurons have no causal powers. They cause none of our perceptual experiences, and none of our behavior. Brains and neurons are a species-specific set of symbols, a hack.

    Well, this does not stop us from a successful science. What we had is one theory that turned out to be false, that perception is like reality and reality is like our perceptions. That theory turns out to be false. That doesn’t stop us from now postulating all sorts of other theories about the nature of reality, so it’s actually progress to recognize that one of our theories was false. So science continues as normal. There’s no problem here.

    Evolution shows that our perceptions have been shaped not to show us reality as it is, but that does not mean the same thing about our logic or mathematics. It’s likely we’ll find that there are some selection pressures for our logic and our mathematics to be at least in the direction of truth. We don’t get it all right, but at least the selection pressures are not uniformly away from true math and logic. So I think that we’ll find that we have to look at each cognitive faculty one at a time and see what evolution does to it. What’s true about perception may not be true about math and logic.” – D. Hoffman

  2. John Quintner says:

    EG, according to the guidelines that appear below, “promotion of your particular therapy in the comments section is not appropriate …” In addition, although they continue to reflect your cognitive bias, your comments do not relate to the topic under discussion. I await the Web Manager’s decision.

  3. Alison Lingwood says:

    Do these experienced clinicians allow themselves to feel outside the box and explore their senses further, expanding beyond the limitations they had chosen to accept?

  4. Hi Kerwin, great post. In the industry I work in (vocational rehabilitation), we are often ruled by process and procedure, which means we might not be asking new and different questions or uncovering new ways of doing things. I think especially as clinicians become more experienced, their expertise allows them to make quick decisions, which is good, but like you, I do wonder whether our “rules of thumb” result in some missed opportunities for ourselves and our clients.
    I recently wrote about this topic from a “functional fixedness” point of view if you’re interested to have a look: http://ableminded.com.au/what-duncker-did-differently/. I’ve updated my post to include a reference to your article here as further reading – it’s great stuff!

    Natalie

  5. I just think this is dangerous ground. If you believe that your senses are limited and unreliable, then they certainly will be. But that decision will probably have been made because of a belief in the “science”. So I ask – who is doing the science that proves senses are generally unreliable, and what are their beliefs and biases? It then becomes a circular argument. If these researchers believe this in the first place, then they have argued for a limitation, and so will necessarily find it.

    I’ve said this here before and will repeat it. In my days as a researcher, my mentor spent my first day at work demonstrating to me that he could set up an experiment to prove almost anything if he had freedom to choose the (usually unstated) a priori assumptions that determined the structure of the experiment. For instance, if I don’t believe that parental diet is a factor in development of coeliac disease, but I do believe that genetics play a part, then I will look at genetics and find a correlation and publish a paper which states “coeliac disease caused by expression of genes XYZ”. Then someone who doesn’t realise that I did not also test it against parental diet (which also affects the dietary preferences of children) – or who believes that “genetics control everything” – will assume that all parent-child correlations are necessarily genetic. And so it goes.

    Then we could take a wider example – If I don’t believe that EM causes immune disruption, I would not take the issue seriously and would therefore only investigate dose-related causality chains and “prove” there is no effect. This is a result of an a priori assumption. If we take an a priori assumption that EM *may possibly* cause immune disruption, we might be able to think beyond dose-dependent relationships and design experiments to look at external EM signal strengths similar to the ECG/EEG/EMG field strengths that are found around a human body. The results are very different.

    It’s not that the senses are unreliable – what is unreliable is that individual belief systems restrict the breadth of experimental design… If there are 100 unimaginative papers and one in which the experimenter did not wear blinkers, then under the present system the unimaginative experiments will disprove the imaginative one just by sheer weight of numbers.

    I had to plough through about 200 peer-reviewed papers describing badly designed experiments to discover how CSF flows before I found a researcher who had been creative enough to ask the right questions rather than repeat the old errors. In several high-profile reviews of CSF literature, this specific researcher had even had all reference to his papers omitted because they did not conform to the norm. Last year, with the unexpected discovery of an independent in-brain lymphatic system, Dan Greitz’s results were 100% validated. But now I ask: how many more years will it be before the weight of previous research ceases to obscure those 3 or 4 papers? And how many more badly designed experiments will continue to be done on critically instrumented animals by people who only read the mainstream reviews? This foray into peer-reviewed science doesn’t leave me very impressed by the general standard of medical research.

    Three monkeys

    Doubting everything or believing everything are two equally convenient solutions, both of which save us from thinking. (Jules Henri Poincare)

    The aspects of a thing that are most important to us are hidden to us because of their simplicity and familiarity. (Ludwig Wittgenstein)

    Everyone takes the limits of his own vision for the limits of the world … (Arthur Schopenhauer)

    John Quintner Reply:

    Dangerous ground? I agree.

    I am reminded of the cognitive biases of clinicians who were very influential during the Australian “RSI” debate in the 1980s.

    The participants who espoused the views of what I termed “psychalgic fundamentalism” eventually triumphed and, in so doing, effectively stigmatised those (mainly women) with cervico-brachial pain syndromes presenting in an occupational context.

    Sad to say, the repercussions of this cognitively biased view of their predicament are still being felt.

    References:

    Cohen ML, Arroyo JF, Champion GD, Brown CD. In search of the pathogenesis of refractory cervicobrachial pain syndrome: a deconstruction of the RSI phenomenon. Med J Aust 1992; 156: 432-436.

    Quintner JL, Cohen ML. Occupation neuroses and the psychogenic connotation of “repetition strain injury”: the misconstruction of neurosis. Integrative Psychiatry 1994; 10: 165-176.

    Quintner JL. The Australian RSI debate: stereotyping and medicine. Disability and Rehabilitation 1995; 5: 256-262.

  6. Colin Power says:

    Nice to see a Podiatrist delving deeply into pain and linking it to cognitive bias.
    I am biased…so my funnel is broad when I review cases in the orthopaedic triage clinic seeking the bias of others. Acknowledging my flaw has given me a new approach to my clinical life.

    The winner is hopefully the person seeking help.

  7. Also Donald Hoffman: do we see reality as it really is?

  8. Alison Lingwood says:

    EG, why not make your unorthodox approach the norm? By being the Presence, or we could equally say before ‘sense’, other therapists would become aware, just like your clients.
    David Eagleman, a neuroscientist, has a good TED lecture on how our senses/cognition limit us.
    This is working for me.

    EG Reply:

    Nice vid, thanks!

  9. I suppose I have to trust my patient that they are invested in what they feel is the reality of their experience, and work from there!?

    EG Reply:

    My experience suggests that what a client says in an interview is not often particularly useful, so I don’t pay attention on that level… not much anyway.

    E.g. today I had a woman with chronic hip pain. The surface story has elements such as: “started on this date, sore when I do this, always limping, wakes me up” etc. It’s not that useful or interesting. Some attention needs to be paid, but not too much. Mostly I’m paying close attention to her inner self (i.e. emotions and awareness).

    So I do this with as much Presence as I can muster, and as usual, emotions spring up. She starts to ‘tear up’ and her voice cracks. Since I’m paying very close attention, I feel her emotion through mirroring, but if I can maintain Presence, then that gives me a measure of ‘affect tolerance’. Affect tolerance is the ability to witness others’ negative emotions without aversion or discomfort. So just by doing this, I’ve uncovered something of importance. If this emotion is the cause of her hip pain, then it’s just been re-activated – that’s good, because soon the memory will [according to theory] become plastic and we can maybe work with it.

    So she gets up on the treatment bench and very soon she is telling me she misses her late husband, hates the state of the world, really feels like a burden to her daughter and wants to ‘leave the planet’. It’s all nicely activated, so after a short break I leave the room and let things settle. I put her on a hot pack and TENS.

    When I come back, [this is a good example of a built-in bias in me], I expect to be able to start some reframing. Instead, she starts right back up where she left off, on the very same topics! It’s full on, so I don’t counter. Countering here would be detrimental. At some point, maybe next visit, she might be open to reframing.

    Alison Lingwood Reply:

    She may have already reframed herself! This depends on the breadth of the therapist’s/client’s ability to engage with Presence?

    EG Reply:

    Exactly, and reframing oneself is always the optimal outcome. If I do any reframing at all, it’s always with a ‘light footprint’. But in this case, I knew she hadn’t changed because of the continuing barrage of stuff. Ideally a client should become peaceful and quiet, and this wasn’t happening. I reviewed the treatment last night, and I did make a basic skill error that prevented proper release.

    Everything she was saying was delivered with a laugh and smile. So she’d say things like: “oh yes, time to go, I’ve had enough, haha!”. My error was that when she’d smile/laugh, I [unconsciously] mirrored it. That’s just lack of concentration on my part. I became distracted by the surface presentation. The topics aren’t funny – she’s depressed, she hates the world and has never finished grieving her husband’s death. What I should have done is totally disregard the surface presentation and stay fixed on the inner self.

    She obviously feels ashamed of her inner emotional life and so feels the need to smile/laugh whenever some of it is shared. But that’s my mistake – I didn’t give her enough space to go fully into that.

  10. The way I think of it is – “if you don’t trust your own experience, whose do you trust?” It’s a good question, particularly when applied to scientific articles describing how the human body-mind works.

  11. Unusual and meaningful to whom?
    Isn’t the easiest person to fool yourself?

    EG Reply:

    Unusual and meaningful to me. You’re saying there’s potential bias there – yeh, sure there is. It can’t be any other way, unless I post videos of treatments for feedback.

    We’re in a strange situation as professional pain helpers – we can’t easily assess how well we are doing at our job. We might treat someone with LBP and think we did a brilliant job to fix it for $800 over 2 months. We might even advertise this fact with a string of sciencey-sounding superlatives. Meanwhile, someone else might have fixed the very same problem for $80 in 1 treatment. Someone else again may have fixed it for free in under 2 minutes [I have posted links to videos of this sort of thing on NOIJAM].

    Locum work can be a good way of finding out who’s doing what, but it can be extremely disheartening to observe the overservicing and nocebo-ing that goes on. And checking up on colleagues’ work isn’t recommended, obviously. So it’s very limited.

    Doing ‘live’ treatments as an inservice is probably an ok option for some. But if I did this, no one would understand what the hell I’m talking about. Even on ‘progressive’ forums and blogs, I’m regarded as unorthodox.

    Ralph Reply:

    I think absence of evidence is not evidence of absence, and correlation is not causation.

    Alison Lingwood Reply:

    I agree.
    What is cognition? One definition says, “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses”.
    Patients come to us to find something to help them. This ‘process’, which is not all a mental action, is not clearly defined and is not acknowledged in research.

  12. Therapeutic Presence means holding no pre-conceived ideas about how pain works. During the treatment, everything is noted as it happens, even the most minute occurrences. Treatment simply progresses as it does. There’s always an element of spontaneity about good treatments.

    Afterwards, unusual and meaningful findings are collated and reflected on (clinical observation, clinical research). This is where ideas and theories develop. Ideas such as the ones I made on the other thread. One can’t help but have quite deep insights when every tiny detail of an interaction is noted in real time and without judgment.

    You need to let go of your biases in order to be properly Present. Therefore, Presence is the only true way to overcome cognitive bias. There’s no greater skill that needs to be developed. Read and put into practice. http://www.sharigeller.ca/publications.php

  13. John Quintner says:

    EG, thanks for the advice.

    However, as for consulting “big” textbooks, I much prefer the advice of Theodor Billroth [1829-1894]: “It is a most gratifying sign of the rapid progress of our time that our best textbooks become antiquated so quickly.”

    In the spirit of this article (“All cognition is flawed”), Ben Hecht [1894-1964] highlighted one of the handicaps to the practice of medicine: “… the basic incompetence of the human mind, medical or otherwise, to observe without prejudice, acquire information without becoming too smug to use it intelligently, and most of all, to apply its wisdom without vanity.”

    Therefore I freely admit to being highly prejudiced against the construct of “somatization” as an explanatory model in the context of persistent and unexplained pain (Quintner & Cohen, 1999). But at least I am in good company [Merskey 2009].

    References:
    Billroth T. The Medical Sciences in the German Universities, Pt II. New York: Macmillan, 1924.

    Hecht B. From: Miracle of the Fifteen Murderers. Collier’s Weekly,1943; Jan 16:11-12.

    Merskey H. Somatization: or another God that failed. Pain 2009; 145: 4-5.

    Quintner JL, Cohen ML. Response to Gerald Aronoff (“Myofascial pain syndrome and fibromyalgia: a critical assessment and alternate view”) [letter]. Clin J Pain 1999; 15: 155-157.

  14. John Quintner says:

    Kerwin, two splendid examples of cognitive bias have recently appeared on BiM:

    (i) “… most pain is somatization, even perhaps that which results from ‘accidental’ tissue damage.”

    (ii) “… how can any clinician be confident that any physical complaint walking through the door (sic) is not prima facie a side effect of ptsd-spectrum disorder?”

    The probability of an incorrect diagnosis being made by a clinician holding such biased views as these must rate quite highly.

    EG Reply:

    Clinical observation, John. Clinical research. Look ’em up in one of your big textbooks.