Clinical Reasoning
If the fresh facts which come to our knowledge all fit themselves into the scheme, then our hypothesis may gradually become a solution.
—SHERLOCK HOLMES, THE ADVENTURE OF WISTERIA LODGE
POINTS TO REMEMBER:
A diagnosis is a decision about which disease model best fits the patient’s condition. The decision, and the disease model itself, are tentative.
One does not make a meal by tossing together every ingredient found in the pantry. Nor does one make a diagnosis by entering every item in a computerized checklist into an algorithm.
Time spent solely for the purpose of “completing” a prescribed checklist to justify a billing code is not spent focused on patients and their illnesses.
Experts develop sophisticated pattern-recognition skills, incorporating what is present and not present. This process is not reducible to a digitized formula. This chapter helps the student learn some of the components that will gradually be integrated into expertise.
Principles of Clinical Reasoning
Falsifiable Hypotheses
A falsifiable hypothesis is not a fraudulent or inappropriate hypothesis but rather one that is susceptible to being disproved (see Chapter 1). An example of a falsifiable hypothesis is: “The patient has consolidation in the left lower lobe.” This is falsifiable because it is possible to demonstrate that there is no consolidation in the left lower lobe.
In contrast, Galen’s hypothesis about his cure for the plague was not falsifiable: “This cure is efficacious in all cases in which it has been tried, except in those who were so sick that they were going to die anyway.” One cannot prove that the survivors would have died if they had not received his cure (or that those who did die would not have lived without it).
A differential diagnosis (vide infra), as discussed in Chapter 3, should be a list of falsifiable hypotheses.
Negative Propositions
Negative propositions are frequent in medicine. Here are some clinical examples:
1. It is not possible to percuss the heart borders.
2. Persons who have one kidney removed never get compensatory hypertrophy of the remaining kidney.
Although negative propositions such as these are difficult to prove, they can be easily disproved (as by counterexample). In other words, they are falsifiable.
As a general rule, negative propositions, if universal, cannot be proved. For instance, if I say, “There are no unicorns,” this implies that neither you nor I have yet seen a unicorn; furthermore, neither of us will find a unicorn in the future; and finally, there are no unicorns hiding in the basement (or perhaps on Mars) that we have overlooked. Such a universal proposition cannot be proved. However, a sufficiently restricted negative proposition can be proved, for example, “There are no visible unicorns in this room right now.” Furthermore, the latter can be disproved (if you open your eyes and see one), so it is falsifiable.
Every positive proposition in a differential diagnosis list implies negative propositions regarding the other entries on the list, assuming that Occam’s razor (see Chapter 26) holds. That fact, plus the general difficulty of proving negative propositions, may be the ultimate basis for the rule that “one should never say never in medicine.”
Null Hypothesis
A restricted kind of negative proposition that is commonly used in science is called the null hypothesis. This is the hypothesis that states there are no differences between two groups.
Let us say that we have randomly allocated some patients suffering from a given disease into two groups. One group (the experimental group) is given a new medicine and the other (the control group) is treated identically except that it does not receive the new medicine. Differences in outcome between the two groups can legitimately be attributed to the new medicine; we avoid the post hoc ergo propter hoc fallacy (vide infra) by including a prospective control group.
Although we are seeking positive information by doing the experiment—about whether the new medicine does (or does not) work—we proceed by attempting to disprove the null hypothesis (i.e., by trying to refute the proposition that the two groups are as alike as two random samples drawn from the same population or universe). Customarily, we consider the null hypothesis to be disproved if the probability of the differences arising randomly is shown to be less than 5% (p < 0.05). Remember, however, that only a fraction of studies are published. The published one may have been the only one of 20 to show a “statistically significant” result.
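For the reader who wants to see this arithmetic at work, the following is a minimal simulation sketch (in Python; the sample sizes, the normal distribution, and the use of Student’s t test are illustrative assumptions, not from the text). Both “groups” are drawn from the same population, so the null hypothesis is true by construction; the sketch shows how often 20 such comparisons nonetheless cross the p < 0.05 threshold.

```python
# Illustrative sketch: 20 comparisons in which the null hypothesis is TRUE
# by construction (both groups come from the same population).
# Sample sizes and distribution are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
significant = 0
for trial in range(20):
    experimental = rng.normal(loc=0.0, scale=1.0, size=30)
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    p_value = ttest_ind(experimental, control).pvalue  # two-sample t test
    if p_value < 0.05:
        significant += 1  # a spurious "statistically significant" result

print(f"'Significant' results among 20 null comparisons: {significant}")
# Expectation: 20 x 0.05 = 1 -- and that one is the study most likely
# to be written up and published.
```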
In clinical medicine, we examine one patient at a time. In this setting, the null hypothesis is that he falls within the normal distribution of the rest of the population of healthy people (or of people who do not have the disease that is under consideration).
The student who is philosophically inclined will ask: Does this mean that medical research, and worse, clinical medicine, is basically probabilistic? The answer is yes. The conclusions are arrived at in medical research by probabilistically refuting the restricted negative proposition known as the null hypothesis. (The restrictions are generally hidden in the case record or in the “Materials and Methods” section of a journal article.)
Indeed, all “proofs” related to real phenomena (that is, phenomena outside the abstract worlds of mathematics and symbolic logic) involve just this type of probabilistic thinking. This tends to be frustrating to the medical student, who is not trying to do something negative but rather to arrive at a positive diagnosis. In a fundamental sense, all positive diagnostic statements are umbilically related to the act of rejecting other possibilities (such as “normality”). In conditions like “essential hypertension,” which are generally recognized to be “diagnoses of exclusion,” the process is simply more obvious.
In summary, science cannot prove anything. The scientist is engaged in the activity of trying to disprove things.
To illustrate this point, there is an apocryphal story about Galileo. While he and a friend watched ice float down the river one winter, they fell into a dispute as to whether the ice floated because of surface tension and the flat shape of the ice floe or because the specific gravity of the ice was less than that of the water. The sun was shining, and they reasoned that if the sun melted the ice floe from the sides and changed its shape and if it then sank while they were watching it, those events would support the surface tension theory—but the ice floated around a bend in the river and disappeared.
They then proposed to go to the laboratory and carve a small piece of ice in the shape of the ice floe they were observing. This model could be placed in a vessel containing hot water. Again, if it melted from the side and if it then sank, those events in tandem would support the surface tension theory.
As they were proceeding to place the ice model into the water, Galileo suddenly had an idea and changed the experiment. “Instead,” he exclaimed, “let’s place the ice at the bottom of the water and see what happens.”
Some historians of science consider this a pivotal point in the history of ideas. The advantage of Galileo’s modification of the experiment was that whether the ice floated to the top or stayed on the bottom, the result would admit of one and only one of the two available hypotheses, having effectively refuted the other. This is the same type of reasoning that an effective clinician employs in his handling of signs, symptoms, and laboratory tests. He seeks tests whose results exclude one of the possibilities.
To return to the ice floe experiment, if the piece of ice failed to float to the top, then it could not be of less specific gravity than the water, and the surface tension theory must be correct. If it did float, it refuted the idea of surface tension as the explanation because surface tension did not operate at the bottom of the container, and so the specific gravity theory would stand. Of course, if there were a third explanation that neither Galileo nor his friend had considered, they could well draw an incorrect inference from the result of the experiment. Similarly, diagnosis by exclusion is treacherous if the differential diagnosis is incomplete.
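The logic of the modified experiment can be laid out schematically. The sketch below is a paraphrase of the anecdote, not a historical document; it encodes each outcome as the refutation of exactly one hypothesis, and the closing caveat shows where the third, unconsidered explanation would break the scheme.

```python
# Sketch of Galileo's modified experiment as hypothesis exclusion.
# Hypothesis labels paraphrase the anecdote above.
hypotheses = {"surface tension", "specific gravity"}

def interpret(ice_rises_from_bottom: bool) -> set:
    """Each outcome refutes exactly one of the two available hypotheses."""
    if ice_rises_from_bottom:
        # Surface tension cannot act at the bottom of the vessel,
        # so rising can only reflect lower specific gravity.
        return hypotheses - {"surface tension"}
    # Ice that stays down is not less dense than the water.
    return hypotheses - {"specific gravity"}

print(interpret(True))   # {'specific gravity'}
print(interpret(False))  # {'surface tension'}
# If a third explanation exists that neither man considered, this
# exclusion scheme silently yields the wrong answer -- which is why
# diagnosis by exclusion is treacherous when the differential is incomplete.
```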
Levels of Probability
To avoid error, it is important to keep the probabilistic nature of medicine constantly in mind whenever you listen to case presentations and read about clinical medicine. The probabilistic nature of your conclusions should be made as explicit as possible.
When negative propositions are presented, there are several possible levels of certainty. If someone makes a statement such as example 1 or 2 under the “Negative Propositions” section above, it might mean a number of different things.
First, the speaker may simply be stating a belief. Second, he may be recounting a remembered experience. Without documentation, this level is approximately as reliable as a statement of belief. (I have done a number of clinical projects in which I collected data prospectively by writing the clinical experience on a card and filing it out of sight. On a regular basis, I have found, on reviewing the written records, one or more situations I would otherwise have sworn I had never seen. Most clinical scientists have had similar experiences.) Third, the speaker or author may have documented personal experience. In this case, the proposition might be proved, if sufficiently restricted. (Proposition 1 is a good example. If the speaker cannot determine the heart borders by percussion, then the limited negative proposition might be true … for him. A problem is born, however, when he assumes that all others are equally lacking in this skill, that is, when he extrapolates from a limited negative proposition to a universal one.) Fourth, the speaker may intend to make a much stronger, more general statement: “It never has happened, and we do not expect it to.” The expectation may be based on a scientific body of work that predicts (but does not prove) that something will not happen in the future. Expectation, however reasonable, is still not a proof.
The scientific physician will strive to achieve the highest level of certainty that is possible and not to overstate the level of certainty that exists.
An Advanced Epistemologic Note
The odd relationship between positive and negative propositions was illustrated by Wittgenstein’s statement: “It appears to me that negation in arithmetic is interesting only in conjunction with a certain generality…. I don’t write ~(5 × 5 = 30), I write 5 × 5 ≠ 30, since I’m not negating anything but want to establish a relation between 5 × 5 and 30 (and hence something positive)” (Wittgenstein, 1975). A different example would be better for those not familiar with symbolic logic: We do not say “5 × 5 is roughly 26,” we say, “5 × 5 is definitely not 30.” Although the former might appear “closer,” the latter, as a technique, would eventually allow us to exclude all the incorrect answers and so arrive at the correct one. The former statement is no more correct than the equivalent, but different, “5 × 5 is roughly 24,” whereas the latter statement is invariably true.
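As a small mechanical illustration of this exclusion technique (the candidate universe of 0 through 50 is an invented assumption), statements of the form “5 × 5 is definitely not k,” each invariably true, can be accumulated until only the correct answer survives:

```python
# Exclusion sketch: each invariably true statement "5 x 5 is not k"
# rejects one wrong candidate; the positive answer is what remains.
# The candidate universe (0..50) is assumed for illustration.
candidates = set(range(51))
for k in range(51):
    if 5 * 5 != k:             # "5 x 5 is definitely not k"
        candidates.discard(k)  # exclude a refuted possibility
print(candidates)              # {25}: the correct answer, by exclusion
```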
Aphorisms
A Bestiary of Clinical Reasoning
A well-known aphorism says, “When you hear hoofbeats, they’re probably coming from a horse, not a zebra” (Fig. 27-1).
Sometimes, the hoofbeats are not coming from either a zebra or a horse (Fig. 27-2), thus the need for differential diagnosis, arranged in order of probability. For hoofbeats, we would list: (a) horse, (b) bull, (c) zebra, and other unlikely possibilities. (One always likes to put a zebra at the end of the differential: “If you don’t think about it, you’ll never diagnose it.”)
Then there is always the confounding possibility that it is a horse, but you do not hear any hoofbeats (Fig. 27-3).
Sutton’s Law
Ordering the differential diagnosis according to decreasing probability is a strategy in accordance with Sutton’s law, which mandates “go where the money is.”
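In computational terms, a differential diagnosis behaves like a list kept in decreasing order of probability and worked from the top down. A toy sketch of the hoofbeats differential above (the probabilities are invented for illustration):

```python
# Toy sketch of a differential ordered by decreasing pretest probability.
# All probabilities are invented.
differential = [("horse", 0.90), ("bull", 0.08), ("zebra", 0.02)]
differential.sort(key=lambda entry: entry[1], reverse=True)
for rank, (diagnosis, p) in enumerate(differential, start=1):
    print(f"{rank}. {diagnosis} ({p:.0%})")
# Work-up proceeds from the top ("where the money is"), but the zebra
# stays on the list: if you don't think about it, you'll never diagnose it.
```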
Historic Note
The apocrypha states that Dr George Dock was a visiting professor at Yale, long ago when visiting professors were presented interesting patients to be discussed viva voce with no specific forewarning or preparation. Dock was presented a patient who, he thought, was an easy puzzle: the obvious test to perform was a liver biopsy, which would completely resolve the problem. Instead, he was given the results of every other available laboratory test, none of which was capable of resolving the issue. “Why don’t you follow Sutton’s law?” he finally asked.
No one had ever heard of Sutton’s law, so Dock told a story about the bank robber Willie “the Actor” Sutton. Sutton was famous for robbing banks, getting caught, and then escaping from prison by the use of subterfuge and costume (hence the nickname “the Actor”). Each time he escaped, he resumed robbing banks and was eventually returned to prison. Dock said that a newspaper reporter, wondering why the recidivist did not desist from the activity that regularly landed him in prison, asked him, “Why do you keep robbing banks, Willie?”
Sutton allegedly replied, “Because that’s where the money is.”
Dock explained that in the case under discussion, the money was in the liver, and hence Sutton’s law dictated that one should biopsy the liver.
Years later, Sutton was asked if he had actually made that statement, and he laughingly responded in the negative. However, he allowed that it was a good answer and that he would have said it if he had thought of it—but by that time, his name was already firmly ensconced in clinical lore.
Today, Sutton’s law in medicine is all too often invoked in a different context: “money” is taken quite literally, and the money is no longer to be found in making the correct diagnosis or in any other activity related to patient care. On the contrary, the money is lost there as clinical expenditures constitute, by definition, the “medical loss ratio” (Orient and Wright, 1997).
Individual Variability or “The Law of Sigma”
Every probability distribution has a variance; its square root, the standard deviation (σ), measures the spread of individual values around the group mean. If an individual is somewhat different from the group, consider first the possibility that he is simply located near one of the tails of the distribution rather than a member of a different population altogether. This is an application of the null hypothesis: the individual is not significantly different from normal.
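A worked sketch (all numbers invented) makes the point concrete: before declaring an individual a member of a different population, compute how far into the tails of the healthy distribution his value lies and how often health alone would produce it.

```python
# Worked sketch of the "law of sigma" (all numbers invented).
# How often would the healthy population alone produce a value this extreme?
from math import erf, sqrt

def two_tailed_p(z: float) -> float:
    """P(a normal deviate lies at least |z| standard deviations from the mean)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

group_mean, group_sd = 140.0, 15.0   # assumed healthy distribution
patient_value = 170.0                # the individual under scrutiny
z = (patient_value - group_mean) / group_sd
print(f"z = {z:.1f}, two-tailed p = {two_tailed_p(z):.3f}")
# z = 2.0, p ~ 0.046: roughly 1 healthy person in 22 lies this far out,
# so the value alone need not place him in a diseased population.
```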
Occam’s Razor
As explained in Chapter 26, the physician must try to explain the patient’s problem as economically as possible (with the minimum number of separate diagnoses). For example, if a patient with an organic brain syndrome enters the hospital with a problem that could be explained either by (a) a new problem or (b) too much or too little of a medication prescribed for an old problem, bet on the latter.
Case Report
A patient had been discharged from the hospital with a prescription for phenytoin. Because most of his old records were lost, and the patient had Wernicke-Korsakoff syndrome, it was not certain whether he had taken his phenytoin.
The patient was readmitted with orthostatic hypotension, nystagmus, truncal ataxia, and macrocytic anemia. The resident wisely stopped the phenytoin while waiting for the drug level to return from the laboratory.
The patient had lost his truncal ataxia by the time he was examined by the attending, who wrongly attributed its absence to an incorrect prior examination by the house officer. Worse, the attending violated the rule above and diagnosed several new entities instead of phenytoin intoxication.
The phenytoin level at admission came back markedly elevated.
Comment. Although this was not an easy case, following the rule would have enabled the attending to interpret better the loss of truncal ataxia. By the following day, the patient had also lost his nystagmus.
Inference
You will recognize inferential reasoning as illustrated in this anecdote:
Medical Student: “I saw Smith get on the bus this morning. He had been drinking and gambling.”
Scientist: “Did you see him drinking?”
Medical Student: “No.”
Scientist: “Did you see him gambling?”
Medical Student: “No.”
Scientist: “Then how can you scientifically make the statement that he had been drinking and gambling?”
Medical Student: “When he got on the bus, he gave the driver a blue chip and told him to keep the change.”
Still, inferential reasoning is full of potential for error (vide infra).
Frequently Violated Rules for the Logical Handling of Clinical Data
The logical handling of clinical data has been discussed in extended form (Bernard, 1957; Feinstein, 1967), but a few principles are listed here, in addition to those discussed above. I take them to be self-evident, and they were generally accepted by house staff, faculty, and students when presented in the form of an opinion questionnaire (Sapira, 1980). Nevertheless, they are frequently violated in practice.
Rule 1. If some of the findings supporting a new diagnosis can be reasonably rejected as either artifactual or related to a preexistent or coexistent diagnosis, such rejection of these findings does not, per se, refute either the new diagnosis or the verity of the other findings.
Rule 2. The fact that a finding is elicited by only a minority of observers does not mean that the finding can reasonably be rejected as artifactual.
Comment. If a finding is elicited, it is a finding, assuming that clinicians are not hallucinatory or tending toward intentional obfuscation. The finding could have been transient, or perhaps only a minority of the observers might have the skills to elicit it, or perhaps it was misinterpreted or overinterpreted by an unskilled examiner.
Dr Claude Bernard was frequently asked how one could determine which of two identical experiments yielding contrary results should be considered the correct one. Dr Bernard answered that both should be considered correct because two identical experiments could not yield different results. He then pointed out that to yield opposite results, there must have been unrecognized and differing conditions between the two experiments and these would ultimately be shown to be the cause of the differing results. I believe Bernard’s rule is the antecedent of this second principle.
Rule 3. If there are findings whose validity is not contested, supporting a diagnosis with which a consultant does not agree, the consultant is obligated to offer an alternative diagnosis that will also explain the findings.
Rule 4. Positive findings are more important than negative findings, except for those negative findings that are known as “excluders.” (For example, the absence of an increase in the serum bromide concentration would be an “excluder” for the diagnosis of bromism.)
Comment. The antecedent of this principle was apparently formulated by Dr Jack Myers and popularized by Dr Eugene Stead:
Jack Myers frequently said that much clinical learning could be summarized by the following statement: Any positive observation has greater weight than any negative observation. If a marble is found in a room, that is a positive observation and, in general, means that the room did contain a marble. If the doctor finds no marble on searching the room, it may mean that there is no marble there, but many times it will mean that the doctor is not good at finding marbles (Stead, 1978).
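Myers’ marble can be restated in the language of likelihood ratios. The sketch below (the sensitivity and specificity values are invented) shows why a sign that most examiners miss, but rarely elicit spuriously, shifts the diagnostic odds far more by its presence than by its absence.

```python
# Sketch of the marble rule with invented test characteristics:
# a sign most examiners miss (low sensitivity) but rarely elicit
# spuriously (high specificity).
sensitivity = 0.40
specificity = 0.98

lr_positive = sensitivity / (1 - specificity)   # weight of finding the marble
lr_negative = (1 - sensitivity) / specificity   # weight of not finding it
print(f"LR+ = {lr_positive:.1f}, LR- = {lr_negative:.2f}")
# LR+ = 20.0 multiplies the pretest odds of disease twenty-fold;
# LR- = 0.61 barely moves them -- the doctor may simply be
# no good at finding marbles.
```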
Rule 5. If a patient has n findings, the patient’s diagnosis (or diagnoses) should explain all n findings.
Comment. This principle is most frequently violated by sophomore medical students. Because of their praiseworthy intent to “get the diagnosis” (which they correctly assume to be a precondition for the patient’s selection for examination by medical students), students emphasize those positive findings that support their first diagnosis but fail to consider other findings that would suggest an alternative diagnosis.
Logical Fallacies
Post Hoc Ergo Propter Hoc
The Latin expression in this section heading means: “After this, therefore, because of this.” It refers to one of the most common errors of logic committed in the daily clinical practice of medicine: assuming that if A follows B, A was caused by B. The fallacy’s very ubiquity breeds a malignant tolerance: Some persons are unable to accept the fact that an error is being committed even when it is pointed out. Enlightened clinicians may not like to believe themselves capable of making such an unreasonable assumption, but in fact, the inference that sequence is evidence of causality seems so eminently reasonable that the fallacy is easily perpetrated in our very best hospitals, books, journals, and offices. (In fact, sometimes the sequence is reasonable; so far, no one has suggested that Saint Sebastian [see Fig. 5-2] was secreting those arrows!)
Post hoc ergo propter hoc is a special case of an associative fallacy. For example, although it is true that there is a strong statistical association between height and weight, it would be erroneous to conclude that one could become taller simply by overeating. Otherwise the complaint of the fat man, “I’m not overweight: I’m just too short” would be true.
Go to the chart rack and pick up any chart. You might see in bold red letters on the outside: “Allergic to codeine.” What is the scientific basis for such a statement?
To be sure that the patient had an allergic reaction, it should have been replicated on challenge, preferably blind, and it must be the sort of reaction recognized as allergic, not simply a pharmacologic (e.g., histamine-releasing) effect of codeine. Upon interviewing the patient, however, one usually finds that he noticed some event that followed the administration of what was believed to have been codeine and assumed a causal relationship. Sometimes the effect attributed to the drug and the time interval described are so unlikely that the chance of a causal association is slight. At other times, however, the effect (e.g., nausea) and the time interval make a causal (if not necessarily allergic) association quite plausible.
Of course, nothing is wrong with making an assumption as long as one recognizes what one is doing. It is often safer to accept the assumption than to take the risk of a provocative test. However, with post hoc ergo propter hoc such an assumption is often accepted as if it had been proved.
The importance of establishing the likelihood of causality becomes apparent when the patient has a serious infection that is preferably treated by an antibiotic to which the patient is thought (by post hoc ergo propter hoc reasoning) to be allergic. In the broader sense, many expensive and onerous restrictions based on the “precautionary principle”—which itself is largely based on the post hoc ergo propter hoc assumption—eventually have the effect of increasing risk by decreasing flexibility and mandating actions with their own unrecognized adverse effects (Orient, 1996b).
Open a chart, and you may find a statement like this in the progress notes: “The fever has responded well to antibiotics. Cultures still negative.”
First of all, antibiotics are not antipyretics. Infections may respond to antibiotics, but fevers do not. At this point, we are not sure that the antibiotics chosen are appropriate for the organism or even that the patient has an infection. In fact, the patient might have a collagen-vascular disease.
It would be much better to enter the note: “Patient afebrile. Cultures still negative.” This contains the same information in fewer words without the error in logic.
Although the argument may seem trivial, consider the patient who has unbeknownst to his doctors developed a febrile drug reaction due to one of the “covering” antibiotics. The logical fallacy may lead to treatment inadvertently based on the hair of the dog, that is, more antibiotics are added to “cover” the fallaciously assumed microbial cause of the fever. (Perhaps a dog chasing its tail might be a more suitable image.)
Thus, the post hoc ergo propter hoc fallacy has great potential for harm, especially in those situations in which it may seem the most reasonable.
Discounting of One Etiology by Eliminating Only One of Multiple Subcomponents
Consider a situation in which syndrome Z can be caused by etiology 1 or etiology 2. Etiology 2 is usually the result of condition A, although it is sometimes the result of condition B. (You may wish to diagram this.) Suppose that a patient with syndrome Z has been proved not to have A. It would then be erroneous to conclude that his syndrome must have been caused by etiology 1.
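One way to diagram it is as a small causal tree. The sketch below uses the abstract labels from the paragraph above and shows that excluding condition A removes only one route to etiology 2, leaving etiology 2 itself unexcluded.

```python
# Sketch of the causal structure described above, using its abstract labels.
# Each etiology maps to the conditions that can produce it (an empty list
# means the etiology needs no intermediate condition).
causes_of_Z = {
    "etiology 1": [],
    "etiology 2": ["condition A", "condition B"],
}
excluded = {"condition A"}   # what the workup has ruled out

still_possible = [
    etiology for etiology, routes in causes_of_Z.items()
    if not routes or any(route not in excluded for route in routes)
]
print(still_possible)  # ['etiology 1', 'etiology 2'] -- etiology 2 survives
```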
A specific case might be hypokalemia with hyperkaliuria, which can be caused by renal tubular acidosis (RTA) type 1 or type 2. It can also be caused by mineralocorticoid excess. For the sake of discussion, assume that all other “nonrenal” causes of hypokalemia with hyperkaliuria (diuretics, other drugs, etc.) have been eliminated. Because the patient has a urinary pH of 5.2, it is accepted that he cannot have RTA type 1. However, it would be a fallacy to conclude that he must have mineralocorticoid excess. Why? Write your analysis down before consulting Appendix 27.1.
Differential Diagnosis
Use of Differential Diagnosis as a Guide to Reading
A Personal Perspective
When I was a depressed and anxious freshman in medical school, I approached one of the sophomores, Howie Reidbord, whom I had known in college. I asked what distinguished the students at the top of the class from the students at the bottom of the class, a question of more than casual interest as the bottom 20% did not graduate. “Reading,” said the future Dr Reidbord. “The ones at the top read more than the ones at the bottom.”
It is impossible to communicate what that time and place were like to those who were not there—but let me give one example.
When I was a medical resident, I was reading in the hospital library one evening. As I finished one article and lifted my head to the next article in the stack, I saw an orderly sitting across from me reading a cardiology textbook. Each time I got another article from the stack, I noticed that he was still there, reading intently. Finally, I asked him what he was doing. “Reading about my patients,” he answered, as if to say, why else would somebody be in the library reading medical texts.
I thought I might have misidentified his white uniform, but he truly was an orderly, who was serving a 2-year sentence of public service because of moral objections to the then-nascent Vietnam War. He too stayed after his duty hours to read about his patients. It would be misleading to say that everybody always read about all of his patients, but the story illustrates the intellectual environment of the times.1