Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others and I keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and entering the code “testing”.
This podcast is brought to you in part by PAR.
Hey, guess what y’all, the BRIEF-A has been updated. The BRIEF2A is the latest update to the BRIEF2 family. Use the gold standard in executive functioning assessment to assess adult clients. You can preorder it now, visit parinc.com/products/brief2a.
Hey folks, welcome back to the podcast. I am glad to have you here as always for a [00:01:00] clinical episode with my guest, Dr. David Faust. He is an Emeritus Professor of Psychology at the University of Rhode Island and a fellow of the Ryan Institute for Neuroscience, with an affiliate appointment in the Warren Alpert Medical School of Brown University.
He has published numerous books and articles, has lectured nationally and internationally on such topics as psychology and law, neuropsychology, and clinical judgment and decision-making, which is the topic of our podcast today.
He is the recipient of various awards and honors in the field. His most recent work with co-authors, Hal Arkes and Chad Gaudet, on clinical judgment and decision making is intended to provide various tools to increase the accuracy of assessment and prediction. The book has been described by the eminent psychologist and test developer, Dr. Cecil Reynolds as “The best book on clinical decision-making ever written”.
So you will see this conversation is dense. There is a lot of material to take away from it. [00:02:00] Don’t be surprised if you find yourself rewinding and listening again.
We’re talking about clinical decision-making, which is an incredible tool and very useful for us, but I think most of us are a little unclear about exactly how it works. So David and I talk about many concepts from his book, but a few that we cover are: why decision research is important for us, what base rates are and how we use them, the role of overconfidence in clinical judgment, and strategies to integrate subjective and objective data in our evaluations. These are just a few topics among many that we discuss.
So again, if you listen to the episode and find yourself wanting to go back and take it in again, I don’t blame you at all. There’s a lot to take away and there is certainly a lot of information in his book as well, so go [00:03:00] check that out.
So without further ado, here is my conversation with Dr. David Faust.
David, hey, welcome to the podcast.
Dr. David: Thank you.
Dr. Sharp: I am grateful to have you here. There are a few guests here and there over the course of the year where I hear about their research, or a book in your case, and I reach out immediately because I’m like, the audience needs to hear this kind of material.
You’re one of those individuals when I saw the announcement about the book that was coming out probably, I don’t know what it was, six months ago maybe at this point, it immediately grabbed my attention. I was like, I have to try and get this guy on the podcast. [00:04:00] So thank you so much for being here.
Dr. David: Oh, thank you for inviting me.
Dr. Sharp: Absolutely. I’ll start with the question that I always start with, with folks, which is, why this? Of all the things that you could care about and spend your energy and time on in this field, why clinical judgment and decision-making?
Dr. David: There are two basic reasons: one, I find the subject matter very interesting, given some of the other interests I have in epistemology and methods of knowing, but more so because I was brought up in a family in which service to others was very highly valued.
People talk about science-practice models; I think that connection is often not so clear. And so if I were to identify different areas of research and literature in which I feel like I have a decent knowledge base, this is the one that [00:05:00] seems to me to apply most directly to applied work. And so within the humble limits of my skills, it’s a subject matter that is important because it can enhance care and hence human welfare.
So it’s my little vote for rationality but also to improve practices, to the extent I have something to contribute there that will help people. So those are the fundamental reasons.
Dr. Sharp: Sure. It makes sense. Like a lot of us, it sounds like it’s a mix of personal interest, benefit to the field, and where your skill sets lie. It’s a nice overlap.
I would love to dive into it. I think there’s so much to cover in this area, and we’re going to do our best to communicate the [00:06:00] important pieces from this topic, but let’s start with some background. Could you start with some kind of definition or operationalization: what do we mean when we’re talking about clinical judgment and decision-making here?
Dr. David: I think it’s just what you think of, Jeremy. We are faced often with very important decisions, often with predictions, either explicitly or implicitly. So whether it’s treatment choice, differential diagnosis, treatment planning, deciding whether someone should be tried as a juvenile or an adult, whether a police officer should carry a gun or a pilot can return to duty, child custody matters, and so on, psychologists are in the business of decision making and prediction.
[00:07:00] A lot of these predictions, we don’t necessarily think of as predictions; they’re implicit predictions. And so our careers are filled with many decisions. The purpose of the decision-making literature, although it is sometimes perceived otherwise, partly through the fault of people in the field, is to help us, where possible, sharpen our decisions and make more accurate decisions or predictions. There is an entire body of knowledge on decision-making. Often psychologists are underexposed to it and sometimes given information that sounds right but may be contrary to what maximizes decision accuracy. So it’s just what you think of: the day-to-day, moment-to-moment, many interpretations, decisions, [00:08:00] and predictions that psychologists have to make.
While not specifically targeted at psychology, a lot of the research originates from psychologists and applies to psychology as well as numerous other fields.
Dr. Sharp: Sure. The statement you made about psychologists being underexposed to decision-making research, it just rings so true. I certainly did not have any classes or any portion of a class in graduate school that dove into the decision research. I’m guessing there are a lot of folks out there who would say the same thing. Does that hold true in your experience or are there programs out there that are anchored in this research more than others?
Dr. David: Not too many. Often, science is most valuable when it conflicts with what we’ve been told and what we’ve learned, [00:09:00] and it has the greatest value if that science is solid and applicable. I believe that many of the messages given in training are sometimes counterproductive. So we hear things in our training about interpreting, say, test results, that kind of thing.
I had a meeting once, it doesn’t matter who it’s with. They’re people I respect highly. We independently formulated interpretive procedures for neuropsychology because the ultimate aim, potentially, was to put together some kind of principles and guidelines.
There was an impressive divergence of viewpoint, which is often very good, and it ended up in a full-day discussion. Perhaps they convinced me [00:10:00] somewhat, but in many ways, they were open-minded to looking at some of these other principles of decision making and judgment.
I could probably name many areas in which my knowledge is inaccurate or other people know a lot more than I do. But often, the things that are commonly said about decision making in psychology, and maybe this will sound disrespectful, it’s not intended to be at all, quite a bit of that is worth reconsidering in the context of the decision-making literature, which might suggest these are strategies that are effective, but not optimally effective.
I’m not here to tear down anybody or anything. It’s not my intent. I’ve personally found the decision literature very humbling many times but [00:11:00] helpful. I think the decision-making principles we sometimes follow are not fully up to date or don’t integrate that decision research.
I’ll give you one concrete example. If you go through virtually any textbook or discussion of what you do after you gather data, about how you integrate it, you pretty much always see something very similar: the more information you gather, the better. And then the act is one of integrating that information, combining it, and often looking for patterns or configural relationships among the data.
The average student taking their comps, if they were asked that question and said something like, no, I don’t do it that way and I don’t think you necessarily should, [00:12:00] there’d be a pretty good chance they’d have to take their comps again. However, there are overwhelming reasons to think that that’s not always the best approach.
There are many areas in which there’s a large and often consistent body of literature that’s worth thinking about when we’re thinking about what we’ve been taught, explicitly or implicitly, about how to handle interpretive practice. And so that’s going to relate to differential diagnosis. It’s going to relate to judgments about who can live independently, what the prognosis is over time, and so on.
In that particular example, rather than integrating all the information, because some of it’s not going to be very valuable and some of it will be incorrect, the better approach is to combine the pieces of information that each [00:13:00] enhance decision accuracy, which sometimes is a surprisingly small number of pieces. So I’m just using that as an example.
To me, what I found so interesting about the decision literature is that so much of it was contrary to what I was being taught, or offered another viewpoint that was worth considering.
Dr. Sharp: Absolutely. I look forward to diving into these ideas pretty deeply. I do want to maybe set out a contrast here at the beginning, maybe it’s a contrast, I don’t know. We can talk about this, but I conceptualize all of this very simply as a conflict between our feeling, our clinical intuition, let’s call it, and actual data, statistics, and [00:14:00] actuarial information. I see those as competing and set at odds. I’m curious if you look at it that way, and if not, how you might tweak that relationship between intuition and data.
Dr. David: I think it’s a more nuanced issue, personally. I just had this discussion with someone about a week ago who interpreted initially what I was saying as this kind of hardcore empiricism. I’m a semi-demi-quasi-hemi scientific realist. Nor is this about objective, so-called objective, versus qualitative data, and so on.
For example, many of the greatest discoveries in neurology, just to use an example, come from single case studies and ideas [00:15:00] that are generated in that context. This is also not a debate about qualitative versus quantitative information, because sometimes qualitative judgments are extraordinarily valuable and not reproducible any other way. It’s more a discussion about the integration of information.
Also, it’s not a debate about predicting objective things per se. Yes, maybe you want to predict how many days in the hospital before being there is more damaging than leaving, where the risk of infection exceeds the benefit of staying longer.
Sometimes, what you want to predict are subjective reactions. If you’re predicting the quality of wine, which people now are doing with statistical formulas, and have gotten quite rich off doing that, there’s nothing [00:16:00] objective that you’re predicting. You’re predicting subjective reactions. And often that’s of greatest interest.
If you want to know whether a doctor will have a good bedside manner, whether a clinician will relate well to individuals, you’re predicting their subjective reactions, but those can be extremely important. Also, if you look at the literature on clinical judgment, there are times that people are spectacularly correct contrary to a statistical decision procedure, or in many cases, as knowledge rapidly evolves in psychology, there may not be any quantitative method available yet, and so on.
Also, many of the developments in decision-making come from intuition that’s turned into something else. Maybe in my simplistic way of thinking, it’s very important to distinguish between the [00:17:00] context of discovery, the old distinction from the philosopher Reichenbach. That’s not quite what he said, but it’s about generating ideas, thinking of new possibilities, generating hypotheses. You need a hypothesis to test in order to figure out whether an insight is correct and has value.
There’s no substitute for brainstorming, intuition, qualitative observation, and so on in generating ideas. However, they’re not always the best way of testing ideas. I think it’s worth distinguishing between the context of discovery and what’s sometimes called the context of justification.
Many ideas among great thinkers turn out to be wrong because anticipating nature is very hard [00:18:00] and integrating complex data subjectively is very difficult. If you read Newton’s notebooks, there’s a lot in there that’s wrong. It’s super hard. If Newton couldn’t do it, I know I can’t expect to do it.
I think it’s very important to respect subjective judgment processes, the human genius that leads us to build things that require that kind of leap of thought and so on, but to distinguish these two contexts, because often the methods that are best for generating ideas are not the methods that are best for testing ideas. A lot of these discussions go awry, in my opinion, because people don’t distinguish those contexts.
If you’re having a brainstorming session and coming up with ideas, especially with a recalcitrant problem, how do I treat this very difficult client? [00:19:00] If I say, you haven’t proven that to be correct, and you’re not thinking in a rigorous fashion, I think that’s a misguided criticism. That is different, however, from the question of how we test that belief.
The more we can use rigorous methods, so we can be a little more sure that we’re approaching the correct answer, that’s a different discussion. There are also occasional incredible intuitive thinkers who come up with things that no one would ever think of otherwise, or who have made observations in a clinical context.
A lot of people are familiar with Paul Meehl. They often think of him as this super uptight empiricist. In fact, as he would openly discuss, he was a therapy client for years. He engaged in performing [00:20:00] therapy for many years, which he felt was a very important way to get a better understanding of the human mind. In his office, he only had two pictures: one of Skinner, one of Freud.
Dr. Sharp: That’s great.
Dr. David: As I said there, I think these issues are more nuanced. I would be making a mistake if I said intuition, subjective judgment, qualitative data don’t have potentially a lot of value. The contrary mistake, perhaps the stereotype of decision research, is that a lot of people are very opposed to so-called actuarial judgment, sometimes based on misconceptions, and say that it has nothing to offer and it’s hostile to clinical judgment.
I will say some decision researchers [00:21:00] who got frustrated about the lack of acceptance or application of their work set this up as a dichotomy or as a battle. I personally feel that in applied work, I’ll take any advantage I can get within the bounds of ethics. Both bring important contributions to the process.
Dr. Sharp: Of course. We don’t have to go too far down this rabbit hole, but I think it is threatening to folks, and I’m speaking personally as well, if we get into this idea that we can’t necessarily trust our intuition as a great means of decision making. That’s pretty threatening. That’s vulnerable.
It is an existential crisis in a way, unless you take the leap and cross the bridge over to the acknowledgement or acceptance [00:22:00] that, hey, I’m going to be unduly influenced by any number of factors and may not be right, and I have to settle into the data a little bit. That can be threatening to someone.
Dr. David: I think everybody has a right to be convinced, and so I don’t think it’s necessarily a leap of faith at all. I think it’s something worth considering. That’s the way I’d look at it. It’s not a way of replacing practices. It’s a way of enhancing practices.
One of the biggest barriers is that the clinical context is sometimes referred to as a wicked learning environment. I don’t mean wicked as in evil, but as in very difficult.
Dr. Sharp: Is that because we don’t get immediate feedback on whether we’re right or wrong? I’m on …
Dr. David: I’m sorry, I didn’t mean to interrupt you. Please finish.
Dr. Sharp: No, it’s all good. I was just making a guess. It’s come up on the [00:23:00] podcast before; Dr. Stephanie Nelson mentioned the wicked learning environment. I was checking to see if I remembered it correctly.
Dr. David: There are a couple of things going on. There are about six factors, depending on how you slice them up or group them, that make it harder to learn. So yes, lack of systematic feedback is one, just to mention a few.
Another thing is you don’t know what would have happened had you done otherwise. Something that works 75% of the time, that’s very good; you’re a lot better off than chance, but maybe there’s something that would work 80% or 85% of the time. And so that little increment, 5% let’s say, over the course of thousands of decisions, you’re getting an edge and that’s important.
To learn about the association between variables, you need all four cells of a covariation [00:24:00] table. It’s like seeding clouds for rain: how often does it rain when you seed the clouds, and how often does it rain or not rain when you don’t seed them? You generally have to have those four cells to learn experientially, to have a comparison.
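To make those four cells concrete, here is a minimal sketch with invented numbers; the point is that the memorable "seeded and it rained" cell tells you nothing by itself without the comparison cells:

```python
# Four-cell covariation table for "does seeding clouds bring rain?"
# (all counts are invented for illustration)
rained_when_seeded = 30   # seeded, rained   (the memorable cell)
dry_when_seeded = 20      # seeded, no rain
rained_no_seed = 60       # not seeded, rained
dry_no_seed = 40          # not seeded, no rain

p_rain_if_seeded = rained_when_seeded / (rained_when_seeded + dry_when_seeded)
p_rain_if_not = rained_no_seed / (rained_no_seed + dry_no_seed)

print(p_rain_if_seeded)  # 0.6
print(p_rain_if_not)     # 0.6 -> no association, despite 30 apparent "hits"
```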
That generally doesn’t happen in clinical practice, plus we often understudy normal people in our clinical practices. We have less exposure. It’s also very difficult to integrate complex information. Then there are delays in feedback, there’s lack of representative feedback, and there are channeling effects; our behavior channels client behaviors.
In the old days, in the studies, Freudian [00:25:00] patients had Freudian dreams. Adlerian patients had Adlerian dreams. Jungian patients had Jungian dreams. Now, that’s partially the therapist, but it’s also the framework the therapist is imputing and placing on the individual, and so on. Clients often are hesitant to say something negative about therapists that they depend on.
The second component of this problem, which is very important, is that lived experience is so salient. Why would you choose a bland statistic over that? The other thing, which is very important here, is that most of those conditions tend to foster overconfidence. That is, a belief that one is more accurate than is actually the case.
If you look at decades of literature on clinical judgment, it often [00:26:00] is effective, or certainly well better than chance, and that’s nothing to turn your nose up at. If you go from, say, three choices where you’re 33% accurate by chance, clinical judgment gets you to 60 or 70%. That’s a massive improvement.
Dr. Sharp: That’s pretty good.
Dr. David: It is very good, because psychology is very hard. But what happens if you’re, let’s say, 70% accurate, another method is 80% or could bring your accuracy up 10%, but your confidence level is at 85%? You’re going to reject that method, because studies on the accuracy of these kinds of methods generate information about how accurate they are and are not.
So going back to this wicked learning environment, [00:27:00] imagine a client comes in worried about their memory. And so you do memory testing, a proficient battery, very carefully done. Imagine two situations. In scenario A, you’re articulate, you care, and you say: yes, I am concerned. Some of these results are outside normal limits. It’s a little worse than we’d hoped for and there may be something there.
In scenario B, exact same situation, except everything falls within normal limits, and you say: oh, people vary in their strengths and weaknesses. Once you get worried about a problem, you tend to hyperfocus on it. You’re still performing your job.
Well, chances are, if you don’t have a naysaying, oppositional client, and neuropsychologists doing assessments typically don’t, either interpretation will be believed, even though they can’t both be correct. [00:28:00] In that circumstance, the frequency of seeming confirmation of judgmental accuracy exceeds the true rate of accuracy.
And that also happens when an interpretation funnels a client into a certain behavior. The classic example of that is the self-fulfilling prophecy. The banks are going to fold, I’d better pull my money out. The banks were okay, but then people started pulling out their money and the banks did fail. It’s one of the things that triggered the Great Depression.
So our beliefs, by influencing our behavior, often influence our clients’ behaviors. That works systematically to make it look like we’re a little more accurate, or more accurate, than we actually are. And so it’s not just that there’s a wicked learning environment and that it is very hard.
Most of the distorting effects tend to move you towards having greater confidence [00:29:00] than is warranted, and the relatively small but, over time, important advantage of certain decision-making aids will be overlooked because they’ll seem inferior, because they generate information directly about how accurate they are and are not.
Generally, the level of confidence exceeds the level of accuracy. And that’s not just in psychology. Most drivers think they’re above average. There are studies with physicians in which they’re 90% confident when they’re 10 or 20% accurate. Most professors think they’re funnier than the average professor, and so on.
So this is not a form of psychopathology or anything. We’re surrounded by environments in which feedback often leads to pervasive overconfidence. So often [00:30:00] experience increases confidence more rapidly than it increases accuracy. And now you start to get a disparity.
There’s an association between confidence and accuracy, don’t get me wrong. It’s a question of calibration. When I’m 70% confident, am I 70% accurate? What happens is that there is often some correlation between confidence and accuracy, but poor calibration.
I’m not saying ignore that, but if you’re overconfident, which most people are in many walks of life, then you’ll often reject decision aids that can help you be more accurate, because it seems like you don’t need them. That’s the conundrum.
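Calibration itself is easy to compute when feedback is available. A minimal sketch with hypothetical data: record a stated confidence alongside each judgment’s outcome, then compare stated confidence to the hit rate within each confidence level:

```python
# Hypothetical (confidence, correct?) pairs for a set of judgments.
judgments = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, True), (0.7, False), (0.7, False), (0.7, False),
]

from collections import defaultdict
outcomes_by_confidence = defaultdict(list)
for confidence, correct in judgments:
    outcomes_by_confidence[confidence].append(correct)

for confidence, outcomes in sorted(outcomes_by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%}")
# stated 70% -> actual 40%; stated 90% -> actual 60%:
# overconfident at both levels, even though confidence and accuracy correlate.
```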
Dr. Sharp: It seems like it’d be worth it to talk about some of these ways to recalibrate a bit and bring the accuracy up to [00:31:00] the level of confidence or even invert them a little bit.
Dr. David: The better the calibration, generally, the better. It’s better not to be overconfident or underconfident, to be as accurate as you can about what your accuracy is. That’s hard. We don’t really get that feedback, right?
Dr. Sharp: Yeah, that’s true. Let’s talk about that. There’s a lot to dive into in terms of what you call normative practices or decision methods. So let’s start with base rates. We talk about base rates a lot. I feel like base rate is one of those terms that most of us have heard, some of us have a good command of, and most of us maybe don’t. So can we start with base rates? What is a base rate? How do we apply it to our work? Let’s just start there.
Dr. David: Okay. Base [00:32:00] rates get defined in different ways, which confuses me, of course; it doesn’t take much to confuse me. The essential meaning, sometimes called prior odds, is simply how often something happens. It doesn’t matter whether you’re talking about scatter on IQ tests, bee stings, snake bites, hurricanes, tornadoes, frequency of different psychiatric conditions, or outcomes; it’s simply frequency of occurrence.
If you look at the literature on the use of base rates, generally, if people know nothing but the base rate, say, 70% of kids in this school are learning disabled, they will use it.
The moment you add differentiating or clinical information, say we’re studying Joe, [00:33:00] and just assume for the minute that the problems we’re considering are totally unrelated to whether someone’s name is Joe, as soon as you provide case-specific information, utilization of base rates tends to decline.
Dr. Sharp: I want to clarify a little bit, and I’ll probably jump in and clarify a lot of these concepts as we go along. So is that the idea that theoretically, if we know that we have a 12-year-old kid coming into our practice for an evaluation, we’re a lot more willing to accept the base rate, let’s say, for a diagnosis of autism before the kid comes in than once we meet the kid and gather a bunch of clinical information? Is that what you’re saying?
Dr. David: Pretty much, [00:34:00] even if what you know about the kid coming in has absolutely no relationship, let’s say, to the diagnosis of autism, like what sneakers they like, whatever. Even that kind of information will move people away from the base rates.
Let me put it this way. In one of the surveys we did, one of the studies, if you ask individuals, have they heard of base rates, do they use them, it’s yes. But if you ask them a question like, are there times in which base rates are far and away the most accurate diagnostic indicators, people almost never say they strongly agree.
In fact, for a fair number of decisions, and it may be many, the base rate [00:35:00] is the single most predictive variable. I can give you some examples of that. People don’t think of it as a diagnostic indicator because in a way it’s very bland. It’s just a stupid number. What does that have to do with anything?
What we’ve learned is, when you look at the sensitivity or specificity of a diagnostic method, it’s usually in a study in which half the people have a condition and half don’t. So the base rate’s 50%. When the base rate’s 50%, the base rate does you absolutely no good. It’s just like flipping a coin.
What if the odds were 99:1? Let’s just assume that. Sometimes that happens. Maybe we’re trying to identify dissociative identity disorder in its many [00:36:00] guises. Let’s say in my setting of interest it occurs 1 out of 100. If I play the base rates, which means guessing whatever is most frequent, in this case nonoccurrence, I’m going to be right 99% of the time. So all else being equal, for a test to do better than that, it’s got to be more than 99% accurate.
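A quick sketch of that arithmetic, assuming a test with 95% sensitivity and 95% specificity (illustrative values, not from any particular instrument):

```python
base_rate = 0.01     # condition present in 1 of 100 cases
sensitivity = 0.95   # assumed: P(positive test | condition present)
specificity = 0.95   # assumed: P(negative test | condition absent)

accuracy_betting_base_rate = max(base_rate, 1 - base_rate)
accuracy_using_test = base_rate * sensitivity + (1 - base_rate) * specificity

print(accuracy_betting_base_rate)  # 0.99 -> always guessing "absent"
print(accuracy_using_test)         # 0.95 -> the "95% accurate" test does worse
```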
Dr. Sharp: Yeah, that’s tough.
Dr. David: There are instances in which the base rates are even more extreme, very high and very low. If you have to make a dichotomous decision, this person will or will not make a suicide attempt and so on, the base rates are often way more accurate.
If you’re trying to estimate probabilities, say the probability of a suicide, rather than make a dichotomous decision, hospitalized or not hospitalized, [00:37:00] you can either defer to the base rate or combine it with the diagnosticity of signs or indicators. What I mean is to combine the base rate with, say, results on some kind of test or index.
One of the questions you can ask is, do I know what the base rate is? If I know what the base rate is and I have to make a dichotomous decision, say hospitalize or don’t, or refer or don’t refer, whatever it is, and the base rate and the test point in the same direction, there’s nothing to choose between.
If they point in opposing directions, you have to choose one or the other. You go with the stronger indicator. If the base rate is vastly more accurate than the test, why in that [00:38:00] circumstance would you go with the test?
Dr. Sharp: Great question.
Dr. David: I was involved in a situation at a metropolitan hospital with about 100,000 patient admissions a year. One to two people per year were sadly jumping out of windows, horrible. Hospital administrators came down to the psychiatry department and wanted help: how do we identify these individuals? To their credit, it was not only that it was bad PR. But of course, it’s hard.
Dr. Sharp: Yes.
Dr. David: And so someone put forth the following proposal: we have this screening approach that is 90% accurate, with a 10% false positive rate. I’m not sure I agreed with that statement, but they said it anyway. On the face of it, it sounds great.
Let’s assume for the minute that in the [00:39:00] year we’re considering, there was one suicide. I don’t mean to reduce the value of human lives to numbers, I don’t. Obviously, you want to do everything you can to avoid that. But if you use a method with a 10% false positive rate on 100,000 patient admissions, you will misidentify about 10,000 people.
Think of the cost of, first of all, testing 10,000 people. What do you put in their medical records, that they’re at risk, and so on and so forth? Again, not reducing human lives to numbers, but if you took the money that would have cost, which as you can imagine was in the millions, you could save hundreds of lives with it.
So if you consider all lives of equal value, from a utilitarian perspective, I’m not sure that’s a very good [00:40:00] solution, but no one had raised the problem of the number of false positive errors and the consequences of that. I was a little punk intern and I was sure as hell not going to speak up in that context, but I contacted the administration later, at great risk.
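The arithmetic behind the story is worth seeing once. With 100,000 admissions, one true case, and the proposed screen (90% sensitivity is assumed here from the "90% accurate" claim):

```python
admissions = 100_000
true_cases = 1
sensitivity = 0.90           # the claimed detection rate
false_positive_rate = 0.10   # 10% of unaffected patients get flagged

false_positives = (admissions - true_cases) * false_positive_rate
expected_true_positives = true_cases * sensitivity

# Positive predictive value: chance a flagged patient is truly at risk.
ppv = expected_true_positives / (expected_true_positives + false_positives)
print(round(false_positives))  # ~10,000 people misidentified
print(f"{ppv:.4%}")            # ~0.0090%: roughly 1 true case per 11,000 flags
```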
Dr. Sharp: Okay.
Dr. David: I went through this. What they had not considered was safety screens for windows. Safety screens are expensive, but cheaper than the screening program. If you put them in starting on the second or third floor, because most of these are impulsive, momentary acts, you can prevent this pretty much from ever happening.
So it’s much less expensive and, more to the point, more effective, [00:41:00] because even if you identify someone, you’re not necessarily going to prevent it anyway, and you avoid 10,000 errors. That’s simply a base rate problem. At 1 in 100,000, you’re just not going to come up with a method that beats it. It doesn’t mean in certain contexts you wouldn’t screen anyway.
The military came to me later in my career with a similar problem, but there the intervention also involved dedicating more resources to mental health issues and trying to inculcate the idea that you’re doing people a favor by reporting when you hear something alarming.
Anyway, it gives you an idea of the profound potential impact of base rates. And when you’re making dichotomous decisions and the base rates are much more accurate, you generally don’t have to go through the other stuff.
In estimating probabilities, people think [00:42:00] of validity as a fixed rate in some sense. I understand it varies when you go from person to person and so on, but say you have an 80% accuracy rate; that’s going to be altered by the base rates. If the test says 80% yes, but the base rate is 1:1000, the probability is going to be about 1 or 2%, and so on.
So when it’s not just a dichotomous decision but you’re trying to estimate probabilities, the base rates can alter that probability profoundly; twofold, fivefold, tenfold, a hundredfold. And so the best estimate of probability in that circumstance is to combine valid diagnostic signs and indicators with base rates.
When you check out at the supermarket, [00:43:00] you don’t eyeball the heap of purchases and say, that looks like about $210.11 worth, what do you think? It’s a time when you can just add up the numbers. And so to get accurate probability estimates, you may not be able to do it in your head, but it’s just a few simple calculations and you’re all set. At every level, knowledge of base rates, or at least good estimates, can do a lot to enhance diagnostic and predictive accuracy.
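Here is what those few simple calculations look like. The "80% accurate" sign mentioned above is ambiguous, so this sketch assumes 80% sensitivity and 95% specificity, which lands near the 1 to 2% figure cited:

```python
def posterior(prior, sensitivity, specificity):
    """P(condition | positive sign): a base rate updated by one indicator."""
    p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_positive

print(f"{posterior(0.001, 0.80, 0.95):.1%}")  # ~1.6% at a 1-in-1000 base rate
print(f"{posterior(0.50, 0.80, 0.95):.1%}")   # ~94% for the same sign at a 50% base rate
```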
One last quick thing. I used to do rounds with the neurologists when I was an intern. For an intern, I held my own, but there was only one reason, only one, and it has nothing to do with intelligence, nothing. I knew that when they presented, they usually were going to present ambiguous signs to teach you something, but [00:44:00] when the base rates of the diagnostic possibilities are very different, always go with the base rates.
I had one of the neurologists, about three months into the rotation, ask me if I was some kind of weird freak. Yes, but also whether I had originally been trained in neurology. I told him I’m the [inaudible 00:44:23] He did not believe me, but it’s true.
It didn’t take any particular smarts, honestly, but if they’re weighing Pick’s versus Alzheimer’s, and Alzheimer’s is 10 to 20 times more frequent, I’m not going to be right every time, but if the signs are ambiguous, or even tend to favor Pick’s, they’d better favor it in a strong way before they’re going to overcome the base rates.
I think base rates are a good example. They shouldn’t threaten anybody, I don’t think. [00:45:00] They’re not that complicated to apply, and it doesn’t always come out one way. There are times to bet against the base rates. Sometimes they’re not helpful at all. Sometimes the base rates are not very accurate, but when you have them available, and there’s much more base rate information than people think, they’re just a handy tool to assist us.
Dr. Sharp: So let’s talk about that. It sounds like it’s going to be helpful for us to know base rates. That’s a place to start essentially in good decision making. Just know the base rates, that’s going to have a lot of value.
Dr. David: You don’t have to memorize them.
Dr. Sharp: Sure. You can look them up. My question though is, when are they less helpful? Like you said, when do you take other information into account? When is it reasonable to question or [00:46:00] look for other options aside from base rates?
Let’s take a break to hear from a featured partner.
Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on trustpilot.com and Google, which makes them the number one rated electronic health record system available for mental health folks today. They make billing, scheduling, note taking and telehealth all incredibly easy. They also offer custom forms that you can send through the portal.
For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week so you can actually talk to a real person in a timely manner. If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away.
So if you’re [00:47:00] curious or you want to switch or you need a new EHR, try TherapyNotes for two months, absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free, no strings attached. Check it out and see why everyone is switching to TherapyNotes.
Y’all know that the BRIEF2 has long been established as one of the leading forms for measuring executive functioning in children. Now comes the latest addition to the BRIEF2 family, the BRIEF2A. This update will allow you to use the gold standard in executive functioning to assess adult clients. You don’t have to wait, it’s available for preorder right now. Learn more at parinc.com/products/brief2a.
Let’s get back to the podcast.
Dr. David: Yeah, that’s a really good question, Jeremy. Sometimes you just don’t have base rate information. That’s one of the reasons why [00:48:00] people who want to bash clinical judgment are making a mistake, because often that’s what you have, and sometimes it’s pretty damn good.
Other times, there are serious reasons to question the accuracy of the base rates, and the margin of error might be very large. Think of things people don’t want to report: what’s the incidence of child abuse? It’s really hard to get a handle on that.
There’s stuff floating around about the frequency of malingering in legal cases. There are a lot of reasons to think, first of all, the estimates vary enormously so how much is that going to help you? Often, there’s not much of a solid basis for it.
Two general rules of thumb. In general, the closer base rates are to chance frequency, 50/50, the less [00:49:00] useful they are. So if I know it’s a dichotomous choice and the base rate is 50%, it’s not going to be very useful. There are some exceptions to that, but it’s a good rule of thumb.
However, the more they deviate from chance in either direction, the more useful they are, because going with, or being assisted by, the most likely occurrence or nonoccurrence can be very helpful.
Going back a number of years, if I were guessing the likelihood my daughter would be back on time before curfew, it was a pretty safe guess that it would not happen. I’m kidding. My daughter is a very responsible person. But the point is deviation in either direction. If the base rates exceed 50%, especially when they get up into the 80s and 90s, then you’d guess the base rate event.
So if you’re on [00:50:00] the Olympic Peninsula in Washington and you want to guess whether it’s going to rain the next day, just say yes, but if you’re in Death Valley, just say no, because the base rates are low, so you’d guess nonoccurrence. One very important rule of thumb in deciding whether to use base rates, or how heavily to weight them, is how much they deviate in either direction from 50% likelihood.
It’s not an ironclad rule of thumb, but it often works pretty well. Sometimes in psychology, we deal with very high or very low frequency events. And with rare exception, the base rates will be the most accurate predictive variable.
We have to get over this idea of variable prejudice: intuitive judgment is never useful, or, I don’t believe in projectives at all so I would never use one. [00:51:00] Does that mean every single projective indicator has no value? And so on and so forth. You just go with whatever is most helpful within the bounds of ethics, and sometimes that’s a bland base rate.
The other thing is you have to balance the use of base rates against other considerations. Let me back up for a second. If a variable is invalid, if it doesn’t beat the base rates, say a test result, diagnostic sign, or indicator, then aside from situations in which you combine it with the base rate, you just bet the base rate. But even when the base rate is more accurate, you may sometimes want to bet against it.
Take predicting suicide risk. Even if there’s a 5% risk, the base rate’s low, but the cost of a false negative error is so great that you’re always looking at utility as well, the benefits and [00:52:00] costs of false positive and false negative errors. Validity, though, is almost dispositive. If something’s invalid, if you have strong evidence it has almost no diagnostic or predictive value, why would you use it?
Other diagnostic signs and indicators, including base rates, can be pretty probative, but you always have to examine the potential costs and benefits. Do you think someone can live alone and won’t burn down the house? If you’re only 40% sure, that is nowhere near enough.
Dr. Sharp: It’s not great.
Dr. David: Fair enough. Yeah, that’s right. I just want to mention one other thing, which is, there’s a lot of base rate information available. It’s just not going to be listed as a base rate. All outcome studies, for example, generate base rates: how often do people get better from concussion or mild head injury? What is the typical period of recovery? [00:53:00] Epidemiologic research gives you base rates for different conditions.
There are other resources; McCaffrey has a text out on sources of base rates. We try to cover various sources in our book. I’m not trying to plug the book, I’m really not. I’m one of the least self-promotional individuals, but we have material on sources of base rates.
Base rate information is all over, but you have to think of it that way. In neuropsychology, we often use a very narrow question like, how often do people have X number of abnormal results? Base rate is a generic term for frequency of occurrence. When you think of it that way, or frame it that way, base rate information is all over.
Dr. Sharp: I want to make it super concrete and think about diagnostic base rates, the occurrence of different diagnoses. Would you consider those high or [00:54:00] low frequency? Anxiety and depression are maybe higher than 10%, but I think the majority of our diagnoses occur in less than 10% of the population. Would you consider those useful base rates?
Dr. David: Certainly the ones that are very low and that don’t involve dangerous conditions are extremely useful. The other thing is that, generally, you’re not using base rates for the general population. If you have someone who’s a multiple murderer, you’re not going to reason from the general population.
Your question is very important because you’re right, most of our conditions are low frequency, relatively speaking, and therefore it wouldn’t be an irrational position to say, if I defer to the base rates, I’m going to pretty [00:55:00] much miss every condition that’s present.
You may want to combine them with the diagnosticity of signs and indicators, but if you narrow things, it changes. For example, suppose the differentiation is between generalized anxiety disorder and PTSD, or PTSD and post-concussion syndrome, because base rates are very helpful in differential diagnosis.
So the rule of thumb with base rates is to use the narrowest applicable base rate. What I mean by that are indicators or characteristics or attributes that influence the base rates and that apply to the person you’re seeing. For example, if you’re thinking stroke and [00:56:00] you’re seeing a 70-year-old; age is going to make a difference.
If you’re seeing a 70-year-old who has horrendous dietary habits and who’s had a stroke before, that applies to what I was saying, because you’re narrowing in relation to variables that change the base rates for the condition you’re looking at. In other words, if they love purple socks, that may apply to them, but it’s probably not going to change the base rate.
Or if you’re looking at a knee injury, consider an NFL player versus a 6-year-old child. Narrowness is defined by variables that change base rates and that apply to the person. I may have all kinds of information about base rates in adolescence, but if I’m seeing an adult, it doesn’t apply.
Dr. Sharp: It doesn’t matter.
Dr. David: So as you [00:57:00] narrow, now you’re dealing with very different base rates, often higher, of course. Also, when you’re trying to choose among, say, two or three conditions, they may themselves have very different base rates.
In that case, going back to Pick’s versus Alzheimer’s, if you pick someone off the street, they’re usually going to have neither. But if you’re picking among 75-year-olds who’ve been in repeated car accidents and who’ve got a genetic vulnerability, you’ve narrowed, and narrowing changes base rates. And in performing differential diagnosis, or picking among competing treatments by looking at outcomes, you’re in a different situation where it’s more likely the base rates will be helpful to you.
And so this is a very common misconception. There’s [00:58:00] a well-known author who says, take forensic versus clinical contexts. If your practice is all forensic, assume, say, a base rate of 30 or 40% for the frequency of malingering. If it’s clinical, use a much lower base rate. If half your clients are clinical and half are forensic, pick the average.
But that doesn’t make any sense. If you know it’s a clinical case, use the base rate for the clinical context, because if you average, you’re going to increase false positive errors. For a forensic case, if you use the average, you’re going to have more false negative errors. That principle, not using general population base rates but base rates narrowed by variables that are relevant and change base rates, that’s the idea. And that just changes the whole thing.
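A small sketch of why the averaged base rate backfires, using assumed numbers (5% malingering in clinical cases, 35% in forensic, 20% averaged, and a sign with 80% sensitivity and specificity):

```python
def posterior(prior, sensitivity=0.80, specificity=0.80):
    # P(malingering | positive sign) under a given prior (base rate).
    p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_positive

for setting, correct_prior in [("clinical", 0.05), ("forensic", 0.35)]:
    print(setting,
          f"correct prior -> {posterior(correct_prior):.0%},",
          f"averaged prior -> {posterior(0.20):.0%}")
# clinical: 17% vs 50% -> the average overcalls malingering (false positives)
# forensic: 68% vs 50% -> the average undercalls it (false negatives)
```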
Dr. Sharp: I appreciate that. Base rates are super important and, like you said, [00:59:00] often overlooked, I think, but they’re a big component of good decision making. I would love to talk about more strategies to help us with, as you say, clinical prediction and diagnostic accuracy. I think a lot of us are interested in that. What other components might be involved here, and how might we enhance our diagnostic accuracy specifically?
Dr. David: The other broad method I’ll mention is increasing use of properly validated statistical decision methods. Never a popular topic. And then maybe we can talk about some other related strategies.
In my [01:00:00] opinion, for what it’s worth, this is one of the most misunderstood issues in psychology and neuropsychology. I’m not sure why. So I want to take a step back just to define what these methods are: clinical judgment and statistical decision-making.
Dr. Sharp: I think that’d be helpful. Yes, because I’m not sure people even know what you mean.
Dr. David: Fair enough. Just to initially dichotomize, you can make decisions or interpret, let’s talk about interpretation a bit, that’s part of the issue, in your head through whatever means you choose. If I’m a neuropsychologist looking at test data, I may have vast research knowledge, I may know about certain decision rules and so on, but ultimately the data [01:01:00] integration is done in the head.
Dr. Sharp: Yes.
Dr. David: That’s clinical judgment. You can call it impressionistic judgment or subjective judgment. People may be, I hope they’re not, offended by those terms. So that’s one approach.
At the other end of things, the statistical approach is defined by two things. One, same data, always same interpretation; it is, if you will, automated, pre-specified. Two, it’s based on empirically established relationships. I hate to use that word empirical, because all it means is by observation, but anyway, I will use it for now.
If I’m a baseball scout and I’m trying to pick out a ball player, I may know a lot about them and ultimately put that information together in my head and reach a prediction or evaluation. Alternatively, I may have five variables, whatever they are, and they can be qualitative, like, how much heart do they seem to have?
[01:02:00] When you develop statistical decision procedures, anything is a fair candidate, and often qualitative indicators are very important; I would just have to code them. Then I see whether, when I combine that information in some way statistically, it might just be adding variables together, looking at configural relations, or whatever, how well it predicts. Typically, if I’m doing this properly, I’m getting rid of variables that are not predictive and getting rid of variables that are strictly redundant, basically the same measurement twice. And then I have to cross-validate, because as we know, multivariate statistics capitalize on chance, and studies on statistical methods of decision making that don’t cross-validate, or really, validate is the better term, you can just throw out the window.
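As a toy illustration of those development steps, here is a sketch in Python (entirely synthetic data; the variable names are invented). It codes a qualitative impression as a predictor and shows why cross-validation matters: with many candidate variables, same-sample accuracy flatters the model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
memory_score = rng.normal(size=n)            # quantitative predictor
strikes_me_as_odd = rng.binomial(1, 0.3, n)  # coded qualitative impression
noise = rng.normal(size=(n, 30))             # 30 non-predictive candidates

# The simulated outcome truly depends on only the first two variables.
logit = 1.2 * memory_score + 1.0 * strikes_me_as_odd
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([memory_score, strikes_me_as_odd, noise])
model = LogisticRegression(max_iter=1000).fit(X, y)
print(f"same-sample accuracy:     {model.score(X, y):.2f}")  # optimistic
cv = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated accuracy: {cv.mean():.2f}")          # the honest estimate
```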
So then the question is, given a fair [01:03:00] horse race, where both methods, clinical judgment and the statistical method, are given access to the same information, how does it turn out when both methods predict? What you have to do with qualitative information is code it in some way.
Let’s say I have a feeling, a patient strikes me as odd, which actually turns out to be a pretty good diagnostic indicator for certain kinds of schizophrenic conditions. So it’s not an argument about type of data, and this is almost always confused. It’s an argument or debate about how to integrate or interpret information.
Dr. Sharp: Yes.
Dr. David: Okay. So lots of statistical decision methods incorporate qualitative impressions, but only to the extent they turn out to be helpful. And many of them do incorporate [01:04:00] qualitative information. What happens when both methods have the same information?
So this has been going on for about 70 years now. I would say there are 300 good quality studies. I’m eliminating studies here, and I’m not the only one who does this; many of the excluded ones are studies in which the statistical decision-making method has an unfair advantage by not cross-validating. No unfair comparison goes in.
If I could just say this: like most people, all of us are passionate about trying to find answers that help us do our job. So are there some people who go into this literature with biases? Yeah, sure. But many of these decision researchers are very invested in enhancing accuracy.
[01:05:00] Here’s the outcome of these studies, initially and over the decades. Number one, often misstated by the advocates of statistical methods, and it drives me crazy, it used to drive Paul Meehl crazy, they say something like, statistical methods always beat the clinical method. That’s nonsense. It’s not true. A statement to that effect is counterfactual. Many of the studies, less so now, generate ties. And so it’s inaccurate, it’s unfair, and it’s demeaning to clinical judgment to say statistical methods are always better. That’s wrong. It should not be stated.
Dr. Sharp: Okay.
Dr. David: Where there’s a difference, depending on how you count, in about [01:06:00] 95% of the studies, give or take 2 or 3% either way, the statistical methods do better. Sometimes that edge is very small, 3 or 4%; sometimes it’s substantial, in the 20% range.
Say clinical judgment starts at 60 or 70% accurate in a study, again, way better than you might do just by tossing coins. Say we have 30% error. If you improve accuracy by 10%, you’re reducing error by a third. If you improve it by 15%, you’re cutting error in half. And over the course of a career, in hundreds or thousands of decisions, that makes a real difference.
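In symbols, the relative error reduction he is describing is:

$$
\frac{e_{\text{clinical}} - e_{\text{statistical}}}{e_{\text{clinical}}}, \qquad
\frac{0.30 - 0.20}{0.30} = \frac{1}{3}, \qquad
\frac{0.30 - 0.15}{0.30} = \frac{1}{2}.
$$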
And if you classify all the studies by level of judgmental proficiency, I think the simple take-home is that a considerably higher percentage of [01:07:00] the statistical decision methods fall within the good or very good category.
If you look at poor, chance, or even below-chance performance, and systematic bias can lead to below-chance performance, that’s rare in both cases, but it’s about three times more frequent in the studies of clinical judgment. It may be a difference of 6% versus 18%, or 4% versus 12%, but obviously that’s very important.
Dr. Sharp: Absolutely.
Dr. David: The original reasons for rejecting or questioning those studies were very legitimate: maybe these weren’t the best clinicians; we have people predicting grade point average, and psychologists don’t sit around worrying about grade point average; they were given deficient information. All of those are very reasonable [01:08:00] questions.
What’s happened, literally over the decades, is two things. When I first heard Paul Meehl lecture, he was saying he had been trying to do the new edition of his clinical versus statistical prediction book, a classic book. He said, it’s too boring. I can’t stand it. I’ll never do it. Because, he said, all the studies come out the same way, which is ties or disadvantages for clinical judgment.
Dr. Sharp: Okay.
Dr. David: And there’s a lot of truth to that. So what’s happened is many of these very reasonable questions have been tested: the experience of the clinicians, the amount of information they’re given, and so on; judging familiar tests, having them identify the tests, making sure the judgments are of high clinical relevance, suicide risk, presence of brain damage, and so on. None of those things generally seems to matter very much [01:09:00] or at all, barring occasional exceptions. Letting clinicians countervail, same thing.
The other thing is, if you trace that literature across decades, literally decade by decade, and only crazy people do that, I’m one of them, you do see a trend toward increasing accuracy with the statistical methods where they’re studied. Prediction of violence, relatively short term, is a really good example, and other things of that nature, like suicide prediction over shorter intervals.
If you look at, and it’s going to be disturbing to hear, clinical judgment over the decades, with some exceptions, neuropsychology has some of those exceptions, it stays about the same. That shouldn’t be happening, because our research knowledge is so much better.
[01:10:00] In psychology, like in many of the more advanced sciences, it’s not quite there yet, but the turnover of information is about five to seven years, if defined by when 50% of the citations become less relevant. That’s a phenomenal rate of growth. It’s a tremendous credit to psychologists, and it’s promising. I’ve given a talk on our marked accomplishments, bright past, brighter future; there are so many indications of advances in our field that are promising. I tell students this is a really good field to get into, because it’s in the process of being revolutionized. But subjective judgment stays about the same.
And then also, when you look at sources of information: as more information is provided, more valid information for clinical judgment [01:11:00] purposes, the actuarial edge grows. If you add interview, and again, this is a disturbing thing to hear, to testing, I’m not saying you shouldn’t. Interview generates valuable information.
There is no question about that. If you combine interview and testing through statistical methods, it often will enhance accuracy. I think that’s a clear issue.
If you do it subjectively, though, on average, you will not improve accuracy. In fact, you may decrease it. An invalid variable or a weaker valid variable added to the mix will diminish the impact of more valid variables.
Even valid variables can decrease accuracy.
That’s one of the reasons why you shouldn’t integrate all the information; you should combine the information that enhances accuracy. What happens with interviews, because they’re so dramatic and salient relative to a [01:12:00] bunch of scores, is that in most of the studies, and this is really true in industrial psychology, adding unstructured, flexible interviews to testing tends to decrease overall accuracy because of the dilution of valid variables.
People aren’t doing this on purpose, but they’re including in this subjective integrative mix variables that are not valid but appear to be, illusory correlations, and weaker variables that dilute more valid ones. So across conditions, if you were to rank accuracy: testing plus interview interpreted statistically is best. Testing alone interpreted statistically, second best. [01:13:00] Interview alone interpreted statistically, next best.
Tests alone interpreted clinically, next best. Test and interview interpreted clinically, more can be less, next best. Interview alone, i.e., the average psychiatric evaluation, least accurate. If you look at the frequency with which these methods are used across the country, it’s a little disturbing.
Dr. Sharp: That is a really illuminating point because we hear a lot that the interview is the most important part of the evaluation and folks rely on interviews, myself included, a lot, quite a bit, heavily. I just want to put a fine …
Dr. David: Jeremy, if I could just say, interviews are essential. They generate very useful information. The difficulty is that it’s very hard for the human [01:14:00] brain to realize what’s really impacting our judgments, and it’s very difficult, given all the noise in our data, conflicting indicators, and the limits of the human mind, to handle multiple lines of evidence simultaneously that often aren’t even aligned. It’s not because the interview doesn’t generate useful information.
It’s almost like that distinction between discovery and validation: subjectively incorporating it all together is very challenging. That’s where that overconfidence is a problem.
Dr. Sharp: I want to revisit and put more of a fine point on how do we do that? How do we integrate this information statistically versus clinically? How does one integrate interview data statistically when [01:15:00] it’s so tempting to do it subjectively or clinically, like you said?
Dr. David: The other thing, Jeremy, to keep in mind is that if you ask individuals what influences their judgment, just like anything else, we don't generally have very good insight. For example, this started with research on store displays, stockings, as it happens. They asked people what made them want to stop and shop.
And people listed a bunch of variables. Then the researchers varied one variable at a time: where the display was in the store, the color, and things like that. It turned out that self-report often had little correlation with behavior. And so this started a huge line of research on what's called subjective versus objective cue utilization.
There are lots of studies in psychology and psychiatry, fewer in neuropsychology, unfortunately. So you ask people who are reporting sincerely: what variables do you take into account? They may specify them. Psychiatrists may [01:16:00] specify 8 to 10 variables that they say are all very important in judging depression.
And then you set up cases, I can’t go into this too much now, in which you vary one at a time, so maybe psychomotor retardation present versus absent, all else the same, and so on. You can actually study the impact of those variables on diagnostic conclusions. Those studies show a number of very important things.
One is that subjective belief about the impact of cues, what we think we weight heavily, often does not correspond well with the actual impact of those variables on our decisions. Things that people think really influence them may not influence them much at all, and things that people sincerely believe in their hearts don't influence them, maybe gender or race, may influence them a lot.
Usually it's one, two, or three variables that account for most of the variance in decisions. Not 20, not 50, [01:17:00] not 100, and certainly not a thousand. One, two, or three variables can reproduce their decisions most of the time in most circumstances.
So it’s very sobering but it’s actually very helpful because, what did Gandhi said, I’m aware of my own limitations. Therein lies all the strength I possess because it’s a rare thing for a person who were aware of their limitations. We’re fine telling our clients that, hey, don’t get uptight about it. So that’s a shocking line of research and never in a million years would have guessed that.
But that’s why we have all this help. The use of basic physical decision procedures, if properly developed are not going to include variables that are not predictable and so on. The message here is don’t trust your [01:18:00] judgment.
Again, if you go back to clinical judgment, usually it's pretty decent and sometimes very good. There may be some people who are exceptions. However, the barriers to learning from experience almost invariably create overconfidence. Secondly, we don't have great awareness of the variables that truly impact our judgment. And that just calls for a little shift in the way we go about some of these things.
For example, going back to what you said, it's not a terrible thing to ask yourself a question. Say I'm aware of a statistical decision procedure, take something like a general deficit scale, but I'm just not that familiar with it. It's an old decision rule; Russell came up with one, others exist, and they appear quite a bit in psychology, actually, and in neuropsychology.
What is the best reason I can [01:19:00] generate not to ignore it? That is, what is the best reason I can think of why it might be correct? And then, if you're going to countervail it based on an interview impression, you might ask: what's the best alternative reason?
Anyway, people are overconfident, and when you think of reasons why you might follow a decision rule, or why your conclusion might not be correct, you counteract our habit of looking mainly for confirming evidence. Deliberately bringing disconfirming evidence to mind, considering the opposite, can bring confidence levels into better alignment, and then people won't bet against statistical decision rules as often. That doesn't mean they never should; of course, you have to at times, but that's a whole topic.
That’s the hardest one. And then also because [01:20:00] sadly, if you look through psychology, there are almost no studies on incremental validity. That is what is the maximum combination of variables and there you include every possible source of information, but primarily testing and interview, right?
Dr. Sharp: Sure.
Dr. David: Okay. How else would you identify an optimal interpretive practice without studying it? The beauty of science is not only that it tests our beliefs, but that it helps us correct them. People sometimes have misconceptions, and a lot of scientific debates take a long time to resolve.
We could be dealing with some little issue in digestion, and it can take 20, 30, 40, 50, 70, 100 studies. So how many studies are there on the MMPI? And how many studies are there on the incremental validity of our diagnostic and predictive practices? Because of overconfidence and, if I can be so bold as to [01:21:00] say, an underappreciation of not just the strengths but also the boundaries of our decision making, people don't worry about that very much.
There are lots of ways of approximating this. I know we probably don't have a lot of time to get into it, and I don't want to leave with a doom-and-gloom message. I want to say that there's a lot of statistical information available, hundreds, thousands of studies.
There are roughly 300 comparative studies, but think of it this way: in about 95% of the cases where there's a difference between statistical and clinical judgment, the statistical method does better. You don't need comparative studies everywhere. You always have the option of using subjective judgment, but if you go into any journal in neuropsychology, about a third to three quarters of the studies are going to be [01:22:00] statistical analyses predicting something, disease 1 versus disease 2, development of dementia, on and on.
Neuropsychology is amazing for the number of direct statistical decision rules that have been generated. They're just not identified as such. So once you realize the prior odds of beating those rules with your subjective judgment aren't great, you always have that resource.
For almost any diagnostic or predictive task you can think of in neuropsychology, there will be statistical decision methods. They won't all be comparative studies, but the base rate is pretty favorable. As for studies that look at optimizing incremental validity, there are some out there, but shockingly few. Problems taken for granted are often problems not studied. I hope and believe, especially now with the great databases, that we [01:23:00] could easily be studying this more and make very rapid progress in doing a better job there.
Jeremy, really quickly, while we're waiting for Godot on some of those, I guess you'd say, I have a few other tips or rules of thumb, if you'd like me to go over them.
Dr. Sharp: Great. Yeah. I know we’re getting close timewise, but I think it’d be helpful to close with some tips for folks.
Dr. David: Okay, sure. If you were to ask what's the best predictor of decision accuracy, it's meticulous adherence to the best-validated methods. That's it. It's not years of experience. It's often not brilliance. But both of those terms are essential: meticulous, [01:24:00] taking great care in practice.
Dr. Sharp: Yes.
Dr. David: And adherence to the best-validated methods. So that's the key. Second, insight alone doesn't do much. You can be aware of the decision literature. You can be aware of racial biases. You need to counteract those. You need to take active steps. Given both overconfidence and our lack of insight into what most influences our judgment, just telling yourself not to be biased, or to do things differently, accomplishes little by itself.
Consider the opposite is a very simple little strategy that has a lot of applications. Ask: what's the strongest piece of contrary evidence against the decision I've made? What is the best reason I can think of for an alternate diagnostic possibility, or for following rather than countervailing a [01:25:00] statistical decision rule?
Because we tend to search for confirming evidence, we then pay insufficient attention to disconfirming evidence. That tends to feed our overconfidence. That doesn’t always work, but you’d be surprised how often that improves things.
Dr. Sharp: Sure.
Dr. David: Anything that helps with meticulous practice; neuropsychologists are often good that way. You can build in redundancy, like double keyboarding. It's a pain, but it will reduce keyboarding errors to almost 0%. Think of the tests that depend highly on configural analysis, scale A being high, scale B being low, and so on. Even a couple of keyboarding errors, especially on scales with few items, like some of those MMPI-2-RF scales, can be devastating to pattern analysis.
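As a trivial sketch of what double keyboarding buys you, assuming nothing more than two independent keyings of the same protocol (the item scores below are invented):

```python
def double_entry_check(first_pass, second_pass):
    """Return the positions where two independent keyings of the same
    protocol disagree, so each discrepancy can be checked against the
    paper form."""
    return [i for i, (a, b) in enumerate(zip(first_pass, second_pass))
            if a != b]

entry_1 = [1, 0, 2, 1, 1, 0, 2]  # hypothetical item scores, first keying
entry_2 = [1, 0, 2, 1, 0, 0, 2]  # second keying; item 4 was mis-keyed
print(double_entry_check(entry_1, entry_2))  # -> [4]
```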
Pattern analysis is often highly unreliable, [01:26:00] and I won't go through the mathematics of it, but some of the strategies occasionally used in neuropsychology have a reliability approaching zero, because the more complex the relationships you examine, the more vulnerable they are. Unreliability is not even additive; it often compounds as patterns become more complex.
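The mathematics he alludes to can be glimpsed in the standard psychometric formula for the reliability of a difference score. This worked example is ours, with illustrative values, and it assumes the two scales have equal standard deviations:

```python
def difference_reliability(r_xx, r_yy, r_xy):
    """Classical reliability of a difference score X - Y, assuming the
    two scales have equal standard deviations."""
    return ((r_xx + r_yy) / 2 - r_xy) / (1 - r_xy)

# Two individually respectable scales (reliability .85) that correlate
# .75 with each other yield a difference score with reliability .40:
print(difference_reliability(0.85, 0.85, 0.75))  # -> 0.40

# A pattern built on several comparisons compounds the problem: if each
# of three independent "A higher than B" calls is right about 80% of
# the time, the full three-part pattern holds only about half the time.
print(0.8 ** 3)  # -> 0.512
```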
I’m a big fan of checklists. Icing how effective they are, it’s one of the reasons I gave you an example of one of our checklists. So that’s free for anyone who wants it. If you want to use it for kindling wood, that’s fine, too but it’s shocking how helpful that is in many branches of health care.
Recognize the limits of experience. Experience can increase accuracy, and it will certainly increase proficiency in certain areas, especially in certain kinds of learning [01:27:00] environments or with procedural knowledge. I wouldn't want someone giving their first IQ test. But there is a point of diminishing returns, and the saying that experience is the best teacher doesn't mean what people take it to mean.
Franklin is often credited with it. In the older English, dear meant costly, and the fuller statement is: experience is a dear teacher, i.e., often a costly one, and fools will learn from no other. We have a lot of great research in neuropsychology, and if you look at research as a way of accumulating experience, so to speak, systematic observation, the fact that you can have control groups, comparison groups, all four cells of a covariation table, single and double blinding, quantification where helpful, those are powerful advantages.
Dr. Sharp: Absolutely.
Dr. David: It’s the logical methodological [01:28:00] advantages. I think it’s important to appreciate this is not negative commentary on the human brain. Complex data integration, and there are hundreds of studies on this, is very difficult in our field. Part of the reason is we don’t yet generally have potent theories. There is a great integrators of data.
Theories need three qualities to be powerful predictors, better than, say, a statistical decision rule. You have to have a good handle on virtually all the important variables that determine outcome. You have to have very accurate measuring tools and you need a damn good theory that has what the philosophers would be, a high level of very similar to if you like this.
That is very hard to come by in the mental health field. We're getting there in some areas, and it's not a negative reflection on the brilliance of so many [01:29:00] individuals. And then: don't try to integrate all the data. A lot of the data will not help you. It can be counterproductive.
In business, they often say, take the best variable and go with it. I'm not advising that. But very often a small set of two or three variables for a specific decision, combined properly or even weighted equally, will do better than the almost classic textbook approach of trying to integrate all the data.
Often, because of measurement error and other things, the data points in opposing directions, and you can't properly integrate two indicators that directly oppose each other. So rather than trying to add a bunch of stuff in, go with validated variables, and it's worth considering making a deliberate, advance choice of base rates. That's my short list.
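On that last point about choosing base rates in advance, a quick Bayes' theorem sketch shows why the same test result means very different things in different settings. The sensitivity, specificity, and base rates here are invented for illustration:

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """Probability the condition is present given a positive result."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (sensitivity .80, specificity .90) in two
# settings that differ only in how common the condition is:
print(positive_predictive_value(0.50, 0.80, 0.90))  # ~0.89, specialty clinic
print(positive_predictive_value(0.05, 0.80, 0.90))  # ~0.30, broad screening
```

With a 5% base rate, roughly two of every three positive results are false positives, which is why settling on a defensible base rate before interpreting scores matters so much.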
Dr. Sharp: I like it. I like a good list, and I think other [01:30:00] folks do, too. I appreciate you taking the time to get into these topics a little more deeply. It's clear that these things are important, and that you have pretty extensive knowledge in an area that a lot of us, I don't want to say feel mystified by, but that sounds good in theory and is maybe harder to bring together in practice. So thank you for having the conversation.
Dr. David: Sure. Jeremy, if I could just say two things. One is that learning neuropsychology is harder than learning the decision literature.
Dr. Sharp: Okay. That’s good to know.
Dr. David: It is. It clearly is. This literature is there to be a helpful device, not an enemy. It's worth considering, and it might be worth looking at the checklist. There are many good sources on decision making. [01:31:00] I hope we've added 1% to the literature with our recent book, but there are many other good sources, and it's very interesting.
Dr. Sharp: Yes, it absolutely is.
Dr. David: It’s where science and practice meets. It’s a really good example, in my opinion, of science practice integration. And speaking personally, I have been humbled by reading this literature many times over.
Dr. Sharp: It is humbling.
Dr. David: I have benefitted.
Dr. Sharp: Yes. I know that your book adds more than 1%, I'm sure. We'll encourage people to check it out, and we'll list it in the show notes for sure, so folks can engage in practices that benefit us and our clients as much as possible. So it was great to connect with you.
Dr. David: Thank you so much, Jeremy. It’s an honor and pleasure speaking with you today.
Dr. Sharp: [01:32:00] Likewise. All right, y’all, thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and in your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.
If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.
If you’re a practice owner or aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice development; beginner, intermediate and advanced. We have homework, we have accountability, we have support, we have resources.
These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group [01:33:00] phone call, we will chat and figure out if a group could be a good fit for you. Thanks so much.
The information contained in this podcast and on The Testing Psychologist website are intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric or medical advice, diagnosis, or treatment.
Please note that no doctor-patient relationship is formed here and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. [01:34:00] If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with an expertise that fits your needs.