Author: Dr. Jeremy Sharp

  • 505 Transcript

    [00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

    Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others, and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and entering the code “testing.”

    This podcast is brought to you in part by PAR.

    The NEO Inventories Normative Update is now available with a new normative sample that is more representative of the current U.S. population. Visit parinc.com/neo.

    Hey, folks. Welcome back to The Testing Psychologist podcast. Glad to be here with you as always. I have a fantastic clinical [00:01:00] episode with a return guest who you all likely know and love. Dr. Stephanie Nelson is a pediatric neuropsychologist who specializes in complex differential diagnosis. She is board certified in both clinical neuropsychology and pediatric neuropsychology, and she is currently the president of the American Board of Pediatric Neuropsychology.

    She has a private practice in Seattle, Washington. She also has a consultation practice called The Peer Consult, through which she provides consultation to psychologists and neuropsychologists who specialize in pediatric assessment. If you listen to today’s episode and you want to reach out and talk to Stephanie more, you can find her at thepeerconsult.com.

    Today, we are talking about many things. The title of the episode is Beyond Diagnosis. What do we mean by this? That’s a bit too brief a description because it’s not just beyond diagnosis; I would say, [00:02:00] based on the content, this should be titled Beyond Diagnosis, Numbers, and Accommodations.

    So Stephanie talks about the ongoing evolution from Reports 1.0 to Reports 2.0, where we are not just focused on the diagnosis, the testing data, and our tried-and-true recommendations, but are trying to make a shift toward a more qualitative, immersive, holistic experience of the evaluation, learning more about the nuances and taking more of a depth-based approach to assessment than what we’re used to.

    So we talk about shifting toward what Stephanie calls the four E’s: experiences, expectations, exceptions, and experiments. As usual, Stephanie brings a fantastic style to this podcast where she punctuates with stories, quotes, and [00:03:00] examples from the literature, and there’s a ton to take away from this. You’ll hear in the interview that there are many points Stephanie brings up that I feel like we could implement this afternoon or tomorrow in our practices to improve the way that we are conducting our assessments.

    Before we get to the conversation with Stephanie, I think when this is released, there still might be time to register for one of The Testing Psychologist’s mastermind groups. These are group coaching experiences aimed at helping you build and grow your private practice. So if you would like some support there and some accountability to get some things done that you have had some trouble with, check it out. You can go to thetestingpsychologist.com/consulting and schedule a call to chat with me about whether it might be a good fit or not.

    But in the meantime, let’s get to this conversation with Dr. Stephanie Nelson about going beyond diagnosis, [00:04:00] numbers, and accommodations.

    Stephanie, hey, welcome back to the podcast.

    Dr. Stephanie: Thanks for having me.

    Dr. Sharp: I’m glad to have you. It’s been a little while, but you are a celebrity podcast guest, and here we are again.

    Dr. Stephanie: Fantastic.

    Dr. Sharp: I’m excited to chat with you. As always, we have a brand new topic today to talk about that is going to be super relevant and interesting for folks. But I’ll start with the question that I’ll always start with, which is, especially for you, out of all the things that you could be spending your time and energy on, why are we talking about this today? Why is this important?

    Dr. Stephanie: I would like to start by saying that this podcast is [00:05:00] entirely your fault. I was talking with you and some clinicians recently, and you threw out a question right at the very end where you pointed out that with our reports, we’ve moved to report revision 1.0, where people have made their reports shorter, more streamlined, more concise, getting to the point.

    And then you said, so what do reports 2.0 look like? What’s the next revolution there? And I thought that was such an amazing question, and I stumbled through an answer about my hope for reports that focus less on what I sometimes call the DNA; the diagnosis, the numbers, and the accommodations.

    I think I fumbled through some sort of answer about how I hope to focus more on personalized explanations, real strengths of the child or the young person, and a [00:06:00] more expansive view of the client’s capacities, but I’m not sure I articulated it very well because the other clinicians that we were sitting with gave me a little bit of a look as if I’d suggested replacing reports with hamster wheels full of radishes or something like that.

    And I thought, based on their expression, which said something like, Oh, that’s really interesting, Stephanie, but what on earth are you talking about? It would be fun to come on your podcast and see if I could articulate that a little bit. So today is my impassioned plea for reports that might focus a little bit less on the diagnosis, the numbers, and the accommodations, and a little bit more on the other things that we have to offer.

    Dr. Sharp: This sounds great, and if this is my fault, then I will take responsibility for that because it’s important to chat about.

    [00:07:00] A lot of this is born from my experience, I think we’ll touch on some of these things, but my experience of writing reports that are “strengths-based” but not knowing exactly what that means, and feeling like it’s a little bit empty, and then getting the feedback sessions and telling people many things that they probably already know or maybe are already doing. I feel like we could do better. I feel like there’s something more.

    Dr. Stephanie: Exactly. I think most experienced clinicians get to that point where you feel like there must be something more than this, but the path towards it is a little bit unclear.

    Dr. Sharp: Hopefully, we’re going to illuminate that path a bit here today.

    Dr. Stephanie: We’ll see what we could do about that. I wondered if maybe I could start with a story. Would that be all right?

    Dr. Sharp: Of course.

    Dr. Stephanie: So I want to tell you a story about the late 1940s and the [00:08:00] U.S. Air Force. I got this story from the Todd Rose book, The End of Average. I don’t know if people have read that. It’s a very quick read if anyone wants to.

    He opens with this great story: the U.S. Air Force has a problem in the 1940s, which is that their pilots cannot keep the planes in the sky. They keep crashing. On the worst day, 17 pilots crashed.

    They were having a problem with this, and they couldn’t figure it out because it wasn’t the planes that were malfunctioning. So they kept saying, oh gosh, it must be pilot error. The pilots very strongly disagreed with that and thought, no, it’s definitely not pilot error, but no one could figure out what the problem was.

    Eventually, attention turned to the design of the cockpit. The way the cockpit had been designed in 1926 was that they took 100 Air Force pilots, all men, of course. The idea of a [00:09:00] woman or non-binary pilot did not exist at that time.

    They took 100 men, they measured them all in a million different dimensions. They averaged them and they made the cockpit for the standard pilot. They were feeling like this wasn’t working. So they were wondering, gosh, have pilots gotten bigger in the last 20 years? What has happened that the cockpit doesn’t seem to be working for these pilots?

    So in 1950, at one particular base, they decided to get a huge sample of 4,063 Air Force pilots. They measured them in 140 different dimensions to see how many of them were average, how many of them could fit into the cockpit design.

    They picked the 10 most important dimensions, and they decided to see how many of these pilots were [00:10:00] average on these 10 dimensions. What percentage of their pilots do you think they found were average?

    Dr. Sharp: I don’t know. 17%?

    Dr. Stephanie: They were expecting it to be quite a bit. Their most pessimistic member of their testing brigade thought that it would be maybe about 30% of pilots, and it turned out to be 0. They didn’t find a single pilot who fit the standard cockpit.

    In fact, when they reduced it down to only three dimensions, to see how many of their pilots were average in all three dimensions, only 3.5% of their pilots fit into the standard cockpit. By designing a cockpit for the average pilot, they’d actually designed something for no one.

    Dr. Sharp: Okay. [00:11:00] How do we connect this to what we do as psychologists?

    Dr. Stephanie: So when I read this story, I started thinking about what I would do based on my training. What I would do based on my training is pretty simple. I would bring in all of the pilots that crashed. I would get a lot of numbers and data about all of them.

    I would pretty much ignore all of the times that they had managed to stay in the sky and only focus on the 0.1% of times when they crashed. I really wouldn’t be paying attention to, wait, why isn’t this problem worse? How on earth are you managing to stay in the sky so often? Instead, I’d label every time the pilot wasn’t meeting the cockpit’s expectations as a diagnosis.

    Then, to be kind, I would probably label every time the pilot’s arms were long enough to meet the [00:12:00] controls as a strength, even if that didn’t help the pilot fly, or even if it wasn’t really something that he had any control over. And then I would probably give a list of accommodations, honestly, probably not paying that much attention to whether or not they were realistic, feasible, or helpful in that situation. Maybe I’d recommend extra time for this pilot to reach the controls, and then I would write that all up in a fancy report.

    I think that’s the kind of report that you have figured out feels empty. That focus on DNA; that focus on the diagnosis, the numbers, and the accommodations, and not really on a more expansive view of how that pilot fits into the cockpit.

    So what if instead, we had brought [00:13:00] in the pilots and said, what is this experience of flying in this cockpit like for you? Where are your pain points? Where are your successes? What is your expectation of flying in this cockpit? How does it compare with other planes you’ve been in, with your history, with what you thought you were going to get? What’s the experience like? What are your expectations like?

    And then what are the exceptions? When are you able to make it work somehow? Why isn’t this problem worse? What other planes have you been able to fly that worked out fine? And then asked about maybe doing some experiments. What do you think or feel that you might need? Can we try that? Does it work?

    Or hey, the last 10 guys all said this works for them. [00:14:00] Are you open to trying that? Let’s see if it works. Let’s do an experiment right now to see if this accommodation is useful. So what I want to talk about is assessment that focuses on experiences, expectations, exceptions, and experiments.

    Dr. Sharp: Fantastic. I’m excited. This sounds great. That’s a great story. I had no idea where you were going, and then all of a sudden it became clear. Yes.

    Dr. Stephanie: Why are we talking about the Air Force? Well, I’m getting around to it eventually.

    Dr. Sharp: No, I’m right with you. You always have these cute little shortcuts and things, this DNA approach; diagnosis, numbers and accommodations. The story illustrated some of the limitations there, but are there any others to talk about, just to put a fine [00:15:00] point on it?

    Dr. Stephanie: Absolutely. I do want to emphasize that the diagnosis, the numbers, and the accommodations are going to be important. I’m not suggesting you write a report that never includes a diagnosis that’s appropriate, or that doesn’t have any numbers in it, or that doesn’t make accommodations. I want to make sure that I’m clear about that.

    I want to talk about ways to reduce the emphasis on those things because they do come with some concerns. They do come with some problems and they are leaving us feeling a little bit empty. I want to talk first about some of the limitations of diagnosis and numbers, and then maybe save some of the limitations of accommodations for later, because I think that’s maybe my most controversial point.

    Let’s talk about some of the limitations of diagnosis. I think you were hinting at some of those when you were talking about your experience with evaluations. I think [00:16:00] everyone who’s been doing evaluations for a while figures out pretty quickly that with at least some clients, the diagnosis is not the main issue.

    For example, you are asked to evaluate a child who is in the middle of a crisis. The diagnosis is clearly not important, given that we need to put out this fire that’s right in front of us. Or you’re asked to give a diagnosis of a learning disability on a 3-year-old or a 4-year-old, and you’re like, that’s not possible.

    Or they’re already getting all of the services and doing everything that you would in the case of that diagnosis, and so a diagnosis won’t change anything. Or you’re being asked to give a diagnosis to a child, and you can tell that it’s really a family systems issue. The wrong person is in your office, or [00:17:00] there’s a systemic issue, a person who’s facing marginalization or some other type of systemic concern that makes a diagnosis for the individual inappropriate.

    Or a diagnosis that’s statistically not very likely, a young adult who thinks that they have a diagnosis that’s usually diagnosed 20 years earlier, and often in a more extreme presentation. I sometimes refer to this as the tornado problem or the tornado presentation.

    And by that, I mean not all bad weather is a tornado. You can have really bad weather, and it can still be something that is distressing, that’s worthy of our attention, that’s worthy of sympathy, and it is worth trying to understand, predict, or plan [00:18:00] around. And still it isn’t necessarily a tornado.

    In the same way, you can have symptoms that are distressing, worthy of attention, understanding, sympathy, and planning, and that still don’t quite meet criteria for a diagnosis. It can sometimes feel uncomfortable when you want to tell the person that the diagnosis is probably going to be unlikely, but you’re not sure what else you would have to offer. Have you had any of those experiences?

    Dr. Sharp: Oh, sure. Yeah. I’ve had all those experiences at different points, and a lot of the last experience that you described here over the last 2 to 3 years. I would add another circumstance there, and I’m curious if you would lump this in with this discussion, but I’ve done many feedback sessions just over the [00:19:00] last 10 days or so, for varying reasons.

    In a lot of those feedback sessions, I’m finding myself saying the same thing over and over, which is, this doesn’t fit neatly into a diagnostic box, so let’s muddle our way through it. I’ll try to describe it in a way that captures something meaningful. So there’s, I don’t know what you call it, the inaccuracy or imprecision of our diagnostic system, which doesn’t, in my mind, do a great job capturing different presentations.

    Dr. Stephanie: Absolutely. That is what I sometimes call the sunset problem, and I was going to transition to talking about that. You’re pointing out that the diagnoses that we have available to us are heterogeneous in their presentation. They have low reliability, low [00:20:00] validity.

    We all know this if we’ve been in the field for a while, but the DSM is a grab bag of all different types of diagnoses. Allen Frances, who was the chair of the DSM-IV Task Force, points out how some of our diagnoses are lifelong, while others are transient. Some focus on emotions, some on behavior, some on social relationships. Some are things you develop early. Some are things you only develop very late in life. Some are about inner misery; some are about bad behavior.

    They’re this huge grab bag of things that we can notice about a person, and because of that, they don’t have a lot of ability to stand up to scientific scrutiny. We don’t have very good inter-rater agreement. There are a lot of different ways a person can meet a [00:21:00] diagnosis. Two people can meet the diagnosis and even have identical symptoms, and still have different causes for why they’re experiencing that diagnosis.

    Co-occurrence is more the rule than the exception. Trying to figure out where one ends and the other begins is really difficult. What I’m speaking about here is that diagnoses are nebulous and hard to distinguish.

    The author of that book that I got the story from at the beginning, Todd Rose, talks about how diagnoses are a problem because they’re one-dimensional. They focus on only one aspect or one dimension of a person, but don’t really embrace their dynamism and complexity.

    They’re also essentialist. They assume that the same person is going to show up the same way in [00:22:00] every situation: that if you’re an introvert, you’re going to be an introvert in every single situation, and if you have social anxiety, you’re going to be anxious in every social situation that you’re in, when that may not be the case.

    Even when we frame them as strengths, even if we say your diagnosis is actually your strongest point, or a superpower or something like that, they still embrace a normativeness. They still assume there’s a normal way to be, a normal pathway to take, a normal way that skills are developed, or that personality unfolds.

    Any of us who have worked with patients for a long time realize none of these things really hold. We are dynamic, contextual, and idiosyncratic. I think it makes about as much sense to talk about a normal person as it does to talk about a normal [00:23:00] sunset.

    I’m stealing the sunset idea from Carl Rogers. Some of the young folks listening might not know who Carl Rogers is, which is a little bit horrifying, but he is a very famous psychologist. Go read some of his work. Look, can I read you a quote from him?

    Dr. Sharp: Sure.

    Dr. Stephanie: So he says, “People are just as wonderful as sunsets if we let them be. Perhaps the reason we can truly appreciate a sunset is that we cannot control it. When I look at a sunset as I did the other evening, I don’t find myself saying, Oh, soften the orange a little on the right-hand corner. Put a little bit more purple on the base, and add a little bit more pink to the cloud color. I don’t do that. I don’t try to control the sunset. I watch it with awe as it unfolds, and I like myself best when I can appreciate people in that same [00:24:00] way.”

    Dr. Sharp: That’s beautiful.

    Dr. Stephanie: Isn’t it beautiful?

    Dr. Sharp: It is.

    Dr. Stephanie: And when I think about diagnosis, sometimes I can feel as if I’m trying to diagnose a sunset. I can find myself saying, Ooh, you see that subtle shift from saffron to burnt orange right there, what do I call that? Where do I decide that one color ends and the other begins? How much of the sunset is due to the sun versus the cloud cover versus the atmospheric conditions versus the landscape?

    If I went tomorrow, even to the same place at the same time, would the sunset look the same? It wouldn’t, because we’re nebulous. We’re dynamic. We’re contextual. Even if you can narrow your focus down onto a specific [00:25:00] feature that you want to look at, even if you and I both look at this specific spot in the sky, 30 degrees up, and focus on it, we still run into the limits of labels, and we risk getting distracted by definition and disagreement.

    I feel like you and I might get into an argument about blue versus green when we live in this world of turquoises and aquamarines. I feel like we’re trying to diagnose sunsets a lot of the time.

    Dr. Sharp: That’s such a good example. That resonates, especially here recently. Maybe it’s just timing, coincidence or something, but I’ve had lots of these cases where I just feel like we’re trying to paint individuals or kids into specific boxes that just don’t fit.

    Dr. Stephanie: And it might be [00:26:00] that the world is getting more complex or the children that we’re seeing are more complex. It might also be that we’re becoming more complex as clinicians, as we see more and more individuals, start thinking more about their problems, and start hitting the limits of our testing, for example.

    What can happen, though, I think (and you can let me know if this is happening for you; it’s certainly happening to a lot of the people that I speak with in my consultation world) is that when we’re trying to argue over blue versus green in a world that’s full of turquoises and aquamarines, it narrows our vision and it stresses us out. We start doing what I call fear-based decision making.

    We start worrying that we’re going to get the diagnosis wrong, and that can take over our entire evaluation. Either we start worrying that we [00:27:00] don’t know enough or we feel that we have some imposter syndrome. Isn’t there some other continuing education class I could take, a weekend seminar, a test I could learn, a book I could read, a person I could consult?

    Or we start over-relying on our own judgment. We start feeling contempt for our colleagues who never see the blue when we see it everywhere and are constantly calling things green when we’re sure that it’s never green. Or we get into this place where we can’t even make a decision. You do 12 hours of testing, you spend days with a child or a young person, and you still can’t make a decision, or you’ve made your decision in the first minute, and nothing could dissuade you from that.

    That way of talking about what fear-based decision making does to us reminds me of a quote from the poet James Richardson, who says there are two kinds of people in the [00:28:00] world, and who among us is not both. And I often feel that way. I feel like I’m both of these people. I have imposter syndrome because I’m worried that I won’t get the diagnosis right, but also a little bit of judgment about other people who are seeing the client and saying that they see the color green when I see the color blue.

    Dr. Sharp: Yes. Two sides of the same coin.

    Dr. Stephanie: Exactly. I know once that worrying starts, it just takes over. We’re concerned that we’re going to miss it or get it wrong, and we start chasing diagnoses, or chasing rare possibilities, or trying to confirm our hypothesis, no matter how much of the data doesn’t seem to be pointing in that direction.

    We start thinking about what our colleagues would think about our diagnostic decision [00:29:00] making rather than the person right in front of us. There’s always someone who would diagnose it differently or diagnose a case that you said was aquamarine as a clear case of blue, or someone who will make an accommodation recommendation that you didn’t even think of or that you wouldn’t have thought was appropriate. And this makes us chase after shadows and miss the person who is right in front of us.

    Dr. Sharp: Well said. I would imagine that we’ve sufficiently raised the anxiety level of everyone listening through the roof at this point. We could reliably diagnose panic in nearly all of the audience at this point. And so if we’re overemphasizing diagnosis, and it’s fraught with all these concerns, I wonder where we go from here.

    [00:30:00] I’m reflecting on our conversation, Stephanie, and I’m often asking myself this question as we talk, which is, what am I supposed to be doing with my life then? Because you raise these questions and these concerns with the work that we do, that makes me examine what we’re doing very closely.

    Dr. Stephanie: I do want to reassure everyone who’s listening that if you resonate with some of these ideas or musings, you are certainly not alone. If you’re not resonating with them, that’s fine. If you feel good about making your diagnoses, definitely continue to focus on that. If you’re starting to question it, so are a lot of people. These arguments about diagnoses are not new.

    There’s a whole new book out called Conversations in Critical Psychiatry [00:31:00] by Awais Aftab. He also has a great Substack called Psychiatry at the Margins, and it’s a whole bunch of his interviews with other thinkers in this field who are worried about the weight that we’re putting on diagnosis and some of the other features. So you can read that for solace and consolation.

    Dr. Sharp: Fantastic.

    Dr. Stephanie: But in terms of solving the problem, what do we do? If I’m not supposed to over-focus on diagnosis, what am I supposed to focus on? I think that the next thing that most of us turn to is, we think, oh, okay, the numbers are going to save me. I’ll focus less on the diagnosis and more on creating a blueprint or roadmap of the child that I get through this profile of numerical strengths and weaknesses.

    That this [00:32:00] will allow me to either see the diagnosis more clearly, have more backing for it, or even if I just have to give a not otherwise specified diagnosis or no diagnosis at all, at least I’ll have numbers to rely on. Is that something that you resonate with?

    Dr. Sharp: Oh, sure. Yeah. It’s interesting, I’m just coming off of a million internship interviews for next year. And this is something that comes up a lot in internship interviews because I think a reason that a lot of us are drawn to testing is that it has the illusion of being more concrete, “analytical,” and data-driven. And the numbers are a comfort; sometimes a false comfort.

    Dr. Stephanie: A comfort, exactly. I sometimes talk about that as [00:33:00] the psychology-assessor self-selection problem: you take a lot of people who want to get an A+ on things, and then promise them that the answer is really in there, and that if you get enough numbers and tweak them in enough different ways, you’ll be able to find it.

    And then many of us start realizing that the comfort, the concreteness that was promised by numbers, is a bit of an illusion. And specifically, I’m going to talk about three quick shortcomings with numbers. I think most people who are still listening, who haven’t turned this off in horror, will recognize some of these illusions, which is that numbers give us the illusion of inevitability, the illusion of impartiality, and the illusion of importance.

    And by illusion of inevitability, I mean we often treat our numbers as if they [00:34:00] are stable, reliable, and will always be the same. That if you test a child’s fluid reasoning right now that you got a good read on it and that it will be similar in the future, and that if you subtract his fluid reasoning weakness from his Full Scale IQ strength, you’ve learned something important and stable about him.

    Unfortunately, the literature is not suggesting that this is the case. For example, I’m just going to pick one study here, Watkins, Canivez, Dombrowski, McGill, Pritchard, Holingue, and Jacobson (2022), but you can find lots of others if you go to that one and follow their work. They looked at, for example, WISC-V scores in a clinical [00:35:00] sample.

    What they found is that you might be able to use the Full Scale IQ, the VCI, and the VSI for clinical decision-making purposes. They might be reliable enough over two years’ time to make clinical decisions, but the other indexes (fluid reasoning, working memory, processing speed), the subtest scores, and the scatter that you find, even if it’s really unusual scatter, are not consistent.

    It does not hold over time. You won’t get the same score the next time. You won’t necessarily find the same discrepancy the next time. And that feeling of permanence, solidity, and inevitability that the numbers were suggesting probably is an illusion.

    But I think [00:36:00] more importantly, you start realizing that even if the numbers were right, is that really important? Is it useful? Does it give me everything that I want to know about this person? It feels objective and scientific, but is that the same thing as useful to the family or important, or reflecting their lived experience?

    I saw a headline in a Forbes article that I thought was amazing, which said, storytelling beats statistics, but only 100% of the time.

    Dr. Sharp: That’s great.

    Dr. Stephanie: Which is because we can’t understand numbers without context. If you talk about a 20% chance of rain tomorrow, is that a high or a low chance of rain? And that same 20%, if I’m saying there’s a 20% chance in the next year [00:37:00] that a tornado is going to hit your house, is that 20% now high or low?

    Dr. Sharp: Sure. It’s such a good point. There are a lot of layers.

    Dr. Stephanie: There are so many layers. Once you get that information, it doesn’t necessarily translate immediately to insight or the person’s priorities or their goals. It doesn’t necessarily lead to anything important about the person that you might want to know. It starts becoming a little bit of the tail wagging the dog. Your test stores can start telling you, Oh, because you can do this test, it’s important to measure, or because you got a high score, it’s automatically a strength.

    Can I give an example here? I’m asking specifically because I’d like to use you as an [00:38:00] example.

    Dr. Sharp: What? Sure.

    Dr. Stephanie: Okay. Because you and I have known each other in some capacity for probably 10 years now.

    Dr. Sharp: Yeah. Getting real close.

    Dr. Stephanie: In that entire decade, I have never, not once, wondered what your visual short-term memory score is. Not even once. Never.

    Dr. Sharp: Not even once.

    Dr. Stephanie: Never crossed my mind. I’ve never wondered if your expressive vocabulary is high average, or exceptional. Never once crossed my mind.

    Dr. Sharp: Okay.

    Dr. Stephanie: And by offering us tests that say we can measure those things, we start measuring those things. We start being like those researchers who are measuring the cockpit and the pilot in 140 different [00:39:00] dimensions, none of which were useful to understanding the problem, and we can start saying that a high score is a strength, whether or not the person can do anything about it, whether or not it helps them, whether or not they even understand what you mean, whether or not it’s even really a thing, and we can get misled by that.

    And so, as a little bit of a teaser to where I’m going, I want to give some examples of the types of things I’d rather be able to say about someone like you. For example, if you came to my office and I was thinking about your experiences, your expectations, your exceptions, and the experiments, those four E’s. I’m going to take a guess at some of these things, so we’re not going to hold me to all of these being perfect, but I’m just going to give some [00:40:00] examples of things that I might say.

    So I might think about all those stories that I know about you, of everything that you have started: the podcast, the testing Facebook page, some report writing software, the crafted retreat in the summer, all of those things, and say that one of your dynamic trade-offs, one of the real strengths that you have, is that in the face of uncertainty, you favor taking action. You’ll do something even when the outcome is not guaranteed, even if you then have to retreat, reconfigure, and figure it out when things don’t work out right.

    Dr. Sharp: Sure. Okay. I can get on board with this.

    Dr. Stephanie: Okay. And then I’d like to talk about a core belief that you might have, and I know this from your coaching or from conversations that I’ve [00:41:00] heard you have on the podcast: in most situations, I think you truly believe there are small tweaks that will make things better. That’s a core, fundamental belief that you carry around.

    Dr. Sharp: Yeah, I think that’s probably true.

    Dr. Stephanie: And then maybe talk about some of your coping patterns, like the focus you’ve talked about on previous podcasts on exercise, nutrition, a healthy lifestyle that includes breaks. And then here’s a guess at a coping pattern that you might fall into under stress, which is maybe overfocusing on optimizing, overfocusing on improving. You’re nodding.

    Dr. Sharp: Oh yeah.

    Dr. Stephanie: Okay. And then maybe be able to talk about some of your core values, like making people feel heard and connecting people, getting them talking with each other. [00:42:00] And then also some of the resources that you have that are not even necessarily centered in your own person, but that you have access to.

    So, from attending one of your crafted retreats, for example, I know that you have access to your spouse’s amazing group facilitation ability and her incredible aesthetic eye for colors, websites, and things like that, right?

    Dr. Sharp: Sure.

    Dr. Stephanie: So instead of getting a sense of your numbers, your score on a visual short-term memory task, and an expressive vocabulary task and trying to assign some sort of static label, I’m talking about trying to think about how we get more personalized insights about the person that feel like real strengths, that maybe they could then rely on or understand about themselves as they tackle the [00:43:00] challenges they’re facing or the challenges that they’re going to face in the time ahead.

    Dr. Sharp: I like this question. What are your thoughts on the overlap between the two? Is there research, we’ll just say research as a shortcut, or some support for the idea that these cognitive abilities we can measure do actually flow into “strengths,” or some of these more personalized qualities and attributes that you’ve outlined?

    Let’s take a break to hear from a featured partner.

    Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on trustpilot.com and Google, which makes them the number one rated Electronic Health Record system available for mental health folks today. They make billing, [00:44:00] scheduling, note-taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. 

    For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week, so you can actually talk to a real person in a timely manner.

    If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free. No strings attached. Check it out and see why everyone is switching to TherapyNotes.

    Our friends at PAR have released the NEO Inventories Normative Update. The NEO [00:45:00] Inventories measure the five major dimensions of personality and the most important facets that define each. Now with an updated normative sample that’s more representative of the current U.S. population and fewer components for easier purchasing. Visit parinc.com/neo.

    Let’s get back to the podcast.

    Dr. Stephanie: Absolutely. This is a circle that a lot of people are trying to square: how to map the things that we naturally do in assessment onto making meaning about how we understand the person.

    One of the probably most notable examples would be the personalized medicine field, personalized psychiatry, personalized assessment, where they’re trying to get a sense of the individual’s personal correlations, personal patterns, personal causes. So if people are interested in [00:46:00] that, you could look up personalized assessments.

    People are trying to look at ways to use the test data to inform about things that are meaningful. Certainly, people are trying to develop new tests or new assessments that look at some of these other factors. So I have resources, if anyone wants them, on HiTOP-friendly assessment measures, or ways to measure values scientifically, or ways to measure core beliefs scientifically, things like that, which we could talk about if we have time at the end, or that I can point people to.

    But people are trying to look at this because I think what you’re hitting on is that we do still need the numbers. [00:47:00] I’m not suggesting you just go into an assessment with some vibes and come out with no numbers, but that we put numbers a little bit more in their place. Even though we often use testing and assessment as synonyms, we are not technicians; we are consultants who gather and analyze a lot of information about a person through the interview, the record review, our observations, and other data: the videos that the parent sends you of their child at home, or homework that you assign to monitor something between testing sessions 1 and 2. We have a lot of data other than just the testing, and we can [00:48:00] apply our principles of scientific assessment and analysis to it; it doesn’t all have to be numbers.

    We are not technicians, and more importantly, we’re not just an expensive substitute for artificial intelligence. We don’t want to reduce testing to the point where AI could do a better job of it than we can. We don’t mind AI helping write our reports, but if AI could just get some numbers about a person and then spit out a diagnosis and a templated list of accommodations, that reduces our valuable role in helping people.

    Dr. Sharp: You’re right. Just reflecting on that, but that’s another can of worms to open.

    Dr. Stephanie: Isn’t it?

    Dr. Sharp: But for now, we’ll leave it closed, and I’ll just say that I agree. [00:49:00] I think the context and these soft skills, so to speak, are very important. But I would love to talk about how we operationalize this. You’re starting to dip into that.

    Dr. Stephanie: What I am thinking about here is returning to those four Es. Returning to assessing people’s experiences, their expectations, and their exceptions, and then trying to do some experiments.

    And so with our pilot, if you’ll recall, we asked, what was the experience of being in the plane like for you? How did it compare with your expectations? Well, we can do the same thing when the question is about a 3rd grader who struggles with completing homework, or a college student who is experiencing academic difficulties for the first time, or a teen who is struggling in social [00:50:00] situations, or a 12-year-old who’s physically aggressive at home, or a 23-year-old who’s having psychotic experiences.

    We’re getting information: What is that experience like for you? How does it compare with what you expected? What are the exceptions? What are the times when the problem’s not present or when things are going better? How does this compare to the past? What are some experiments that we could try right now to see if we can relieve some of this distress?

    And so I thought maybe we could talk about those four E’s in a little bit more detail and I can flesh out a little bit more of what I’m talking about.

    Dr. Sharp: Yeah, that sounds great.

    Dr. Stephanie: Okay. So when I’m thinking about experiences, I’m often trying to get a sense from the individual of more of their transdiagnostic experiences. I’m trying to [00:51:00] ask less about a specific symptom, especially if I have a specific diagnosis in mind that’s leading me down a path that might not be useful, and trying to ask more about symptoms and experiences that people have that cross diagnostic boundaries.

    So the way I do it in my mind is I think what’s going on with this person’s homeostatic layer? What’s going on with their social layer? What’s going on with their emotional layer? So by homeostatic …

    Dr. Sharp: Can I ask you real quick, what do you mean by…

    Dr. Stephanie: Absolutely.

    Dr. Sharp: Oh, you’re answering it. Go ahead.

    Dr. Stephanie: Did you just say homeostatic? Where are you going? Yes, thank you.

    Dr. Sharp: Do I have that? Where is that on me?

    Dr. Stephanie: Is that on my bingo card? I don’t remember. Exactly. So, the homeostatic layer. By that, I mean this: if you’re familiar with the Research Domain Criteria, RDoC, they have six different [00:52:00] domains that they assess as part of that framework. One of those domains is called Arousal and Regulatory Systems.

    And that’s basically what I’m talking about here. How sensitive is this person? What is their nervous system like? What upsets it? What signals danger? What makes them excited? How quickly do they return to baseline? Do they have control over that or is it just something that happens? Can they accept co-regulation from other people, like their parents? Just on a nervous system perspective, what’s going on with this individual?

    And then that social layer, again, RDoC calls this Systems for Social Processes. I’m thinking about what the person’s attachment is like. How much do they trust other people? How interested are they in status? How interested are [00:53:00] they in belonging to a group? How much do they monitor what other people think about them? What’s going on in that layer for that person?

    And then the emotional layer, which is what RDoC calls Negative Valence Systems and Positive Valence Systems, I’m thinking about what direction do this person’s characteristic emotions go in? How intense are they? How much emotional granularity does the person have? Can they name all these different emotions? Do they have the emotional vocabulary equivalent of aquamarine and turquoise, or do they only just know blue, or just a cold color? Can they understand their emotions on a really granular level? How well can they regulate those things?

    So I’m trying to get at those more [00:54:00] transdiagnostic experiences mostly, as you can see, through my interview. But I do want to point out that RDoC and HiTOP both have good lists of some of these transdiagnostic experiences on their websites. If you go to hitop-system.org, they have a whole list of measures that you can use to try and get at some of these things if you want numbers about them.

    And then, when I’m thinking about what RDoC calls the Cognitive Systems layer, or the sensorimotor systems layer, I’ve been framing it in my mind as developing processes or developing competencies. What skills does this person have to meet their goals?

    Specifically, I’m thinking about them in the Ross Greene model of lagging skills rather than as static truths about this individual. I’m trying to get a sense, from stories that I hear about the child, observations of the child during testing, her test scores, and work samples, of whether this person can meet the expectations that other people have for her, or that she places on herself, or some combination thereof.

    And these are going to be just your regular things that you assess during testing. You’re just thinking of them a little bit differently. So can this person develop an approach when faced with a novel problem? Does she feel confident in pulling up information that she already knows and applying it to this situation? Can she direct her attention to what she wants to focus on? Can she control her impulses and delay [00:56:00] gratification? Things like that? In the service of meeting her goals, can she do these things?

    I’ve also really gotten into trying to think about the test scores less as static abilities and more as trade-offs. A lot of the things that we measure, we talk about as if they’re unidimensional, but they aren’t necessarily. For example, probably the most famous trade-off in neuropsychology is speed versus accuracy. You’re faced with a task: you have to draw as many strange wiggles in two minutes as you can. Do you do it as fast as you can, or as accurately as you can, or do you try to balance those?

    When we talk about processing speed, we tend to talk about it as if it’s a unitary skill. He can process [00:57:00] quickly or he can process slowly. It’s often a trade-off. For example, gifted individuals are known to favor accuracy over speed.

    So, how many gifted kids have you tested where their processing speed score is their lowest score because they try to be accurate during that? How many reports have you read that have called that a weakness or a disability when it’s a trade-off, a conscious way that the child is trying to solve a problem?

    Other examples would be focusing on the big picture versus focusing on details; approaching sources of a problem versus avoiding them; getting along versus getting ahead; using spotlight attention, where you focus your attention [00:58:00] all on one thing, versus what Alison Gopnik calls lantern consciousness, where your attention is more broad and diffuse and you’re taking everything in.

    Whether you persevere or whether you switch gears and try a new strategy, all of those would be more trade-offs. And here I’m going to stop and shamelessly plug that I’m going to be talking about these trade-offs, as informed by functional neuroanatomy, at the AAPdN Virtual Conference on April 25th.

    Dr. Sharp: Cool.

    Dr. Stephanie: I’m going to use vignettes from the game show Taskmaster. Have you seen that game show?

    Dr. Sharp: I haven’t.

    Dr. Stephanie: Oh gosh, you have to watch it. It’s amazing. It’s all on YouTube. Anyway, I’m going to use vignettes from that to show in real time people making these trade-offs to try and solve problems that they’re faced with.

    Dr. Sharp: Awesome. That sounds super fun. We’ll put the link in the show notes.

    [00:59:00] Dr. Stephanie: So these are some examples of ways that I focus on what the experience is like for the person. I also try to get a sense of their characteristic problem-solving strategies. But I want to move now to expectations and how we get a sense of a person’s expectations.

    Dr. Sharp: That sounds good. So, can I clarify just for a second before we dive into the expectations?

    Dr. Stephanie: Absolutely.

    Dr. Sharp: I get the sense that in learning about someone’s experiences, a lot of this happens through the interview, but then, especially in the context of these trade-offs, you’re looking at how that might happen during testing as well: how they’re approaching problems, speed versus accuracy, and so forth. Is that fair?

    Dr. Stephanie: Exactly. That’s exactly right. I made a bit of an outline for our discussion today, as you might be able to tell [01:00:00] if you’re listening. I wasn’t sure where to put in the, okay, but where do you do this? What are the practical, concrete things? Even though, having known you for 10 years, I knew that you would ask, okay, but how do you concretely do this?

    Dr. Sharp: I’m nothing if not concrete.

    Dr. Stephanie: It’s one of your core beliefs that you believe that making things more concrete helps people have practical, actionable takeaways for leaving this. And so you’re right that I do a lot of these things through trying to hear stories.

    I try to get actual examples, actual data about experiments that the person’s done in their own life, or actually see it happen, if I can, or walk through an example of that, or talk with them after testing and ask what that was like for them. Ask if we could try it a different [01:01:00] way. See what happens if I provide more structure. See what happens if they stop and think about their problem-solving approach first. See what happens if they talk themselves through the task. Things like that.

    But a lot of it I’m getting through stories. I’m trying to hear what the person does when faced with ambiguity.

    The reason I focus on ambiguity is that it’s often where our personality, the things that we might have some control over, really tends to bubble up: the things we might have some leverage over, that we might be able to change, adjust, or receive help with.

    You’re reminding me of a story that I heard. This is from the book The Upside of Your Dark Side by Todd Kashdan and Robert Biswas-Diener. [01:02:00] It describes the best way to test who’s going to do well in the elite special forces. The way they do it is pretty simple. They tell all their recruits to show up with all of their gear on and just go for a jog. It’s down a remote road. There’s nowhere else around. They just tell them to go for a jog.

    The trick is they don’t know how long they’re jogging for. They have no idea how long this run is going to be. So do they sprint and try to beat everyone? Do they pace themselves because it might be 30 miles? Do they focus on themselves, retreat into themselves and try to pull on their own reserves of strength? Do they run with others and try to be encouraged by them? Do they try to show leadership skills and encourage the rest of the group?

    That ambiguous situation is how [01:03:00] they see what the person’s real strengths are, what the experience of running is like for them, what they draw on, what they do. And I try and get stories like that, but about 8-year-olds without having to put them in full gear.

    What did they do in this situation? What did they try? How did that work? Then what did you do? What else did you consider that didn’t work or that you wish you’d done? Things like that. Get those types of stories.

    Dr. Sharp: 8-year-olds on an ambiguous jog. Noted.

    Dr. Stephanie: That is what the new tagline for my assessment business is.

    Dr. Sharp: Perfect.

    Dr. Stephanie: That would be … I was going to say that would be terrible, but then I thought about some of the actual things that I make kids do, like the CPT.

    Dr. Sharp: It’s better than some of the things we make kids do.

    Dr. Stephanie: Maybe they’d [01:04:00] prefer it.

    Dr. Sharp: Yeah. Just go run.

    Dr. Stephanie: Exactly.

    Dr. Sharp: Nice. Okay.

    Dr. Stephanie: These are also ways that I’m trying to get at their experiences, their real strengths. It’s also how I’m trying to get at the exceptions to the rule. I’m trying to get at when the problem isn’t present. Those are real-life experiments.

    When you hear that a child is inattentive at home and not at school, you think, oh, one of the raters must be wrong, but the research says that’s not actually true. We are contextual. Different parts of ourselves show up in different situations.

    Bram and Peebles, in their book Psychological Testing That Matters (which I will never stop recommending; people who’ve heard me talk have heard me recommend it 1,000 times), [01:05:00] call those the conditions under which the person thrives, struggles, or has difficulties.

    I’ve also heard them called if-then signatures. If you’re in a situation by yourself, then you are more inattentive, and it’s harder for you to stay on task, but when you’re with other people, you have an easier time paying attention. We’re not collections of static abilities. We’re dynamic, strange webs of if-then signatures.

    I’m trying to get a sense of this person’s if-then signatures so that I can then start combining them with the trade-offs and write reports that say things like: in situations when he’s relying on external motivation [01:06:00] instead of internal motivation, he favors speed over accuracy. Or: when she’s with her peers, her fears of being overwhelmed by not being in control intensify, and in those situations she favors dominance-based problem-solving strategies (getting ahead) over warmth-based problem-solving strategies (getting along), things like that.

    Dr. Sharp: Yes.

    Dr. Stephanie: Does that make sense?

    Dr. Sharp: Sure.

    Dr. Stephanie: With expectations, I’m also trying to get a sense of the person’s values and what meaning they place on this, and, of course, the contextual factors: family, systemic, dynamic, and cultural. I’m giving those a little bit of short shrift because you’ve had so many guests on your podcast talk about all of those things, and I’m not going to say anything in [01:07:00] those areas that is necessarily going to be too new.

    So I’ll highlight a couple of things that I do that I think might be a little bit different. I’m trying to get a sense of the person’s underlying beliefs about the world: their core beliefs about themselves, their core beliefs about other people, their core beliefs about the world, and any of their central tensions or preoccupations.

    So do they think the world is safe? Do they think other people are dangerous? Do they think they’re a source of help? Do they think they’re fundamentally broken? Do they think that revealing any kind of shortcoming would be dangerous and unsettling, destabilizing for them? Some of those types of things.

    I borrowed those from the Psychodynamic Diagnostic Manual, edited by [01:08:00] Vittorio Lingiardi and Nancy McWilliams. The first chapter is worth the price of the book alone, so definitely read that. But I also borrowed from the UPenn research on primal beliefs. They have a scale you can use to measure your own primals if you’re wondering; it’s myprimals.com.

    And then I’m observing the kid during testing and trying to put it all together by using scientific thinking principles and making sure that I’m looking for repetition. I’m not just basing this off of one story or one test score. When I talked about your dynamic trade-off of being willing to take action in the face of uncertainty, I have a lot of stories about that to support that that’s something that you regularly do and have done and could draw on in the future.

    I’m looking for [01:09:00] convergence. I’m making sure that I’m not getting too far outside the data and drawing my own fanciful conclusions, but occasionally I’m emphasizing what Bram and Peebles call the singularity.

    Sometimes you hear a story that’s so unique, so interesting, that you don’t need more than one. You just hear that one story of what they did and have some interesting ideas about what their core beliefs are, what they value, or things like that.

    I want to be mindful here of the time and basically just emphasize this call to action of broadening our evaluations to really focus on those four E’s instead of the DNA, the diagnosis, numbers, and accommodations, and seeing if we can’t together come up with ideas of ways to make testing meaningful.

    I also just want to say that if anybody is still listening to this [01:10:00] and is interested in these ideas, please talk to me. Let me know that you’re out there, and if you want to collaborate on something, please let me know because this is basically the animating passion of my professional life right now.

    Dr. Sharp: Talk about a call to action, if anyone wants to join the animating passion of Stephanie’s professional life, the invitation is out there.

    Dr. Stephanie: That sounds like a great name for my autobiography.

    Dr. Sharp: Yeah. Seriously.

    Dr. Stephanie: Yes.

    Dr. Sharp: Write that down.

    Dr. Stephanie: Yeah.

    Dr. Sharp: This is so good. There’s so much that we could talk about here and so many paths we could go down. I know this is just scratching the surface and it has been good. I appreciate how you come at it from this philosophical or idea perspective, but then make it concrete. I love making things concrete.

    The measure for me of how helpful something is, is could I [01:11:00] use it tomorrow or this afternoon if I wanted to? And that’s 100% true with this material. I’m thinking, okay, how am I going to do my interview tomorrow differently? What might that look like? What can I tell my tech for the rest of the testing today? And that’s going to be super helpful. So I appreciate you being here.

    I’m going to try to bookmark or reserve some of your time to talk about rethinking recommendations because I feel like that is important and deserves a lot of time.

    Dr. Stephanie: I have so many thoughts on that in particular, and that’s the part that I was like, well, we’re running out of time, maybe I won’t necessarily talk about what to do instead of the accommodations. I am definitely thinking a lot about that as well because I think many of us feel like we get to the end of a really long assessment, and even if we have gotten those four E’s rather than just the DNA, and you feel like [01:12:00] you have mapped this child, that you understand him like a sunset, and then you still just have that list of 42 templated recommendations for ADHD, that’s all I’ve got. So I have a lot of thoughts on that and would be more than happy to talk with anybody about some of the things that I’m trying to do or trying to think about.

    Dr. Sharp: That sounds great. That’s so funny. It’s like this sunset needs preferential seating. That’s important.

    Dr. Stephanie: Somehow, it needs more time to unfold.

    Dr. Sharp: This sunset needs time and a quarter based on the processing speed. Yes. It’s a pleasure, as always. Thank you for being here and sharing your ideas. I always love hearing what you’re thinking about.

    Dr. Stephanie: Thanks so much for the time and for accidentally sparking this discussion in my brain.

    Dr. Sharp: Sure. Anytime. [01:13:00] All right, Stephanie. Until next time.

    All right, y’all. Thank you so much for tuning in to this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.

    If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.

    And if you’re a practice owner or aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. [01:14:00] You can sign up for a pre-group phone call, and we will chat and figure out if a group could be a good fit for you. Thanks so much.

    The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional psychological, psychiatric, or medical advice, diagnosis, or treatment.

    Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this [01:15:00] podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.


  • 505. Beyond Diagnosis, Numbers, and Accommodations w/ Dr. Stephanie Nelson


    In today’s episode, Dr. Stephanie Nelson returns to explore the evolution of evaluation reports, emphasizing the need for a shift from traditional diagnostic approaches to a more contextual understanding of clients. We discuss the limitations of relying heavily on diagnoses and numbers, and we try to broaden how assessment results and recommendations are considered. The conversation also highlights the need for a holistic view of individuals, considering their core beliefs and other factors, while addressing the practical implications of these insights in assessment and recommendations. As always, Stephanie brings a fresh perspective that will give you lots to think about and potentially change in your practice!

    BONUS: Stephanie is doing a talk about functional neuroanatomy at the upcoming AAPdN conference! There’s still time to register and catch this discussion, as well as many others.

    Cool Things Mentioned

    Featured Resources

    I am honored to partner with two AMAZING companies to help improve your testing practice!

    PAR is a longtime supporter of The Testing Psychologist podcast, and I will continue to bring you featured items from their catalog! Listen to each episode for specific product recommendations and links. Learn more at www.parinc.com.

    TherapyNotes is the leading EHR system for mental health practitioners. I’ve used TherapyNotes for over 10 years, and it just keeps getting better. Use this link and the promo code “testing” to get two free months and try it for yourself! www.therapynotes.com/testing

    The Testing Psychologist podcast is approved for CEUs!

    I’ve partnered with At Health to offer CE credits for podcast episodes! Visit this link to access current and past episodes available for CE credit. You can use code “TTP10” for a discount on ALL the course credits you purchase from At Health!

    About Dr. Stephanie Nelson

    Dr. Stephanie Nelson is a pediatric neuropsychologist who specializes in complex differential diagnosis. She is board certified in both clinical neuropsychology and pediatric neuropsychology, and she is currently the president of the American Board of Pediatric Neuropsychology. She has a private practice in Seattle, WA. She also has a consultation practice called The Peer Consult, through which she provides consultation to psychologists and neuropsychologists who specialize in pediatric assessment.


    Get in Touch

    About Dr. Jeremy Sharp

    I’m a licensed psychologist and Clinical Director at the Colorado Center for Assessment & Counseling, a private practice that I founded in 2009 and have grown to over 20 clinicians. I earned my undergraduate degree in Experimental Psychology from the University of South Carolina before getting my Master’s and PhD in Counseling Psychology from Colorado State University. These days, I specialize in psychological and neuropsychological evaluation with kids and adolescents.

    As the host of the Testing Psychologist Podcast, I provide private practice consulting for psychologists and other mental health professionals who want to start or grow psychological testing services in their practices. I live in Fort Collins, Colorado with my wife (also a therapist) and two young kids.

    Ready to grow your testing services? Click below to schedule a complimentary 30-minute pre-consulting call!

    [x_button shape="square" size="large" float="none" href="https://app.acuityscheduling.com/schedule.php?owner=13476033&appointmentType=2843005" target="blank" info="none" info_place="top" info_trigger="hover"]Schedule Your Call[/x_button]

  • 504. Virtual Adult Autism Assessment w/ Dr. Rachel Loftin from Prosper Health


    Would you rather read the transcript? Click here.

    Today, I welcome Dr. Rachel Loftin, Chief Clinical Officer at Prosper Health, to discuss the evolving landscape of online autism evaluation and therapy. We dive into Dr. Loftin’s career journey, the neurodivergence movement, the challenges of remote autism assessments, and the importance of maintaining clinical rigor while using digital tools. This conversation covers both the ethical and practical aspects of online autism services and provides a lot of takeaways on an increasingly popular topic! Here are a few topics that we discuss:

    • Dr. Loftin’s professional background and journey into autism evaluation
    • The rise of the neurodivergence movement and its impact on clinical practice
    • The challenges and benefits of online autism assessments
    • How Prosper Health conducts autism evaluations remotely with clinical fidelity
    • The role of masking and camouflaging in autism diagnosis
    • Strategies for maintaining rigor and reliability in remote psychological evaluations
    • The importance of community in therapy and the benefits of peer support through digital platforms
    • How psychologists can be involved in online autism assessment and therapy

    Cool Things Mentioned

    Featured Resources

    I am honored to partner with two AMAZING companies to help improve your testing practice!

    PAR is a longtime supporter of The Testing Psychologist podcast, and I will continue to bring you featured items from their catalog! Listen to each episode for specific product recommendations and links. Learn more at www.parinc.com.

    TherapyNotes is the leading EHR system for mental health practitioners. I’ve used TherapyNotes for over 10 years, and it just keeps getting better. Use this link and the promo code “testing” to get two free months and try it for yourself! www.therapynotes.com/testing

    The Testing Psychologist podcast is approved for CEUs!

    I’ve partnered with At Health to offer CE credits for podcast episodes! Visit this link to access current and past episodes available for CE credit. You can use code “TTP10” for a discount on ALL the course credits you purchase from At Health!

    About Dr. Rachel Loftin

    Rachel Loftin is the chief clinical officer of Prosper Health, a company that provides remote therapy and evaluation for neurodivergent adults. Rachel is a clinical psychologist and autism specialist based in Chicago. She has nearly 20 years of experience in clinical and academic settings.


    Get in Touch

    About Dr. Jeremy Sharp

    I’m a licensed psychologist and Clinical Director at the Colorado Center for Assessment & Counseling, a private practice that I founded in 2009 and have grown to over 20 clinicians. I earned my undergraduate degree in Experimental Psychology from the University of South Carolina before getting my Master’s and PhD in Counseling Psychology from Colorado State University. These days, I specialize in psychological and neuropsychological evaluation with kids and adolescents.

    As the host of the Testing Psychologist Podcast, I provide private practice consulting for psychologists and other mental health professionals who want to start or grow psychological testing services in their practices. I live in Fort Collins, Colorado with my wife (also a therapist) and two young kids.

    Ready to grow your testing services? Click below to schedule a complimentary 30-minute pre-consulting call!

    [x_button shape="square" size="large" float="none" href="https://app.acuityscheduling.com/schedule.php?owner=13476033&appointmentType=2843005" target="blank" info="none" info_place="top" info_trigger="hover"]Schedule Your Call[/x_button]

  • 504 Transcript

    [00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

    Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others, and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and enter the code “testing.”

    This podcast is brought to you by PAR.

    The Neuropsychological Assessment Battery offers the combined strengths of a flexible and fixed neuropsychological battery. Now, you can score any of the NAB’s six modules on PARiConnect, PAR’s online assessment platform. Visit parinc.com/nab.

    Hey, [00:01:00] folks. Welcome back to the podcast. I’m glad to be here with you and glad to be here with my guest, Dr. Rachel Loftin. Rachel is the Chief Clinical Officer at Prosper Health, a company that provides remote therapy and evaluation for neurodivergent adults. Rachel is a clinical psychologist and autism specialist based in Chicago, and she has over 20 years of experience in clinical and academic settings.

    Many of you may have heard of Prosper Health. Like you heard, they do therapy and evaluation for neurodivergent adults, specifically, potentially autistic adults. And so our conversation today centers around the virtual autism assessment space and what Rachel and her team are creating over at Prosper Health.

    I’ve heard some questions and discussion about Prosper, specifically on some other online platforms. I think this conversation will provide a lot of insight into [00:02:00] the process and give you a lot to think about in terms of the status of online assessment and best practices and things like that.

    So we get into Rachel’s history as an autism practitioner and researcher. We talk about Prosper’s process for online autism assessment and how to bridge that gap between in-person services and jumping to fully online. We talk about bias in assessment. We talk about neurodivergence as a construct and many things. So there’s a lot to take away from this conversation, as always. I hope that you enjoy it.

    If you’re a practice owner and would like a little support with your practice, I am happy to jump on and do a strategy session with you. These are a la carte hours where you can send me some information about what you’d like to work on, and we jump onto a call for an hour, dive deep, and figure out as many things as we can. I’ll [00:03:00] send you away with a little bit of homework and hopefully point you in the right direction. You can go to thetestingpsychologist.com/consulting to sign up and book your call right from the website.

    For now, let’s get to my conversation with Dr. Rachel Loftin.

    Rachel, hey, welcome to the podcast.

    Dr. Rachel: Thanks for having me, Jeremy.

    Dr. Sharp: Thanks for being here. We’ve got a lot to talk about. I think you’ve got a lot going on in your life that my audience is probably really interested in, so I’m grateful for your time. I know you likely have a full schedule these days, so thanks for being here.

    Dr. Rachel: I’m happy to be here and share information. I hope it’s helpful to your listeners.

    Dr. Sharp: Yeah, [00:04:00] absolutely. I have a lot of questions for you and plenty for us to talk about, but we’ll start with my familiar question, which is, why spend your time on this out of all the things that you could do with your life?

    Dr. Rachel: It’s always been important to me to look at where we are as a society, what we’re doing, how the world’s working, and figure out what I could do to make it better. I know that sounds incredibly naive, but if I look at my career at any point, that’s where I’m coming from.

    For a long time, I’ve been focused on autism. It was always my professional focus. I have autistic family members, and it’s always been something I cared about.

    I think neurodivergence more broadly is where we are in society right now and what we need to start looking at and thinking more about as we think about [00:05:00] doing the right thing for people, making sure that we have equity in society, making sure that everyone can access everything. It becomes increasingly important that we look at and think about neurodivergence. I think that’s what it means to me right now and why it’s so important to me to be putting my time and energy into it.

    Dr. Sharp: That’s powerful. We’re right in the middle of it. I feel like the neurodivergence movement was a slow-building wave. It started a long time ago, 20 or 25 years, something like that formally, but then it feels like over the last 4 or 5, maybe 6 years, it’s really gathered steam, and now we’re immersed in it.

    Dr. Rachel: I think it’s so interconnected with other identities and other ways that people express themselves. And so I think as [00:06:00] we make progress in those other areas, neurodivergence is something that gets more and more attention. But yeah, there’s momentum now that I haven’t felt in my 20+ years of doing this work professionally.

    Dr. Sharp: Yeah. So that might be a good place to start. I know I did the introduction in the beginning part of the podcast, but I think your history is important for people to have a really clear sense of, especially given what we’re going to talk about. So, tell me a little bit more about your experience with autism and what your career has looked like up to this point.

    Dr. Rachel: Sure. I’ll do that from a professional angle. I mentioned I have family members, and I can talk a little bit more about that, but professionally, I always knew I wanted to specialize in autism. So even going into my doctoral [00:07:00] program, that was already part of the objective.

    I also was interested in adolescent girls, and I had some other specific interests in things, but autism has always been a big drive for me. Initially, I did school psychology thinking that, as we thought back then, autistic people are young people and there are kids who need diagnoses.

    So, I started off with school psychology. I did that at Indiana University. They had a really strong special education program. I was able to work closely with people there, like Sam Obeng, who did great autism research, and I learned a lot and started there.

    And then as I did that work, I did school placements, and I did hospital placements and so forth, I realized, oh, I really like this clinical stuff. This is the draw for me. So, I chose an internship and a postdoc that were more clinically focused. I went to the Yale Child Study Center, where there was [00:08:00] amazing autism research happening.

    The way their system works is there’s only one autism fellow, or at least there was at the time. I don’t know how it’s changed. You get to be the autism fellow for 2 years. It’s just a real immersion. I loved it. I just soaked in whatever I could. That was amazing.

    And then from there I went to the University of Illinois, Chicago. They had just gotten a huge autism research grant. I worked there with Edwin Cook on his genetics research, and did a lot of clinical work and lots of other stuff. I love that too.

    But being a university, there was some shifting, shake-up, and funding issues. A whole bunch of things happened at once where that position wasn’t too secure. So I branched out into clinical work for a while because I didn’t know what was going to happen with [00:09:00] the university, and I had a baby, so I wasn’t taking any risks.

    I found that I also loved the clinical work, 100% clinical. It was a big shift from these research projects and things, but I really loved it. While I was doing that, I found out that Rush University here in Chicago was going to start an autism program and I had some connections there.

    People reached out to ask if I wanted to be part of that. So I was the Clinical Director when they started their autism program and helped get that started. I was there for a few years. I went back and did clinical work again for a little bit, and that’s where I was when Prosper Health reached out.

    It wasn’t Prosper Health yet. It was a young business guy looking to see if this was something to maybe go into. He asked me [00:10:00] if he could pick my brain as an autism specialist and someone who saw primarily young adults.

    I didn’t say this earlier, but the age range I focused on has just gotten older and older over time. When I was at Yale, I had to be in the baby clinic. So a lot of us, we’d see literal toddlers and babies. I’ve just only gone up through the age range over time and shifted focus based on where the needs were. We didn’t have clinicians with the expertise. So I became an adult specialist because that’s really what was needed, and I loved it.

    I talked to this business guy. I’m not a business person. I’ve never had a very business brain, but I really liked him. I liked talking to him. I liked the focus of what he wanted to do. And so in talking with him [00:11:00] over time, it’s evolved into my full-time thing. So what I do now full-time is I’m the Chief Clinical Officer at a business called Prosper Health.

    Dr. Sharp: Isn’t that wild? Did you ever think that you would end up in this position?

    Dr. Rachel: No. I would’ve thought the opposite. I would’ve thought that I would hate it. I didn’t want anything to do with for-profit business; it seemed probably inherently evil. And so I’ve had a huge shift in that.

    Dr. Sharp: Right. I have a lot of questions within that information that you shared, but I’ll go backward from the most recent. Tell folks what a Chief Clinical Officer does in a business like this and how a psychologist fits in.

    Dr. Rachel: It’s a startup, and one of the interesting things about startups is you don’t [00:12:00] usually get a very detailed job description because everything’s getting figured out as we go. So what I try to do is poke my nose into absolutely everything because I think there’s a clinical component to everything we do.

    I want to know about billing. I definitely want to know how people are interacting with our therapists and our psychologists. How are our operations people asking them for information, or what are they being asked to do or required to do? I want to know about client communications; how are people interacting with our clients?

    So I’ve got my nose in everything, but the most important aspect of my job is coming up with our approach for evaluation, our approach for therapy, setting those clinical protocols, making sure that what we’re doing is valid, making sure that our evaluations are also reliable, making sure that our therapy is effective.

    [00:13:00] I also oversee research projects. There’s so little research in adults who are autistic. You know that I want to contribute to that research literature and also expand it, and if it’s going to happen, it’s got to come from me because people on the business side don’t care. Not that they don’t care about the people; they understand the need for research, but they’re just not the people who are going to make it happen.

    I think that’s most of it, but there’s just an unlimited list of things that can happen at any time. So it’s a constant, every week, prioritizing and making sure that I’m putting my time and my attention on the things that are most important.

    Dr. Sharp: Sure. That makes sense. I appreciate you diving into some of the history. I did that on purpose just to set a little bit of context because I’m guessing some folks are probably looking at this conversation through [00:14:00] a little bit of a skeptical lens, like, oh, here’s this online autism evaluation and therapy startup. What are they doing? Can this happen online? What’s the clinical fidelity here? So I wanted to outline all that to let people know that you do have the background and the chops, so to speak, to come at this business in the right way.

    Dr. Rachel: Like I said earlier, at a university somewhere still working in a hospital or whatever, I would have that same attitude about anything that popped up. I think there are a number of really terrible online evaluation options available where people are just given a bunch of rating scales, maybe talked to for 20 minutes, and told they’re autistic or they’re not autistic.

    They skew badly in both directions. Some of them are just [00:15:00] giving everybody an autism diagnosis across the board. Some of them are going by the old school, you looked me in the eye, there’s no way you’re autistic, and not diagnosing anybody. And so we’ve got a bimodal distribution of evaluations out there.

    And that was part of what motivated me to want to come up with something because it’s already happening. We’re not going to roll back technology. We’re not going to roll back people wanting remote evaluations. We’re not going to reduce the need in remote areas. We’re not going to shorten the wait list in the best places. So we’ve got to figure out how are we going to help these people, and how are we going to do it soon?

    I thought, well, I’d like to be involved then to make sure valid assessments are happening, that they’re done reliably, that there’s a real quality to it. So that was what motivated me to get in the game.

    I think we all already knew that you can do decent remote therapy, but there was that question of, can [00:16:00] these evaluations happen and be good? I wanted to be involved in that and make sure that they were good if they were happening.

    Dr. Sharp: It’s a good way to think of it. You’re either part of the problem or part of the solution. If it’s going to be happening, people are going to be doing it. I get at least five questions a week about how to do remote evaluation in private practice. So people are going to be doing it. You made the choice to “be part of the solution” and actually shape it on a little bigger scale.

    Dr. Rachel: I can’t remember what document it was, but APA put out something at some point about using technology in evaluation, and they used the word unavoidable. I think that’s right. It’s unavoidable. It’s 2025. We’re going to use technology, so let’s just make sure we’re doing it in smart ways, in ways that help the client and don’t cause harm.

    Dr. Sharp: That makes sense. [00:17:00] Why don’t you tell us a little about Prosper, what y’all do, what the business is, just for folks who may be a little unclear.

    Dr. Rachel: Sure. We do autism evaluation and we do therapy. The diagnostic evaluations are really only, yes, no, this is autism. This isn’t autism. It doesn’t go any further.

    Of course, nearly everybody coming through has other mental health diagnoses, ADHD, or things happening. We don’t do formal diagnosis of those at this point, but we make recommendations for things to work on in therapy or places to go if additional evaluation is needed.

    And then we do therapy. The therapy is primarily CBT with some DBT elements, some ACT elements, and then a lot that we just learn from what autistic people tell you works. And even from the special education literature, we know [00:18:00] visual supports are going to be useful for a lot of people. We know having a lot of structure and organization to the sessions is going to be helpful for a lot of people. So what can we incorporate that’s just known in general about autism?

    Dr. Sharp: Right. You talked about having an affirming approach when we started this conversation, with your investment in neurodivergence and supporting those folks. And so I’m curious how you are threading the needle with therapy for autistic folks and balancing that whole, hey, we’re not trying to change you necessarily, but here are some strategies that might help you “assimilate” into [00:18:00] society. I don’t know, that’s maybe not the way you’d phrase it, but I think you see where I’m going with this question. How does that all play out?

    Dr. Rachel: For starters, our therapy is never focused on assimilation or trying to mask signs of autism or anything like [00:19:00] that. The focus is, what mental health things are you dealing with?

    Sometimes, mental health issues can arise just from being an autistic person in a world that’s not set up for autistic people. How can we help you adapt to this identity, which might be new for you? How can we help you accommodate yourself or ask for the right accommodations, advocate for yourself to get what it is that you need?

    But the CBT I talked about, often we’re helping people with the anxiety and the depression that they’re dealing with. So it’s not so much targeting core signs of autism, but rather helping people with the co-occurring stuff that happens.

    And then sometimes it is a little more directly related to autism. For instance, somebody with sensory sensitivities that interfere with things they might want to be doing and goals that they might have for themselves. So how can we help them [00:20:00] problem solve, work through, and meet their own goals?

    We’re very clear in training our therapists to always be very client-centered and client-focused in the goals. I think there’s a really bad history of autistic people, people with developmental delays, being beholden to the goals that their parents want them to have, or a teacher wants them to have, and that’s definitely not what we do. It’s what the client values and what the client wants to work on and focus on, growing from there.

    Dr. Sharp: That makes sense. Are you seeing a variety of goals from autistic folks in terms of coming in with, hey, I do want to work on these ancillary factors, the anxiety, the depression, the identity component, versus, oh no, I would like to practice my social skills, for [00:21:00] example, and things like that. Is it pretty varied, or what do you see?

    Dr. Rachel: I would say probably 80% or 90% of the goals we see are the same six or eight things that have something to do with anxiety, depression, executive function, those kinds of daily living things. There are people who will say that they want to practice their social skills, but it often comes back more to social anxiety. It still comes back to some basic CBT skills that end up being useful.

    I talked about assessment. I talked about therapy, but there’s another huge component that we have, which is a community. We have a Discord community. Everybody can participate online. It’s only for people we’ve diagnosed or people we know well in therapy. So it’s free of trolls. There are no bullies. Everybody’s really nice. It’s the [00:22:00] most lovely, supportive community.

    And so we also have that as an option to help when people are working on social things. They have at least a digital place they can go and they can practice. Some of the people who are in there belong to a local group in their state, and they will get together. They will meet up out in the world. So there’s some in vivo social practice happening, too.

    But the point of that wasn’t originally therapeutic; it’s just, what would you enjoy? What’s fun for you? Because I think so many people who are struggling with depression, struggling with anxiety, don’t have that focus on their own pleasure and enjoyment, and they start to miss out on these social opportunities that the rest of us might take for granted. So it’s nice to have some of that built back in.

    And then, as we know from CBT, just having those regular pleasurable moments and social moments is going to naturally help with [00:23:00] their mental health too.

    Dr. Sharp: Sure. This is a pretty detailed question, but I’m curious how you handle the confidentiality aspect and inviting people to that Discord. I’m guessing they all maybe know that they’re all Prosper clients. How did y’all work that out?

    Dr. Rachel: They’re told in advance, here’s who’s in the Discord. Here are the circumstances under which they join. And then most people who join elect to use a name that doesn’t match their name and isn’t very traceable to them. And so that’s discussed with them too, that they have options for maintaining confidentiality if they want it that way.

    I’d say very few of the clients seem very concerned about it. I think they’re so driven to meet others and engage with other people. Of course, it’s still important for us to have all the cards on the table and make sure they understand the risks and everything, but there’s so much excitement [00:24:00] about that contact and that ability to be around others that we haven’t had any issue with people not joining for privacy concerns.

    Dr. Sharp: It strikes me as a relatively perfect use of Discord. That’s a fantastic community, and it already seems so popular. People know how to use it.

    Dr. Rachel: Sadly, a lot of the autistic spaces can get taken over by negative voices or people who maybe aren’t even really part of that community, but they come in to say rude things. So it’s nice to have a positive option to offer people.

    Dr. Sharp: I love that. Let’s switch to the evaluation side. I’m so curious about the evaluation process. So maybe we just start there. What does the evaluation process look like here at Prosper?

    Dr. Rachel: There are a lot of parts to it. One thing, and the thing when [00:25:00] I’m interviewing psychologists that they get the most excited about, is that all the rating forms are obtained upfront. So there’s never chasing down a rating form. We have all that taken care of. We get …

    Dr. Sharp: Wait, can I pause you quick? I am excited, like the psychologist you talked about. How does that happen?

    Dr. Rachel: We don’t schedule until the initial rating forms are all complete.

    Dr. Sharp: I got you. So, someone reaches out, and this is step 1. You’re like, here’s some paperwork and some rating forms. We’ll book you for the evaluation when these are done.

    Dr. Rachel: That’s so.

    Dr. Sharp: Okay. I’m assuming this is all happening at some kind of administrative level. The clinicians are not involved in this.

    Dr. Rachel: The clinicians aren’t involved. Clients can do all this for themselves online. So it’s all automated.

    Dr. Sharp: Great.

    Dr. Rachel: But we get early history from someone. We like to [00:26:00] have parents or a caregiver who can report about early history. We have standardized ways of getting that. We also have very open-ended, anecdotal kinds of ways we can get that information.

    I like having both because different people can give you information in different ways. It’s interesting to see: there are people who maybe don’t report anything in a rating form, but then they’ll tell you these specific stories that really illustrate scenarios. And so it’s been great to get both when we can.

    And then there are also rating forms for someone who knows the person well currently. We have the same thing: a standardized rating form, then something that’s just anecdotal, and then we get an adaptive measure from that same person, something that gives us a little bit of information about the functional impact that these things might be causing for the [00:27:00] person.

    And then the person does a rating form themselves too, and fills out a full intake where they can also provide more anecdotal information and description. So all that happens before the psychologist sees the person.

    So going into the session, they already have some data, they have some information, and it can help steer them, but they go into a very structured two-part interview process. Part one screens for co-occurring mental health conditions that might interfere with the assessment. So, making sure someone isn’t actively psychotic, making sure someone isn’t manic or anything that’s not addressed that could flare up and be difficult to differentiate from autism.

    It’s rare that that happens. It’s rare that we have to discontinue with somebody, but every once in a while, somebody needs a higher [00:28:00] level of care, we need to get them to the emergency room, something can happen from time to time.

    The other thing we do in the first session is screen for intellectual disability. It doesn’t come up very often at all with our clientele. Our clientele tends to be people who have been to college or are working, doing well, and thriving in a lot of educational and financial ways but maybe struggling in other ways. We have a screen for that.

    We have a lot of questions in the first session about getting to know them, building some rapport, some emotion questions, and asking how they feel in different contexts. And then there are also a few conversational presses, just ways of getting a little bit of a sample of how is this person in a social situation.

    [00:29:00] A lot of our clients can manage that. These are people who have made it through college, who haven’t had anybody talking to them about autism, so to have a little chit-chat about how’s the weather in Chicago, they can do that.

    But we still do it. And sometimes something interesting comes out. Sometimes you can see unexpected gestures, or you can see the person’s completely turned away from the interaction, even though you know where their camera is and where their screen is. So we can collect a little bit of observational data that way.

    At the end of that first interview, there is a mental status form that the clinician goes through and fills out based on all their observations and data collected from that session. And then they can make whatever other notes they want, of course.

    And then there’s a second interview. We have them spaced apart in time so that you’re seeing the person across multiple days. I think seeing a person once, even if you’re with [00:30:00] them for 2 hours, you’re still only seeing them once on a certain day. And so at least, we’re trying to get more data. So we require multiple appointments.

    In the second appointment, we ask a lot about camouflaging and masking. I am not thrilled with the existing rating scales and tools for this, so I like to do it via an interview. This gives our clinicians a chance to tease apart things that might be social anxiety, things that could otherwise account for some of the behavior a bit more. So that’s why we do that in the interview. And then …

    Dr. Sharp: Can I jump in quickly? I think that’s probably important just to spend a little bit of time on. I was going to ask you about this, and you went in the right direction on your own, which is, I also agree. I don’t know that our measures for camouflaging and masking are great. [00:31:00] I would love to hear your perspective on that and what you mean when you say you don’t think a lot of them are right.

    Let’s take a break to hear from a featured partner.

    Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on trustpilot.com and Google, which makes them the number one-rated Electronic Health Record system available for mental health folks today. They make billing, scheduling, note-taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week so you can actually talk to a real person in a timely manner.

    If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of [00:32:00] charge so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free. No strings attached. Check it out and see why everyone is switching to TherapyNotes.

The Neuropsychological Assessment Battery, or NAB, is a comprehensive instrument offering the combined strengths of a flexible and fixed neuropsychological battery. It lets you administer the full battery, any of its six modules, or individual tests. Now, the NAB modules are available for scoring on PARiConnect, PAR’s online assessment platform. Learn more at parinc.com/nab.

    Let’s get back to the podcast.

Dr. Rachel: My perspective [00:33:00] is some degree of camouflaging and masking is pretty damn adaptive and we’re all doing it all the time. I’ve seen some people, especially online, equate any level of masking with being autistic. And that’s just not how it is.

We’re all doing it to some degree. It’s part of functioning in society. I think what can happen with autistic people is a bimodal distribution; they could be at either end of it. I have a lot of autistic people tell me they absolutely never mask because why would they hide any aspect of themselves and how they feel?

    Of course, there are autistic people tying themselves into knots because they’re working so hard to camouflage and mask, and it really becomes an issue. I think we have to be incredibly careful about it and thoughtful about it.

What we’ve learned from all the research on this topic to date [00:34:00] is that it’s incredibly hard to tease apart from social anxiety. And so people will give you answers, and you hear this all the time, even in questions about autism features.

    If you’re asking about social emotional reciprocity in that give and take conversation, and you have someone tell you, oh, I never know what to do or say in conversation, you don’t know that that’s coming from an autistic place. It very well could be someone who’s very socially anxious and underestimates their own ability to navigate those situations. So I think we have to be really cautious and understand that we really don’t know a lot about that until we get details and specifics from the client.

    So what I always encourage people to do is to get as many examples of things that happen in real scenarios. As many details as you can from the person. I think that’s a [00:35:00] lot more illuminating than just the overall, oh yeah, that’s so hard for me. I can’t do that.

    Dr. Sharp: Oh, sure. I like the way you frame all of this. I would love to see better measures for these behaviors. I think we’re in the beginning of figuring out what these constructs are and how to measure them. So it seems like we’re aligned on that, but this is also a fraught topic. I think we all want to do the best work we can and not minimize anyone’s experience, and at the same time, we don’t have great standardized tools for this right now.

    Dr. Rachel: That’s exactly right. And then, as with so many things, the more academic debate on this can go very polar. I think that’s not super helpful, either. I think there’ve been academics who just shut down any talk [00:36:00] of masking and camouflaging, and I think that negates the experience, especially of a lot of girls and women, which isn’t helpful, but at the same time, we just don’t know enough to be overconfident in what these tools are telling us.

    Dr. Sharp: That’s so interesting. Are you referring to the discussion around, can someone mask to the point that they could hide a theoretically socially driven diagnosis? Can you mask so well that you can hide symptoms that are primarily social in nature? Is that what you’re talking about when you say the debate?

    Dr. Rachel: Yeah. That’s the core of the debate. I think in contrived testing situations, it’s a lot easier than it is in natural in vivo things that happen in the real world. But unfortunately, we’re not getting to see clients out in the real world. We’re only [00:37:00] seeing them in these contrived, unnatural situations where it’s a lot clearer what’s expected. It’s a lot clearer how to fake it.

    Dr. Sharp: That’s true. Full disclosure, I wrestle with this a lot. I have a hard time with this. It is tough. I do see both sides and I honestly want someone to tell me what the right way to think of it is.

    Dr. Rachel: We’re not there yet.

    Dr. Sharp: Okay.

Dr. Rachel: I would, in fact, be suspicious of anybody who’s overconfident about the right way to think of it at this point.

    Dr. Sharp: That’s fair. There’s nuance to everything. I’m, like I said, still thinking through my own process with how can someone mask well enough if they truly have these social concerns. It’s hard to reconcile sometimes, but like you said, it is nuanced.

    So I’d [00:38:00] interrupted. You said the academic debate gets polarized.

Dr. Rachel: It definitely does. To jump back into what happens on the second day of the interview, after we do the masking and camouflaging interview: one other thing about that. Part of the reason for including it, besides the fact that I think it’s important just to find out more and hear what they say about things, is that it also gives you a little bit of information about the person’s social insight and how they think of all this.

It also helps build rapport because it shows people that you’re listening, that you want to hear it. A lot of the people who come for an autism evaluation have been dismissed many times and haven’t felt validated when they’ve brought up mental health concerns.

And so going through that process is also therapeutic. It gives us good clinical data that we need for the evaluation, [00:39:00] but I also think its inclusion is pretty therapeutic for a lot of clients.

I have it structured so that that conversation happens before going into a detailed interview about autism traits as defined in the DSM. This is the bulk of the interview: we do a very deep dive into every symptom category, with a lot of questions and pulling for specifics.

And that’s the meat of the evaluation. It’s where I put the most importance. We look at all the data we collect, and I’ve come up with frameworks for how we take all the information in and weigh it together, but I encourage people to think most about what the client’s actually telling you. What are the details? What are the [00:40:00] specifics? How does it really play out in their day-to-day life? Because that is the key. That’s the gist.

    And then there’s the oral feedback. That’s all scripted and structured so that we’re giving people a lot of psychoeducation and walking them through and helping them not just know I have a diagnosis or I don’t, but what does it mean for me? How does this show up for me? What do the features mean?

There’s that overused phrase, usually in person-first language, of if you’ve met one person with autism, you have met […] A person’s a person, whether they’re autistic or not, and we need those specifics and descriptions to help people understand: what does this mean for me? And then what do I do with it? Where do I get support? Where do I get whatever the recommendations are that might be the things I need?

    [00:41:00] The cool thing about working with a startup is we’ve been able to come up with some very neat tech that helps take the notes that we’ve taken during the interviews, helps take rating form data and everything, and populate it into a report. We’ve got recommendation banks that are really easy to pull from.

There are a lot of cool things on the tech side that I wish I had 20 years ago; it would’ve been so amazing, all that time wasted on reports. I think it just gives us higher quality reports. Even in the last year, it’s improved exponentially over where we were about a year ago, just because we’re able to build, and things keep growing and evolving with the tech. That’s the whole evaluation process.

    Dr. Sharp: Thanks for diving into that. I’m curious, personally, I’m guessing, given your [00:42:00] history, that you probably came up in training environments and work environments where you were doing a bunch of in-person autism testing. I’m guessing you’ve given a bunch of ADOSes and any number of other things as part of the batteries.

    Dr. Rachel: Yeah. From graduate school on, it’s been the standard research battery of a cognitive, an ADOS, adaptive, and an ADI-R and then whatever neuropsych you do on top of that. But that would’ve been my core battery my entire career, even for adults.

Dr. Sharp: Right. That’s where my question lies: was there any personal hurdle to jump over to get to this place where it’s primarily interview and rating-scale-driven, without those more classic [00:43:00] measures and a classic process? I’m curious what that was like for you.

Dr. Rachel: It was complicated for me, for sure. There was a period of testing it out. So taking people who I had done full evaluations on, or who I already had seen for years in therapy and knew really well, and having them go through this shorter evaluation process, and seeing what happens with that. So seeing that we were getting really good data and really helpful data, that was useful.

    And then I think also all these years I’ve been seeing adults, I have been frustrated with the standard process. When I evaluate a university professor, and I’m trying to do an ADOS Module 4, it feels pretty stupid, hard, and awkward.

Obviously, the questions are still valid and useful, and there’s a lot that comes out of it that’s good, but it’s a much stronger and more [00:44:00] comfortable tool when you’re giving a Module 2 or a Module 3 than when you’re trying to interact with adults who have intact intellectual ability and strong verbal skills.

    When I was training years ago and working with Ami Klin, who was my mentor at Yale, we would do the standard research battery, but he would also come in and do this interview magic of being there clinically with the person in the room and having that interaction.

    I think even though I’ve always been in the research world and doing these tighter evaluations, I also saw how powerful and useful a purely clinical experience with a person could be. I think there still is structure to that. There still can be rigor with that. There still could be a systematic approach, [00:45:00] but being able to be the other person and use yourself in that way as a tool can be really valuable.

    I think what happens though is when people get incredibly clinical, they tend to let the rigor fall away. They tend to stop being systematic. I think when we do that, things get really sloppy and messy.

    And so that’s been part of my concern in this autism world is we’ve got people who are incredibly affirming, warm and lovely people, but some of them, not all of them obviously, but some of them have lost the rigor and have lost that approach to having a system for going into evaluation and a system for making sure we’re ticking the boxes and doing the steps.

    And I wanted to make sure we didn’t do that. I wanted to make sure that we were as affirming as we can and celebrating the people we’re [00:46:00] evaluating and all the strengths that they bring to the table, but also going in with rigor and structure so that what we’re doing meant something, it would hold up for them over time. It wasn’t just a one-time feel-good experience but was a tool to help them in their progress as they’re trying to work on their mental health or understand more about themselves or whatever.

    Dr. Sharp: I think that’s super important. I was wrestling with how to frame these thoughts or feelings, but a lot of us, I’ll speak for myself, I don’t want to project onto anyone, but I think part of the deal with doing autism assessment is that it is inherently looser than a lot of other testing that we do because there’s a subjectivity even with the ADOS or the way that we’re interpreting ADI-R answers, or how we see the examples people give us and whatnot.

It’s already inherently a little [00:47:00] looser. And to get away from what appear to be standardized instruments, and I don’t want to take away from the ADOS or some of the other tools, that feels scary.

    I like what you’re saying about the process where it feels like you’ve instituted a fair amount of rigor and structure to these interviews, to the rating scales and whatnot that you’re administering. I would guess that does feel a little more comforting and gives you more to fall back on.

    Dr. Rachel: Yeah, that’s right.

Dr. Sharp: I want to ask a question here. These are messy questions, but there’s always stuff around bias in situations like this, where you are a self-proclaimed autism testing business. And so I’m curious how you work through or [00:48:00] deal with that, knowing, hey, people are seeking us out for an autism diagnosis. How do you talk with your clinicians? As the CCO, how do you work through that to combat what must be pretty inherent confirmation bias and any number of other biases?

    Dr. Rachel: I think one of the most powerful things we can do is check reliability. Everybody knows that there are clear cutoffs for things, and everybody knows that there are going to be reliability checks and that we need to make sure that everything’s in line, but we keep a pretty careful eye on diagnosis rates too.

It’s going to be higher anywhere I’ve ever worked because I’ve always worked in autism places. It’s going to be well above 50%, but I think we have to keep a careful eye on that and watch it because I think that there’s drift over time. [00:49:00] So we have reliability projects. We have times where everybody needs to come and code things, and I’ll make sure that we rotate in cases where there isn’t autism so that we’re seeing the non-examples.

We haven’t gotten here yet because we’re still pretty new, but over time, I think we also need to have video examples of interviews with people who aren’t autistic but are saying some things that might be in line. I’m constantly giving examples of ADHD-type fidgeting that doesn’t count as a repetitive behavior in autism, for example.

I’ve sat here nonstop moving and fidgeting, and if you ask somebody, do you have any repetitive behaviors, I might say, yeah, I move my chair back and forth all day. We’re really careful to train people on what the follow-up questions are. [00:50:00] How do you differentiate that from a repetitive behavior that’s autistic?

    Of course, there’s going to be limits to that, but we’re very careful about the non-examples, not just the examples, and really careful about the reliability. We are working and establishing relationships with external reliability partners too.

That’s a big fear of mine: with blinders on, it would be really easy to drive right off a cliff thinking I’m going in the right direction and doing the right thing. And so putting a lot of time and effort into building those external relationships and having some external checks and balances is crucial.

    Dr. Sharp: Agreed. Even the practice of monitoring your diagnosis rate is pretty powerful. Data can tell a lot of stories and give you some great information.

    Dr. Rachel: One of the most [00:51:00] interesting things I didn’t expect to have come out of that, when people want to schedule with a clinician, they can go look at their bio and see their picture first and then schedule. I do think certain clinicians are pulling for different kinds of client populations.

And that wasn’t something I had anticipated, but we had one clinician with a higher than average diagnosis rate and one with a lower than average diagnosis rate. We took a look at both of their recent cases, and they were accurately diagnosing people, but they had very different caseloads.

It’s interesting. I think there may be something there we can figure out over time: what’s drawing clients to specific clinicians? There’s some really interesting stuff to learn about that.

    Dr. Sharp: Oh, 100%. This is the big data I can get behind. I love those kinds of statistics and those relationships. Are y’all [00:52:00] doing anything with the data that you’re collecting? You mentioned some research, but I don’t know what that actually looks like.

Dr. Rachel: We get a standard release when people come in for evaluation or therapy that things might be used in the future for research. We didn’t have anything planned when we started, but we’ve been getting that release, and I am now doing an analysis of data we already have for an upcoming conference in May.

    We are going to look at two things: One is, on the therapy side, we collect quality of life data. And so we’re going to look at how does quality of life change over time for therapy clients? And is having the addition of some kind of community activity, like the Discord or some community groups we have, does that have an additive [00:53:00] benefit?

I think that will be really interesting. I like using quality of life measures as opposed to the PHQ-9 or the GAD-7 or whatever rating scale insurance companies want us to be using. I like knowing that this person is feeling like they’re seeing a significant change in an actual aspect of their everyday life as opposed to here’s a little movement on a symptom. So that I’m excited about.

And then we also are looking at evaluation data for the same autism conference. We’re going to do a factor analysis to see, out of all these different data points we collect, what’s most predictive of diagnosis. I have ideas of what that’s going to be, but I’m curious to see what the data tell us.

    Dr. Sharp: That sounds great. This is the part that I miss, research is hard to do in private practice. We’re [00:54:00] doing a little bit with some of our evaluation data, but the stuff is fascinating. The fact that you’re gathering so much data from so many people is powerful as long as you are using your powers for good.

    So tell me a little bit more. How do clients find you at this point? What is the main funnel here? Are they getting referrals from their physician? Are they doing online searches? How do clients come to you?

    Dr. Rachel: We definitely get referrals from physicians, psychiatrists, therapists, other friends who have recently been evaluated, but primarily it’s an online search. We do have Google ads that run. There are ads in Meta that run, but not a ton. I think that’s something that probably will grow and change over time, but right now it’s primarily through Google.

    Dr. Sharp: Sure. [00:55:00] And what’s the financial situation? Is all this out-of-pocket? Do you process insurance? How does that work?

    Dr. Rachel: Oh yeah. We work with insurance. We have people who are credentialed with most of the panels out there. There are two weird exceptions where it’s been hard to get a contract or whatever, but we work with insurance.

    We have a private pay option, too. I don’t remember offhand what it is, but it’s significantly less expensive than traditional in-person testing. We are in 12 states currently, but then also the PSYPACT states.

    Dr. Sharp: Great. Maybe we start to close just talking about the clinician experience. I’m curious about what it looks like for your psychologists who are doing these evaluations.

    Dr. Rachel: I think it’s different for different psychologists. So we have some people who do nothing but [00:56:00] this, and they might do six evaluations a week. And so it’s a very full-time thing. And then we have other people who have a private practice but wanted to learn more about autism, wanted to do a little bit of evaluation, and so might give us two or three evaluations a week. So I think it’s very different for different people.

    One thing I’ve tried to put a lot of effort into is making sure that there’s a community for psychologists. One of my biggest worries is about people in isolation getting their own ideas and drifting away from what’s best practice. What do we think is the right approach?

    And so, almost every day there’s some kind of opportunity to drop in and meet with other psychologists and talk about cases. On a monthly basis, there is an ECHO Case Conference that uses the ECHO model. It does a deep [00:57:00] dive into one particular case. And that’s really helpful.

    And for those, I always record a didactic that goes along with that so that they’re also learning about something and then can get a little bit of continuing education for it.

    Dr. Sharp: Can I jump in? What’s the ECHO format? That’s maybe a naive question, but I’ve never heard that before.

    Dr. Rachel: No, I should have explained it. It’s a model that comes from Medicine. It’s a model of expanding and educating the group of people who can provide a given service. So it’s common, for example, if you’re trying to treat HIV in rural Africa and you want to reach all of these physicians in different places, you have a regular case conference where you do a deep dive on a case, and then there is a short presentation.

    There’s a hub team that is a panel of specialists who can chime in. And so when we do [00:58:00] ours, for the hub team we have somebody from psychiatry, somebody from occupational therapy. We have a trans-autistic person to talk about their lived experience. We have a few different people who can give different takes on the same case.

It has been a great educational thing, but ECHO Autism by itself is a big and active thing. If anybody who’s listening wants to check them out online, they can search for ECHO Autism and they’ll find a lot of different things that are offered through it. We’re separate from that, but it’s a connection that we have, and we use that model.

    Dr. Sharp: Fantastic. Very cool. Community is super important. I hear that just in my consulting, psychologists doing testing tend to feel pretty isolated. It is tough wrestling with so much complex data and making decisions about folks’ [00:59:00] lives, futures, and interventions. It’s a lot to carry, and community goes a long way.

    Dr. Rachel: It is. And then the experience of psychologists in different states right now is so different because different things are happening in different states. And so we do also have opportunities like our Texas group has one chat and our Florida group has another chat so that they can also talk about those things, especially if they’re trying to advocate for legislation or different things happening in their areas.

    Dr. Sharp: Absolutely. I’m curious where things are headed from here. Are there any future projects or exciting developments that you can talk about?

    Dr. Rachel: Yeah. I think we’ll just keep growing. Right now, I said we’re in 11 or 12 states, and the plan is to be completely national and also to expand what we offer. So we only do individual [01:00:00] therapy at the moment, but there are other options there where we might expand.

    I’d also like to evaluate things that commonly co-occur with autism. I think it’d be fairly straightforward to add mood and anxiety to what we’re doing. I think there are some other places to do ADHD, but I’d like to dive more into that and see if there’s anything that we could improve upon or expand upon with that. There’s a lot of directions we can go in the future.

    Dr. Sharp: It’s exciting. I appreciate you being here and talking through all of this, bearing with all my questions, wrestling with some philosophical dilemmas, and all of this. Thank you so much for being here, Rachel.

Dr. Rachel: Yeah. Jeremy, thank you for having me. I’ve enjoyed it.

    Dr. Sharp: All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your [01:01:00] practice and in your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.

    If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.

    And if you’re a practice owner or an aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, and we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call, and we will chat and figure out if a group could be a good fit for you. Thanks so much.

    [01:02:00] The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

     Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you [01:03:00] need supervision on clinical matters, please find a supervisor with expertise that fits your needs.


  • 503 Transcript

    [00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

    Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others, and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and enter the code “testing.”

    This episode is brought to you in part by PAR.

    The Personality Assessment Inventory Bariatric compiles the results of the PAI into a useful report for bariatric surgery candidates, available on PARiConnect, PAR’s online assessment platform. You can visit parinc.com/products/pai-bariatric.

    Hey, folks. What’s going on? Welcome back to [00:01:00] The Testing Psychologist. Today’s a business episode, and we are going to continue our discussion of software and technology. So, we are talking about going beyond the basics.

    Last time, we talked about the EHR and all the things that it could do, but today I’m going to go a little bit beyond the EHR for those of you who maybe mastered the basics and realized that it may not be enough for your practice.

We’re going to talk about more of a complete tech stack, I suppose, for your practice, breaking down some of the tools that could be most helpful for you to automate workflows, enhance patient engagement, and maybe optimize your financial management as well. And we’re going to stay HIPAA-compliant and ethical.

    So, going to try to tackle those questions: What is worth investing in? What do you really need? How do you implement these tools without completely overwhelming yourself? So if you are [00:02:00] ready to implement some additional tools in your practice beyond your EHR, stay tuned.

    All right, let’s do it. We’re going to start off with the hub. I’m not going to spend a lot of time on this, but I’m going to reemphasize. You do need some kind of practice management system. You could also call this the EHR. This is your hub of operations. So if you didn’t listen to the previous episode on the EHRs, what they include, and what they can take care of for you, I would suggest going back and doing that.

    This should seamlessly integrate with your other systems and handle scheduling, billing, patient records, and all those kinds of things. [00:03:00] So it does matter. Practices that use an EHR spend about 30% less time on administrative tasks and experience way fewer errors in billing and scheduling. So anything you can do to automate is great. So, check out the EHRs.

    If you haven’t seen my YouTube channel with the playlist of EHR reviews, I do review probably seven or eight EHRs at this point, specifically through the lens of being a testing psychologist. So go check those out. So you need an EHR. That’s the base.

Next is telehealth. Telehealth is interesting. You need a telehealth platform somehow, and you actually have a lot of options for this. You can do built-in telehealth through either Microsoft Teams or Google Workspace. So, if you use either of those productivity suites, they will come with HIPAA-compliant telehealth options as long as you have signed up for the plan that gives you a BAA.

[00:04:00] Many of the EHRs will also offer a telehealth option. I know IntakeQ, TherapyNotes, and SimplePractice all offer telehealth solutions. So just make sure that you have your telehealth dialed in. People do appreciate telehealth, although we’re getting a lot more requests to return in person.

    That said, you still need a solid telehealth platform. Doxy is an old, classic mental health platform. It’s not integrated with the EHRs natively, but it’s an option. Zoom, same way. I’m getting tired of Zoom. To be honest, I don’t like the functionality. I don’t like the interface as much. I feel like the updates that they made over the past few years or maybe several months are not working for me. So I’ve switched over almost entirely to Google Meet and just use that for telehealth.

    Okay, so those are the basics. Let’s get into [00:05:00] options that are a little more fancy. 

    Let’s talk AI. Everybody loves AI. Everybody’s curious about AI. Some people are scared of AI. There’s a lot to say about AI. I’m sure we’ll do a whole podcast episode on AI, or maybe several at some point in the future, but I’m going to couch the AI discussion here under the umbrella of extra automation and advanced tactics that you can use beyond your EHR.

    So, at this point, a lot of EHRs are not incorporating AI. I don’t know of any EHR right now that’s publicly available that incorporates AI specifically for testing folks and also doubles as an EHR. That said, there are some tools that are still helpful for testing folks.

Just to say right off the bat, I don’t think AI is going to replace us at any point. We’re already seeing backlash on the therapy side of mental health [00:06:00] around chatbots and the use of AI, and how it’s harmful to people in some cases. And so I don’t think that’s going to be happening.

    And as the co-founder of Reverb, which is an AI platform to help with report writing, I can say that I don’t think AI is anywhere near the place where it can totally replace psychologists with writing reports and integrating testing data and things like that. So not trying to replace us; it’s just about making life easier and trying to automate the things that don’t require clinical expertise or a ton of brain power.

    So what are some of those tools? Let’s talk about transcription and data synthesis, not data in the testing sense but information synthesis. So there are tools like Heidi. Heidi is a popular AI option that will help you with note-taking and summarizing interview notes, for example, [00:07:00] listening in on an interview and then transcribing the content and synthesizing the content. Heidi’s a great option.

    NoteZap is another option that will listen in and record the session, HIPAA-compliant of course, and then transcribe and even synthesize some of the information. There are a lot of report-writing options out there at this point, all the way from do-it-yourself approaches to full-featured platforms. On the do-it-yourself end, you can just wing it with ChatGPT, which is not HIPAA-compliant.

    You could take a little bit of a step up in terms of privacy at least and sign up for something like Bastion, which is a healthcare-minded ChatGPT wrapper that’s HIPAA compliant. The technology is a little dated at this point. Bastion is not updating to the most recent language models, so it’s not quite as [00:08:00] good as it used to be, but that’s an easy do-it-yourself approach.

    You can also look at Liz Angoff’s resources, which I will link in the show notes. She has a lot of resources on using AI in assessments without necessarily investing in a more advanced platform, which works for a lot of folks who are trying to keep costs low and trying to take that do-it-yourself approach.

    If you want to go the route of a more full-featured platform, there are certainly some options out there. Like I said, I’m a co-founder of Reverb, which is a report-writing software. It’s a full-featured solution that will take all of your data, integrate it, and essentially write the draft of the report for you. We think at this point, it’s saving people at least 50% of the time it takes to write reports.

    There are other options out there, and I would encourage you to explore them and see [00:09:00] which ones best fit your needs, your price point, and things like that. There’s one called Parchment. There’s one called neuroaide. I think there’s one called Assessment Assist. So, there are some options out there.

    PsychAssist, at some point, will be on the market and out of beta. So, as you can see, there are a lot of options. I would encourage you to try them and see which one fits your needs best, but I don’t think AI is going away by any means.

    The research suggests that AI is pretty good at reducing the time we spend on these tasks. So if you’re afraid of AI or you think we shouldn’t be using it, I think you’re going to get left behind, just being honest.

    Let’s take a break to hear from a featured partner.

    Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on [00:10:00] trustpilot.com and Google, which makes them the number one-rated Electronic Health Record system available for mental health folks today. They make billing, scheduling, note-taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week, so you can actually talk to a real person in a timely manner.

    If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free. No strings attached. Check it out and see why [00:11:00] everyone is switching to TherapyNotes.

    The Personality Assessment Inventory Bariatric, or PAI Bariatric, compiles the results of the PAI into a useful report just for bariatric surgery candidates. Log into PARiConnect, PAR’s online assessment platform, where you can administer the PAI and select the bariatric score report to view the result based on bariatric presurgical candidate norms. Learn more at parinc.com/products/pai-bariatric.

    Let’s get back to the podcast.

    All right, let’s talk about something even more advanced. I’ll be honest, y’all, I have not gone this route, but I’ve done a lot of research into it. So this will be something like integrating, for example, an AI chatbot on your website or an AI agent to help with client scheduling and administrative tasks.

    So there are some HIPAA-compliant chatbot services [00:12:00] out there. There’s Simpler AI, there’s Tars, there’s Windy, and there are a lot now. I think Windy was one of the originals, but these others have come on the scene as well.

    And so how this works is you essentially connect one of these AI chatbots to your website and also connect it to your EHR. So you would need an EHR that allows a chatbot to integrate with it. You can use the chatbots to answer simple questions like, What is your address? What is your fax number? When is your earliest available appointment? How do I book an appointment? How do I cancel an appointment? Things like that.

    You can set it up basically like an FAQ so that it can handle any of those common questions that people are asking about. What kind of work do you do? What kind of evaluations do you specialize in? Do you test for autism? Things like that.
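    Under the hood, the simplest version of that FAQ handling is just keyword matching. Here’s a rough sketch of that idea (real chatbot services layer natural-language understanding and EHR integration on top of this, and every question and answer below is made up for illustration):

    ```python
    # Minimal sketch of FAQ-style matching: the simplest layer of what a
    # website chatbot does. All entries here are hypothetical examples.
    FAQ = {
        "address": "We're at 123 Main St.",
        "fax": "Our fax number is 555-0100.",
        "autism": "Yes, we offer autism evaluations.",
    }

    def answer(question):
        """Return a canned answer if a known keyword appears in the question."""
        q = question.lower()
        for keyword, reply in FAQ.items():
            if keyword in q:
                return reply
        # Unknown questions get handed off to a human.
        return "I'll pass this along to our staff, who will follow up with you."

    print(answer("Do you test for autism?"))
    ```

    A real service replaces the keyword lookup with a language model, but the monitoring and tweaking described above amounts to editing that question-to-answer mapping over time.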

    And then you have to monitor it and tweak it here and there to make sure that it stays dialed in, but [00:13:00] some of the chatbots can learn from themselves. So as you get client questions, they can learn and get better and better.

    So if you are at this place where you’re ready to automate a little bit more, go to the next level beyond just an EHR, spend 2, 3, 4 hours over the next week or so, diving into chatbots and see if there are any that can actually fit for you. I’m certainly going to be looking at this over the next quarter. It’s a Q2 project for our practice.

    In the next section, we’re going to talk about patient engagement solutions, so keeping clients involved and happy. So, this can work in a lot of different ways. When I say patient engagement, it’s essentially, how do we keep our patients well dialed in and connected to our practice at different points?

    So, engaged patients are going to show up more, they’re going to invest in more services, and they’re going to have better outcomes. It’s [00:14:00] not just about sending emails, though; it’s ongoing communication, education, and convenience. So it definitely helps with no-shows. It also helps if you are trying to “upsell” clients after the evaluation. Keeping them engaged is a good thing.

    There are a few examples of how you might do this. The one that comes to mind right away is if you have a waitlist where clients are waiting, let’s say 2, 3, 4, 6, 8 months to get in for an evaluation, you can keep them engaged by adding them to an automated email sequence. That should be pretty easy to set up. You could use MailChimp or Constant Contact or ActiveCampaign, any of those email platforms.

    And as soon as someone books their intake, you add them to this email campaign. That’s essentially sending them one email a month. It’s keeping them engaged. The first one might be, [00:15:00] hey, thanks for scheduling your appointment. We look forward to seeing you.

    The second month it might be, all right, you’re still on our wait list. Please let us know if you have any questions or if anything has changed. The third month it might be a piece of content that is tailored to their concern. So it might be an info sheet on neurodevelopmental disorders or something like that.
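    In a platform like MailChimp or ActiveCampaign, you would build this sequence visually, but the underlying logic of a monthly drip campaign is simple. Here’s a minimal sketch (the email copy just mirrors the examples above; it’s not a real campaign):

    ```python
    # Sketch of a waitlist drip sequence: one email per month after booking.
    SEQUENCE = {
        1: "Thanks for scheduling your appointment. We look forward to seeing you.",
        2: "You're still on our waitlist. Please let us know if anything has changed.",
        3: "Here's an info sheet tailored to your concern.",
    }

    def email_for_month(months_since_booking):
        """Return the email due at a given month, or None once the sequence ends."""
        return SEQUENCE.get(months_since_booking)

    print(email_for_month(2))
    ```

    The email platform handles the actual scheduling and sending; this is just the mapping you’re configuring when you set up the campaign.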

    So, you see where I’m going with this? It’s less about providing super valuable content and more about staying engaged and keeping them present with the practice, because we’ve found that a lot of people will try to look for other solutions that can offer a shorter waitlist, and we want to keep them engaged and let them know we haven’t forgotten about them.

    Another way that you can keep clients engaged is with post-evaluation feedback. We [00:16:00] can’t solicit reviews, but we can ask for feedback, and that’s what I’m talking about. There are a ton of automated patient engagement tools out there. You can do a Google search and find them.

    This is like when you leave your dentist’s office and, within 10 minutes, you get a text that says, how did we do? I haven’t seen many mental health practices using it, but there are options out there, and you can also create your own.

    For example, we built a simple tool within Google Sheets that lets us add the client name and email address right after we do the feedback session, and then it is automatically programmed to send out a feedback survey within four weeks. And then we get that client feedback automatically.
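    Our actual tool is a Google Sheet with a script attached, but the core scheduling logic it runs looks something like this sketch (all names, emails, and dates below are made up):

    ```python
    from datetime import date, timedelta

    FOLLOW_UP_DAYS = 28  # send the survey four weeks after the feedback session

    def clients_due_for_survey(rows, today):
        """Given sheet rows of (name, email, feedback_date, survey_sent),
        return the clients whose survey should go out today."""
        due = []
        for name, email, feedback_date, survey_sent in rows:
            if survey_sent:
                continue  # this client already got their survey
            if today - feedback_date >= timedelta(days=FOLLOW_UP_DAYS):
                due.append((name, email))
        return due

    # Hypothetical sheet contents: a daily scheduled trigger would run this
    # check, email each "due" client the survey link, and mark them as sent.
    rows = [
        ("Alex", "alex@example.com", date(2025, 1, 2), False),
        ("Sam", "sam@example.com", date(2025, 1, 20), True),
        ("Jo", "jo@example.com", date(2025, 1, 25), False),
    ]
    print(clients_due_for_survey(rows, today=date(2025, 2, 1)))
    ```

    In Google Sheets itself you would implement this with an Apps Script time-driven trigger, but the idea is the same: a daily pass over the sheet that picks out anyone past the four-week mark.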

    There are other ways to do it. You can use something like Jotform, you can use SurveyMonkey as long as it’s the HIPAA-compliant version. So there are lots of options [00:17:00] out there for patient engagement, but I think that’s a great option to keep people engaged and connected to your practice.

    The third option is that once you finish up a feedback session, you can also add that client to another email campaign depending on their concern. So you might have a separate campaign for ADHD diagnosis, autism diagnosis, and learning disorder diagnosis.

    I’m thinking off the top of my head, but you add them to this email campaign, and then they get three to six automated emails once a month or every two weeks where you’re providing them some helpful content. You’re checking in, you’re saying, hey, have you connected with any resources I recommended? Let me know if you need help. Hey, do you need another feedback session to discuss the report? I’d be happy to meet with you. And you include your scheduling link.

    So this is just another way to keep clients tied to your practice [00:18:00] and invest in services at your practice if you’re willing to provide those things, of course. Like I said, you’ve got to walk the line, and we can’t solicit reviews, but we can ask for feedback, and we can try to keep patients engaged.

    There’s a lot to digest from this episode. If you’re listening and thinking, oh my gosh, this sounds completely crazy, I don’t have the time to do this, know that you don’t have to do it immediately. Like I said, these are slightly more advanced techniques, but they’re also the things that will set you apart from other practices. And so it’s worth putting a little bit of time into.

    And if you can’t do it yourself, hire somebody to do it. There are plenty of college students who’d be happy to figure this out for you. This is in the marketing realm; between college students, Upwork, and Fiverr, there are folks out there who can research and set these things up for [00:19:00] you. So you don’t have to do it all yourself.

    I hope this has been helpful. If you want to talk through any of these options, do some implementation and some research on tools that can help you, let me know. You can book a strategy session at thetestingpsychologist.com/consulting. We can get really hands-on, dive in, and try to figure this out together. Good luck, happy researching, and happy technology testing.

    All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and in your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.

    If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.

    And if you’re a practice owner or aspiring practice owner, I’d invite you to check out The Testing Psychologist [00:20:00] mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, and we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call, and we will chat and figure out if a group could be a good fit for you. Thanks so much.

    The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this [00:21:00] podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment. 

    Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.


  • 503. Beyond the EHR: Tech Stack Ideas for Private Practice



    We all know that an Electronic Health Record (EHR) system is essential for running a practice, but here’s the thing—it’s not enough.

    If you want a truly profitable, efficient, and stress-free practice, you need a complete tech stack that goes beyond your EHR. Today, we’re breaking down the best tools to help you automate workflows, enhance patient engagement, and optimize financial management—all while staying HIPAA-compliant and ethical.

    We’re also tackling the big questions: What’s worth investing in? What do you really need? And how do you implement these tools without overwhelming yourself or your team? Let’s dive in.

    Cool Things Mentioned

    Featured Resources

    I am honored to partner with two AMAZING companies to help improve your testing practice!

    PAR is a long-time supporter of The Testing Psychologist podcast, and I will continue to bring you featured items from their catalog! Listen to each episode for specific product recommendations and links. Learn more at www.parinc.com.

    TherapyNotes is the leading EHR system for mental health practitioners. I’ve used TherapyNotes for over 10 years, and it just keeps getting better. Use this link and the promo code “testing” to get two free months and try it for yourself! www.therapynotes.com/testing

    The Testing Psychologist podcast is approved for CEUs!

    I’ve partnered with At Health to offer CE credits for podcast episodes! Visit this link to access current and past episodes available for CE credit. You can use code “TTP10” for a discount on ALL the course credits you purchase from At Health!

    About Dr. Jeremy Sharp

    I’m a licensed psychologist and Clinical Director at the Colorado Center for Assessment & Counseling, a private practice that I founded in 2009 and have grown to over 20 clinicians. I earned my undergraduate degree in Experimental Psychology from the University of South Carolina before getting my Master’s and PhD in Counseling Psychology from Colorado State University. These days, I specialize in psychological and neuropsychological evaluation with kids and adolescents.

    As the host of the Testing Psychologist Podcast, I provide private practice consulting for psychologists and other mental health professionals who want to start or grow psychological testing services in their practices. I live in Fort Collins, Colorado with my wife (also a therapist) and two young kids.

    Ready to grow your testing services? Click below to schedule a complimentary 30-minute pre-consulting call!

    Schedule Your Call: https://app.acuityscheduling.com/schedule.php?owner=13476033&appointmentType=2843005

  • 502 Transcript

    [00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

    Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others, and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and enter the code “testing.”

    This podcast is brought to you in part by PAR.

    The NEO Inventories Normative Update is now available with a new normative sample that is more representative of the current U.S. population. Visit parinc.com/neo.

    Hello, everyone. Welcome back to The Testing [00:01:00] Psychologist. Today we’re talking about one of my favorite topics, which is systems and automation.

    I think this is one of the most crucial topics for practice owners, especially as we move into the next few years. Everything is AI-driven. Technology is everywhere. Everything’s on the internet. Clients expect everything to be easy.

    So we’re going to talk about how to use automation and software systems to keep you from drowning in admin work, constantly being behind on notes, or struggling to keep up with scheduling and billing. If that sounds familiar, this is the episode for you.

    So a big part of this is the burnout factor. It is real. Studies are showing that clinicians are spending more and more time on administrative tasks. And when you are doing administrative stuff, you are not making the money that you should be. You’re also using valuable [00:02:00] brainpower that you could be using on other tasks and translating into good client work.

    So if you’re ready to get some of your time back, build your efficiency and free up a little brainpower, this is for you. Let’s get to it.

    Okay, folks. Let’s get right into it. As usual, we’re talking about technology, automation, and systems. We’ve talked about these things independently on the podcast before, but it bears repeating: every time I think that we have crossed the threshold into everyone using an EHR or some software to help automate practice processes, I’ll bump into someone who does [00:03:00] not have an EHR and is taking care of everything with paper and pencil, Excel sheets, Word documents, or whatever it may be in these disparate systems. And so, I’m always happy to revisit this idea of systems and automation in practice.

    So I’m going to start with what I would consider to be the backbone of any practice, which is your Electronic Health Record system or EHR. So, a good EHR can streamline everything in your practice. It can help with documentation, it can help with HIPAA compliance and record storage, and some of them are even integrating AI-powered features to make notes and writing a little faster. On the flip side, if you have a bad EHR, then it’s going to be super frustrating and cost you more time than it’s probably worth.

    Statistically speaking, a recent study [00:04:00] published in the Journal of the American Medical Association found that over 40% of clinicians report feeling burned out due to excessive administrative tasks. If you find yourself in that 40%, just know that there is hope, and it can be easier.

    So like I said in the introduction, it’s not just inconvenient, it is costing brain power and can be a health risk for us as providers, and it’s not doing our clients any favors. So how do we fix this?

    Well, let’s look for an EHR that fits you. I did a lengthy EHR review series on my YouTube channel; it’s up to about seven or eight EHR systems. You can go check that out. I’ll put it in the show notes.

    There’s a playlist just for EHR reviews for testing folks. I tried to dive in and do some of the work for you so that you don’t have to [00:05:00] do your own trials. I think there are a few out there that are pretty good for testing folks and some that are not so good.

    You all know TherapyNotes is a sponsor for the podcast. I’m obviously a big fan of TherapyNotes. There are others that might meet your needs a little bit better, so I’d encourage you to go watch those reviews and decide on which one you might need.

    There are also some great resources at Rob Reinhardt’s website. The name is escaping me right now; I’ll look it up and put it in the show notes. I also did a podcast with Rob way back when. He does pretty extensive reviews of EHRs as well and seems to be on the cutting edge in terms of new EHRs and ones that have most recently come out.

    So, choosing an EHR that works for you: you want to look for a system with as many features as you need and none that you don’t. I would say the basics right now are [00:06:00] calendar, accounting/billing, notes, and a to-do list.

    If you submit insurance claims, you’ll definitely want something that integrates billing, insurance claims, reconciling payments, and things like that. You’ll also want to look for one with a robust client portal, as we’ll talk about here in just a bit, where you can send messages back and forth and do scheduling, but it’s up to you. It’s worth sitting down and making a nice list of your personal requirements for an EHR and your personal pain points in the administrative process.

    So first is to do some reviewing, do some demoing, and choose the EHR that works best for you. And you may not need any AI-driven tools. At this point, there are no EHRs that have AI tools that are actually helpful for testing folks. There are some that are integrating AI note-writing tools. So if you do therapy, you could certainly look for that.

    [00:07:00] You also want to look for an EHR that will let you maybe create almost like a text expander, like phrases or dot phrases, for those of you who’ve used Epic. There are some EHRs out there that’ll let you program in snippets or repetitive phrases, things like that, so that you can insert them easily into your notes and documentation.

    Like I said earlier, many EHRs will also let you set up automated intake forms, appointment reminders, secure messaging, all of that kind of stuff, just to save you all the time that you might spend with back and forth emails, sending paperwork and trying to track it down, oh, does this person know how to download and fill in a PDF? Do they have to print it? That kind of thing.

    Like I said, lots to dive into with EHRs. I don’t know that anyone is perfect. I have not found the perfect EHR for testing folks at this point, but it’s a matter of listing your most important criteria and then going to [00:08:00] find the EHR with those features.

    Alright, what is the next system? The first system is the EHR. That’s the backbone. The second system or software you want to have is some kind of automated scheduling and client communication tool. Many EHRs will cover this for you, but this is another major time suck for clinicians: scheduling.

    So if you are still manually booking, rescheduling, confirming appointments, that is something that we need to fix immediately. There are plenty of automated scheduling tools. Like I said, many EHRs will handle this themselves, but if you don’t have it in your EHR somehow, you can look at tools like Calendly or Acuity. I know Acuity has a HIPAA compliant version. I think Calendly does as well.

    So these are [00:09:00] software options that can let clients book directly through your website so you don’t have to do phone tag. You can either let them book into consult calls, like we’ve talked about, or screening calls. If you’re brave and have other systems in place, you can let them book directly in the intake appointments, but the idea here is that we want to cut down on phone tag.

    And then once people are actually booked, you get to use those automated text and email reminders because they have been shown to reduce no-shows by up to 40%. And you might be saying, oh my gosh, this is overkill. I don’t need all of this. I never have no-shows anyway. Maybe that’s true. If that’s happening for you, then congratulations. You’re super fortunate.

    If you think of it just in terms of ROI, return on investment: if you get one no-show a year, for most of us, that’s going to cost $2,000 to $3,000, maybe $4,000 to $7,000 depending on how much you charge for an evaluation, and that is absolutely [00:10:00] going to pay for the software to manage no-shows many times over.

    Of course, your cancellation policy is going to come into play there too, but a lot of the EHRs will let you build in your cancellation policy, and if the person doesn’t cancel within a certain timeframe, then they get automatically charged.

    Let’s take a break to hear from a featured partner.

    Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on trustpilot.com and Google, which makes them the number one-rated Electronic Health Record system available for mental health folks today. They make billing, scheduling, note-taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week so [00:11:00] you can actually talk to a real person in a timely manner.

    If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free. No strings attached. Check it out and see why everyone is switching to TherapyNotes.

    Our friends at PAR have released the NEO Inventories Normative Update. The NEO Inventories measure the five major dimensions of personality and the most important facets that define each. Now with an updated normative sample that’s more representative of the current U.S. population and fewer components for easier purchasing. Visit parinc.com/neo.

    Let’s get back to the podcast.

    [00:12:00] So a lot of you, I’m guessing, are thinking, my clients like the personal touch of scheduling with me, and that is totally fair, but here’s the thing. By automating the scheduling, you free up the time to provide a high-touch experience where it matters most, which is when you’re in front of the client, in person with the client.

    I did not dig into the research on this. I probably should, but I think there is, anecdotally, a lot of evidence to suggest that the upcoming generations (I’m a late Gen Xer; after that come Millennials, Gen Z, and Gen Alpha) are not becoming any less online.

    People like convenience. Thank you, Amazon and Prime shipping. People like to know what they are getting [00:13:00] ASAP. They like to book very quickly. They like to be able to do everything online. They don’t like talking to people on the phone. So everything you can do to automate the scheduling and reminder process is going to be really helpful.

    The third system is billing and financial automation. This can also happen through your EHR; that’s why I said in the beginning that the EHR is really the backbone. But you want to make sure that you have some kind of billing process in place.

    I think for a lot of us, billing is probably the least favorite part of running a practice, but it doesn’t have to be a total nightmare. If you automate your billing, that can be a complete game changer. So here’s what that looks like.

    Essentially, many EHRs and other systems let you set things up to auto-generate invoices, superbills, payments, et cetera. And as long as you have a credit card on file for folks, you can automatically [00:14:00] charge the cards on file.

    I know there are some nuanced payment models out there where you’re charging a deposit up front, then half of the balance at one point, and the other half at a later point. With a lot of the EHRs, you can absolutely automate that. You just have to put in the work on the backend to make sure your billing codes are dialed in, your settings are correct, and so forth.

    There’s a lot of capability to automate the invoicing and payment system. A lot of EHRs will even send the invoices or email the invoices automatically if you don’t want to invest in an admin person to handle that. At the very least, having a system that will process credit card payments, store credit cards, allow you to go in at the end of each day and click a simple button to charge the card and generate a receipt, if you need to do that.

    [00:15:00] If you’re on the insurance side, there are EHRs out there that can largely automate the insurance claim submission process. You may still have to manually go in and click a few boxes to submit those claims, but it should be relatively easy.

    On the insurance side, I should definitely mention the fact that many EHRs out there will also scrub the claims for you and make sure that you have the correct information on the claims to increase the likelihood that you’re going to get paid. I see a lot of discussion in the Facebook group about claims getting denied, billing being a nightmare and so forth.

    If you want to go the extra mile and dial in your finances, you can get some kind of software to track income, expenses, profitability, so kind of a financial dashboard. There are many EHRs that offer reporting [00:16:00] options where they will show you your monthly income or per-session income, things like that, per CPT code. So you could do that route.

    You could use something like PracticeVital, which also has financial dashboard capabilities, lots of tools out there. That’s a next level step. If you can automate your billing and payments, that’s a great baseline step.

    Just to put some data behind this, a study from healthit.gov found that practices using automated billing had 35% fewer billing errors and reduced their claim rejections by up to 25%. That’s a lot, especially if you’re an insurance-based practice. The same idea holds in private pay practice, where the more you delay billing, forget to charge, or lack good processes, the more likely you are to lose that money.

    Before we wrap up, I want to address a few concerns that typically come up [00:17:00] when I talk about this stuff with people. One, people say automation or software is too expensive. This is just one of those things where the time savings is going to far outweigh the cost in the vast majority of cases. So unless you’re going way in over your head and paying for software that is exceptionally expensive, which we really don’t need, most of the automation tools are going to pay for themselves, if not within the week, then definitely within the month.

    So most of these tools are going to be $30, $40, $50 apiece for a monthly subscription. Maybe a little bit more: $75, maybe $90. But think about it; for most of us, that is less than an hour of our time.

    The research would say that healthcare practitioners are spending way more than an hour a day on administrative tasks. So if this is something that could save you, let’s call it 10 [00:18:00] to 20 hours a month, that’s at least $1,000 to $2,000 a month in time cost.
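    To make that back-of-the-envelope math concrete, here is the calculation spelled out (the hourly rate and hours saved are illustrative assumptions pulled from the ranges above, not data):

    ```python
    # Rough ROI sketch for a practice-automation subscription.
    monthly_software_cost = 50       # mid-range subscription (the $30-$90 range above)
    billable_hourly_rate = 100       # assumed rate; many practices bill more
    hours_saved_per_month = 10       # conservative end of the 10-20 hour estimate

    monthly_value_of_time = hours_saved_per_month * billable_hourly_rate
    net_monthly_benefit = monthly_value_of_time - monthly_software_cost
    print(net_monthly_benefit)  # the tool pays for itself many times over
    ```

    Even with these conservative numbers, the subscription recovers its cost in well under an hour of reclaimed time.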

    Another big objection that people have is, Hey, I don’t have time to set this up. I get that. Of all the concerns, I feel like this is very valid. A lot of folks are busy. If you’re using paper and pencil and a non-automated system, you’re probably extra busy because you’re spending extra time on these administrative tasks.

    But my counter to that is that you don’t have time not to. What this means is take 3 hours on a weekend, take 2 hours on a Sunday, take 1.5 hours on a Monday, and start with one system, and build from there. So if you get an EHR set up, you can easily do that in probably an hour, maybe two, and then you’re off to the races.

    This is a perfect thing, [00:19:00] honestly, that I like to work with people on a strategy session. We’ll jump on a call. We can walk through an EHR together, get it set up, and have some time carved out to implement one of these systems, troubleshoot a little bit, and make sure that you’re good to go.

    The other concern that people ask about a lot is data security. Two thoughts there. One, especially if you’re using paper and pencil, I would argue that your paper and pencil files are a lot less secure in a file cabinet, if they’re even in a locked file cabinet in a locked office and so forth, than one of these cloud-based HIPAA-compliant platforms.

    They’re all going to be HIPAA compliant, if it’s an EHR for mental health, and most of them are going to require what’s called two-factor authentication, which means that you can’t log in to the software without the password and an extra measure of security, like sending you a text or sending you an email to confirm your identity.

    So [00:20:00] of all the concerns, data security is probably the least concerning at this point. There is a chance that somebody’s going to hack into TherapyNotes and steal the client data. I think it’s really low, though.

    A quick recap. Three essential systems that you need to help with your automation. One, get an EHR. This is the umbrella system that will likely encompass the other two, which are scheduling and communication and billing and financial management.

    If you have a good EHR, it’s going to go a long way toward helping you with the communication piece, letting your clients book online, do their rescheduling online, and get automated reminders. All those things are going to be super helpful to reduce no-shows and remove friction from the process, as I put it in an episode two weeks ago. And then on the billing side, automating the invoicing and the charging, it’s a no-brainer. It’s going to save you a ton of money if you have that process set up.

    [00:21:00] You’ll notice I didn’t talk at all about AI. There’s a whole separate episode that will be coming about AI and report writing and how to leverage AI in your practice to get more efficient. So stay tuned for that.

    In the meantime, if you don’t have these systems in place, just pick one. Commit to yourself that you’re going to watch some EHR review videos and demos. One EHR this week. That’s all you got to do. Just pick a small step, start there, and see how it works for you.

    And then, if it doesn’t work, you can iterate. You can try another one and ask for some feedback in The Testing Psychologist Facebook community, or, like I said, book a strategy session. We’ll talk about EHRs. We can demo some and see what fits best for you.

    All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and in your life. Any [00:22:00] resources that we mentioned during the episode will be listed in the show notes so make sure to check those out.

    If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.

    And if you’re a practice owner or an aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, and we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call, and we will chat and figure out if a group could be a good fit for you. Thanks so much.

    [00:23:00] The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment. 

    Please note that no doctor-patient relationship is formed here, and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need [00:24:00] supervision on clinical matters, please find a supervisor with expertise that fits your needs.

    Click here to listen instead!

  • 502. Three Systems You Need in Your Practice for 2025


    Would you rather read the transcript? Click here.

    In this episode, we’re diving into one of the most crucial topics for private practice owners: Automation. If you’ve ever felt like you’re drowning in admin work, constantly behind on notes, or struggling to keep up with scheduling and billing—this episode is for you.

    Burnout is real. Studies show that clinicians spending more than an hour per day on administrative tasks outside of session time are significantly more likely to experience burnout. That’s why today, we’re talking about The 3 Essential Systems to Automate & Scale Your Practice (Before You Burn Out).

    If you’re ready to reclaim your time, increase efficiency, and still provide top-notch care to your clients—let’s get into it.

    Cool Things Mentioned

    Featured Resources

    I am honored to partner with two AMAZING companies to help improve your testing practice!

    PAR is a long-time supporter of The Testing Psychologist podcast, and I will continue to bring you featured items from their catalog! Listen to each episode for specific product recommendations and links. Learn more at www.parinc.com.

    TherapyNotes is the leading EHR system for mental health practitioners. I’ve used TherapyNotes for over 10 years, and it just keeps getting better. Use this link and the promo code “testing” to get two free months and try it for yourself! www.therapynotes.com/testing

    The Testing Psychologist podcast is approved for CEUs!

    I’ve partnered with At Health to offer CE credits for podcast episodes! Visit this link to access current and past episodes available for CE credit. You can use code “TTP10” for a discount on ALL the course credits you purchase from At Health!

    About Dr. Jeremy Sharp

    I’m a licensed psychologist and Clinical Director at the Colorado Center for Assessment & Counseling, a private practice that I founded in 2009 and have grown to over 20 clinicians. I earned my undergraduate degree in Experimental Psychology from the University of South Carolina before getting my Master’s and PhD in Counseling Psychology from Colorado State University. These days, I specialize in psychological and neuropsychological evaluation with kids and adolescents.

    As the host of the Testing Psychologist Podcast, I provide private practice consulting for psychologists and other mental health professionals who want to start or grow psychological testing services in their practices. I live in Fort Collins, Colorado with my wife (also a therapist) and two young kids.

    Ready to grow your testing services? Click below to schedule a complimentary 30-minute pre-consulting call!

    [x_button shape=”square” size=”large” float=”none” href=”https://app.acuityscheduling.com/schedule.php?owner=13476033&appointmentType=2843005″ target=”blank” info=”none” info_place=”top” info_trigger=”hover”]Schedule Your Call[/x_button]

  • 501 Transcript

    [00:00:00] Dr. Sharp: Hello, everyone. Welcome to The Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner, and private practice coach.

    Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others, and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com/therapynotes and enter the code “testing.”

    This podcast is brought to you by PAR.

    Use the Feifer diagnostic achievement tests to hone in on specific reading, writing, and math learning disabilities and figure out why academic issues are occurring. Learn more at parinc.com/feifer.

    Hey, everyone. Welcome to The Testing Psychologist [00:01:00] podcast. I’m here with a clinical episode today, a clinical topic anyway. We’re talking about overinterpreting our data, which is a problem that a lot of us might be aware of. Some of us certainly practice accordingly based on best practices, but a lot of us forget, and it’s easy to fall into the temptation to overinterpret data when we don’t necessarily have the statistical grounding to do so.

    So my guest today, Dr. Ulrich Mayr, is going to talk with me all about that. He is the Robert and Beverly Lewis Professor for Neuroscience at the University of Oregon, where he was department head for nearly 10 years and does NIH- and NSF-funded research on cognitive functioning and decision making across the adult lifespan. He has also been editor-in-chief of the scientific journal Psychology and Aging.

    While his research is on the basic science of cognitive functioning, his partner runs a psychological testing [00:02:00] practice, which often leads to fantastic conversations where the theory and the pragmatics of assessment clash in interesting and often productive ways.

    So this is a good example of that, today’s conversation, where we tried to marry the neuroscience, the mathematics, the statistics behind test development and measurement with clinical practice, and bring it home and give some suggestions for what we as clinicians can do given that, as you’ll hear, a lot of the measurement and scores from our batteries cannot be interpreted or generalized the way that we think they can.

    So we talk about a lot of different things. We do some basics on measurement and test development. We talk about what we can pull from the data reliably. We do get into some math and a little bit of statistics around reliability and so forth. So there’s a little something for [00:03:00] everyone.

    And then we conclude with a discussion of, given the situation that we have right now and the measures that we have, what can we do to adhere to best practices with interpretation and gathering the data that we want to gather? Fascinating conversation. So stay tuned and I hope that you can take some things away from this discussion with Dr. Ulrich Mayr.

    Ulrich, hello. Welcome to the podcast.

    Dr. Ulrich: Hello. I’m very happy to be here.

    Dr. Sharp: Thank you for being here and willing to dive into what seems like a relatively complicated but important topic for those of us who are doing testing. We haven’t visited this topic in a long time, so I’m grateful to [00:04:00] have you.

    Dr. Ulrich: I believe it’s important. Nothing I’m saying is completely new to most people, but it deserves repeating every once in a while.

    Dr. Sharp: I totally agree. I think it’s one of those things that we probably learn to some degree in graduate school, revisit periodically, but ultimately forget about, honestly, in the day-to-day work that we do because there’s a lot of cognitive dissonance if we were to fully confront it.

    Dr. Ulrich: There is, and the testing manuals invite you to go down these routes that are not always completely kosher.

    Dr. Sharp: Right. Oh, yes. We got a lot to get into, but I’ll start with the question that I always start with, which is, of all the things that you could spend your time and energy on in your life, why care about this topic so much?

    Dr. Ulrich: It’s a little bit of a hobby of mine. From my actual profession, I’m a cognitive neuroscientist. I do [00:05:00] care a lot. That’s my actual interest, the building blocks of the mind. I’m interested in how to measure executive control functions. That’s basically what I spent my career on.

    And so I come at this from a basic science perspective, from a measurement perspective, that is what I know, that is what I do for a living.

    I should also make that clear, except for a very short stint as a student intern in a psychiatric hospital in Munich, I never actually tested patients. So, why am I here? Well, it’s mostly thanks to my wife and life partner, who is a testing psychologist. She has her own practice and specializes in testing and diagnosing ADHD and related syndromes.

    We very frequently have these dinner conversations that I really enjoy, where she presents a little bit of what they call a case [00:06:00] round: she presents somebody who showed up in her practice and has this unique profile, and asks what to do with it. How should I interpret that?

    In these conversations, it often becomes clear that there is a bit of a tension between what appears to be regular practice among testing psychologists in how to interpret these profiles and test results, including what the handbook suggests you should do, and what, from a more basic science side where you recognize the method’s larger constraints, would seem allowable as safe and sound inferences.

    These discussions have been interesting for both of us. I’ve been able to keep her from going down some rabbit holes every once in a while, but it also got me to think more seriously about how to use what I know [00:07:00] most productively, and not just say, no, you cannot do this, but maybe this is how far you can go given what we know. And so combining this relatively restrictive, pessimistic view with getting, every once in a while, to a yes is what I’m trying to do.

    Dr. Sharp: People ask me sometimes, because my wife is also a therapist, people will say, oh my gosh, your conversations must be so fascinating. I’m like, well, they’re pretty boring. But I get the sense y’all have some of the same thing going on with these conversations that …

    Dr. Ulrich: We both started on the science side and then she at some point transitioned. So we have this common interest in the basic issues that …

    Dr. Sharp: Yes. That’s always nice.

    Dr. Ulrich: … people would think we are complete nerds.

    Dr. Sharp: That’s totally okay. You’re doing it [00:08:00] together, and that’s the important thing, being nerds together.

    Let’s start, this is super important. I want to lay a little bit of groundwork just for folks who maybe haven’t tapped into this in a while or have forgotten or whatever it may be. Maybe we start talking about some of the limits of our current testing methods to provide some context. We could start with this question of, what are some of the inherent limitations of the cognitive tests or neuropsychological tests that we’re using?

    Dr. Ulrich: It comes down to one fundamental issue. I’ll start with the top-line conclusion. Let’s take the Wechsler, since that’s the one that gets discussed a lot in our household, so I’m going to work with that.

    These test batteries provide two categorically different types of information. The first is [00:09:00] the general level, which is best captured in the Full Scale IQ. That is highly reliable, very meaningful, and can be used pretty much as advertised. So we have no beef with that. That’s the good news.

    Then, around that level, the battery offers the tap dancing of scores around that mean level: the strengths and weaknesses, the differences between the indices. I’ll just group those under the label of profile-based scores. Those are almost always misused and should be treated with the greatest caution. I can back that up empirically.

    I want to highlight none of what I’m saying here is new. There are other people that have researched about that. I particularly went [00:10:00] back and read a book by a professor from Baylor called Marley Watkins, who spent much of his career addressing these issues.

    One of the studies that he reports has 400 participants tested on the Wechsler. He then uses the handbook-based rationale to pick out, for each individual, the strengths and weaknesses and the critical differences in the index scores that you would reasonably interpret if this were a patient in your practice. Then the same group of participants is tested again, I think it was 2.5 years later.

    And so now you can ask, if a participant showed this particular weakness and this particular strength, will they show up again 2.5 years later? You might be interested in that. That is [00:11:00] something that you want to see because you’re not just making an inference about this individual for right now; you hope you capture something more general about that individual.

    The bad news here is that the reliability of these inferences was essentially zero. And so that’s something to grapple with. If you take this one result seriously, and there are others like it, you went through all this process of identifying those profile-based scores, and you generated, in the end, meaningless information.

    People may make recommendations on the basis of this information and placement decisions. And so that is something that needs to be taken seriously.

    [00:12:00] Dr. Sharp: I completely agree. Yes. I would guess that at this point, about 5 minutes into our interview, the entirety of the audience is completely freaked out and wondering what we are doing with our careers. I’m kidding.

    Dr. Ulrich: I do want to go back. There is still the mean level score, which is a completely reasonable piece of information. In my understanding, that is what most people start with. So then the question is why these profile scores are so problematic. I do think it’s important to understand that.

    Dr. Sharp: I think that’d be a good place to go. Just establishing, we know that the Full Scale IQ is largely stable. We can rely on that. We can make some inferences from that. But you’re [00:13:00] saying anything else within that, the index scores, the strengths and weaknesses, those are not going to be reliable over time from what we know.

    Dr. Ulrich: Yes. The paradoxical aspect in all of this is that it’s essentially exactly the strength of the overall score that harms the degree to which you can interpret the individual scores, the profile-based scores.

    Dr. Sharp: Ooh, say more about that. What do you mean when you say it’s a strength of the overall score that harms the others?

    Dr. Ulrich: There are different ways in which you can elaborate or develop this. I may have to try coming at it from different corners because, especially if you don’t have a whiteboard where I can draw some patterns, getting this across is not a trivial issue. So I hope that listeners stay with me here.

    [00:14:00] One thing about that is, we already established the overall general ability factor is something that is very strongly expressed in these test batteries. All test batteries have this in common.

    So this is a piece of information that is inherently independent of the profile-based wiggling around the mean. So you can take that information out, and you can do that. For example, if you have a bunch of profiles in front of you and you subtract the mean level out of each one of them, they collapse onto each other. The wiggling is still there, but you have taken out the mean level. So that demonstrates that you can treat them as completely independent information.

    The problem is that when you look at the reliability of a particular, let’s say, the Working Memory Index, [00:15:00] highly reliable in itself, but because so much of that Working Memory Index score is driven by general ability, most of the reliability in that working memory score is also dependent on that general ability score. Once you’ve taken that out, there’s much less reliability left for the individual, the specific score that you then might use to detect weaknesses and strengths.

    You can put numbers on that. The reliability of the Full Scale IQ is very high, I think something like 0.93 or 0.94. Once you go down to an index score and take out the reliability attributable to the general ability factor, and you can do that [00:16:00] mathematically, then the remaining reliability is somewhere between 0.2 and, if you’re lucky, up to 0.6. That’s not a good range to be in for drawing useful information about individual patients.

    We usually say that we want to have at least 0.8 reliability to draw inferences about individual people. And so that’s basically what you have to work with when you are dealing with profile scores. That’s the crux of it.

    It’s an annoying problem. I feel for the test designers because a test designer wants to have reliability. The best way to get reliability is to saturate all the different tests with general [00:17:00] ability. That’s how you drive up overall reliability, yet that same process gets in your way when you want to interpret the individual scores.

    To put it differently, assume the Wechsler battery was one where there was no relationship between the individual index scores; they were completely unrelated. Not that we necessarily want that, because then you don’t have a general ability score anymore, but that would be a case where nobody would ever have a problem with interpreting profiles and differences, because now all of the information is actually in the individual index scores, and you don’t have to worry about that problem anymore.

    So the more you have the individual index scores related to general ability, the less you have to work with in [00:18:00] terms of interpreting the wiggling of the profile.
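    The tradeoff described here can be checked against the classical psychometric formula for the reliability of a difference score. Here is a minimal sketch in Python, using illustrative numbers rather than values from any test manual:

```python
def difference_score_reliability(r_xx: float, r_yy: float, r_xy: float) -> float:
    """Classical reliability of the difference X - Y between two scores.

    r_xx, r_yy: reliabilities of the two index scores
    r_xy: correlation between the two indices (driven up by g-saturation)
    """
    return (0.5 * (r_xx + r_yy) - r_xy) / (1 - r_xy)

# Two highly reliable indices (0.90 each) that correlate 0.80
# because both are saturated with general ability:
print(round(difference_score_reliability(0.90, 0.90, 0.80), 2))  # 0.5

# The same indices correlating only 0.60 would leave more to work with:
print(round(difference_score_reliability(0.90, 0.90, 0.60), 2))  # 0.75
```

    Note the pattern: the more the two indices correlate, which is exactly what a strong general factor produces, the less reliable their difference, even though each index is highly reliable on its own.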

    Dr. Sharp: The way that you described it to me during our pre-interview chat really resonated. Maybe we can dive into that a little bit. You framed it like a pie chart where, like you said, about 50% of the pie is occupied by g or the general ability.

    Dr. Ulrich: That’s one of the other ways to get at it. Every index score and every subtest contains a bucket of information. You can think of a bucket; you can think of a pie chart. The pie chart is the whole amount of information. Once you remove the information that is shared with the general factor, that typically removes about 50% to [00:19:00] 80% of the overall pie chart. 0.8 might be a little exaggerated; say up to 0.6 or 0.7.

    To give you one concrete example, I just looked that up: the correlation between the reasoning index score and Full Scale IQ is 0.8. You square that to get the common variance, so it’s 0.64. So 64% of the pie chart that belongs to the reasoning score is taken up by the Full Scale IQ.

    Once you take that out, there’s only a small sliver of the pie chart left. Now, that is potentially what you can work with in terms of identifying individual strengths or weaknesses. However, not all of that is pure, usable information. At least half of it is likely to be measurement error.

    And you don’t know which part of that [00:20:00] it is. You’re left with a relatively small, unknown quantity of meaningful information to work with to establish profile-based scores or profile-based information. Does that help?

    Dr. Sharp: Yeah, it does. I’m a super concrete person, and I think the visual does help. And in the absence of a whiteboard, I’m just going to belabor this a little bit to try to cement it for folks.

    So thinking about this pie chart, like we said, let’s just call it 60% of the pie chart is eaten up by g. We take that away, and of the remaining 40%, you said half is measurement error or noise, give or take. So that leaves us with, give or take, 20-ish percent, a little more or less, that’s the ability we’re thinking we’re measuring.

    Dr. Ulrich: Yes. And you don’t know which part of that pie chart is the [00:21:00] meaningful part and which is not. There’s no way to determine that anymore because you’ve already taken out the bulk of the meaningful information by taking out g, which we know we can measure with high reliability. So it leaves you with not much to work with.
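    The pie-chart arithmetic from this exchange can be written out in a few lines. A quick sketch in Python, using the illustrative numbers from the conversation rather than figures from any manual:

```python
r_with_g = 0.80                      # index score's correlation with Full Scale IQ
shared_with_g = r_with_g ** 2        # 0.64: variance the index shares with g
remaining = 1 - shared_with_g        # 0.36: the leftover "sliver"

# Per the conversation, at least half of that sliver is measurement error:
error = remaining / 2
specific_signal = remaining - error  # what profile interpretation must rest on

print(f"shared with g:       {shared_with_g:.0%}")   # 64%
print(f"measurement error:   {error:.0%}")           # 18%
print(f"index-specific part: {specific_signal:.0%}") # 18%
```

    With these inputs, roughly two-thirds of the pie belongs to g and only about 18% is index-specific signal, in line with the 20-ish percent figure in the conversation.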

    It sometimes gets hard to understand why this is a problem, particularly given that an optimistic reliability for this remaining piece that we would like to work with is maybe around 0.5. That is not nothing. There is meaningful information there, and it can be used, for example, in ways that are completely legitimate.

    Let’s say you do a scientific study where you have a group of people, let’s say with ADHD, and a group of controls, and now you want to compare profiles. The 0.5 reliability of each of the [00:22:00] individual index scores is sufficient to detect differences between groups.

    So potentially meaningful information about cognitive functioning in this type of setting where you compare groups with each other can be derived from profiles. So there’s enough information there to do that. It’s unfortunately just not enough in most cases, except for some exceptions that we might talk about later, to draw inferences about individuals.

    That’s the problem: that jump from a group comparison to the case in clinical practice, where you have just one patient in front of you at a time and need to draw inferences about that patient, not about a group of people. That’s where you start running into problems.

    Dr. Sharp: I think that’s where people probably get tripped up. It’s one of those things that’s easy to cognitively know and understand but then hard to implement when we’re sitting [00:23:00] in front of patients and have that pressure to come up with something meaningful in the evaluation.

    And so that, to me, leads to two areas of discussion. Maybe the first is just to dive into that a little bit more, if you can, and explain why those group-level differences don’t translate to individuals, just to make that super clear. And then it leaves the question of, are we overinterpreting with these individuals?

    Dr. Ulrich: It’s just a question of measurement error. You can measure a group of individuals because you aggregate across individuals, and the measurement error shrinks. Whereas for one individual, it remains relatively large. And so with large measurement errors, you need very high [00:24:00] reliability to be able to draw inferences.

    So that’s the crux of it. If something is imprecise, you need higher reliability; otherwise, you just can’t draw inferences. Now, the second question is, are we overinterpreting? Probably we often are if we use these profile-based scores.

    I would like to add that there may be a way to get to a better place by being very highly disciplined and understanding which of the potential scores you might be interested in interpreting can be assessed reliably. [00:25:00] There’s a second problem that comes in when you try to interpret profiles. That second problem is that you’re looking at all of these wiggling ups and downs and saying, oh, here’s something interesting. This looks high, or this looks low.

    And so essentially what you’re doing is looking at all possible combinations of ups and downs at the same time. And that ignores the fact that when you think about the confidence interval that is generated from a certain reliability, it is always meant for a single comparison.

    Essentially, a confidence interval means that I am willing to accept a 5% error [00:26:00] of accepting something as a true difference that actually is not. That only works once. If you do that twice, the confidence interval has to increase so that you protect yourself from … Every time you look, you add the potential of making this measurement error. So you have to adjust your confidence interval level accordingly.

    If you want to do that for a whole battery of 10 different tests and all possible configurations of differences, your confidence interval would have to increase so much that you basically leave no opportunity anymore for finding any difference that is reliable and robust.

    What that means is that the way to look at profiles is in a very highly disciplined and priority-based manner. In a particular case, you might already have some [00:27:00] inkling, because of the history and the background information, that the patient may have a particular weakness in X. Let’s say, just making that up, processing speed. Maybe you believe, based on the literature, that one diagnostic sign in people with ADHD may be slow processing speed. I’ve heard it said, but I don’t know whether it is true.

    So you say, I want to confirm that this patient potentially has this diagnostic sign associated with ADHD, and that is low processing speed. So then I constrain myself to a single inspection of the profile and say, I’m going to accept a potential drop in processing speed, if it’s big enough, as diagnostically meaningful information. [00:28:00] But I’m only going to do that once. I’m not going to sift through the whole profile and look for differences, because there’s so much opportunity for differences popping up randomly if you give them that much opportunity to show themselves.

    Dr. Sharp: Yes.

    Dr. Ulrich: So that’s one way to use the information about statistical limitations and how much we can learn potentially from such a profile and getting to some minimum allowable inferences from this type of information.

    Dr. Sharp: Sure. This is important to talk about, for sure, and to make it super clear that this [00:29:00] is backwards reasoning, if you want to call it that. You come in with a hypothesis, and then you look at the data to test that hypothesis, versus just, hey, let’s see what’s showing up.

    Dr. Ulrich: That’s the key. You have one hypothesis, and that’s what you test. You don’t let yourself get overwhelmed, bottom-up, by differences popping up in the profile.

    Dr. Sharp: Right. This might be a good time to mention the likelihood that there are going to be some outliers in a profile. There are going to be some pretty significant differences in index scores, subtest scores, or whatever it may be. Can you speak to that at all? Just the likelihood.

    Let’s take a break to hear from a featured partner.

    Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5-star rating on trustpilot.com and Google, which makes them the number one-rated Electronic Health Record system [00:30:00] available for mental health folks today. They make billing, scheduling, note-taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week, so you can actually talk to a real person in a timely manner.

    If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months absolutely free. You can go to thetestingpsychologist.com/therapynotes and enter the code “testing”. Again, totally free. No strings attached. Check it out and see why everyone is switching to TherapyNotes.

    [00:31:00] The Feifer diagnostic achievement tests are comprehensive tools that help you help struggling students. Use the FAR, FAM, and FAW to home in on specific reading, writing, and math learning disabilities and figure out why academic issues are occurring. Instant online scoring is available via PARiConnect, and in-person e-stimulus books allow for more convenient and hygienic administration via tablet. Learn more at parinc.com/feifer.

    All right, let’s get back to the podcast.

    Dr. Ulrich: I can't give you exact numbers, but it's simply the case that if you look at all the possible strengths and weaknesses, the different scores, the scatter, these are all opportunities for things that look interesting to pop up.

    Dr. Sharp: That’s a good way to put it.

    Dr. Ulrich: So [00:32:00] the confidence intervals that the handbook gives you are geared toward a 5% error probability. So if you have 10 different opportunities for something like that to pop up, 10 × 5% = 50%. So now you have roughly a 50% chance that something will show up just by chance.

    I’m sure there are about 10 different opportunities for differences in everything the handbook lists about what you could potentially do with these types of profiles. So if you stay with just looking at one comparison, then you accept the 5% threshold and don’t move that around. That’s then what you work with, and that’s probably more acceptable.
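    The 10 × 5% ≈ 50% figure is a handy upper bound; assuming the comparisons are independent, the exact familywise error rate is a bit lower but tells the same story. A quick illustrative sketch:

```python
# Familywise error rate: the probability that at least one of k
# independent comparisons crosses the alpha = .05 threshold by chance.
def familywise_error_rate(k, alpha=0.05):
    """P(at least one false positive) across k independent comparisons."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:>2} looks at the profile: "
          f"{familywise_error_rate(k):.1%} chance of a spurious finding")
```

With 10 looks at a profile, the chance of at least one spurious "significant" difference is about 40%, which is why constraining yourself to a single planned comparison matters.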

    Dr. Sharp: That makes sense. This comes up in supervision a lot. We have interns and postdocs. I think a lot of us, even as licensed clinicians, get [00:33:00] tempted by these major differences. Like, oh my gosh, how could this subtest be so different from this other subtest within the same index, for example. And that's pretty typical.

    Dr. Ulrich: It is. And even for that, if you really believe this is a diagnostically important question, this one difference that pops up that you didn’t expect, there is a way to deal with that. The way would be to do further testing.

    Let's say a verbal comprehension deficit pops up. You add additional tests that get at verbal comprehension and see whether that hypothesis is confirmed. So that would be an adaptive approach: use the [00:34:00] information you get, but don't just run with it; design further tests to confirm or disconfirm this potential hypothesis.

    Dr. Sharp: I want to dig into that a little bit more in a bit. I think that's the optimism here, or at least one of the solutions, certainly.

    I did want to go back to something that you talked about in the beginning and the difficulty of comparing results over time. I think a lot of us do that. A lot of us test kids multiple times, maybe 2 years apart, 3 years apart, 4 years apart, or we get an evaluation from a previous practitioner from maybe 6 months ago, and we get different results within that. And then we get stuck with this job of, oh, how do we [00:35:00] explain that? We’re trying to reverse engineer what those differences are about. Do you think that’s a worthwhile pursuit? And if so, how do we do it? If not, how do we ignore it?

    Dr. Ulrich: My own background, among other things, is in lifespan aging research. This problem comes up all the time. From the diagnostic perspective, for example, diagnosing something like the beginning of Alzheimer's should ideally depend on seeing trends, where you have to make a decision: is this downhill trend more than what you would expect by chance when interpreting differences across test occasions?

    This is an extremely hard problem for which there’s, in the current testing literature, no good solution, but in the essence, it’s exactly the same problem because [00:36:00] now the profile that we’re looking at is not the profile across different tests at one test occasion, it’s a profile across the same tests in different points in time.

    And so, why is this the same problem? Because, as we want them to be, these tests are highly reliable. So the general factor carries the correlation from one measurement occasion to the next. Once you take out that common factor, the remaining information that could capture the change over time is very unreliable and very difficult to interpret.

    So that's why I generally would be very cautious about interpreting changes at all. And if you go beyond interpreting a change in the [00:37:00] overall score, like a Full Scale IQ, to a change in the profile, given what we already talked about at the beginning, namely that profiles just don't replicate, I would be very, very careful with doing that, because you're adding difference scores on top of difference scores. Each score in the profile is a difference score, and then those might change over time. So this is an explosion of difference-score uncertainty.
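    The reliability collapse described here can be made concrete with the classical test theory formula for the reliability of a difference score; the numbers below are illustrative assumptions, not values from any manual:

```python
def difference_score_reliability(r_xx, r_yy, r_xy):
    """Classical reliability of a difference score D = X - Y,
    assuming X and Y have equal variances."""
    return (r_xx + r_yy - 2 * r_xy) / (2 * (1 - r_xy))

# Two highly reliable scores that also correlate strongly with each
# other (because both are saturated with the same general factor):
print(difference_score_reliability(0.90, 0.90, 0.85))  # ≈ 0.33
```

Two scores that are each 0.90 reliable but correlate 0.85 yield a difference score with a reliability of only about 0.33, far below what anyone would accept for an individual decision.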

    Dr. Sharp: Right. Are there any circumstances you can think of where it is advisable or doable to interpret change over time in our evaluations?

    Dr. Ulrich: To some degree, it's always a matter of degree. If I see something that deviates from an expected pattern, then at that point, I would add additional testing to confirm whether this is a [00:38:00] one-shot occurrence that then reverts back to the mean, or rather a true effect.

    If you think in terms of long-term real-world potential solutions, the whole problem could be fixed in principle if we had relatively frequent assessments of individuals over time. Let's come up with an ideal testing world where everybody gets a short but highly reliable cognitive assessment once a year. So now you have an individual's timeline, and each individual is captured with their specific timeline. You don't have to compare to norms anymore. You just compare to that individual. And if that individual, at measurement point 35, all of a sudden shows a drop, that is [00:39:00] potentially really meaningful, because you compare it to the standard error that this individual has generated for himself through his testing history.

    Of course, I probably would do additional testing and see whether there's something real, but that is a real signal that I would take seriously because it's based on information generated within that individual. This is a somewhat separate problem, but people who deal with diagnosing deficits in older age often see a patient being tested for the first time in their practice. You might then see that a university professor like me has an above-average score, but potentially that individual was way above average in his early years. And so you would not interpret that individual [00:40:00] as necessarily having a deficit, even though relative to his own standard, he actually had a drop.
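    The idea of judging a drop against a person's own testing history can be sketched very simply; all scores below are hypothetical:

```python
import statistics

def drop_z_score(history, new_score):
    """How unusual is new_score relative to this person's own history?"""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)  # the person's own score variability
    return (new_score - mean) / sd

# Hypothetical annual scores for one individual:
history = [112, 109, 114, 111, 110, 113, 112, 110]
print(drop_z_score(history, 98))  # a large negative z flags a meaningful drop
```

Note that a score of 98 would look unremarkable against population norms, yet against this individual's own tight baseline it is a dramatic outlier, which is exactly the point Dr. Mayr is making.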

    And so having an individual testing history for individual people, that gets around that problem. This is, of course, in a dreamland right now, but it’s doable in principle.

    Dr. Sharp: Sure. I did an interview with some folks from a company called Boston Cognitive Assessment maybe six months ago or something. I don’t know if you know them or the test. It’s very brief. It’s a 10-minute assessment that you can repeat as frequently as you would like.

    They're not the only ones in that space by any means. There are some options coming on the market that could tackle this whole problem.

    Dr. Ulrich: I would [00:41:00] very much recommend doing something like that, especially in a situation where there’s some likelihood that you see a patient repeatedly.

    Ideally, it would be something that can basically happen in a family practice while people are in the waiting room, just to get that type of information. That would be much more useful than the large norming studies we are basing our information on right now.

    Dr. Sharp: I was just going to say that. This opens the whole can of worms of mental health and keeping it on the same level as physical health. But yeah, if we were doing an annual or semi-annual cognitive assessment with our primary care doctor.

    Dr. Ulrich: Spending time and money on so many things, why not on that?

    Dr. Sharp: That’s true. I’m with you. Before we transition, we’ve taken little dips into strategies that can help with our interpretation, but [00:42:00] just on a broad level or big picture, given the state of things now and how most people are doing assessment, what is the most sound way to interpret our test data at this point?

    Dr. Ulrich: It comes down to two things. The first one is: stay as much as you can with the overall level score, the Full Scale IQ or whatever that is in the battery that you are using, and try to extract as much meaningful information from that score relative to the other things you know about that patient.

    I know that in the ADHD diagnostic practice, the questionnaire-based scores are very informative, very important, highly valid, and highly reliable. And so comparing that to the Full Scale IQ can be very meaningful.

    [00:43:00] And particularly, and here I'm talking a little bit beyond what I should actually know, parroting what I learned from my wife, in those cases something like the Full Scale IQ can really be very informative about people's potential for compensating for the deficits they have. But I would stay almost completely away from the zigzags in the profiles, with the exception that I mentioned before of trying to be well-informed about the reliability of the indices that you're really interested in. I could talk a little bit more about that, but that gets very mushy.

    There are ways to get at that information. Unfortunately, when I checked the Wechsler handbook yesterday to see whether I could find it, I was not able to. [00:44:00] In the handbook, they want you to know the overall reliability of the Full Scale IQ and of the indices, and that's all great, but it doesn't help you with that particular problem. You need a specific reliability score called omega hierarchical.

    That reliability score tells you what is the specific reliability of, let’s say, verbal comprehension after extracting out the Full Scale IQ reliability. When you know that, then you can construct a confidence interval of the minimum sized difference between, let’s say, the Full Scale IQ, the general level, and the verbal comprehension that you need to accept.

    Let's say that is 15 points, which is somewhat realistic if you assume a reliability of [00:45:00] 0.5: a 15-point difference. But that's only for the first time you look. So that gets back to: don't use that criterion of 15 for every single comparison that you can make. Use it once, and then stop. So that would be, from my world, still allowable, maybe already somewhat shaky, but if you apply it carefully, I think you're still on somewhat safe ground. But I wouldn't go beyond that.
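    One standard way to turn reliabilities into a minimum interpretable difference is the critical-difference formula built from standard errors of measurement. The reliabilities below are illustrative assumptions (a 0.97 composite versus an index whose specific reliability, omega hierarchical, is only 0.5), not figures from any handbook, and the exact threshold depends on the confidence level chosen:

```python
import math

def critical_difference(rel_a, rel_b, sd=15.0, z=1.96):
    """Smallest score difference exceeding chance at ~95% confidence,
    given the reliabilities of the two scores being compared."""
    sem_a = sd * math.sqrt(1 - rel_a)  # standard error of measurement, score A
    sem_b = sd * math.sqrt(1 - rel_b)  # standard error of measurement, score B
    return z * math.sqrt(sem_a ** 2 + sem_b ** 2)

# Illustrative: reliable composite vs. index with specific reliability 0.5
print(round(critical_difference(0.97, 0.50), 1))  # → 21.4
```

With a specific reliability of only 0.5, even a difference of roughly 20 standard-score points is barely distinguishable from measurement noise on a single planned comparison, which underscores the "use it once, then stop" advice.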

    Dr. Sharp: You mentioned the behavioral questionnaires and behavior checklist, just briefly. I know we’ve been talking primarily about cognitive measures, and we’ve used the Wechsler measures as the example. But do you have a sense of how this all applies to the behavioral questionnaires that we administer?

    Dr. Ulrich: It's important to understand that what I just discussed is in no way [00:46:00] specific to cognitive tests; it's a general methodological issue that has nothing to do with whether the measure is cognitive or questionnaire-based. If you want to go in and interpret specific facets of your questionnaire (I know very little about these, so I'm talking very abstractly now), you would have to be very mindful of how reliable those facets are relative to the general factor that I'm sure is also expressed in these questionnaire-based measures.

    So, the problem stays the same. You would have to look very carefully at the relationship between what the equivalent of the Full Scale IQ in something like the BRIEF might be and the individual scores. So the problem doesn’t go away. We were using the BRIEF as an example of an additional piece of information outside the cognitive [00:47:00] assessment that can be brought to bear.

    Dr. Sharp: That’s fair. Thank you. And then one other component I just wanted to touch on again, to make sure for anybody who missed it, what is the term you used for the statistic or the measurement that we’re looking for that would capture the reliability specific to an index?

    Dr. Ulrich: It’s called Omega hierarchical. I don’t know whether you have something like show notes.

    Dr. Sharp: Yes.

    Dr. Ulrich: I can send some references. There’s one paper by the person I mentioned before, Marley Watkins, who presents that type of information for one version of the Wechsler.

    Dr. Sharp: Great. That sounds good.

    Dr. Ulrich: There's also a software package that you can use to extract that information from published information about the test.

    Dr. Sharp: Fantastic. Great.

    Dr. Ulrich: Somebody you should have on your show sometime.

    Dr. Sharp: Yeah. I’m bookmarking that, certainly. I’m going to look him up. I think we’ve kept people in [00:48:00] suspense for long enough. I would love to dive into how we can do better, essentially. Given everything we’ve just talked about, you’ve mentioned additional measures validating the results. Let’s dive into that for a bit.

    Dr. Ulrich: This is now a lot more speculative. It's also, in some ways, political, and about the market, because the testing industry is a big market. The technology that is being used is pretty much the same as 50 years ago. We are basically riding a bicycle even though we could be driving a Porsche.

    It seems like there has been very little pressure from the psychological associations and so forth on the testing industry to do better. I don’t know why that is; that’s not my field, but there is work to [00:49:00] be done there to put more pressure on doing things better.

    And that can go in different directions. Maybe the most difficult one: as I said before, the main problem is that our cognitive tests are saturated with […] It is possible that there isn't anything beyond g, and it's very difficult to go beyond it.

    That's the field that I'm in, in my basic science. And it is true, it's really hard to find specific, meaningful individual-differences variance beyond the g factor. So that's hard work, but it's worth trying to get to measurement instruments that measure individual aspects reliably while reducing their relationship to the g factor.

    So that would be one way to design instruments that give you meaningful [00:50:00] profiles. And so, ideally, then you would have a much shorter battery to get at the general g factor. And then you have a bunch of satellite measures that assess the things that are still interesting but not captured already by g. So you broaden your perspective that way.

    Here we get into some methodological details that I probably don't want to bore anybody with, but there are now statistical methods that could be used to select tests for a given individual much more meaningfully and adaptively: you test somebody, and the information that you gain from that individual is immediately used to suggest the most meaningful next test that you should be doing to address or test [00:51:00] certain hypotheses.

    That is something for which the technology absolutely exists. I'd use Bayesian modeling. I don't know how much people know about Bayesian methods, but Bayesian modeling means essentially that you use the information you already have to guide the search for the next most relevant piece of information. That can be done adaptively.

    It's a little bit like the idea that I suggested before. Don't go, ideally, with a full 12-subtest battery. Pick a few tests that really get at general cognitive ability, and then test specific hypotheses of what might be going on, and oversample those tests where you think that something interesting might be happening.
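    The adaptive, Bayesian test-selection idea can be sketched as picking whichever next test is expected to shrink our uncertainty the most. Everything here (the two hypotheses, the candidate tests, the outcome probabilities) is hypothetical:

```python
import math

# Candidate tests with assumed probabilities of a "low score" under each
# of two competing hypotheses about the examinee.
tests = {
    "verbal_probe": {"H_deficit": 0.80, "H_typical": 0.20},  # diagnostic
    "speed_probe":  {"H_deficit": 0.55, "H_typical": 0.45},  # weakly diagnostic
}

def entropy(p):
    """Binary entropy of P(H_deficit) = p, in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_posterior_entropy(prior, p_low_d, p_low_t):
    """Expected remaining uncertainty after observing the test outcome."""
    p_low = prior * p_low_d + (1 - prior) * p_low_t
    post_if_low = prior * p_low_d / p_low            # Bayes update, low score
    post_if_high = prior * (1 - p_low_d) / (1 - p_low)  # Bayes update, high score
    return p_low * entropy(post_if_low) + (1 - p_low) * entropy(post_if_high)

prior = 0.5  # start undecided between the two hypotheses
best = min(tests, key=lambda t: expected_posterior_entropy(
    prior, tests[t]["H_deficit"], tests[t]["H_typical"]))
print(best)  # → verbal_probe
```

Each observed result would then update the prior, and the selection step repeats, so the battery grows only in the directions the data make interesting.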

    That would [00:52:00] be a tailored, adaptive way to do that, but of course, our instruments right now are not geared towards doing that. So this is something you can’t ask a current practitioner just to go around and do that. You would have to have different testing technology to do that.

    Dr. Sharp: I think that’s where things get frustrating. I don’t know a lot, honestly, about the testing industry, but what seems to be on the surface is the fact that a lot of tests are locked behind different publishing houses, which makes this difficult. So it’s hard to sample from each of these different measures and put together a truly comprehensive or meaningful battery, because you have to switch between different platforms and the data isn’t housed in the same place, and then you’re doing your [00:53:00] own calculations on the results, and that seems hard. That seems to be a component of this.

    Dr. Ulrich: God, I was losing my thread here.

    Dr. Sharp: There’s a lot of threads.

    Dr. Ulrich: I think some hope here comes from the big data technology side, because this is a big data problem. In order to get these types of Bayesian estimates, you need lots of data. You don't need one sample of 1,000 participants tested for norming studies; you just need a lot of people who do different types of tests, where you can collect information and then tune these procedures based on those data. It's a problem that is [00:54:00] solved in principle; it just needs somebody who wants to make the R&D investment in this.

    Dr. Sharp: Yeah. So we’re talking about computerized adaptive testing here. And just to make it super concrete, the theory is, you give someone a relatively brief set of subtests or something, and then if they do poorly on a verbal subtest, then it triggers, hey, we’re going to administer these additional 10 to 20 items looking at verbal comprehension to go deeper into whatever.

    Dr. Ulrich: Exactly. Yes.

    Dr. Sharp: That’s essentially what they use for the GRE and the SAT.

    Dr. Ulrich: Yes. They have to come up with new versions every year, so it is doable. Somebody has to lobby Pearson, or whatever they're called.

    Dr. Sharp: Someone has to do it. There’s a lot to consider [00:55:00] there, and there are some downsides. Capitalism is important. Making money and selling different tests is important. Sometimes, that comes up against best practices. Goodness.

    Are there other strategies that we can use? Anything else that can be helpful with what we’ve got right now in terms of interpreting and using our data in a meaningful way?

    Dr. Ulrich: The few things that I’ve said are the ones that I feel comfortable with right now. I think that, more generally, as scientists and practitioners, we have to be aware of the confirmation bias that haunts everything we do and think about. Psychological practice is not free of that confirmation bias.

    The testing manuals that present you ready-to-go information about strengths and weaknesses and so forth [00:56:00] are designed to work with that confirmation bias and give it something to work with. I think that’s, if nothing else, a take-home message to get out of this, don’t fall for that.

    Dr. Sharp: I like that. We’ve talked about bias on the podcast a few times in the past. I’m currently trying to schedule another guest to talk about bias and diagnostic impressions. So it’s important. I’m glad you highlighted that.

    Well, it’s been a great discussion. I know in some ways we could see this as a little bit of a bleak discussion and there are some ways that we can combat the problems here. I appreciate that you highlighted those.

    It’s important to keep it front and center. It’s easy, like I said at the beginning, to fall into the temptation to overinterpret our data and [00:57:00] succumb to the pressure of making meaning out of things to “help” our clients.

    Dr. Ulrich: Pleasure.

    Dr. Sharp: Thanks for being here.

    Dr. Ulrich: Thank you. Bye-bye.

    Dr. Sharp: All right, y’all. Thank you so much for tuning into this episode. Always grateful to have you here. I hope that you take away some information that you can implement in your practice and your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out.

    If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts.

    And if you’re a practice owner or an aspiring practice owner, I’d invite you to check out The Testing Psychologist mastermind groups. I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, we have resources. These groups are amazing. [00:58:00] We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com/consulting. You can sign up for a pre-group phone call, and we will chat and figure out if a group could be a good fit for you. Thanks so much.

    The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional, psychological, psychiatric, or medical advice, diagnosis, or treatment.

    Please note that no doctor-patient relationship is formed here [00:59:00], and similarly, no supervisory or consultative relationship is formed between the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with expertise that fits your needs.


  • 501. How to Be More Confident in Our Data w/ Dr. Ulrich Mayr



    In this episode, I talk with Dr. Ulrich Mayr, a cognitive neuroscientist, about the fundamental limitations of current cognitive and neuropsychological testing. We explore the reliability of test scores, the dangers of overinterpreting profiles, and strategies for improving assessment interpretation. Dr. Mayr shares insights from cognitive science, discusses the challenges of measuring change over time, and proposes ideas for advancing assessment methods.


    Featured Resources

    I am honored to partner with two AMAZING companies to help improve your testing practice!

    PAR is a long-time supporter of The Testing Psychologist podcast, and I will continue to bring you featured items from their catalog! Listen to each episode for specific product recommendations and links. Learn more at www.parinc.com.

    TherapyNotes is the leading EHR system for mental health practitioners. I’ve used TherapyNotes for over 10 years, and it just keeps getting better. Use this link and the promo code “testing” to get two free months and try it for yourself! www.therapynotes.com/testing

    The Testing Psychologist podcast is approved for CEUs!

    I’ve partnered with At Health to offer CE credits for podcast episodes! Visit this link to access current and past episodes available for CE credit. You can use code “TTP10” for a discount on ALL the course credits you purchase from At Health!

    About Dr. Ulrich Mayr

    Ulrich Mayr is the Robert and Beverly Lewis Professor for Neuroscience at the University of Oregon, where he was department head for nearly 10 years and does NIH- and NSF-funded research on cognitive functioning and decision making across the adult lifespan. He has also been editor-in-chief of the scientific journal Psychology and Aging. While his research is on the basic science of cognitive functioning, his partner runs a psychological testing practice, which often leads to dinner conversations where the theory and the pragmatics of assessment clash in interesting and often productive ways.


    Get in Touch

    About Dr. Jeremy Sharp

    I’m a licensed psychologist and Clinical Director at the Colorado Center for Assessment & Counseling, a private practice that I founded in 2009 and have grown to over 20 clinicians. I earned my undergraduate degree in Experimental Psychology from the University of South Carolina before getting my Master’s and PhD in Counseling Psychology from Colorado State University. These days, I specialize in psychological and neuropsychological evaluation with kids and adolescents.

    As the host of the Testing Psychologist Podcast, I provide private practice consulting for psychologists and other mental health professionals who want to start or grow psychological testing services in their practices. I live in Fort Collins, Colorado with my wife (also a therapist) and two young kids.

    Ready to grow your testing services? Click below to schedule a complimentary 30-minute pre-consulting call!

    Schedule Your Call: https://app.acuityscheduling.com/schedule.php?owner=13476033&appointmentType=2843005