Dr. Jeremy Sharp (00:00.568)
Hello everyone and welcome to the Testing Psychologist podcast. I’m your host, Dr. Jeremy Sharp, licensed psychologist, group practice owner and private practice coach.
Many of y’all know that I have been using TherapyNotes as our practice EHR for over 10 years now. I’ve looked at others and I just keep coming back to TherapyNotes because they do it all. If you’re interested in an EHR for your practice, you can get two free months of TherapyNotes by going to thetestingpsychologist.com slash therapynotes and entering the code testing. This podcast is also brought to you by PAR. Use the Feifer Diagnostic Achievement Tests to home in on specific reading,
writing, and math learning disabilities and figure out why academic issues are occurring. Learn more at parinc.com slash feifer. That’s F-E-I-F-E-R. Hey folks, welcome back to The Testing Psychologist. Hey, we are here. We’re talking about AI. Today is part two in the AI series. It’s a three-part series. If you didn’t catch the first episode yesterday, I think
it’d be great to go back and check it out. Those were practical tools and tips. And the episode before that was kind of a novelty episode where I used AI to generate the whole thing, including my voice. So that was kind of fun. So if you haven’t checked those out, go check them out. But today is part two. We are talking about ethics and supervision and protecting clinical judgment and expertise in the AI era. These are a few things that we’re going to chat about today. We’re going to talk about: should students use AI? How do we balance training and technology?
Disclosing AI use in reports. This question comes up all the time. Informed consent and AI. Talking about the risk of degrading our skills: what it’s like to use AI and whether that will have any impact on our cognitive ability to do the work in the future. Over-reliance on AI. Making sure AI doesn’t perpetuate cultural bias or inequity. So there are
Dr. Jeremy Sharp (02:09.89)
a lot of topics within this. And of course, like usual, I’m not gonna go super deep and solve any of these problems, but we are gonna talk about each of these things and hopefully give you some ways to think about them and put some strategies into action as you continue to develop a relationship with AI or not and utilize it or not in your practice. So let’s get to this discussion on ethics and supervision and protecting our clinical skill.
Dr. Jeremy Sharp (02:52.974)
All right, everybody, here we are, talking about ethics and judgment and supervision and those sorts of things as they relate to AI. These are questions that come up a lot and are just getting more and more relevant as AI becomes a bigger part of our field. So like I said, if you missed part one, we talked about practical ways that AI is being used and ways to improve workflows and enhance clarity and those kinds of things. So we’re gonna zoom out a little bit here today and talk about how AI is affecting our ethics and our students or trainees and just our profession at
large. So again, we’re pulling a lot of this material and the foundational questions from the panel that I did back in April 2025 on AI and testing with AAPN. I should confirm this, I didn’t, so I’m just winging it here, but my guess is that you can go back and retroactively access it. You would have to pay for it, of course, as a conference registration. But
Go check that out if you are interested in a three-hour discussion with some great colleagues on AI. Okay, so some questions we’re gonna be looking at. Should we disclose AI use in reports? What are some ways to approach informed consent? How do we combat like over-reliance on AI? And then of course some training questions as well. Do we let our trainees use AI? If so, how do we do that? If not, how do we talk with them? So I’m gonna structure this.
Section one, I’m going to call, let’s say, cognitive load and clinical judgment. And the foundational question here, I would say, is: does reducing our cognitive load always enhance critical thinking, or could it inadvertently erode clinicians’ analytical and clinical skill? So of course, the underlying issue here is that, you know, as we rely on AI more and more, it’s going to reduce our cognitive load.
I would argue that our cognitive load at this point for most of us is way too high to do good clinical work. And that’s why we end up rushing through reports. And by the time you get to the place where you are writing the conceptualization and recommendations, you’re already like totally burned out from, you know, copying scores into tables and regurgitating a background and history that everybody already knows anyway.
Dr. Jeremy Sharp (05:10.954)
I think there is a lot of room to reduce our cognitive load and that’s one of the best parts of AI. And the flip side of that is does reducing the cognitive load help us or hurt us? Okay, so some evidence and perspective here. We know reducing cognitive load can enhance our critical thinking, particularly for quote unquote novices in our field because it frees cognitive resources for more complex reasoning. All right.
So for routine or repetitive tasks like entering demographic information or summarizing what I would call like pretty simple historical data, automation clearly improves clinician focus on higher-order tasks. That said, research on automation complacency, we’ll call it, from fields like aviation and healthcare seems to indicate a big risk. And that is that consistent offloading of tasks
may diminish clinician skills at those tasks over time. And they call this phenomenon skill decay, which makes sense. For us specifically as assessment folks, I think repeated reliance on AI-generated interpretation and conceptualization specifically could lead clinicians to lose their precision in understanding nuance and edge cases. In other words, we might get a little lazy in relying on the AI to make these complex interpretations.
In clinical settings, if clinicians become accustomed to the automation, then they might also struggle to reengage cognitively when the AI fails or offers what I would call ambiguous output. And to me, this is a big problem because over and over, we have said and the research has shown that appropriate oversight of AI is absolutely necessary if you’re going to use it. And this seems to be kind of a shooting-ourselves-in-the-foot
problem, where if we repeatedly rely on AI to generate interpretive or conceptual information, then it might, like I said, erode our ability to reengage when we actually need to. So bottom line, I think reducing cognitive load is beneficial when it applies to somewhat superficial tasks and enables deeper analytical thought for harder tasks. But you have to monitor it pretty carefully to make sure that you remain actively engaged and vigilant
Dr. Jeremy Sharp (07:36.222)
on the more nuanced tasks that we have to do. Second question within here is something around like: how do we balance workload reduction without de-skilling ourselves? So kind of an extension of the first question. Again, the current evidence from aviation and medical fields, I think, provides some helpful parallels. So they would say that regular manual practice, periodic quote simulation training, and critical reflection exercises are known,
kind of proven ways to mitigate loss of skills. Okay, so specifically in radiology, deliberately incorporating periodic human-only interpretation sessions, like without any AI support, has helped to maintain expert-level skills despite widespread AI use. And this is, I think, a really good test field. Radiology was one of the initial, like, areas where AI got deployed.
So they have some years on us in terms of measuring these constructs. So for us, again, we might benefit from, I would say, periodically completing reports, or specifically interpretations, manually. So I don’t know that many AI software programs are offering AI conceptualization at this point. Our program does.
I’m a co-founder, as many of you know, of Reverb, which is an AI report writing program, and Reverb will offer a draft conceptualization based on your preferences. So that is available, but I think, you know, that’s not where AI is really meant to shine. And so for us, we need to periodically do reports, or conceptualization specifically, on a manual basis. We have to review the AI output pretty critically.
And we do need explicit training on cases that AI might not handle well, which I would argue is, in our practice, many cases. So those complex, like culturally nuanced or environmentally nuanced cases, this is where we still have a job, which is a good thing. So what do we take from all this? Basically, you need to establish, I would say, explicit clinical practices, like do the task without AI, do reflective exercises, do structured
Dr. Jeremy Sharp (09:49.102)
oversight sessions to make sure that you maintain your proficiency and avoid loss of skills. I think we would all say that critical analysis is crucial in conceptualization and interpretation of the data, and we have to practice at it. So don’t let AI do the whole thing.
All right, let’s move to section two, which I titled automation bias and over-reliance on AI. So first question to tackle here is: how might clinicians be trained to recognize and resist automation bias? All right, so let’s dive into the evidence. Automation bias is well-documented, particularly in what are called high-risk environments like healthcare. So what do we do? Some training approaches include explicit education
on automation bias. This includes like confirmation bias and anchoring, which are two forms of bias that we engage in quite readily. If you want a deep discussion on clinician bias, go search for Stephanie Nelson’s episode. I think it’s titled something like We Are All Wrong Sometimes, something like that, from a couple years ago. And I’ll try to link it in the show notes if I can remember. But great episode on bias.
But we want to do explicit education on these biases that might be present, just to increase clinician awareness that they are there. Another strategy is to do what we call reflective reasoning training, which involves reviewing clinical decisions made with AI assistance. This has been effective in medical fields, so it kind of ties into the first section. We want to review the AI output pretty explicitly and regularly. Let’s take a break to hear from a featured partner.
Y’all know that I love TherapyNotes, but I am not the only one. They have a 4.9 out of 5 star rating on TrustPilot.com and Google, which makes them the number one rated electronic health records system available for mental health folks today. They make billing, scheduling, note taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal. For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well.
Dr. Jeremy Sharp (12:09.504)
And maybe the most important thing for me is that they have live telephone support seven days a week, so you can actually talk to a real person in a timely manner. If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge, so you can get going right away. So if you’re curious or you want to switch or you need a new EHR, try TherapyNotes for two months, absolutely free. You can go to thetestingpsychologist.com slash
therapynotes and enter the code testing. Again, totally free, no strings attached. Check it out and see why everyone is switching to TherapyNotes. The Feifer Diagnostic Achievement Tests are comprehensive tools that help you help struggling students. Use the FAR, FAM, and FAW to home in on specific reading, writing, and math learning disabilities and figure out why academic issues are occurring.
Instant online scoring is available via PAR iConnect, and in-person eStimulus books allow for more convenient and hygienic administration via tablet. Learn more at parinc.com slash feifer. That’s F-E-I-F-E-R. All right, let’s get back to the podcast.
The third option here is that you can do structured, quote unquote, challenging scenarios or failure mode simulations, similar to aviation safety training, where the clinicians are practicing questioning the AI-generated outputs to build vigilance. So again, this is just a fancy way of saying you want to review the AI output at all times and simultaneously have in your mind
the risk of bias and the most common types of bias that can happen; you know, we’re susceptible to them. So again, bottom line, we should be systematically trained in recognizing this bias and encouraged to actively question automated recommendations. And periodically, we need to confront these AI-generated errors to strengthen our own clinical judgment. Second question here within this section is: could structured reflection exercises after AI use mitigate automation bias?
Dr. Jeremy Sharp (14:26.254)
The evidence would say yes. So structured reflection exercises have consistently demonstrated effectiveness in reducing bias in clinical decision making. Reflection prompts us to consider alternative explanations and interpretations. It makes us analyze our decision-making rationale pretty explicitly and critically appraise why certain AI outputs might be erroneous or incomplete. So yes.
There is a known framework called cognitive forcing strategies, which includes intentionally revisiting clinical decisions to evaluate if the automation component overly influenced your clinical judgment. So yeah, definitely, integrating brief, systematic reflection into the clinical workflow when you’re using AI can significantly reduce automation bias and improve your decision accuracy, which is awesome. Okay, moving on to section three,
the impact on expertise development. Okay, this is crucial. So this ties back into the previous two sections, especially section one. But really, the question is: how could frequent AI use interfere with the deliberate practice required to master a skill set? So let’s dive into the evidence. Deliberate practice, this is coming from all the Anders Ericsson, you know, 10,000-hour stuff. Essentially, it requires repeated engagement with
challenging tasks accompanied by immediate feedback and active self-assessment. All three components are huge in this process. You have to do it repeatedly, you have to get immediate feedback, and then you have to actively self-assess the whole process. So how does this tie to AI? Frequent reliance on AI-generated summaries or interpretations could limit
clinicians’ exposure to precisely the difficult, ambiguous tasks that we need to truly master something. So if AI always smooths the way, so to speak, clinicians might never sufficiently wrestle with diagnostic complexity. And if we over-rely on AI, then we can inadvertently turn complex tasks like clinical reasoning into pretty simple verification tasks. And that reduces our effectiveness and the cognitive
Dr. Jeremy Sharp (16:46.232)
challenge here. As you can guess, a lot of this is tied into trainee development. So I think this is really crucial. And this is, if nothing else, like this is a rationale for not just allowing trainees to use AI without any guardrails. OK. I think there is a lot of value in developing these skills on our own and kind of fighting with the process and the conceptualization and pulling the data together. In my mind,
I don’t want trainees to just turn into psychologists who are essentially just editors and who review AI-generated material without having the foundational knowledge built from wrestling with it themselves. So bottom line from all this: you want to be careful in considering what tasks are delegated to AI. It just keeps coming back, for me, to being pretty hesitant to rely on AI for conceptualization or interpretation. We need that consistent exposure to those challenging cases
without AI to kind of aid in that process. I actually think about this in a different way, which is responding to emails. I think I mentioned that in the past episode. I do use AI a lot to generate responses to emails and craft them a little bit more eloquently than I could, and in a slightly different tone than I would, particularly if I’m feeling kind of fired up and don’t want to respond in that manner. And so,
I periodically, though, kind of force myself to engage in that process because I know that it is hard and I don’t want to lose that skill, just as a real-world example. Another question in this realm, though, is: can AI-supported education be structured to preserve skill acquisition pathways, so to speak? So can we use AI in a helpful way to actually develop these skills? And again, the evidence is positive here, provided that the AI tools are explicitly framed as supportive,
rather than definitive sources of interpretation. This goes back to the thought partner use. I really like framing AI as a thought partner versus a worker or, you know, an employee, so to speak, to do the work for you. For instance, AI being used to flag potentially irrelevant information for further review, to me, that preserves the clinical judgment, okay? Now, you could argue, of course, like we should be reviewing everything and being able to discern what’s clinically relevant and what’s not, but…
Dr. Jeremy Sharp (19:12.578)
This is a nice kind of intermediate step where you use AI to maybe flag some things and then you have to go back and do the synthesis and analysis. Again, falling back to our sister fields of medicine and to a lesser degree aviation, when you integrate AI within structured training exercises, like where the trainees have to explain their reasoning explicitly, that reinforces critical thinking, which I think is pretty intuitive, but easy to forget. This is a practice that…
I’m really trying to implement with our trainees, where we talk about: you can utilize AI for certain things, but explain your rationale first, and then let’s go back and use AI to help. So it’s kind of an explain-to-verify model, where clinicians have to explain why they agree or disagree with AI output. This has been effective in radiology, and I would imagine it can be really effective for us too. It’s kind of like a built-in miniature comp exam, which I like.
So again, bottom line, educational approaches here just have to include pretty structured, explicit opportunities for clinicians to actively justify their decisions and reason through complex tasks, using AI to complement and not replace our critical thinking. All right, so there’s a lot to dig into here. I mean, again, we could go like super deep on bias in AI, and there are all kinds of examples of where that has occurred over the years, cultural biases and those kinds of things.
There’s a separate episode to be had just on how to kind of verify AI output or check AI output for bias, and evaluate different models and how much bias may be present in them. So I’m gonna keep this relatively short and cap it here, just in talking about, you know, being aware of our own bias that we might bring to the process and making sure that AI doesn’t exacerbate those biases and erode our skills at the same time. All right, so,
a lot of what we’ve covered, you know, touches on kind of big-picture questions. We did do some practical stuff last time, but I want to extend this conversation in particular into the next episode, where I’m going to be talking about something I’ve developed and am really excited about, which is a proposed set of AI competencies for ourselves and our trainees. Yes. So I’ve done a bit of work,
Dr. Jeremy Sharp (21:33.528)
pulled together a lot of resources, and next time we’ll be outlining, again, a draft of some AI competencies so that we can actually get some operational, sort of structured evaluation of our own skills and knowledge, and our trainees’ skills and knowledge, of AI and how to use it and how to mitigate bias and many other things. So stay tuned and we’ll catch you tomorrow. All right, y’all, thank you so much for tuning into this episode. Always grateful to have you here.
I hope that you take away some information that you can implement in your practice and in your life. Any resources that we mentioned during the episode will be listed in the show notes, so make sure to check those out. If you like what you hear on the podcast, I would be so grateful if you left a review on iTunes or Spotify or wherever you listen to your podcasts. And if you’re a practice owner or aspiring practice owner, I’d invite you to check out the Testing Psychologist Mastermind groups.
I have mastermind groups at every stage of practice development: beginner, intermediate, and advanced. We have homework, we have accountability, we have support, we have resources. These groups are amazing. We do a lot of work and a lot of connecting. If that sounds interesting to you, you can check out the details at thetestingpsychologist.com slash consulting.
You can sign up for a pre-group phone call and we will chat and figure out if a group could be a good fit for you. Thanks so much.
Dr. Jeremy Sharp (23:19.246)
The information contained in this podcast and on The Testing Psychologist website is intended for informational and educational purposes only. Nothing in this podcast or on the website is intended to be a substitute for professional psychological, psychiatric, or medical advice, diagnosis, or treatment. Please note that no doctor-patient relationship is formed here. And similarly, no supervisory or consultative relationship is formed between
the host or guests of this podcast and listeners of this podcast. If you need the qualified advice of any mental health practitioner or medical provider, please seek one in your area. Similarly, if you need supervision on clinical matters, please find a supervisor with an expertise that fits your needs.