562 Transcript

Dr. Jeremy Sharp · Transcripts

Dr. Jeremy Sharp (00:00)
Hey folks, welcome back to the testing psychologists. Glad to be here as always. And today’s episode is moving along with our third pillar in this March sprint slash marathon. ⁓ today we are tackling the part of the job once again, that keeps most of us up at night and away from our families on the weekend. That is report writing in previous episodes. We talked about how the clerical or administrative burden is the single biggest driver of

burnout in our field. For years, I looked for a way to get rid of that administrative burden without losing my clinical soul and ⁓ couldn’t find it necessarily. And so when the time was right, ended up partnering with a software engineer and built it. So that software is Reverb, which you’ve likely heard about on the podcast and elsewhere over the last year or two.

But today is not about that. It’s not a sales pitch or anything. It’s more a discussion on the ethics of efficiency. So how do we use automation and AI to solve the documentation crisis that we face without becoming like GPT clinicians who just sort of lose their soul and diagnostic nuance?

This is an ongoing topic. I know we tackle it a lot on the podcast, but I feel like it's just continually developing, to be honest. And there's always something new to talk about in this realm. So we're going to do that from a slightly different angle here today as part of our March sprint slash marathon. And if you have been on the fence, I know you have heard these advertisements for Crafted Practice. And my guess is that most of you have thought to yourself, that sounds really interesting, I want to check it out, but I'm not sure if it's a good fit, or

if I can afford it right now, or any number of other questions. Or, I don't know, maybe all of you are out there like, my practice is in a fantastic place and I feel totally connected to my work and the people who do this work, and I have no need for any support or accountability or a nice relaxing retreat in Colorado. If that's you, that is amazing. But if you're considering it, check it out. You can do a pre-booking call.

And we can chat about what the event's like, and I'll give it to you straight about whether it's a good fit for you or not. So that's at thetestingpsychologist.com slash crafted practice. For now, let's jump into this discussion about ethics and AI and efficiency.

Dr. Jeremy Sharp (02:26)
Okay, everyone, we are back. So I just want to start with a brief discussion and going back over this idea of documentation and professional burnout. So a little bit of info there. The research on this, I think, is pretty staggering. Studies have identified, quote, clerical burden and EHR documentation as the primary drivers of professional exhaustion in fields like ours. In assessment,

This is compounded, I think, because our output isn't just a chart note. Granted, we have less volume than a physician or a therapist. But the fact that we are writing an integrated narrative that runs anywhere from two to 52 pages, depending on the practitioner, is pretty important. So what we lack in volume, we make up for in length and content. And one might argue, I don't

want to die on this hill necessarily, but one might argue that the work that we are doing in our integrated reports requires a higher cognitive burden than a typical SOAP note or session note or medical visit note. So this documentation crisis leads to what you might call a burnout loop. We get behind on the reports, our wait lists grow, and then we rush the writing just to clear that task, or clear the desk, or clear the stack, or

you know, whatever. We've all had that experience. The problem is that when we rush, we make all these clerical errors that I discussed in a past episode, and we are definitely not doing our best work most of the time. We're not utilizing our brains effectively for the most important part of our job, which is communicating results clearly in that written format, which is, you know, the document that goes on to live forever, essentially. So I would go so far as to say then that

efficiency and burnout are therefore clinical safety and practice issues. And by reducing the mechanical, rote part of report writing, so you know I'm talking about data entry, score tables, template formatting, even things like writing the background, or, you know, regurgitating sort of rote, known information,

by reducing and offloading that part of the report, we're preserving our executive functioning for the high-level synthesis and conceptualization and integration that our clients actually need.

So that, to me, is sort of a plea on the ethics side for the use of automation. But let's go a little bit deeper and actually pivot to real ethics. So this comes up a lot when we're talking about AI, right? So where do we go? We can go to the APA Ethics Code, of course. Specifically, Standard 9.09 states that

Dr. Jeremy Sharp (05:28)
While clinicians may use automated scoring and interpretation services, they remain 100% responsible for the accuracy and clinical utility of the final product.

I think this is pretty well-worn territory, but I think it's important to emphasize as well. So the software is definitely a tool, not a substitute. You could think of it almost like a skeleton versus soul model. I used the word soul in the intro, so I'll stick with that. So Reverb and other tools can generate the skeleton, right? It helps import the scores and build the tables and draft, like I said, rote text from

notes that you may have taken, or even, you know, reporting scores and the like from static documents. But you as the clinician are still providing the soul of the report. That is important. There are a lot of fears, I think, when I talk with folks about using AI. Like, is this going to take our jobs? Is it going to take our jobs but actually do them worse, because AI content tends to be hit or miss?

Many questions like that. And so I just want to, you know, lean on this distinction: that yes, these tools can provide kind of that skeleton, or a nice sort of frame, but you really get to go in and provide the soul, which to me is the conceptualization, integration, and recommendations. I mean, these are the parts of the report that people are paying for. It's what they want. And it's where our expertise, I think, is most valuable.

So right now, we don't have great, consistent research on the quality of clinician writing versus AI writing, though there is some really promising research coming out. Many clinicians would report that AI-assisted drafting reduces report writing time significantly, I mean like 30 to 50%. But it's uncertain whether those automated drafts are qualitatively superior or inferior to human-only drafts in terms of long-term

patient outcomes. Many stakeholders report that the time saved allows for more, like, face time with the client and better tailoring of recommendations. But just, you know, full disclosure, the empirical data on patient comprehension of AI-assisted reports, and like the actual quality, is definitely still emerging. There are some great folks doing good research in this area, and my hope is to bring them on the podcast and really dive deep into this as we learn more.

Dr. Jeremy Sharp (07:53)
Another aspect here in the ethics realm is privacy. So in 2026, we're talking a lot about what's called data sovereignty. So you basically have to ensure that any tool that you use is HIPAA compliant and utilizes what's called a zero-retention or private-instance model. So what does this mean? If you are copying and pasting clinical notes into a public-facing

LLM like ChatGPT or Claude or, you know, public Gemini, not the Workspace version, if you're doing that, you are likely using your patients' protected health information to train a public model, which, as you might guess, is a massive ethical breach. And we have to be really careful here, because even when we, like, quote unquote de-identify data, HIPAA guidelines

dictate that there are, I think it's 18 different data points that can be used to identify someone, or are considered, you know, identifying information. And, I forget what it's called now, I should have done this research, but there's kind of a secondary standard as well that looks at micro-identifiers, and it has found that there are several of those as well, where, you know, given things like geographic location or

other little pieces of information that you wouldn't think would be identifiable, when you put them all together, they can actually become identifiers. So again, just emphasizing: absolutely do not put information into a public-facing LLM, even if you're quote unquote de-identifying. All right. So what do you do? If you want to integrate AI in your work, or just kind of update your current use if you're already using it,

I'll give you a four-point, quick-hitting ethical efficiency protocol, so to speak. First thing, you could call it the blind edit. This is pulled from a talk that I recently gave at the NASP conference in February. So, blind edit: always read your automated draft as if it was written by, essentially, let's say, a first-year student. I don't know that it is that bad.

My apologies to any first-year students, I'm so sorry. But treat it at least like an intern or maybe a postdoc, right? Or a clinician who's new-ish to the field of assessment and is working on their skills. So you've got to read the draft. And you've got to expect to proofread it like you would any other trainee's draft. So you want to look for hallucinations, generic phrasing. If the software

output says that the client was, quote, engaged, but your notes say that they were fidgety, or contradict that, your clinical observation is going to win every time. But you have to read through the drafts to make sure, and make those edits before it gets finalized. The second point is verifying the data pipeline. So this just means that you want to ensure that your tech stack uses encrypted and private servers. The data should be used to help you

Dr. Jeremy Sharp (11:04)
write your report, and then it should be scrubbed. This is one of those things that we really lean on at Reverb specifically. I can't speak for other companies or models, but we verify this

through third-party auditing for compliance and security, and we make sure that none of the data is being used to train any kind of model or goes outside of our system by any means.

The third point is something I've touched on in the podcast before, and that's called the human-in-the-loop model. And this essentially just means that you have to be involved in the report creation process. I think you can target your energy, though, and that's really what I'm getting at with a lot of this. So save your cognitive energy for the integration and conceptualization and recommendations. Let the automation handle data tables,

you know, rote information, that kind of thing. And then you jump in and handle what I would call the important stuff, like the "so what," or the "why is this important," you know, really making sure that it matches the client's characteristics and is as personalized and nuanced as possible. And then the last component to be thinking about here is disclosure. So we want to be transparent in our reports. You can include a brief statement. It's relatively straightforward,

quoting and reading here, that says something like: "Administrative software was utilized to assist in data organization and preliminary drafting. All clinical interpretations and final synthesis are the sole work of the undersigned clinician." I would even go so far as to replace "administrative software" with "artificial intelligence" or "AI," just to be clear.
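As a side note on the de-identification point from earlier: the 18 data points mentioned are the identifier categories in HIPAA's Safe Harbor method (names, dates, phone numbers, geographic subdivisions, email addresses, and so on). A rough illustration of why eyeballing a note is not enough: even a few simple pattern checks will surface identifiers that are easy to miss on a read-through. The `flag_identifiers` helper and the patterns below are a toy sketch I'm using for illustration, not a real de-identification tool, and nothing like this should substitute for a compliant scrubbing process.

```python
import re

# A handful of HIPAA Safe Harbor identifier categories, expressed as
# simple regex patterns. A real scrubber needs far more than this --
# this sketch only shows how easily identifiers hide in a clinical note.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "zip":   re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_identifiers(text: str) -> dict:
    """Return the matches found for each identifier category present."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.search(text)}

note = "Pt seen 3/14/25, lives in 80521, mom reachable at 970-555-1234."
print(flag_identifiers(note))
```

A one-sentence note like the example above already contains three Safe Harbor categories (a date, a ZIP code, a phone number), which is the micro-identifier problem in miniature: each piece looks harmless, but together they narrow the field fast.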

All right, so like all the other episodes in this series, it's a short and sweet episode with some hopefully punchy information and clear takeaways that you can actually utilize in your practice. And by hopefully killing some of this administrative drag, I don't think we are cutting corners here by any means. I think we're clearing the way so that we can do the high-level work that we were actually trained for.

And this is a hill that I will die on. I do not think we as psychologists should be using our expansive, amazing brain power to transcribe scores into tables, or to turn a pretty comprehensive written history into a prosaic narrative that just sounds good. I feel like we should be using our brain power for much more important tasks that people are actually paying for.

I’ll leave you with a question, which is, would you rather see a clinician who just spent four hours transcribing scores into tables or paragraphs or reporting ranges and standard scores and such, or the clinician who started fresh on the conceptualization and recommendations because those other rote tasks were automated for them?

I’ll leave you with that. Now, this is, of course, a developing area, but I hope this episode has provided just more food for thought.
