573 Transcript

Dr. Jeremy Sharp

Dr. Jeremy Sharp (00:36)
Hey folks, I am really glad to have NovoPsych Psychometrics sponsoring the show. If you do structured assessment work, then you will likely love NovoPsych. NovoPsych brings 150-plus standardized measures into one platform. What I particularly like is the extra layer of psychometric interpretation: it helps you understand what scores actually mean, so the results are easier to communicate. If you are interested in high-quality measures for personality, disability, ADHD, or autism, you can try NovoPsych with a 15-day free trial via the link in the show notes, at novopsych.com slash testing psychologist. That's N-O-V-O-P-S-Y-C-H.com slash testing psychologist.

Dr. Jeremy Sharp (01:20)
Hey folks, welcome back. Thanks for being here. Today we are talking about AI again. We've talked about AI in different contexts here on the podcast, but the angle today is really AI and compliance, and how to vet AI programs and things like that. And to have that conversation, I have a true expert here with me, Dr. Maelisa McCaffrey. She is a return guest. She first came on the podcast sometime within the first 20 episodes, so it's been almost 10 years, which is wild, but I'm excited to have Maelisa back.

She's a licensed psychologist, a nail design enthusiast, and author of the book Stress-Free Documentation for Mental Health Therapists. Through her business, QA Prep, she empowers therapists and psychologists with training and consultation on clinical documentation. She focuses on the why behind the usual recommendations and encourages clinicians to think outside the box while also keeping their ethics intact. As someone with ADHD who's had to figure out what works through trial and error, Maelisa aims to make sure the trainings are practical while also allowing for plenty of laughter and fun. And I think you'll see that today for sure. Maelisa has a pretty dynamic, easygoing personality and really knows her stuff.

So in our conversation today, I think we move beyond the surface level of AI scribes and tools and things like that. We dive into ethical and clinical and

compliance-related nuances that a lot of us need to understand if we're trying to vet and utilize AI programs. So we talk about ambient listening scribes and how those might fit in. We talk about the transition from actually doing the documentation to being more like a supervisor of the AI doing the documentation, and many other things. We talk about how to vet AI companies, HIPAA compliance, BAAs, and so forth. We talk about some of the quote-unquote life-changing impacts of AI on clinicians, based on Maelisa's experience, and many other things. So this is super relevant, super helpful, and as always pretty concrete, so lots to take away from this episode. And I'm going out on a limb here: I'm recording pretty far ahead of release for this episode, so there may be spots left for Crafted Practice. This is my in-person

business retreat each summer, where we get together for four days, do small group coaching, implement some of those ideas that we talk about, and just make solid connections with one another and have a really good time in the process. If that sounds interesting, if you need a break this summer, if you want to work on your business instead of just drowning in reports for the entire summer, this is a good chance to take a break and get support from a bunch of folks who know what you're going through.

You can get more information at thetestingpsychologist.com slash crafted practice. For today though, let's transition to this conversation with Dr. Maelisa McCaffrey.

Dr. Jeremy Sharp (04:30)
Maelisa, hey, welcome back to the podcast. Almost said welcome, but it's a welcome back.

Maelisa (04:34)
Thank you.

from many, many years ago. It’s good to be back.

Dr. Jeremy Sharp (04:39)
Many, many years ago. Yeah.

Yeah, I actually did look back. You were one of the very first interviews that I did. I mean, it was back in 2017. So that was like back in the very beginning. So I’m glad to be having you back and talking about more contemporary topics, I suppose.

Maelisa (04:58)
Yeah,

who would have known, right?

Dr. Jeremy Sharp (05:01)
Yeah, yeah, for sure. For sure. Yeah, I really appreciate you being willing to come back on. So we're talking about AI and documentation and compliance and all kinds of things here today. That's your thing. So I'll start with the question that I always start with, which is: why is this important to you? Why care about AI, compliance, those sorts of things?

Maelisa (05:24)
Yeah, multiple reasons. And I would say that has even shifted over the last few years as AI has shifted. But ultimately because it is, I don't wanna be hyperbolic, but it's kind of taking over our industry. It's taking over everyone's industry, right? And it is…

Dr. Jeremy Sharp (05:32)
Mm-hmm.

Mm-hmm. Mm-hmm.

Maelisa (05:48)
being embedded in every aspect of our life, and that includes our careers and our professional work. And it's really, really important because it is so new, and specifically within mental health because AI has literally led to people harming themselves. It's really critical that we're aware of it, comfortable with it, on top of it,

Dr. Jeremy Sharp (05:50)
Mm-hmm.

Mm-hmm.

Maelisa (06:11)
comfortable talking about it, and that we understand how it could potentially impact our clients. I think whether you're talking about compliance and privacy and security issues, or whether you're talking with the clients you have, maybe teen clients, about whether they use AI, or really how they're using AI and, you know, what advice they're getting from it. Those are all really, really important things that are just going to become a part of our everyday work, if they aren't already.

Dr. Jeremy Sharp (06:42)
Yeah, yeah, yeah. I mean, it sounds like this is true for you. For me, it's permeating everything, everything. I mean, I'm using AI in some form or fashion multiple times a day, and then thinking about how my kids are using it. You know, my wife is using it to redecorate our house. It's like it's everywhere.

Maelisa (06:59)
Yeah, yeah. And it doesn't matter if you're pro or anti, or kind of open to it but scared of it. You know, regardless of how you feel about it, it's either being used by you or being used on you. And that also relates to your clients. And so understanding how that's happening is really key.

Dr. Jeremy Sharp (07:08)
Mm-hmm.

Mm-hmm.

Yeah, yeah, for sure, for sure. I know we're not going to get too much into the clinical aspect of assessment necessarily, but I was talking the other day to some group, I forget who, just talking about how now we've started to integrate AI questions into our intakes for our testing cases, asking how people or teens or kids are using AI and what they understand about it, those kinds of things. It's like the new screen time. You know, we had to start asking about screen time a few years ago, and now it's AI use. It's everywhere.

Maelisa (07:51)
Yeah

Absolutely,

yeah, I think it's becoming a critical component, especially for younger people, but even for adults and people in middle age and beyond, to ask about as part of assessment, and to talk to them about: have you used it, are you using it, have you used it for mental health? Make sure that you ask about that, because…

Dr. Jeremy Sharp (08:02)
Mm-hmm.

Yes.

Maelisa (08:14)
it can have really damaging effects. And so I'm sure we're gonna get into ethics stuff and all of that, but kind of related to that, I'm interested to see how that might morph over time. Right now our ethics are very clear that we're responsible to ask questions about harm, right, when we first meet with people. Is asking questions about AI going to be related to that in the future? I could see that, because of the potential damage.

Dr. Jeremy Sharp (08:42)
Sure, sure. Yeah, it totally makes sense. I mean, there are some pretty clear cases, right, of AI leading to harm, and a lot of insidious cases that we don't hear about.

Maelisa (08:49)
Yeah.

Yeah, and there's good stuff too. I don't want to, you know, make this all doomsday, right? There's a lot of really helpful parts to AI. But ethics-wise, I mean, I think it's a lot around mitigating the harms that we talk about.

Dr. Jeremy Sharp (08:56)
Of course.

Mm-hmm.

Yes, yes, definitely, definitely. So before we get too deep into it, I would love to hear, from your perspective, an overview of the AI landscape in mental health right now, right? I've talked about the narrow focus on testing and assessment and ways that it may come up there, but you have a much broader perspective. You're a psychologist, you do so much with documentation and compliance and things. So yeah, I would love to hear generally: what are you seeing in mental health right now with AI?

Maelisa (09:38)
Yeah,

yeah. So I will say I was one of the first therapists who was really talking about AI, because I work with people who are struggling with their documentation, and that was really one of the first inroads, right? So I think most people at this point, if you're a mental health clinician, you're really familiar with that, at least as an option if you aren't using it already to help with your documentation. So

Dr. Jeremy Sharp (09:51)
Mm-hmm.

Mm-hmm.

Mm-hmm.

Maelisa (10:04)
progress notes, progress notes specifically, but even maybe intakes or treatment planning. And since my audience is people who are struggling with documentation, I think they were more open than the general mental health clinical realm to AI early on. So that prompted me to start really looking into this, because people kept asking me, have you heard about this thing? What if I use this? And this was well before any ethics code

Dr. Jeremy Sharp (10:10)
Yes.

Maelisa (10:32)
mentioned AI, back when it was truly the Wild West out there. It's still the Wild West, but it was truly so then. And so I think that's the most practical everyday use that a lot of clinicians see on a professional level. Then there's also using things like ChatGPT or Gemini or Claude, right? That you might be using for marketing or writing blog posts or doing research on things

Dr. Jeremy Sharp (10:37)
Mm-hmm. Mm-hmm.

Maelisa (10:59)
that you shouldn't be using for documentation. We can get into that, but you know some people are using them for documentation. So there's these more open-ended AI platforms that you might also use to research a recipe, right? Or ask, like, I need to substitute this ingredient tonight for dinner. So, you know, just for everyday use, and for professional use too. You know, people will tell me they use things like

Dr. Jeremy Sharp (11:02)
Mm-hmm.

Maelisa (11:22)
asking ChatGPT, what are some treatment goals in SMART format, for example, getting ideas for things. So that was my first foray into it, and I think where it clinically made sense for a lot of the ethics codes to look at, and where most people were open to using AI. Now, you'll hear about things like AI chatbots, or not just AI chatbots, AI therapy chatbots, right, or AI therapists. You might see some platforms that offer AI documentation are now branching out to also do things like help you with diagnosing, right, and make treatment plan suggestions. And so they're actually getting into treatment suggestions and clinical formulation,

also connecting it further. There's at least one platform I'm aware of that connects that to communication with clients. So, for example, if the AI listens in on your session and helps you with your progress note, it can also then recommend homework, send the client a text message or an email recommending that homework, and kind of follow up based on what you talked about, right? So it's taking things to the next level as far as

Dr. Jeremy Sharp (12:31)
Hmm.

Maelisa (12:34)
getting involved in actual treatment, not just being this admin tool on the back end. And that could be, like I said, through formulation, or actually through communicating with your clients, either on your behalf or as part of the treatment on purpose. Another way I think a lot of clinicians are seeing it is through insurance. Insurance companies are using AI

Dr. Jeremy Sharp (12:39)
Mm-hmm. Mm-hmm.

Maelisa (12:57)
to deny claims, to review claims. And that's something to be aware of, that it does happen. And also platforms, a lot of the larger platforms, I always struggle with what to call them, but places like Alma, Rula, Headway, right? These types of platforms where you contract with the company and they are…

Dr. Jeremy Sharp (13:14)
Yes.

Maelisa (13:19)
offering you administrative support. They are often using AI in that record-keeping system that they have that you have to log into, and they are using AI for their auditing tools, right? So either for checking on the backend to flag things, or they're using it to prompt you if you're logged in. So people will see different messages, like, did you include a note about progress? At this point, a lot of those companies are using an AI to actually analyze what you wrote and give you feedback on it. So that's a variety of ways that it's actually really commonly affecting what we do, on top of actual AI therapy. There are companies that are working on

creating AI therapists. And some of them are saying they’re doing it to help therapists and maybe offer some type of adjunct. I could see potential there. And a lot of them are doing it to help with increased access, which is a great idea.

Dr. Jeremy Sharp (14:10)
course.

Maelisa (14:26)
But then, what does that mean if an AI is in charge of someone's therapy and it's not just being used as this administrative tool with clinical oversight by an actual person, right? Those are things that are actively being researched right now, and very big companies that have lots of capital and investment are really trying to do that. And there's some incentive there. Spring Health is one that's pretty open about the fact that they're doing research on creating an AI for EAPs. So they are an EAP organization, and they contract with a lot of really large organizations, like Coca-Cola, companies like that, trying to get mental health services into

Dr. Jeremy Sharp (15:05)
Mmm

Mm-hmm.

Maelisa (15:17)
these large organizations. And they've done some cool research around how mental wellness for employees actually boosts productivity for a company. So it's beneficial for companies that can afford it to pay for these mental health services for their employees. But they are also trying to save money, and they're really invested in the goal of AI doing a lot of that and not

Maelisa (15:45)
human therapists doing a lot of that service. Yeah.

Dr. Jeremy Sharp (15:47)
Right. Right. Yeah.

I mean, out of everything that you mentioned, the thread that runs through is basically that it's a double-edged sword. You know, there's of course potential for improvement and enhancement and streamlining and help and access, and I think ultimately it's about making money in some form or fashion, you know. And how much control do you turn over to these companies?

Dr. Jeremy Sharp (16:13)
There's so much, you said so many things that I want to follow up on. I will say this though, just to add another piece of context to the AI landscape: I've read about some bigger entities, hospitals, for example, where they are using the ambient listening to then drive coding as well. They listen to the session, and then it determines what CPT codes you should use based on what intervention or procedure or whatever, you know? So that's just another use case that I'm seeing pop up.

Maelisa (16:44)
Yeah, yeah. With mental health, our codes are so limited, you know, the codes that a typical psychologist or therapist is billing are very specific. But when you get into broader healthcare, all of these things open up exponentially. Yeah.

Dr. Jeremy Sharp (16:53)
Mm-hmm.

Right? Yes.

Yes. Yes. So let's see, where do we want to go from all of this? There's a lot to tackle. So with the documentation component, you were saying before we started to record that you have interacted with like 40 note-taking or, you know, record-keeping AI programs just for therapists, geared toward documentation. Is that right?

Maelisa (17:23)
Yes, yeah, I went back and looked at how many of these companies have reached out to me, because I started doing YouTube videos, so then they started reaching out and saying, will you review our product? And it got to the point where, I started this thinking I would do these reviews every few months and it'd be great, and I was like, this would be a full-time job just staying on top of it. And they all update their systems dramatically every three to six months too, right? Because AI changes so quickly. Yeah.

Dr. Jeremy Sharp (17:28)
Mm-hmm.

⁓ yes.

This is too much. Yeah.

Yes. Yeah, definitely.

Definitely. So just to lay the groundwork, I mean, for folks, again, I've talked about AI and report writing, and I think people generally know what that looks like. But in the broader mental health landscape, are most of these programs relying on ambient listening in some form or fashion, like listening to the session, then generating a summary? Like, how's all this working in the rest of the field?

Maelisa (18:10)
Yeah, it varies a lot, actually. And I think the listening component is the part that a lot of therapists are still really hesitant to accept. So a lot of these platforms have that capability, I would say most of them have that capability, and there are kind of four general ways that an AI platform is gonna help you write a note. The first is you log into a session and, you know,

Dr. Jeremy Sharp (18:19)
Mm-hmm.

Okay.

Maelisa (18:36)
ideally it's a telehealth session, you both log in, and it's literally listening in to your session in real time and then creating a summary for you after the fact. Obviously, that's how you're gonna get the most accurate information, the most complete information. That's also obviously the most invasive, right? So there's a lot there. That's one option. The next is the clinician giving a dictation summary after the fact. This part is really cool because, with AI, dictation is really great: you can kind of talk off the cuff and reorganize things as you're talking. You can tell it to change things. You can change your mind. So it's different from transcribing, where, like if I talk to my notes app, it's literally copying down everything I say, which is going to have a lot of filler words that I don't need to have in there.

Dr. Jeremy Sharp (19:13)
Mm-hmm.

Maelisa (19:27)
Whereas an AI can really succinctly summarize what you say. So I find that to be the most universally useful for most clinicians. The other option is that you can upload a recording. So if you are in person, a lot of times you might have to do this, you know, for the sake of getting audio and all of that.

Dr. Jeremy Sharp (19:38)
Okay, nice.

Maelisa (19:53)
And again, that's a little bit more invasive. It's also a little more time-consuming and less convenient for the therapist. And then the last, which I think is the least useful, is actually logging in and typing a summary to the AI. To me, if you're typing, I can give you strategies and sentence starters and things, and

Dr. Jeremy Sharp (20:01)
Yes.

Maelisa (20:20)
I'd rather you use some other strategies and just be done with your note, rather than type in something, then have AI write a note for you, then have to read the note and then potentially edit it. Yeah, so I see dictation as the lowest barrier to entry, most practical use for most people. Or, if your clients are comfortable with it and if you're comfortable with it, obviously having the AI listen in on the session and give you

Dr. Jeremy Sharp (20:30)
Yeah.

Mm-hmm. Mm-hmm.

Maelisa (20:45)
information about the session afterwards is the most convenient and the most accurate. Also, what most clinicians are concerned about with that is often not what's actually the biggest concern security-wise. I think most clinicians are terrified, and it's this whole view people have when I talk with them about their notes in general, documentation in general: this fear that my notes are gonna be read out loud in court by an attorney, you know? That's like everyone's greatest fear, that this is gonna happen. And of course, that's super rare. And so it's this concept of, my client's deep dark secrets and trusted information is going to be made public. And the reality is most people don't care, right? That information is not necessarily what

Dr. Jeremy Sharp (21:34)
Yeah. Yeah.

Maelisa (21:37)
hackers are looking for, or anything like that. That information is useful to a lot of these AI companies, I will say, because they're looking at how people use things, at human interaction. But what's more concerning for me, from a compliance and documentation perspective, is when you have the AI listen in: then what happens? Is there a recording saved? A lot of times there is.

Dr. Jeremy Sharp (22:03)
Mm-hmm.

Maelisa (22:05)
Now you have a whole recording of a therapy session. That is very different than having a progress note that summarizes what happened in a therapy session, right? Some of these AI platforms give you a transcript of the session. Some of them will give you the recording, the transcript, a summary, and the progress note. And so now you have four different versions of the session after the session happened.

Dr. Jeremy Sharp (22:16)
100%. Yeah.

Mm-hmm.

Yeah.

Maelisa (22:32)
And so for me, that's actually where things get really messy: what information is saved. You do not want to save transcripts of sessions unless you are in some type of training environment. That just leaves way too much to be potentially misinterpreted. You know, so I would not recommend that.

Dr. Jeremy Sharp (22:49)
Mm-hmm.

Yeah, yeah, yeah. Let’s talk about that for a second just because I feel like that’s a place that’s super relevant for testing and for therapy, right? So we will record or transcribe our intakes, for example, and our feedbacks, actually. So tell me, like, specifically, what should we be concerned about with that?

Maelisa (22:59)
Mm-hmm.

Yes, yeah, and this is where I think testing and therapy can be very different, right? I mean, when I did learning disability assessment, you're asking things like when people learned how to tie their shoes. I wasn't asking that when I had a private practice and was just working with people in a therapy context, right? So the level of detail you go into with a lot of testing is just extreme,

Dr. Jeremy Sharp (23:14)
Okay.

Maelisa (23:32)
and for a good reason, right? Often you need all of that information. And oftentimes it gets to a point where you have reports, you have the testing protocols, you have scores, you have all of these things, and you have to take behavioral observation notes even when people are doing a more passive test, right? So all of that

can feel very much like a transcript, and might need to be. So I do think it's context-dependent. For therapy, I always like to use the example of: imagine you're talking with someone, and they're talking about their ex-husband and how they're so frustrated with him, and they say, gosh, sometimes he just makes me so mad I could kill him. But that's…

just a phrase that they're using, and this is not someone where that's actually a concern, right? You do not want an AI listening in on that, picking up on that, and then maybe talking about safety planning. The AI might take that and go all over the place with it. You also don't want that in a transcript, because that's the type of thing where, if the same person is potentially in a custody battle and…

Dr. Jeremy Sharp (24:29)
Mm-hmm.

Mm-hmm.

Maelisa (24:50)
maybe things get really contentious or things get subpoenaed, you don't want that, because that particular statement is not an accurate representation of what happened in the session, right? An accurate representation is just that they were frustrated about their ex-partner. And so that's where things can get tricky when you have transcripts. Now, on the flip side, if someone

Dr. Jeremy Sharp (25:04)
sure.

Maelisa (25:17)
legitimately was threatening their partner, which also happens, then yes, I would recommend documenting that specific statement, right, if you then had to do some kind of safety assessment. So my answer to every question is always "it depends," and that's why. But that also highlights the nuance that you have to be aware of, not only with AI, but with the fact that we're dealing with people. And that's part of testing:

Dr. Jeremy Sharp (25:26)
Mm-hmm.

Maelisa (25:43)
why you can't just have someone log in and do tests, why you actually have a psychologist interpret things: because there is nuance, right? And we have to explain that a lot of the time. So thinking about that is really key. And then, from a practical perspective, honestly, 99% of the time this isn't gonna be a big problem. Nobody's gonna ask for their records, they're not gonna care about it. And even if they did ask for the records, you're probably not gonna give them the transcript. They don't want the transcript, they just want notes for some reason, and it's not a big deal, right? But it could potentially cause a lot of harm, and it's a huge administrative headache. Where I used to do a session and have one note that I produced after the fact, now I've got these four pieces of documentation, and they're potentially not even where the progress note is. I'm gonna log into a separate system

and copy-paste the progress note into there, right? So managing all of these pieces is so confusing, and I think it's something that a lot of people don't consider ahead of time. It really doesn't have to be a big deal. Most of these companies now do have settings on the backend, so you can say, I don't want transcripts, or you can say, automatically delete all recordings, or automatically delete everything once a week or once a month or something like that,

Dr. Jeremy Sharp (26:44)
Right.

Mm-hmm.

Mm-hmm.

Maelisa (27:09)
but it is something you want to think about ahead of time, so that you don't all of a sudden have essentially two sets of records in two locations that are saying different things.

Dr. Jeremy Sharp (27:21)
Yeah, yeah, I'm glad you brought that up. That was one of the first things that occurred to me, just the administrative headache of, well, where do you keep all of that? Does everything go in the EHR, in the file, or somewhere else?

But you're just saying, okay, try to turn it all off. Don't have duplicate records if you don't have to. Yeah.

Maelisa (27:42)
Yeah, yeah. And obviously, I think five years from now it'll probably all just be within your EHR, and you won't be logging into a separate system to use it. But right now, most of the EHRs' AI is a little behind. So you're probably gonna get a better AI note writer if you use a separate system.

Dr. Jeremy Sharp (27:55)
Yeah.

Mm.

Maelisa (28:07)
That's been my experience anyway. And you know, that makes sense, because that's a company dedicated to that one purpose, whereas the EHRs are adding it on because of the huge demand. So I do think those will get a lot better. Right now, most of them are not that great, and so they might help a little bit, but if you're really struggling with notes and want to try it out, you're probably better off actually logging into a separate system,

Dr. Jeremy Sharp (28:09)
Mm-hmm.

Yeah.

Maelisa (28:34)
using one of those methods to get at the information, and then you have to copy-paste the note into wherever your official record is.

Dr. Jeremy Sharp (28:42)
Gotcha, gotcha. That makes sense. So you said that the lowest barrier to entry for most clinicians is maybe the dictation. So do the session, then sort of freestyle, you know, stream-of-consciousness your thoughts, and then let an AI kind of scribe, organize it, put it together, and generate the note. Yeah. Are there, are you allowed to recommend specific

Maelisa (28:52)
huh.

Yes.

Dr. Jeremy Sharp (29:09)
platforms for that, or things that you've found folks have success with?

Maelisa (29:12)
Yeah.

Yeah, I mean, I purposely don't do any affiliate or income arrangements with any of these companies, because who knows what they're gonna be in five years, and they change so rapidly. Like, I don't wanna do it. Plus, that way I can give you real recommendations. Now, I don't know all of them. Like we talked about before, there's 40-plus that have contacted me, and I haven't tested them all. So if I don't mention one, it doesn't mean it's a bad one. But Quill

Dr. Jeremy Sharp (29:19)
Mm-mm.

Mm-hmm.

Maelisa (29:40)
Therapy Notes is one that I have heard from a lot of therapists that they really like. It is a little bit more limited, though even just in the last couple months it's getting more advanced. But you can pretty much literally log in, I think you get a few free sessions, and click record and dictate right away, just do it. You can use it without any setup. So it's very easy to do.

If you give it some good prompts, just like with anything with AI, if you give it a lot of context, then it'll give you a better note. But the cool thing with Quill is that it does what I would recommend anybody do if you're using dictation: it has a little sticky note feature, so you can save notes to yourself to prompt yourself on what you wanna write. Because I know that would be my issue,

if I'm dictating, I'm gonna forget the plan, or I'm gonna forget to include a progress statement, which I always recommend people include for insurance, for example. Or I might forget to list a certain number of interventions or something. So you can have these little prompts to yourself that it will save and just have up. That way you have a structure and a flow, but it's also reminding you of the key components you wanna tell it to put in your note.

So even if you don't use Quill, I would still recommend doing something like that if you're using dictation. So Quill is great. I think they seem, at this point, to really value client security and privacy, so that is really good too. And that's why they don't do recordings: you only have the option to log in and either do dictation or type.

Dr. Jeremy Sharp (30:59)
Yeah. Nice.

Maelisa (31:19)
Now frankly, most clients are fine with the whole recording thing and logging in and don’t care. You know, it doesn’t seem to be a big deal for people. So, yeah.

Dr. Jeremy Sharp (31:25)
Yeah.

Okay, yeah. I want to ask you about that, just the informed consent component, and how you might be recommending that we disclose the use of AI recording and transcribing to clients.

Maelisa (31:43)
All of the above, yes. So pretty much all of the ethics codes, and this has changed just in the last few months, now have some kind of statement that says: if you use AI, and this means if you use AI in any shape, form, or fashion for something that will end up in a client record or to help treat a client in any way, you have to get their consent.

Dr. Jeremy Sharp (31:45)
Yes.

Mm-hmm.

Mm-hmm.

Yes.

Maelisa (32:09)
So, a lot of the clinicians that I work with will say, all of my clients were fine with it, it’s cool. So they had a little conversation and said, hey, I wanna use AI for my notes. Almost everyone has already talked to their veterinarian or their physician or somebody; they’re already used to other professionals doing this. So it is not shocking to people. But just let them know that it is their choice.

Dr. Jeremy Sharp (32:17)
Yeah.

Mm-hmm.

Yeah.

Maelisa (32:34)
So right now all the ethics codes are very clear that we do have to inform people and make it optional, right? So you can’t say, you know, if you don’t want to use AI, I can’t see you because this is how I get my notes done, right? But most people are okay with it. And so maybe you have a couple clients who don’t want to use it, but for other clients, you know, I’ve heard some clinicians tell me it was life-changing.

Dr. Jeremy Sharp (32:46)
Right. Right.

Maelisa (33:00)
I’ve heard many people actually use that exact phrase, that using AI was life-changing for their notes. And so if you’re someone who really struggles with notes, I think it is totally worth going through the effort of testing out a few platforms, talking to clients about it, and getting their consent so that you can use it, because if you struggle with notes, it really can be a huge benefit.

Dr. Jeremy Sharp (33:03)
Mm-hmm.

Mm-hmm. Mm-hmm. Okay.

Mm-hmm.

Yeah.

Maelisa (33:27)
Especially if you can just log into a site. If you’re doing telehealth, that’s kind of the easiest thing. You just log in, it’s listening in, it gives you a note afterwards. You don’t even have to do anything. And this is just my take from the neurodivergence perspective: I think for a lot of clinicians who have ADHD, it’s really helpful because it initiates the task, right? So it feels so much easier and less overwhelming to just review a note that an AI wrote for you.

I think it takes some of the emotion out of it too, because you don’t feel like you’re grading yourself as much as you do when you’re writing your own notes. And so that process in itself means that people are just getting them done when maybe they would have avoided the task if it was something they had to produce on their own.

Dr. Jeremy Sharp (34:12)
Yeah, yeah. I want to go back to the consent component and just nail this down a little bit more. So I’m guessing, you know, we need to put this in our consent form or disclosure form, whatever it is that we have people sign. You’re making a gesture.

Maelisa (34:17)
Yeah.

So none of our codes say anything about that. Yeah, so even in the strictest states, and there are some states that have laws about this, and these are happening rapidly. So you do wanna keep up to date on your state and whether it has a law around AI and mental health or healthcare. Illinois has the most specific, stringent law right now about mental health and AI.

And so they not only have guidance, they actually have restrictions about how AI can talk to people and call itself mental health care. They specifically tell clinicians: if you’re a mental health professional, if you’re licensed or, you know, under a board, right? You have to get consent. You can only use it as an administrative tool, meaning,

you could use an AI to help you create treatment goals, for example, but you cannot do that and then not look at them, right? So it explicitly says that if you’re a clinician, you are the one responsible for the product, essentially for the end result. And so you can’t use the excuse that like, well, the AI gave me a bad diagnosis. No, the diagnosis is on you, right? But you can use it as a tool to save yourself time.

Dr. Jeremy Sharp (35:26)
Yes.

Maelisa (35:37)
Even that law, the reason I bring it up, which gets very nitty-gritty about specific things, does not say whether consent needs to be written. So all of our ethics codes say we need to get consent, but nothing I have seen yet, anyway, has said that it has to be in writing or in a form. Now I would always recommend it, and honestly a lot of state laws even still have that just for general consent, for treatment.

Dr. Jeremy Sharp (35:45)
Wow. That’s shocking. Huh.

Maelisa (36:03)
They don’t actually say that it has to be in writing. We just assume that it is. But there are laws that just say you have to get consent. And so you could technically just document that you got consent, but all of us are doing forms and having people sign forms, right? So all that to say, at the very least, I would recommend you document it. So document that you reviewed with your client.

Dr. Jeremy Sharp (36:08)
Interesting.

Yeah.

Maelisa (36:26)
the use of AI in documentation, which is typically how people are using it, and that they consented to using AI for their documentation. If you want to have a form, you can. A lot of these AI platforms will give you a sample consent form. I can’t vouch for them, you know, so look through it, check it out.

A form can feel scary, especially consent forms, because then it gets into legal stuff. I would say don’t let that deter you. And part of what can be scary is that a lot of the consent forms AI companies will give you use technical terms that we don’t know and can’t explain. How many therapists know what encryption is?

I am aware of what it is, but I still probably couldn’t give you a good explanation in a succinct way. But don’t let that stop you. Just know that, again, most clients are used to this. And worst case scenario is you just say, actually, I don’t fully know about that. Let me reach out to the company, and I’ll get back to you with that information. So always just defer to that. You can always ask more questions if clients have questions.

Dr. Jeremy Sharp (37:29)
Mm-hmm. Mm-hmm.

Maelisa (37:36)
and reach out to these companies, and most of them are very receptive to that type of feedback and to talking with you about those things. So if you do have questions yourself, if you’re looking at their consent form and you’re like, I don’t know if this makes sense, or I don’t know what these two bullet points in this list mean, you know, email them and they will help you. Because most of the people who are running these companies that I’ve worked with, they’re…

Dr. Jeremy Sharp (37:53)
if

Maelisa (38:01)
really nice people, they have a good heart and started these companies genuinely because they wanted to help, usually help in some way with access to mental health care and found that this was a big concern for a lot of mental health clinicians and so this is a way to help them. And so they really are invested and friendly and want you to feel comfortable using the product.

Now, do use your critical thinking and trust your gut. I would always go with that. But ask them questions if you’re confused about anything.

Dr. Jeremy Sharp (38:32)
That’s fair. That’s fair. Yeah. Since we’re kind of in this realm of vetting the companies out there and looking at, you know, their documentation and so forth, are there other major factors that you would consider or things we need to look for when considering an AI scribe or documentation helper or program? What do we need to look for?

Maelisa (38:55)
Absolutely, yeah.

Dr. Jeremy Sharp (38:57)
Let’s take a break to hear from a featured partner. Y’all know that I love Therapy Notes, but I am not the only one. They have a 4.9 out of 5 star rating on TrustPilot.com and Google, which makes them the number one rated electronic health record system available for mental health folks today. They make billing, scheduling, note taking, and telehealth all incredibly easy. They also offer custom forms that you can send through the portal.

For all the prescribers out there, TherapyNotes is proudly offering ePrescribe as well. And maybe the most important thing for me is that they have live telephone support seven days a week so you can actually talk to a real person in a timely manner. If you’re trying to switch from another EHR, the transition is incredibly easy. They’ll import your demographic data free of charge so you can get going right away. So if you’re curious, or you want to switch, or you need a new EHR,

Try Therapy Notes for two months, absolutely free. You can go to thetestingpsychologist.com slash therapy notes and enter the code testing. Again, totally free, no strings attached. Check it out and see why everyone is switching to Therapy Notes.

Dr. Jeremy Sharp (40:15)
Hey, everyone. I’m really excited that NovoPsych Psychometrics is sponsoring the show. NovoPsych is a platform for psychologists who care deeply about assessment and testing and want their self-report measures to be the very best. NovoPsych has an extensive library of 150 standardized instruments with strong coverage across the presentations many of us assess every day, like disability, functional impact, autism, ADHD, and a wide range of symptom measures.

You can also use it for broad personality assessments like the Big Five or go deeper when you’re looking to understand personality pathology. What makes NovoPsych different isn’t just the range of scales, it is the quality of the experience. So I really appreciate the depth of psychometric info that it provides and the clear graphs and visualizations that make results easier to interpret and communicate. If you want to try NovoPsych psychometrics, you can access a 15 day free trial via the link in the show notes, which is

novopsych.com slash testing psychologist. That’s N-O-V-O-P-S-Y-C-H dot com slash testing psychologist.

Maelisa (41:19)
So the number one thing is: is it HIPAA compliant? And most of them are at this point; a few years ago, that was not the case. And the way that you are HIPAA compliant as a clinician is by having a BAA, a business associate agreement. So you need to have a business associate agreement if you are using an AI platform.

Dr. Jeremy Sharp (41:27)
Yeah.

Maelisa (41:41)
Where I have actually seen this come up more recently is with some of these larger therapist platforms like Alma, Rula, Grow, that type of thing. I don’t know which ones offhand and I’m not calling any of them out, but I have heard that clinicians were told, you don’t need a BAA because we have a contract with you. No, you do. If you’re using their system and you are not an employee

Dr. Jeremy Sharp (41:52)
Mm-hmm

Maelisa (42:09)
of theirs, which you’re not for those companies, then you do need a BAA. So that’s just your general rule; that’s for you to be HIPAA compliant. Now when the AI company says, hey, we’re HIPAA compliant, that means they understand that healthcare data is special and has special security around it that is different from other data people collect in general business, right? And so that’s why HIPAA applies to them. So they’re saying, we understand what HIPAA is.

Dr. Jeremy Sharp (42:10)
Yeah, yeah.

Maelisa (42:37)
We guarantee that our product is following these guidelines and then you are compliant by having that BAA with them saying that you did your vetting, but you don’t have to do technical research and look at all of the things they’re doing. I will say though, some of these companies, and this is with tech companies in general, even some EHRs,

They are not mental health clinicians, and they should know about HIPAA and all of these laws, but you’d be surprised; even very large, well-known companies often are actually not doing, quote unquote, the right thing. I will say where this comes up the most practically is around something called anonymized data.

Dr. Jeremy Sharp (43:03)
Mm-hmm.

Okay.

Yeah.

Maelisa (43:25)
So I actually have a list of questions to ask AI companies, and I will give you that link so you can link it in your show notes and people have it. So this is one of the questions I recommend that you ask them before you sign up, and that is: what are they doing with your clients’ data? That’s what you wanna know. And so often what they will say, I’m gonna blank right now on the name.

Dr. Jeremy Sharp (43:31)
That’d be great. Yes.

Maelisa (43:48)
And this is terrible, because this is kind of the distinction here. So anonymized data is a specific term. It is a legal term in HIPAA, and there is a specific way that tech companies have to process data behind the scenes to make it so. That is not the same as, I’m gonna blank on what the more common phrase is that they would use.

Dr. Jeremy Sharp (44:04)
Mm-hmm. Mm-hmm.

Maelisa (44:13)
But there’s another common phrase, maybe I’ll think of it later, that they’ll use to say they’re making your clients’ data private. But there is a literal technical way that they have to do it per HIPAA. And that’s the thing I’ve seen where I’m like, I don’t think some of these companies actually know what they’re talking about. Because, you know, in order for an AI to get better, it has to use the data. It has to use your clients’, the information you’re giving it.

Dr. Jeremy Sharp (44:16)
Yeah, yeah.

Maelisa (44:39)
So it’s not a bad thing that it’s using your clients’ data as you enter it, but there is a way that it needs to make that data secure on the back end and scramble it so that people could not potentially be re-identified. And you might actually know what these terms are, I don’t know.

Dr. Jeremy Sharp (44:49)
Mm-hmm.

Yeah. Well, I was just gonna say, I don’t know if I know the term you’re talking about, but I am adjacent enough to our AI report writing software to know there was a big process we had to go through to vet it. There are specific HIPAA PHI algorithms that the engineers can implement to identify and remove protected health information from

Maelisa (45:23)
Yeah.

Dr. Jeremy Sharp (45:23)
records.

So yeah, there are some, I guess, standardized processes for that. But you need to make sure that they’re happening. Yes.
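As a purely illustrative aside to the PHI-scrubbing algorithms mentioned here, a minimal sketch of pattern-based identifier removal might look like the following. Everything below is hypothetical: the patterns and placeholder labels are invented for the example, and real de-identification pipelines built around HIPAA’s Safe Harbor identifiers are far more sophisticated, typically using trained NLP models rather than a handful of regexes.

```python
import re

# Hypothetical patterns for a few identifier types; a real pipeline
# covers all 18 HIPAA Safe Harbor identifiers and uses NLP for names.
PHI_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_phi(text: str) -> str:
    """Replace any matched identifier with a category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client (DOB 04/12/1985, phone 555-123-4567) reported improved sleep."
print(scrub_phi(note))
# → Client (DOB [DATE], phone [PHONE]) reported improved sleep.
```

Anything the patterns miss simply passes through untouched, which is exactly why the speakers stress verifying what a vendor actually does rather than trusting a label like "de-identified."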

Maelisa (45:29)
Yes. And I thought of the term because you said it: de-identified. So if people tell you de-identified, that doesn’t mean anything. That’s like your cereal in the grocery store saying whole foods, right? They can just say that if they want. It doesn’t mean anything; there’s no standard for them to point to. So you want to make sure. Anonymized actually is a legal term in HIPAA, and that’s what you want them to be doing with your clients’ data.

Dr. Jeremy Sharp (45:35)
Mmm.

Mmm.

Yeah. Yeah.

Maelisa (45:56)
De-identified is not good enough. And de-identified is also what people will tell me they’re doing when they log in to ChatGPT. They’re like, well, I just use ChatGPT. And if you’ve done this, a lot of clinicians have, I would say: no shame, and today’s your last day.

Dr. Jeremy Sharp (46:06)
Oh gosh, yeah.

Yes. No shame.

Yeah, don’t do it again. No shame, but stop right now.

Maelisa (46:21)
You know, if you’re not using a company that you have a BAA with, right? So there is no such thing as de-identifying a progress note or a treatment plan for a client. Because inherent to a clinical session, even if you take out the person’s name and you think you take out their diagnosis and you give ChatGPT information about the session, that literally, based on the legal definition, is still identifying information. So there is no such thing as de-identifying a session.

Dr. Jeremy Sharp (46:32)
Right.

Maelisa (46:49)
And so we don’t want de-identified data anywhere. Yeah, and that’s more for the tech companies to worry about. But you just wanna ask them and you wanna see if they have answers, right? So you may not fully understand all of the technical terms, but I have a list of questions you can ask them. Just see if they are forthright and forthcoming with all of these answers, if they…

Dr. Jeremy Sharp (46:54)
want anonymized data. Yes.

Right.

Maelisa (47:14)
want to give you information, and I would say most of them do, and then just stay on top of that stuff.

Dr. Jeremy Sharp (47:20)
Yeah, yeah, that’s fair. So maybe an unrelated question, but of course still in this realm: with all these AI companies doing documentation for folks, philosophically, how do you feel about that? The clinician not personally writing the note, are we losing something, like the soul of the note, if the clinician doesn’t write it?

Or do you just view it like, hey, this is a really cool administrative tool that streamlines things and offloads a part of the work that most of us don’t need to do? How do you look at that? Just philosophically.

Maelisa (47:58)
Yeah. Honestly, I see both. So one of the things I often tell people when I’m teaching them how to write progress notes is I want you to write as little as possible. And so I’m teaching people to like use phrases that they use a lot in notes and have cheat sheets for themselves and create starter sentences. So I’m often teaching people very similar concepts, which is don’t reinvent the wheel every time basically. Right.

Dr. Jeremy Sharp (48:08)
Mm-hmm.

Mm-hmm.

Mm-hmm.

Mm-hmm. Mm-hmm.

Maelisa (48:25)
And so does that make notes a little less personalized? Yeah, it does. It’s a little less personalized than a narrative description. And there’s a lot of benefit to making notes less personalized, but more focused on like the clinically important things in that way. And that is definitely an art, not a science. And…

I think there’s a lot of benefit. Even, you know, five years ago there were just things that I wouldn’t have worried about telling people to document that I actually do question now. As information is getting more open, and as states are having very different interpretations of legal things, and as we’re becoming more polarized politically, which means that

even our state guidelines are becoming more polarized around clinical ways in which we practice, right? And just as a very real example, some states have outlawed abortion and others haven’t. And that is a topic that comes up in therapy, regardless of which state you live in. And documenting that now could have a different consequence. That wasn’t an issue five years ago.

Dr. Jeremy Sharp (49:30)
Absolutely.

Mm-hmm.

Right, right.

Maelisa (49:39)
So there is that reality. And many clinicians now are licensed in multiple states, and how do you navigate that type of thing? So all that being said, I think there’s some benefit to actually taking some of that personality out of the note,

Dr. Jeremy Sharp (49:56)
Mm-hmm. Mm-hmm.

Maelisa (49:58)
but also leaving in what is important to you. The good thing with a lot of these AI platforms, and the fact that they’ve been around for a while now, is that you can personalize things more. You can create a template for yourself. You can teach it the things that you want it to pull out of a session, or phrases that you want it to say, whether or not it’s listening in on your session. You can really personalize the note to how you work with people.

Dr. Jeremy Sharp (50:01)
Mm-hmm.

Yeah.

Mm-hmm.

Maelisa (50:26)
And the benefit is that, yeah, if someone tells me that using this system is life-changing, how am I to argue with that, right? If you couldn’t get your notes done, I mean, that’s the reality: I work with a ton of people who just literally are not getting notes done because it’s so overwhelming, either administratively and/or emotionally, like so stressful. So if this gets your notes done,

Dr. Jeremy Sharp (50:34)
Yes.

Yeah.

Maelisa (50:50)
To me, I have a phrase, any note is better than no note. Like I think, provided you have client consent, I think it’s a great tool to use. Absolutely use it if that’s what you need to use.

Dr. Jeremy Sharp (50:53)
Mm-hmm.

Yeah, yeah, yeah. I mean, one way I think about it, and I’m curious what you think about this, especially with report writing: are we using the AI to do something that we don’t yet have the skill to do? That to me seems like a problem, right? If you’re kind of relying on AI to do something you haven’t learned yourself. Usually that’s the critical thinking component and

Maelisa (51:20)
Yes.

Dr. Jeremy Sharp (51:27)
you know, pulling things together and analyzing. So maybe the question in there is: do you feel like there is a skill being built, or some value, you know, for the person writing the note themselves? Like, is writing a good note a skill that therapists need to know? Or does it otherwise help process the session or conceptualize the client, where we might lose something if we offload it to AI? You know what I mean?

Maelisa (51:49)
Yeah.

Yes, it absolutely is a skill. And the reason I have an entire business on this is because none of us get training in it, for the most part. So it is a skill you need to have, and it’s a skill you likely didn’t get training for. And that is, I think, the benefit of being a psychologist: you typically do have to do testing and report writing, and I really do think that helps a lot.

Dr. Jeremy Sharp (51:59)
Right. Right.

Mm-hmm.

Maelisa (52:13)
because you can at least infer some things from that into progress notes for typical therapy treatment. But most clinicians get zero training on this topic. So yeah, it’s crazy, but that’s the reality. And yes, you have to know what a good progress note looks like. You have to know what needs to be in your progress note in order to review the AI. Because essentially, you’re moving

Dr. Jeremy Sharp (52:18)
Yeah. Yeah.

Mm-hmm.

Mm-hmm.

Maelisa (52:39)
It’s not that you’re completely letting go of the task; it’s that now you’re the supervisor of the task instead of the doer of the task. And obviously, to be the supervisor of the task, you have to know what a good product, a good result, looks like. So I do think people still need to do training, get some documentation training, get some examples, have an idea of what should be in your note, what you want to be in your note. And you have to edit them,

Dr. Jeremy Sharp (52:53)
Yeah.

Mm-hmm. Mm-hmm.

Maelisa (53:06)
because AI still gets things wrong, or focuses on the wrong thing that’s not critical and leaves out things that are critical. And again, that really is nuanced and depends on your client and that day. So you still have to review it. Yeah, does that answer your question?

Dr. Jeremy Sharp (53:08)
Yes

Mm-hmm. Mm-hmm.

Yeah, that’s fair.

Yeah, yeah, yeah. Where do you fall on letting AI do the diagnosis and treatment planning? Is that legit? Are you okay with it?

Maelisa (53:33)
Yeah, it’s pretty good at diagnosing, in my experience. I always use the same session when I’m testing all these platforms so I can compare them, and it’s pretty good. It’s pretty spot on. Now, it’s not 100%, right? So you always have to review it. And the session I give it does not have a clear-cut diagnosis. It’s not like

Dr. Jeremy Sharp (53:46)
Uh-huh. Uh-huh.

Yeah.

Maelisa (53:58)
a scenario of someone with a panic disorder coming in and talking about their panic attack; it’s much more nuanced than that. And it’s still pretty good. And the research so far also shows that with physicians, and in general, it is pretty good at that. Yes. What it is not good at is creating succinct treatment goals. And if you’ve ever used AI in any form or fashion, you know it creates these

Dr. Jeremy Sharp (54:03)
Mm-hmm. Mm-hmm. Mm-hmm.

Yeah. That AI is better at diagnosing than a human. Yeah.

Okay.

Maelisa (54:26)
excessively wordy everything, right? Any response it gives you, it adds in unnecessary words. So it does that with notes, and you have to train it not to do that. A lot of my AI prompts include things like: only use bullet points, only use one phrase, do not use paragraphs. I’m really explicitly telling it, don’t make this long. And so it does that with goals, and I

Dr. Jeremy Sharp (54:29)
Yes.

Mm-hmm.

Mm-hmm. Mm-hmm.

Maelisa (54:49)
hate that, because it makes goals confusing, it makes treatment planning confusing, both for you and for the client. And honestly, the reality is I have already been training people on that, because clinicians do that too. They overcomplicate things and add in too many words, “in order that” and “so that” and “in order to,” and it gets very confusing what we’re actually tracking, you know, from a treatment goal perspective.

Dr. Jeremy Sharp (54:52)
Mm-hmm.

True.

Maelisa (55:13)
So I like keeping things really clear. I haven’t found it to be as good at that, but it’s very good at summarizing information, and it’s pretty good at diagnosing. So, you know, use it. And here’s where I could see AI being really beneficial, and again, this is if it has continued access to your notes over time: I forget things.

Dr. Jeremy Sharp (55:34)
Mm-hmm.

Yeah.

Maelisa (55:36)
Right? You know, it’s six months in and I want to talk to my client about updating their treatment plan. Well, AI could give me a quick little summary rather than me having to go back and actually read through the notes. And that would be great, you know? It tends to be pretty good at that type of thing, prompting you, highlighting what’s important, and it will continue to get better at

Dr. Jeremy Sharp (55:50)
Yeah. Yeah.

Maelisa (56:03)
that task. So I think that could be really, really helpful for clinicians in the future.

Dr. Jeremy Sharp (56:10)
That’s fair. I want to pivot, maybe start to close our discussion with some insurance talk. So you mentioned in the beginning that insurance companies are using AI to review notes and do some auditing and things like that. Can you share more about that? Do you have any more detail in that realm, and, you know, exactly what insurance companies are doing, and

Maelisa (56:18)
Yeah.

Dr. Jeremy Sharp (56:40)
Is this truly affecting us at this point? What does that look like?

Maelisa (56:44)
Yeah, so I should clarify: I don’t actually know if they’re reviewing notes; they are reviewing claims. And this is one thing, when we talk about insurance being scary, that I always encourage people to remember: insurance only sees billing information. So audit red flags, all of that, is based on billing information, and that’s what they’re using AI for.

Dr. Jeremy Sharp (56:48)
Okay. Okay.

Mm-hmm.

Mm-hmm.

Maelisa (57:05)
For now, right? So AI is reviewing those things and looking at what could be unusual, what doesn’t make sense, was something billed twice, right? Why not use AI to catch that kind of thing? But I do know it’s also being used to deny claims, and then, goodness, I can’t think of the word, but when you’re asking for something to be approved,

Dr. Jeremy Sharp (57:28)
yeah, like pre-auth.

Maelisa (57:29)
Yes, yes, it is also being used potentially to deny that. And for that, I don’t really know what to do at this point, other than be aware and ask. So if you call the insurance company to talk to them about something that was denied, in whatever form or fashion, then ask if it was denied by AI or if a human reviewed it. Now, you know,

10 years ago, when they didn’t have AI reviewing it, they were still inaccurately denying things. So that doesn’t necessarily mean it’s kosher, but still check that out and ask that question, because it can at least help get them to prioritize it and give it a human review. And then, as far as your actual documentation, nobody’s seeing your notes unless

there’s an audit request. I have not yet heard of them actually using AI for audits. That doesn’t mean it’s not happening; I would be surprised if it’s not. Even if it’s just the employees of the insurance company, you know, using AI to help them, as unfortunately people are doing all over the world with their jobs, right? So I wouldn’t be surprised. So anytime you see something unusual,

Dr. Jeremy Sharp (58:25)
Okay, sure.

Yeah.

Maelisa (58:42)
This came up: I heard someone who’s a licensed attorney and a licensed counselor talking about this in the context of their attorney job. They were reviewing something submitted by another attorney, and at the end of the letter, it had that statement that you often see when you’re working with AI, something like, if this works for you,

You know, let me know if you want me to edit anything or add something else. Like they didn’t even pay attention enough to take that statement out of the letter. And so all that to say, if it looks and feels off or looks and feels like AI, it probably is. And so trust your gut with that and ask. Don’t feel odd asking, was this AI generated? Did AI review this?

Dr. Jeremy Sharp (59:07)
Right.

my gosh.

Yeah. Yeah.

Mm-hmm.

Mm-hmm. Mm-hmm.

Maelisa (59:32)
Because at this point, I think at least you have a better chance of challenging that.

Dr. Jeremy Sharp (59:38)
Yeah, yeah, that’s good to know. I was not aware that we could challenge that, so hopefully that’ll help some folks. So let me see, maybe we start to wrap up with some optimistic, positive thoughts here. You’ve said that some folks have called this life-changing, being able to use AI. Are there specific use cases or implementations, ways that you’ve seen therapists or

psychologists, you know, anyone really, benefit from using AI in their practice?

Maelisa (1:00:10)
Yeah, the big thing, and it sounds so basic, but that’s what makes it so life-changing, is that it starts the task for you. I think, overwhelmingly, one of the biggest issues I deal with with documentation is just that people are mentally exhausted by the task and avoiding it. And whether that’s because they had a supervisor

Dr. Jeremy Sharp (1:00:16)
Mm-hmm.

Yeah.

Maelisa (1:00:32)
who reamed them about documentation, so they have these horrible negative associations with it, or because they really care, these are usually good, conscientious clinicians who care a lot about what’s going in their documentation, and then they get really anxious about it. And so then it feels like writing a note might take 30 minutes. So if you know that you’re just going in and the note’s already started,

Dr. Jeremy Sharp (1:00:38)
Mmm.

Mm-hmm.

Maelisa (1:00:58)
That, I think, is the biggest thing. Yeah, it gives you something to start with. I will say the other thing I hear a lot is, you know, it wrote a note better than I could. And I would challenge people on that. It doesn’t have to sound overtly clinical. Yes, it needs to sound like a therapy session, but just because the AI uses big words doesn’t mean it’s better than the note that you might write. You know, make sure it’s understandable.

Dr. Jeremy Sharp (1:00:58)
That’s huge.

Yeah.

Maelisa (1:01:22)
It should just be understandable. But that’s my biggest takeaway: if you’re struggling with notes, that’s now one of my first go-tos that I recommend to people. Ironically, it’s two things: either try AI or try going back to handwritten paper. And one of those two solutions seems to work, I think because it simplifies things, right? It’s taking a lot of the guesswork and the thinking out of it.

Dr. Jeremy Sharp (1:01:32)
Yeah. Yeah.

That’s interesting. Yeah, yeah.

Mm-hmm.

Maelisa (1:01:48)
and that helps. I do want to mention a couple other AI companies because I remember you asked about that and I only mentioned one. So I mentioned Quill Therapy Notes. Another one is Berries. So that’s a much more complete solution. Another one is Heidi Health. And that one, I know for me personally, when I log in, I can tell that a clinician started that company.

Dr. Jeremy Sharp (1:01:57)
Mmm

Mm-hmm.

Maelisa (1:02:10)
There’s just something about it when I log in that feels different and feels a little bit more familiar, being able to navigate it. And that one is also really complete. Blueprint is another one I hear from a lot of people that they use. And, you know, again, if I didn’t mention the one you’re using or one you’re interested in, that doesn’t mean it’s bad. It’s just like an EHR. Try them out. They’re all different and similar.

Dr. Jeremy Sharp (1:02:17)
Yeah.

Maelisa (1:02:35)
They all have different pricing structures, they all have different free options for trying them out. So that’s my thing is try it out and see what you like and then choose the one that you like and that works for how you work.

Dr. Jeremy Sharp (1:02:47)
Yeah, yeah, I love that. Yeah, I’ve heard of all of those, which is great. And I’ve heard good things, especially about Heidi. It seems like Heidi has made its way into the testing community. And a lot of folks mentioned them. Yeah, for transcription, particularly.

Maelisa (1:02:58)
cool.

Yeah, that would make sense because it was started by a surgeon. And so they focus on all health care. And it’s a pretty big company. Yeah.

Dr. Jeremy Sharp (1:03:10)
Nice. Yeah.

So my last question is more personal. What are, I don’t know, the top one or two, like, personal uses of AI that you are enjoying the most these days?

Maelisa (1:03:20)
Yeah, for now, I’m actually in a coaching program and there are these huge guidebooks that I have to go through and I’m like, I feel like I’m in a school program again, you know, like doing all of this work, all of these assignments. And, you know, it’s a business coaching program. And so it was about like creating a really in-depth client avatar. And that was something I thought I’ve had.

Dr. Jeremy Sharp (1:03:35)
So much homework.

Maelisa (1:03:49)
My business QA Prep has been around since 2014. Like, I’ve been doing this for a while. I know these people. I can give you specific examples, right? Like I know my client. And it was really good at giving me information and at summarizing things well. Um, and I was able to actually upload like survey data and spreadsheets of notes of things that I’ve had and just upload all of this information I’ve collected.

Dr. Jeremy Sharp (1:03:52)
Mm-hmm.

Mm-hmm.

Maelisa (1:04:15)
And then it took that and summarized it and curated it. So anything you have like that. The other thing I do, I do a lot of trainings and I hate writing quiz questions. And so I can give it all of the information about the training and then it gives me the quiz questions and I always have to edit them. But again, it initiates the task for me and it’s so much easier to just edit some questions and improve some things than it is to create them on my own.

Dr. Jeremy Sharp (1:04:19)
Hmm.

Yes. Yes.

Mm-hmm.

Yeah, yeah, those are great examples. Yeah, I love that, I love that. Yeah, I love it for aggregating research. Like I’ve been using it a lot for podcast research, you know, just finding articles and summarizing and like where are the gaps and what questions would be good to answer, you know, and things like that. It’s just, it’s so good for aggregating text and like summarizing.

Maelisa (1:04:53)
all that.

Yeah.

Yeah. And

it can be good at like bouncing ideas off of things. You know, it’s a little sycophantic, you know, it wants to please you, or it wants you to like feel good about yourself. But if you do it the right way, it can be really, really helpful for giving you ideas. And I will say like one of my favorite examples is, we had a little, this is a couple of years ago now, we had a little Christmas snafu

Dr. Jeremy Sharp (1:05:09)
Mm-hmm.

Yeah.

Mm-hmm.

Mm-hmm.

Maelisa (1:05:29)
with one of the kids and there was some confusion about why Santa brought something early and why it wasn’t wrapped, you know, why it was on the doorstep when we arrived home from school. So AI wrote a great letter from Santa explaining the whole situation. Yeah, it was so fun. But creative things like that, it’s really, it’s really fun for. So it can be fun to play around with.

Dr. Jeremy Sharp (1:05:30)
Uh-huh.

Hahaha

That’s amazing. I love this. I love this. Very specific, very helpful. Yeah.

Yeah, yeah.

that’s awesome. Yeah, nice. Well, this has been super helpful. I love it. We could talk about this stuff for a long time.

Um, so where can people find you? You have a lot of resources. Yeah. Yeah. How can people get to you?

Maelisa (1:06:07)
Yeah,

QAPrep.com is where I have all the things. You can find everything there. I have a YouTube channel. So if you type in my name, it’s spelled differently, M-A-E-L-I-S-A. I’m pretty much the only Melissa spelled that way. You’ll find it. It’ll pop up on YouTube. And I have a ton of videos there. I’ve been doing more YouTube lives. So often you can pay attention to that schedule and actually show up live and ask questions. And then lastly, I have my book.

Dr. Jeremy Sharp (1:06:14)
Hmm.

Nice.

Mm-hmm.

Maelisa (1:06:34)
Stress-Free Documentation. So that’s available on Amazon and has essentially a whole paperwork packet and templates and cheat sheets and all that good stuff in there. So that’s a really accessible way to kind of check out some of those resources that I have.

Dr. Jeremy Sharp (1:06:50)
Yeah, yeah, it’s great. It’s great. I have it on my shelf back there and have lent it out many times. Yeah, yeah. Really appreciate the work that you’re doing. I mean, there aren’t many people who really love compliance and documentation. So I’m glad that you are tackling it and being such a helpful resource in our field. So thanks for doing that. Yes. Well, great to talk to you again. And hopefully we’ll talk again soon.

Maelisa (1:06:57)
cool.

Yeah, yeah, thank you.

Yeah, thanks for having me.

