00:00 The Rise of AI in Medicine
01:27 Implications for Psychological Assessment and Therapy
01:30 The Rise of AI in Healthcare
10:51 Implications for Clinicians
17:32 Navigating Risks and Ethics
19:24 Integrating AI into Practice
Dr. Jeremy Sharp (00:00)
Hey, welcome back everybody. We’re talking about AI again today. I know it’s the topic of the moment, topic of the year, maybe the decade, but there’s a little different spin on it this time. Today I’m talking about an article that came out in the New York Times recently highlighting a big shift in medicine, and I think mental health care as well, which is that patients are starting to ask AI chatbots for medical advice. Not even chatbots, really. I mean, people are asking AI for medical advice just within ChatGPT and other language models. Sometimes this is happening before an appointment, sometimes instead of an appointment. So yeah, AI is in some cases taking the place of talking to real medical professionals. We are definitely not immune to that trend here in psychology. In fact, I would argue that we might be more affected than most healthcare specialties because, you know, assessment and therapy is often seen as kind of a knowledge-heavy task that some people believe a chatbot could automate. So today I want to break down that article, talk about what it means for psychological and neuropsychological assessment, and of course give you some clear actions that you can take to stay ahead rather than playing catch-up if this comes to your practice.
Dr. Jeremy Sharp (01:30)
Okay, let’s dive into it here. Let’s start with the landscape. What is actually happening? What are we talking about here? So the article, which is linked in the show notes, by the way, describes, I think, a pretty striking shift that I would imagine a lot of us are aware of, but it really puts a fine point on it. And that is that people are essentially treating AI systems as medical advisors in a lot of cases, not just for something like “what does this symptom mean,” but for actual diagnostic guidance, medication questions, care recommendations, and any number of other things. You may have seen some of the research on top use cases for AI, and personal guidance slash therapy is right at the top. So this is happening in medicine, it’s happening in mental health, and we need to be aware of it. Now, why are people doing this? Well, you’ve probably experienced some of this yourself. Full disclosure, I have absolutely asked ChatGPT questions about medical issues and symptoms for myself and family members. I’ve taken pictures of bruises and bug bites and things like that and uploaded them and asked for its opinion. So this is happening, and there are lots of reasons. For one, it’s instant. You don’t have to wait for an appointment.
It is non-judgmental, so you can put whatever you want in there and you don’t have to deal with a human potentially judging you, or think that a human is judging you. AI sounds very confident when it’s offering this advice. It is free or largely free. Maybe you pay $20 or $30 a month or whatever for a pro version of a language model, but it’s for all intents and purposes free. And maybe most importantly, the user experience is dramatically better than just doing a Google search and wading through pages and pages of search results, right? I think this is related to kind of an illusion of safety, you know, like patients assume that AI is credible. It comes across as very credible. And there’s also maybe an assumption that if these theoretically reputable tech companies made it, then it must be vetted and very well developed. But that assumption isn’t necessarily true. Most AI systems used by consumers aren’t regulated as medical devices by any means. They’re essentially just advanced aggregators of public information, not actual physicians or neuropsychologists or anything like that. The accuracy of the advice, quote unquote, definitely varies. Studies over the past year have shown that when given structured data, AI can hit clinician-level performance on diagnosis. But when given messy, real-world narratives, the accuracy drops significantly. And in most cases, that’s what we’re giving AI when we talk to it: messy, real-world narratives. AI tends to give very confident but occasionally dangerous advice, and mental health advice is especially inconsistent. You may have seen, you know, there are some pretty significant risks of misinterpreting suicidal ideation or psychosis and things like that. So we’re dealing with a tool that’s extremely appealing, extremely accessible, but also extremely flawed in ways that aren’t necessarily visible to a lot of the users. So what does this mean
for us as assessment clinicians? Let’s bring it home here to our world. First of all, I think we’ve been used to the idea of clients coming in with self-diagnoses over the last few years, right? Like we’ve talked about the TikTok trends and things like that. So this is already happening, but instead of people coming in saying, I think I might have ADHD because of TikTok, it becomes, I put my symptoms into ChatGPT and it said I have ADHD and autism. So there’s a ramping up of the perceived level of expertise and credibility there that is gonna come right into our laps as practitioners. So the difference is confidence. You know, the AI sounds like an expert and then that transfers to the client. So I think this shifts our role from being an initial detective to more of an expert referee of sorts. The second thing just to be aware of, I think, is that expectations are changing. Clients are getting what appear to be very credible answers instantly from AI. They’re not waiting months and months for an evaluation. And they’re not used to ambiguity. They expect clarity and speed and plain language and actionable recommendations, because that’s what they are getting from these language models. So this forces us, I think, to improve our service delivery speed and the clarity of our communication, which is truly unfortunate. Like, we’re trying to compete with pretty advanced technology here, but that is the end game. I think we do have to adapt and improve what we’re doing. The third thing to be aware of
is that the things that make assessment truly helpful are, thankfully, the exact things that AI cannot do well. It cannot understand the developmental and cultural context or the environmental nuance and things like that, which I think is really the core of what we do. Data is only a part of what we do, right? It cannot pick up on subtle patterns during the interview, like nonverbal cues and rapport issues and things like that. It does not do well with integrating inconsistencies, like discerning suboptimal effort or things like that.
That’s a huge one, actually. I just want to double-click on that. It has no sense of effort, especially qualitatively. Now, you may have data from effort measures, but the qualitative understanding of that is still pretty opaque to an AI system. It’s also not great at using clinical intuition informed by thousands of hours of client contact. Now, side note: clinical intuition, quote unquote, is actually not that reliable, but it is valuable and it’s something that goes above and beyond what the AI can do. As I said before, it does not do well judging situational nuance. It cannot have any understanding of what’s actually happening behind the scenes for the client, or that true lived experience behind the symptoms that are reported. A perfect example of this that
I use in talks all the time is a case from probably six or eight months ago. One of our extern students was doing an evaluation, and, you know, this was a referral question for autism, and the teacher was reporting all kinds of instances of isolation and not seeking peers out and social difficulties and things like that. So my trainee went to the school and did a school observation and found that the kids in the classroom, this was a daycare situation, all had some connection to one another. They were either cousins or siblings or neighbors. And our client was the only one without a buddy or three buddies in the classroom. And so that kiddo was a little more isolated compared to the other kids. There’s no way that AI would know that. And lastly, there’s just integrating results from truly standardized, validated psychometric instruments. There is no replacement for that. There is no test administration through AI, at least at this point.
One last thing just to emphasize, as far as how this is going to impact us as clinicians, is that quite simply, risk is going to increase. So if a client delays care because an AI, you know, minimizes the symptoms, or on the flip side, a client panics because an AI catastrophizes, then we are going to be the ones to handle that downstream effect. So it can go both ways.
So our job is not just about doing good assessment. It’s navigating, I think, a completely different ecosystem of client expectations at this point.
Dr. Jeremy Sharp (10:51)
So let’s transition a bit to business and practice implications here.
You know, I always focus on workflow and business aspects. So, number one, I think that our value proposition has to evolve a little bit. If clients think that a chatbot or an AI platform can give them a diagnosis, then, you know, our value is no longer a diagnosis-driven report. They’re like, I’ve already got that. Your value is going to be being a verified, licensed expert that provides nuanced interpretation of this data: synthesizing the data across multiple domains, you know, emphasizing that we’re doing cognitive, emotional, academic, and behavioral testing. We are experts at interpreting psychometric data. We have a rich understanding of development, not just that client’s development, but development in general. Our relationship with the client, I think, is going to continue to be more and more important. You know, some of those common factors and just the therapeutic relationship that we form with people is going to become more and more important. They cannot get that from an AI system. Tailored recommendations have always been important, but they’re just going to get more and more important as people can get, you know, pretty generic recommendations, I think, from language models. Our job as psychologists is to provide truly tailored recommendations with local referral sources and resources. Judgment, I think, and clinical responsibility are going to be huge for us, and context, of course. AI can look like it’s doing these things, but it’s really not. And we, I think, have to start communicating with people that there are limits to AI and we can go above and beyond what is provided there.
All right, the second thing is that the intake game, I think, is changing a bit, whether we like it or not. People are coming in with more pre-formed, pre-digested narratives. Sometimes they’re going to be AI-generated summaries of their symptoms. Sometimes they’re going to be completely inaccurate. And so I think we need a plan for how we are going to handle this content: things like clients quoting AI advice, clients questioning your conclusions because AI told them something different, clients with unrealistic expectations about speed and clarity. And then there’s the inevitable request, if you haven’t gotten it already: can’t you just run this AI summary through your tools and use that as a cornerstone of the evaluation?
So lots of things to prepare for. But on the flip side, it also creates opportunities to streamline your practice. We’ve talked a lot about how AI can be useful in a practice. It can summarize intake questionnaires. It can extract data from records. It can create draft report sections, certainly: descriptions of tests that you’ve administered, standard background info. It can help with scheduling and follow-up. One of my favorites is that you can use it to generate patient-friendly education documents, you know, for explaining complex terms and even generating infographics and things like that. So if you can integrate AI into your practice and do it well, then your time becomes more valuable. And then it’s supercharged: the human plus AI combination is super valuable and typically more, what’s the word, powerful than either one in isolation.
Another business factor to consider, though, is that pricing might need to reflect these shifts. Okay, so as lower-value tasks become automated, you may need to revisit how you’re pricing assessments. The high-value work is your thinking and your judgment and your relationship. That becomes a premium service. So you are pricing that unique human experience, expertise, authoritativeness, and trustworthiness, that E-E-A-T model I talked about in the last episode, that no algorithm possesses. Okay, so it might be a two-part pricing adjustment where the kind of menial work, or, you know, low-value tasks, are priced lower. This would be, like, maybe clerical work and generating certain sections of the report, that kind of thing. Whereas the thinking time and the integration time and feedback gets priced higher. One last factor just to be aware of in the business realm is that risk management is important. So you might need to update, well, you will need to update your informed consent to share whether you are using AI tools as assistive technology. You need to clarify where PHI is going or not going. This is always a good time to remind everyone not to put protected health information into commercially available models like ChatGPT or Claude or Gemini. You have to have a workspace account or some kind of HIPAA-compliant version. You want to review your internal policies for how AI drafts are handled and looked over. You may want to do some training for your staff on what can and cannot be delegated to AI tools.
This is one of those places, you know, where several episodes ago I talked about my proposed AI competencies for clinicians. I’ll give it another shout-out and we’ll put that in the show notes if you want to take a look. I put together a framework for becoming competent in the use of AI, and there are a lot of factors in there that can be applied to the principles in this episode. And then the last thing is, you know, policies that clearly outline that the final clinical decision and liability rests solely with you, the licensed clinician, and not the tool.
Dr. Jeremy Sharp (17:32)
So that’s a nice transition, I think, to the limits and risks and ethics of this whole thing. We have to be honest: AI is powerful but fallible, right? It hallucinates, of course. It misinterprets nuance. It lacks contextual understanding. It cannot see the client. It cannot detect rapport or tone or motivation or anything like that. It cannot resolve ambiguity. It can’t determine what’s clinically meaningful versus background noise. And it can’t ethically administer or score any tests, like I mentioned earlier. Okay. Now, this all might seem obvious to us as clinicians, but clients don’t necessarily know these things, and so we have to, you know, kind of assume the responsibility for everybody. From an ethical standpoint, we don’t yet have consistent guidelines across jurisdictions on how to use AI. So HIPAA and state privacy compliance have to be the first consideration. Like I said, free public language models are never compliant for PHI. Liability for AI-generated errors is murky at this point, but for licensed clinicians, I think liability is 100% ours. Patients assume that AI is vetted when it definitely isn’t, and so that falls to us as the credible adult in the room.
So from my perspective, your job is not to avoid AI by any means. Your job is to use it responsibly and communicate clearly about its limits. All right, so what do we want to do next? These are concrete steps. You know, I love concrete steps, y’all. So here are a few things that you can do to actually change your practice to accommodate this information. Number one, update your intake process. Start asking:
Dr. Jeremy Sharp (19:24)
Have you used AI or online tools to understand these symptoms? What did these tools tell you? How are you relating to this information? What do you make of it? This can keep you from stepping into, I think, an unseen expectation minefield. It’s almost like, I think there’s a parallel here to those of us who work with trainees. It’s not, are you using AI? It’s more, tell me how you’re using AI, and what you make of it, and what you feel is going well and what’s not going well. Just assume that it’s happening. Number two, integrate AI intentionally in your practice. Use AI for tasks that don’t necessarily require judgment: summarizing data, restructuring or translating text passages, certainly writing drafts. You could generate tables with AI. Infographics, like I said. Administrative workflows are a great place for AI. One of the very first things I used AI for in our practice was to have it be a workflow consultant: I spelled out our entire intake process and scheduling process and had it poke holes in them and offer suggestions. And lastly, you can use it to generate educational content for either your website or handouts that you’re giving clients, things like that.
You never want to outsource the interpretation, the synthesis, the integration, the conceptualization, that clinical decision-making, things like that. Don’t outsource that stuff. Number three, you can build client-facing education just around AI, not like around diagnosis and stuff like that, but client-facing education around AI. So you can do a one-page handout or even a website page titled something like, How to Use AI Wisely for Mental Health Questions. That positions you as a guide and puts you in a position of authority, so to speak, and almost differentiates you from the AI by you kind of embracing it and assuming that clients are using it, but then also commenting on how to use it appropriately and responsibly. You can, yeah, like I said, embrace that practice and
differentiate yourself. And this gives another trust signal and also works pretty well for AI search optimization. Let’s see, another thing: re-center your values. We alluded to this, but this is concrete step number four. Just talk openly on your website and in your consultations about the true ingredients, I think, of strong assessment. So you may want to update your website copy to emphasize that you are doing really well at interpreting context and collaborating with the client. You have expertise as a clinical psychologist or neuropsychologist that gives you an understanding of development and related factors. Clinical reasoning is an area of expertise. You provide a trusting relationship. And it seems like it goes without saying for us, but we want to emphasize for clients that we are actually using validated, well-standardized psychometric instruments. That’s something that they cannot get from an AI chatbot. Another thing you can do very concretely is just prepare your team. So, you know, training your admin staff or your clinical staff on what to say when clients bring up AI or challenge clinical guidance based on AI outputs. You can focus on validating the client’s search for clarity, of course, while reinforcing the necessity of human, licensed interpretation. And then the last thing is just keeping, I guess, what you’d call a future-focused stance, knowing AI is going to get better. It’s going to get better very quickly, I think. And instead of being afraid of it or ignoring it, now’s the opportunity to position your practice so that you’re leveraging it in ways that really lift the quality of your work. So I don’t think AI is replacing assessment any time soon, but it is reshaping expectations and workflows and the business environment around assessment. If we cling to old models that are slow and opaque and overly focused on just delivering a diagnosis in our reports, I think we’re going to struggle. But there’s a lot of room to lean into the parts of the work that are uniquely human and verifiable and ethical. And, you all know this, I’m a big optimist when it comes to AI. If we can allow AI to take the burden off of our cognitive load, it should free up space for creativity and nuance and the relational work that I think actually transforms our clients’ lives. So as always, thanks for listening, and keep doing good work out there.
