528 Transcript

Dr. Jeremy Sharp (00:00)
Hey folks, welcome back to The Testing Psychologist. Today is our last episode in this series on AI for testing folks. It's a short series, and there will be a lot more discussion of AI in the future, of course, as it's an ever-evolving technology in our field. But the short series is coming to a close today, culminating, I would say, in my proposed AI competencies for testing psychologists.

So we've talked in the previous episodes about practical tools and tips regarding AI, how to get started, and that kind of thing. We also talked about ethics, supervision, and the potential erosion of our clinical skills if we rely on AI too much. So if you didn't catch those past couple of episodes, I would invite you to go back and listen, of course, but I'm trying to bring it all to a culminating point here today by talking through my proposed AI competencies.

So as many of you know, I am an AI proponent and optimist, and I think that it's going to be a pretty integral technology for us. Rather than try to ignore it, sweep it under the rug, or pretend that it doesn't exist, we should run into the fire, so to speak, and develop competencies, like any other area of practice, that will help guide us as we use AI.

So let’s get to this conversation. I’m going to run through the competencies and tell you what I have proposed here for us to work with as we develop our skills in AI.

Dr. Jeremy Sharp (01:42)
Hey everyone, we are back, and we're going to dive right into these competencies. Like I said, we're taking the previous conversations we've been having about AI and using them to create some concrete competencies for ourselves and, maybe more importantly, for our trainees. Now, I have no doubt that APA is working on something like this, and the intent isn't to supersede anything that they come up with, of course.

And I should also give the disclaimer that these competencies are in no way endorsed by APA at this point, so I want to put that out there. But my rationale for designing these competencies really started in solving my own problem. Last year, as many of you know, we have an accredited doctoral internship program, and this was a few months, maybe a year, into AI really coming on the scene.

And I found that our interns were experimenting with AI, but we didn’t really have anything in place to gauge their competence or comfort or knowledge of AI. And then that extends, of course, to the other clinicians in my practice and just testing folks as a whole. So the rationale here is that AI use is expanding pretty rapidly, but

Like many areas of our business and practice, there are significant variations in comfort, knowledge, and application of it. I've seen folks who are very tech- and AI-savvy using AI very effectively, but doing so in a completely inappropriate way as far as privacy and data security. I've seen folks who are way too cautious with privacy and data security, if that's possible. And I've seen folks who are completely AI-naive using it without concern

for privacy or bias, and everything in between, to be honest. So that's the hope: to operationalize and sort of make concrete these ideas around AI literacy and knowledge for the work that we're doing and this powerful tool that we're wielding. Another part of the discussion, of course, like I said, is how trainees fit in.

In the last podcast, I touched on supervision as it relates to AI, and this is an attempt to bring some of those ideas to the mainstream, so to speak, and, like I said, operationalize them. I've seen training programs attempt to prohibit the use of AI entirely for trainees, and I just don't think that's realistic at all. To me, it's honestly kind of like running an abstinence-only safe-sex campaign.

I think we should just assume that trainees are going to use AI, whether it's prohibited or not. And rather than assuming that they will not be using it just because their supervisors said so, let's go the other direction and actually regulate it, so to speak, with defined competencies and open conversations about our skill level and knowledge of AI.

So as we go through these, know that the document is available for download in the show notes. I would ask that if you choose to use it, please just give attribution where it's due; there is a fair amount of work that I put into this. And of course, if there's anyone out there at APA who wants to collaborate, I am happy to have those conversations as well. With that, let's dive in. So this competencies document is based on other

competencies that are out there, but I really used our doctoral internship evaluation as a model for these competencies. So I broke it out into a few different domains. There are five domains, six domains rather, seven domains, my gosh, y'all. There are seven domains, and each domain has different items underneath it, and I'll go through each one and just give you an idea of what we're working with here.

So domain zero, I'll call this ground zero, is foundational knowledge of AI. Examples of competencies here are understanding core AI concepts like machine learning, large language models, and generative AI; identifying commonly used AI tools in psychology, like report writers, transcription tools, chatbots, diagnostic screeners, things like that. And the third one is understanding ethical and

regulatory frameworks guiding AI use in psychology. So this is tapping into, for example, the APA's stance on ethical AI use, transparency, bias, and informed consent, that kind of thing. So that's domain zero, ground zero. Now we move to domain one, which I'll call transparency and informed consent. A few items under here.

Being able to clearly disclose the use of AI tools to clients, research participants, or consultees when relevant. Obtaining informed consent for AI-supported services, including risks, benefits, and alternatives. And then communicating about AI in culturally, linguistically, and developmentally appropriate ways, so, like, using plain-language analogies when you talk with clients about AI. And then the last one is respecting individuals' rights to decline AI-supported services

and providing alternatives. Domain two is bias mitigation and equity. I think this is pretty huge. Items underneath here include recognizing potential biases in AI systems and their implications, evaluating whether AI tools are trained on representative datasets, and advocating for equitable access to AI-supported services and research practices. And the last one is identifying and addressing ways that AI could

exacerbate disparities or stereotypes. Okay, so an example might be being able to identify or flag a predictive algorithm that might over-pathologize boys of color in school settings. History is rife with examples of AI doing this kind of thing, so this domain, I think, is particularly important. Domain three is data privacy and security. So

items under here include demonstrating knowledge of HIPAA and privacy laws in relation to AI. A good example is just knowing that uploading PHI to ChatGPT without a BAA is a violation of privacy. Other items include using AI systems that meet accepted standards for data security. This is where you have to be an informed consumer of the software that you're using, knowing about its privacy and security practices.

Another item is informing clients, participants, or consultees about how their data might be stored or shared. And if you're starting to get overwhelmed with some of these questions, that's totally okay. Like I said, this is a competency document, and there's a Likert rating scale. The intent, of course, is to do a bit of a self-assessment and then recognize the areas that you can get better at. I think we all have plenty of room to get better in this realm.
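To make that self-assessment idea concrete, here is a minimal sketch of how such a checklist might be tallied, assuming a simple 1-to-5 Likert rating per item and a cutoff for flagging growth areas. The domain names follow this episode, but the item counts, sample scores, and 3.0 cutoff are hypothetical, not taken from the actual competencies document.

```python
# Hypothetical tally for a Likert-based self-assessment: each domain holds
# 1-5 ratings for its items, and a low average flags an area for growth.
GROWTH_CUTOFF = 3.0  # assumed threshold for "room to get better"

ratings = {
    "Foundational Knowledge of AI": [4, 3, 2],  # sample scores only
    "Transparency and Informed Consent": [3, 2, 4, 3],
    "Bias Mitigation and Equity": [2, 2, 3, 2],
    "Data Privacy and Security": [3, 4, 2, 3],
    "Accuracy and Misinformation": [4, 3, 3, 4],
    "Human Oversight and Professional Judgment": [4, 4, 3, 4, 4],
    "Liability and Ethical Responsibility": [3, 2, 3, 3],
    "Professional Development and Advocacy": [3, 3, 2, 2],
}

for domain, scores in ratings.items():
    avg = sum(scores) / len(scores)
    flag = "  <- growth area" if avg < GROWTH_CUTOFF else ""
    print(f"{domain}: {avg:.2f}{flag}")
```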

But particularly within this data privacy and security domain, many of these things might sound overwhelming. Depending on the tools that you're using, though, they may have already covered a lot of these issues. For example, my AI company, Reverb, handles all of the privacy and data security items and can provide documentation around data security and so forth. Let's see. So another item under here is informing clients and participants

how their data may be stored or shared. We talked about that. The next is avoiding or discontinuing use of AI tools when privacy and security risks arise. Okay, so this is accounting for the possibility of a breach in servers or storage or something.

Domain four is accuracy and misinformation. The first item under here is being able to critically evaluate AI-generated content before applying it in practice. OK, this is huge. This taps into a lot of the material that I talked about in the last episode around eroding our clinical judgment. Another item is selecting AI tools that demonstrate empirical support and transparent validation.

Another one is identifying and correcting misinformation in AI output. This is crucial for many reasons: noticing when AI produces errant output and taking the time to correct that information. Another item is demonstrating caution in recommending AI tools to clients or colleagues. This is particularly relevant for me, and I think for others in this consulting space right now, but

honestly, for all of us. I mean, a lot of us are talking about AI. So bringing this to life a little bit more: it's essentially declining to recommend a system that does not have any peer-reviewed support or does not meet some of the standards that we've already talked about.

All right, domain five I've called human oversight and professional judgment. The first item under here is maintaining professional responsibility for decisions, even when AI is used. This is, I think, a core feature of using AI, in law and anywhere else: the clinician is the owner of the report. AI is not the owner.

The next item is demonstrating appropriate skepticism toward AI outputs. I think you're getting a theme here. The next item is integrating AI into workflows to augment, and not replace, psychological expertise.

This is important for many reasons. Like I talked about in the previous episode, knowledge erosion, I think, is a real threat with AI. Really, this whole section just gets at the idea that we have to review the output. We have to be able to critically evaluate the output. And we are not using AI to do the parts of our work that I think require the highest levels of critical thinking, like conceptualization and integration of our data.

And the last item here is establishing human checkpoints when using AI in clinical, research, or administrative tasks. OK, so this is, for example, just reviewing AI-scheduled participants for conflicts of interest before confirming.

All right, we're getting toward the end. Domain six: liability and ethical responsibility. OK, pretty straightforward. I think when you're dealing with liability and ethical stuff, you know, the language is pretty similar across these competency documents. The first item here is demonstrating awareness of potential legal and ethical risks in AI use.

The next is seeking consultation or supervision when uncertain about AI ethics. At this point, I almost think we should all be seeking consultation or supervision if we're using AI, and this is just a plug to integrate it into any consultation meetings that you may be having with your peers or elsewhere. The next item is participating in continuing education on AI in psychology. This is a tricky one, because there's not a lot of continuing education right now that is

APA-approved, as far as I know. There's some basic stuff out there, but my sense is that it's going to evolve, and we're going to have a lot more options as time goes on. And the hope for me is to turn these competencies into an actual CE course that you could take to build your knowledge and have some amount of quote-unquote official documentation of AI competency.

The next item, the last in this section, is maintaining professional competence in AI as part of ongoing ethical practice. Now, this might be the most controversial element of all of these. There are many people out there, I'm sure, who might say, "I do not intend to use AI, and I have no intention of maintaining professional competence in AI as part of ethical practice," just because, you know, you feel like it's not

relevant, or you have issues with using it in general. Yeah, I totally acknowledge that. I've talked to many folks, I think, in that realm. And like I said, I don't think it's going anywhere, so being somewhat familiar with AI and having some amount of competence is just going to evolve to be part of ethical practice for us at this point.

All right. So we get to our last domain, professional development and advocacy. The first item in this domain is staying informed about new developments in AI relevant to psychology. So this is essentially just keeping up with the literature, just like anything else that we have to be aware of as part of our practice. The next item is engaging in professional discussions about AI ethics and practice.

The next one is advocating for responsible AI within organizations or professional bodies. And the last item in this domain, and in the competencies checklist, is contributing to the education of peers, students, or supervisees on ethical AI use. And again, I understand there may be differing opinions in terms of the value of AI in our field.

And this item is very purposeful to include. Again, as a clinic or a practice with an accredited training program, my personal belief is that we need to be talking with our trainees about AI from the beginning. And again, we've now had two intern and postdoc classes since AI really became a thing,

so to speak, and none of them have been naive to AI. All of them are using AI in some form or fashion. And if we're not talking about it in supervision, I think we're doing everyone a disservice.

So this is a quick run-through. There's a lot to dig into here, but I wanted to give you a bit of a preview, or an overview, I suppose, of, like I said, my draft AI competencies. I'm going to be working on this, polishing it, and actually trying to put it into play with our trainees this year to see how it plays out in real life, so to speak.

As time goes on, the hope, like I said, is to turn it into a CE course of sorts, complete with educational materials, workbooks, quizzes, and so forth, to actually bring it to life. So yeah, I developed this to solve my own problem, essentially, of being a little bit overwhelmed with how to navigate AI use with our trainees last year,

and I just needed a little bit more organization and definition of the things that I think are important in developing the skill set and being competent with AI as it gets more and more popular. So I would love to hear any comments. You know, if folks find this helpful, if you think it is irrelevant, anything that may be coming to mind, I'm open to having that discussion. So, as always,

thanks for listening, and happy testing.
