546 Transcript

Dr. Jeremy Sharp | Transcripts

Dr. Jeremy Sharp (00:00)
Hey folks, welcome back to the podcast. As you can see, I’m in my home studio, quote unquote. That means an office in the basement. And I’m excited to be talking with you about business trends for 2026. So I started these episodes, I think last year and had a really good time researching and putting the content together. So I figured maybe this will be a recurring episode here at the end of every year. So doing a lot of research to figure out where we are headed as a field.

Assessment in psychology, neuropsychology, even school psychology. So anyone who is focused on testing, this episode is for you. So we’re going to talk about what’s going on in 2026, what to look out for, what to plan for. The tone of this episode, I think, is almost more cautionary tale versus optimistic, future-oriented fun.

Take that as you may, but either way, I hope you stick around and check it out. And if you’re listening to this episode and you hear anything along the way and you’re thinking, my gosh, I would like to figure out how to implement this stuff or have a bigger community to talk about this stuff with, check out my upcoming membership community just for testing folks called Craft. It’s launching in mid-January 2026. You can go to thetestingpsychologist.com/craft and get on the wait list so you can be the first to know when the door is open. But for now, let’s talk about business trends for 2026.

Dr. Jeremy Sharp (01:24)
Okay folks, we are back. We’re talking about business trends and assessment for 2026. All right, I try really hard in these episodes to focus specifically on assessment. So we’re not just generally gonna be talking about the future of psychology quote unquote, or AI will change everything. So for those of you who think I’m gonna spend this episode talking all about AI, that is definitely not true.

Also not going to be talking about vague healthcare trends that don’t actually meaningfully matter for us. So this episode is all about what will materially affect clinical psychologists, school psychologists in private practice and neuropsychologists who specialize in assessment. So here’s the idea upfront. And I kind of alluded to this in the intro, but for me, 2026 is not going to be a quote unquote breakthrough year. It’s more of a friction year.

So what does that mean? It means the practices I think that are going to do well are not necessarily going to be the most brilliant from a clinical standpoint. They’re going to be the ones that have operational clarity around documentation and payer interaction, that’s insurance panels, workflow, and risk. So we’re going to be talking a lot about

how insurance is influencing the assessment picture. And even if you’re in private pay practice, I think this is still gonna be pretty relevant for you. But the idea here is if you’re still running your assessment practice, like it’s 2019, which to me means like great testing, but kind of fuzzy operations, you’re gonna feel increasing friction and drag from all of that. So let’s dig in and talk about where that drag is gonna come from and what to do about it, of course.

So the whole, I think, unifying pattern here that we’re going to talk about is there is an emerging direction, I think, across payers and regulators and technology platforms. And that shared direction is essentially more structure, more justification for what we’re doing, more traceability, and less tolerance for the idea of, quote unquote, this is how we’ve always done it.

And this applies whether you are cash pay or insurance based or a hybrid practice. Sadly, none of this is really designed to make your life easier. It’s designed to make your systems more auditable. So that’s kind of the lens for everything that follows from here. And if you’re already like, my God, this sounds terrible, just trust that I was right there with you as I was doing the research for this episode and the material all kind of lined up.

These are the trends that we’re looking at. Okay, trend number one, quote unquote, efficiency adjustments and the shift toward integration work. What does this mean? Let’s start with regulation. We’re not going to get in the weeds necessarily, but just at a sort of direction-of-travel level, CMS and commercial payers are increasingly signaling the same thing to us, and that is

Raw volume is less defensible than integration and synthesis and outcomes in the work that we do. So where do we see this? We see this in how testing policies are written, how time-based versus technical codes are being scrutinized, how quality and value frameworks are evolving. So, you know, some of you have heard, I’m sure, of MIPS and MVPs.

This is all tied into RVUs and so forth, like if you’re in a hospital setting. So whether or not you personally participate in any of those programs or any other value-based programs, I think the downstream effect of this is still going to reach you. The practical implication is that if your revenue model relies heavily on technical administration with relatively thin interpretation and synthesis and feedback time,

then your profit margins are going to be pretty vulnerable in 2026 compared to where they were five years ago. And so what does this actually mean? It means that reimbursement for test administration and scoring, among other things, is likely going down pretty significantly. It already has among a lot of commercial payers. And I think that’s just going to continue as this is deemed to be pretty low value work.

That’s, you know, if we’re lucky and it continues to get reimbursed for psychologists versus technicians. So what does that mean on the flip side? Well, it means that they’re valuing high value work like report writing, thinking time, feedback, those kinds of processes. So what is the action step here? I think the action step is to essentially audit your CPT codes.

Honestly, you can ask yourself: where does the value of this evaluation actually live? I would argue it’s not in our test administration. And then secondarily, does my documentation make that value obvious to an outsider? You’re not trying to game the system by any means, but it is about, I think, aligning your work with how insurance payers and other entities are valuing our work.

All right, so trend number two, prior authorization is going to get more structured and definitely not kinder. All right, so there is a lot of talk out there about electronic pre-auths, interoperability, and modernization of the whole pre-auth process. If this has not come to bear in your geographic area or with your mix of payers, it is likely coming pretty soon.

We’re not submitting pre-auths by hand anymore. I think that has been changing over the past few years. But there is a big push toward interoperability, so systems being able to talk to one another, EMR systems, that is. And what is the reality then? Okay, what does this mean for assessment practices? More good news, it means that the pre-auth process is likely gonna get faster, but the scrutiny is going to be

tighter as the insurance panels and payers streamline their own processes. And yes, they are likely going to be using AI to scan pre-auths for certain keywords to justify approval or not. So they’re going to start implementing, like I said, standardized denial rationales and clearer medical necessity language and more consistent enforcement. And this just means that sloppy or inconsistent submissions are going to

hurt a lot more. So again, what do we do here? I think the opportunity is not about automation, but repeatability. So it’s kind of an interesting distinction. I think every assessment-focused practice should have standard pre-auth packets for common evaluation types with very consistent medical necessity language. If you need help with this, you can consult

Each payer’s documentation, they will typically give you some kind of language for medical necessity. And if you can’t find it in your payer packets, then you can go search the internet for medical necessity language. I did an episode, gosh, this was a long time ago. It was an episode in the hundreds. So probably, I don’t know, five, six years ago around medical necessity that I will link in the show notes.

Another thing that we’re going to need, I think, is just clear internal rules about scheduling only after prior authorization or explicit financial consent has been given. If you’re one of those practices that’s like, we’ll wait and see, or we’ll figure it out if it’s denied and then we’ll pivot, I think you’re just bleeding time and goodwill, honestly, at this point. So tightening these processes is going to be super important.

All right, trend number three, payers just keep tightening their testing policies. This is not new. We’ve seen this for many years. But literature and information out there would suggest that it is accelerating. Across major payers, we are seeing caps on the number of hours that we can bill for our units or CPT codes. We’re seeing narrower definitions of medical necessity.

And we are seeing stronger pressure to use a single predominant evaluation code set. So assessment is where payers are pushing back the hardest. We are seeing this personally here in Colorado right now with our Medicaid payer. So every practice needs to answer explicitly, are we insurance first? Are we hybrid with firm boundaries? Or are we cash pay with super bills?

There’s not really any neutral position anymore where you can sort of waffle and figure it out as you go along.

If you are taking insurance and you don’t have some kind of rigorous pre-auth or pre-verification process in place, you’re going to have a really hard time, I think. And if your families are still assuming coverage without written financial consent, you’re just setting yourself up for conflict and probably not getting paid. So I think clarity upfront is kind, not harsh, and certainly protective.

given where we are headed. OK. A lot of insurance stuff here. Let’s pivot for a second and talk about AI. Everybody wants to talk about AI, right? But I’m not going to talk about it in the way that you probably think I am. So this trend is AI moving from a convenience tool to a governance issue. All right. So everybody loves AI. Of course, I’m a co-founder of an AI report writing platform, Reverb. You all know this.

So this hits home for me. I’m not going to talk about AI as a convenience tool or as a tool to help us during evaluations. I think the real shift in 2026 is that clinicians are kind of moving from being writers to being editors. Okay, so we’re not talking about better prompts or flashier tools necessarily. It’s this changing role of clinicians.

AI is just being increasingly used for extracting scores and drafting these boilerplate narratives and organizing data. The clinician role as a consequence, I think, is kind of narrowing and sharpening around concepts like our thinking time: the diagnostic reasoning, differential diagnosis, tailored recommendations, interpreting nuance and context and environment, a lot of things that I’ve talked about on the podcast before.

At the same time, we’re running into a governance issue where, at the time of this recording anyway, there is a lot of talk at the federal level about establishing some kind of AI guidelines, which a lot of people seem to think will essentially mean no restrictions on AI whatsoever. So right now, we have a lot of state level health care AI laws that are being passed, and that process is accelerating.

The themes in these state level AI laws are transparency and disclosure and accountability. So if you are using AI anywhere near diagnostic work and you don’t have an internal AI use policy or clear PHI rules or human review requirements that are actually documented, you’re falling behind and you’re exposing yourself to quite a bit of vulnerability.

I think the practices that are going to do well are not going to be the most automated, necessarily. They’re going to be the most auditable. So the scrutiny around AI use is only going to get more thorough. And if you don’t have policies in place to document exactly how and why and who is using the AI in your practice, you could be in trouble. All right.

Trend number five, validity and defensibility become built in, not optional. So assessment platforms and publishers, I think, are just increasingly embedding validity indicators and structured defensibility tools. So what does this actually mean? From a business perspective, this matters because clear validity language reduces report revisions, which is great. Standardization reduces internal work.

And defensibility protects you when your decisions are questioned. All right, sounds great. What does this mean for your practice? It means, for me, I think every practice should essentially define which validity indicators are reported by default. Now, we don’t want to give away our validity measures, right? But we do need to talk about the fact that we did validity testing and which variables those are.

We need to talk about what triggers additional comment or testing and what your quote unquote minimum defensible data set is for your evaluations. So all this kind of flows downhill to say that over testing by default is no longer a neutral thing. I think we’ve been over testing for a long time. A lot of us often over test. Our practice is probably guilty of some of this as well.

in the interest of being thorough. That’s, I think, where we’re coming from with this. But especially if you are in that insurance payer mix, this just increases cost and time and scrutiny. And you’re going to have to provide a lot of defensibility for the decisions you’re making and why you’re choosing certain tests over others. And validity testing is a part of that. But I’m talking about validity even in a broader sense of: why is this test warranted? Why am I doing an extra academic measure? Why am I doing

six executive functioning measures, right? All right. Trend number six, hybrid assessment models become even more of the norm. So I think at this point, the fully in-person, start to finish assessment model is increasingly kind of a premium niche service and not the default. This is certainly the case with my consulting clients when I talk with folks around the country.

And this is true for our practice, too. We are essentially doing all intakes and feedbacks remotely, and then the testing is happening in person. So this matches with where I think people are headed. Virtual intake and data gathering, in-person testing, or an intensive kind of model where they come in once and get it all done, and then virtual feedback. So this does have some, I think, real operational implications if you’re not already there.

So this means that many practices might need fewer offices. So you might be thinking about using a rotating bullpen almost of testing rooms. I know some practices that are implementing this with good success where you can take it a couple different ways. You can either have singular offices that are small and reserved only for testing and then broader, bigger offices with shared workspace.

That is what I meant by bullpen. I used that term incorrectly earlier. So shared space for doing the high value thinking time, and then rotating testing rooms. You also want to make sure you have a better telehealth setup, like investing in internet and cameras and all the things that make telehealth successful. This also means setting your clients up for success with telehealth, making sure they’re aware of what that means and what to expect.

Clean digital protocols and scoring systems are going to be your best friend. So the payoff here is faster turnaround and hopefully fewer errors and just better alignment with patient expectations. I want to make sure and emphasize this is not about lowering standards. It’s about designing workflows that actually work in 2026. For better or for worse, it seems like people don’t really want to leave their house, and we’re still in this virtual world as much as we don’t like it.

Okay, trend number seven. We’re starting to wind down here, but the last thing that I’ll really hammer home is just workforce scarcity. So this is really affecting our operating decisions, I think. Demand for assessment, especially adult ADHD and autism assessment, is really, really high, but the demand doesn’t scale unless your model scales.

So in 2026, I think practices increasingly have to choose, like, are we boutique or are we scalable? I was just having this conversation with a consulting client the other day. Boutique versus scalable, are you psychometrist supported or are you psychologist only? And then are you doing a narrow scope or like a broad swath of assessment? So you can’t be everything to everyone without burning out your team or yourself.

So if you’re not already thinking about this, I think it’s important to do so. And an exercise you might engage in is just writing down the things that only the psychologist has to do, and then everything else is quote unquote a design choice within your practice. Some of us can have a hard time wrapping our minds around that, but it’s worth thinking about. All right. So here’s the part that might make some people uncomfortable. I think rethinking our comprehensive battery is

super important, all right? Everybody loves to say we have a comprehensive battery. We love to say that. We pride ourselves on being thorough and comprehensive. You know, it’s in our tagline even. But the problem is that there is growing pressure, financial and regulatory and just practical, to justify why an eight-hour comprehensive battery is necessary in every case. Quote unquote “necessary” is doing a lot of heavy lifting in that sentence.

So the question I think in 2026 is not whether comprehensive batteries are bad necessarily. You can always get more information. It’s more like, are they defensible? And do they paint you as an expert like you think they do? Or do they paint you as something different?

Targeted batteries, I think, paired with thoughtful follow-up and clear recommendations are often a lot more sustainable and many times more clinically useful than just default maximalism. So traditionalism alone, I think, is not a business strategy. So you’re kind of seeing these themes. Just because we’ve done it this way or always done it this way is not a viable reason to continue to do it that way. All right. So as we start to close, I want to…

just like hammer home one more time, it’s not necessarily about the tools, okay? Most people think 2026 is gonna be about AI tools. I don’t think that’s true. It’s about auditability, defensibility, operations, policies, documentation. So if you can clearly show what data you relied on, who did what, why the battery was necessary, how conclusions were reached, and where human judgment was applied.

hopefully judiciously and in the right place, I think you’ll be fine even as these systems outside of our control tighten up. And if you can’t do that, then I don’t think any amount of clinical brilliance is going to protect your time or revenue or reputation, especially if you are working with insurance payers. So from my perspective, that’s the real work of next year. And…

You know, I know some of you are probably thinking, my gosh, this is such a downer episode. But when you really dig in, I think this is where a lot of the trends are headed. This is where the information points. If you’re private pay, you are a little bit insulated from some of this. But if you’re working with any kind of insurance payer, my guess is you’re seeing these things already. And it’s just going to accelerate, like I said, and get tighter and tighter. So I hope

This overview and discussion has at least given you some things to think about and a few things to put into play as you head into 2026.

