561 Transcript


Dr. Jeremy Sharp (00:00)
Hey everybody, welcome back to the testing psychologist. Here we are. We are entering the third and final pillar of this sprint. I’ve been calling it a sprint, but it’s kind of like a marathon over the last couple of weeks of March. So this pillar is all about the technology suite in your practice. So today we’re going to kick that off by talking about one of the great debates in the field. If you spend any time on professional listservs or

Facebook group or anything like that, you probably heard this great debate between the iPad and paper administration. So on one side we have traditionalists who argue that if it isn’t paper pencil and a manual stopwatch, then the standardized norms don’t apply. And on the other side, we have the digital first clinicians who haven’t touched a physical stimulus book in five, 10, or maybe even close to 15 years, which is my situation.

So as we look toward the future of our field in 2026, we have to move past preference, I think, and look at validity and data. Does the medium change the construct being measured? That's just one of the questions we're tackling in this debate. So today, we are weaving together over a decade of equivalence research with concrete protocols to help you decide when to tap the screen and when to pick up the pencil.

As usual, I am inviting everyone to check out the Crafted Practice Retreat this summer. It is the only in-person business retreat for testing psychologists. It's a small-group affair, only 20 people, and at this point I think we have 12, maybe 11 spots left. Let's see. I think we're past the early bird rate at this point, but you can still get, I think, great value from this event. Feedback in past years has been awesome.

Strong relationships are formed, you get a lot of work done on your business, and hopefully you go home feeling refreshed. You can go to the testing psychologist dot com slash crafted practice and check it out. We'd love to see you this summer in Colorado.

For now, though, let's get into this discussion of digital versus paper administration.

Dr. Jeremy Sharp (02:06)
OK, everybody, as we dive into this topic, we're going to start with some of the concerns about digital administration. These are concerns that have been around for a long time, and rightfully so, but I tried to dive into some of the relevant research here to ground this discussion. Now, the major publishers have moved to digital platforms like Q-interactive, which I think is one of the biggest ones out there, and other publishers have certainly come up with their own platforms over recent years as well.

The primary concern here was mode effects: the statistical difference in scores when a test moves from paper to a screen. Okay, very relevant concerns. But the foundational research here is, I think, pretty robust at this point. Meta-analyses over the years, including work by Wallstrom and colleagues in 2017 and more recent work by Alva and others in 2025, investigated whether a 100 on an iPad is actually a 100 on paper as well. For the vast majority of cognitive subtests, the research indicates cross-mode equivalence. So whether a kiddo points to a Picture Concepts item in a paper book or taps it on a screen, the underlying cognitive construct remains identical, as best we can tell. It is equivalent across modes of administration.

But we do have to address something called user experience bias. A recent February 2025 meta-analysis on digital health technology anxiety, and yes, that's a thing, revealed that for older adults in particular, technology itself can act as a cognitive tax. What does this mean? It means that for every year of age over 60, that's what they found, the likelihood of a patient experiencing tech anxiety increases by roughly 10 percent, which seems huge. This anxiety creates an extraneous load, essentially, a concept we touched on way back in episode one of this sprint, that cognitive efficiency concept. The technology creates an extraneous load that competes for working memory resources for these clients. So if your client is more focused on navigating the tablet, or not breaking the tablet, or getting things right with the digital interface, than on the actual task we're trying to measure, then the data is going to be compromised. And we have to account for this extraneous load.
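Just to make that 10 percent figure concrete before we move on, here's a minimal back-of-the-envelope sketch of how that risk compounds if we assume the increase applies multiplicatively to the odds each year. To be clear, that multiplicative model is my own assumption for illustration, not the meta-analysis's actual statistical specification.

```python
# Back-of-the-envelope only: this assumes the ~10%-per-year figure compounds
# multiplicatively on the odds of tech anxiety. The meta-analysis may specify
# its model differently, so treat these numbers as illustrative.

def tech_anxiety_odds_multiplier(age: int, per_year_increase: float = 0.10) -> float:
    """Assumed odds multiplier relative to a 60-year-old."""
    years_over_60 = max(0, age - 60)
    return (1 + per_year_increase) ** years_over_60

for age in (60, 65, 70, 75, 80):
    print(f"Age {age}: ~{tech_anxiety_odds_multiplier(age):.1f}x the odds at age 60")
# Age 75 comes out to ~4.2x -- fifteen years of compounding at 10% per year.
```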

It’s a funny, like very nuanced and kind of in the weeds concept, but it is important. And I’ve talked about this with some of the Pearson folks, I think, in our conversations in past episodes about development of some of the more recent digital measures, but we see something called a tactile friction gap. So there’s some research from 2024 that highlights that, you know, the medium of the administration does matter. Fine motor speed and

tactile feedback are important in this process. what that means is, know, pencil on paper provides physical resistance. A stylus on glass is more slick, so to speak. And while digital versions are valid, the research would suggest that the slip or the slickness of the stylus can influence performance, particularly in individuals with sensory processing issues or motor tremors.

And all of that potentially leads to an over diagnosis of motor issues or processing speed concerns and things like that. So these are just some things to be aware of, you know, as we’re moving to more of a digital administration world out So all of that said, I think one of the most compelling arguments for digital testing isn’t about the client. It’s actually about us. And here’s why.

Research into manual scoring in clinical settings indicates a human error rate of somewhere around 5% on each of our evaluations. Now, maybe that sounds small, but consider the impact: a simple three-point addition error when you're totaling a sum of scaled scores can shift a client from average to below average, or even more significantly, from below average to very low, and potentially change their entire educational trajectory based on those scores. Manual scoring, as you know, requires us to look up numbers in small-print tables, sum columns, and transfer data. These are all potential points of failure, and digital platforms virtually eliminate these clerical errors. Now, obviously we still have to tap the right thing on the screen throughout a digital administration, but the elimination of simple addition errors is no small feat. It's certainly worth paying attention to and valuing. So by moving to a digital administration model, you're effectively clearing your data of human noise, which I think is an asset.
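Here's a quick sketch of why that three-point error matters so much. The conversion table and the classification cutoffs below are entirely invented for demonstration; real values come from each instrument's norm tables.

```python
# Hypothetical illustration of the three-point addition error. The
# sum-to-index conversion and cutoffs below are invented for demonstration;
# real values come from each test's norm tables.

SCALED_SCORES = [9, 8, 10, 9, 9]  # five subtest scaled scores

def classify(index_score: int) -> str:
    # Approximate conventional descriptive ranges; labels vary by publisher.
    if index_score >= 90:
        return "average"
    if index_score >= 80:
        return "below average"
    return "very low"

# Pretend norm table mapping sum of scaled scores to an index score.
FAKE_NORM_TABLE = {45: 92, 44: 91, 43: 90, 42: 88}

correct_sum = sum(SCALED_SCORES)  # 45
miskeyed_sum = correct_sum - 3    # a simple three-point addition slip

print(classify(FAKE_NORM_TABLE[correct_sum]))   # -> "average"
print(classify(FAKE_NORM_TABLE[miskeyed_sum]))  # -> "below average"
```

Same child, same responses, different educational trajectory, all from one addition slip.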

Additionally, and we're moving more in this direction anyway, digital platforms utilize millisecond-accurate timing. A clinician's thumb on a manual stopwatch can only go so far; there's just a built-in reaction time lag. Digital precision allows you to see the difference between slow and accurate versus slow and confused with, I think, a level of granularity that we just don't get from manual timing.
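To illustrate what that granularity might buy you, here's a hedged sketch. The response-record format and the two example profiles are invented for illustration; no platform exposes exactly this, but item-level latencies make this kind of contrast visible.

```python
# Hypothetical sketch: two clients with similar total times can look very
# different at the item level. The record format and example profiles are
# invented; they don't come from any platform's actual output.
from statistics import mean, stdev

# Each response is (latency in milliseconds, correct?)
slow_accurate = [(4200, True), (4400, True), (4100, True), (4300, True)]
slow_confused = [(1500, True), (8100, False), (2200, True), (9200, False)]

def profile(responses):
    latencies = [ms for ms, _ in responses]
    accuracy = sum(ok for _, ok in responses) / len(responses)
    return mean(latencies), stdev(latencies), accuracy

for label, data in (("slow/accurate", slow_accurate), ("slow/confused", slow_confused)):
    m, sd, acc = profile(data)
    print(f"{label}: mean={m:.0f}ms, sd={sd:.0f}ms, accuracy={acc:.0%}")
# Similar average latencies, but high latency variability plus errors suggests
# confusion rather than a deliberate, accurate pace -- a distinction a manual
# stopwatch can't capture.
```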

Dr. Jeremy Sharp (08:19)
OK, so how do we apply all this to your practice? That is the most important part. This is what I would call a 2026 mode integrity protocol if you are thinking about switching to Q-interactive or any other digital administration platform. Number one: consistency. What does this mean? It means that if you can help it, never mix modes of administration within a single domain. So if you start a memory battery on an iPad, you should theoretically finish it on an iPad. Mixing and matching within a given domain is not a great idea; it introduces mode variance that you can't really account for in your interpretation. So stick with a consistent administration medium whenever possible.

Number two, and this is interesting because we don't do this, and I'd be curious how many practices out there do: screening for digital literacy. What does that mean? There is a tool out there, and this is just one example, called the Digital Tools Test. It's a standardized instrument designed to screen for tech competency before you begin your battery. This is just a general guideline, and full disclosure, I have not used this measure, so I'm just describing what I found in my own research.

But if the patient scores in the bottom 20%, they're at high risk for what's called a digital penalty, and you might consider a pivot to analog measures for cognitive tasks in that case.

The third thing, and many of you, I would imagine, are doing this, but if not, just a reminder: auditing your scoring. If you're still using paper, you've got to implement some kind of second-eye protocol where you have a psychometrist, an administrative staff member, or a colleague double-check your scoring just to make sure everything is accurate. If you use a digital format, your audit should focus more on ensuring that the automated interpretations actually match your behavioral observations and the performance you recorded. There have been a few reports over the years of even automated scoring programs not being accurate, so I think it's worthwhile to double-check these digital programs as well, or at least keep your finger on the pulse of any reported issues that other folks might be experiencing, again, in the Testing Psychologist community or on the various listservs.

The fourth thing is documentation. And again, we don't do this either, so this has been enlightening for me just in planning this episode.

The guidance here is that in your report, you should specify the mode of administration. It doesn't have to be anything fancy; you could just add a line in the procedures section. This is also where you might put an AI disclosure if you're using AI to generate or augment parts of the report. But you can put something in the procedures section that says, and I'm quoting here: "Assessment was administered via a digital tablet-based platform with validated mode equivalence for the measured constructs. The client's digital literacy was screened and found to be sufficient for valid data collection." OK, so you could put that in procedures. You could also put that in your validity or certification statement for the evaluation, if you have one.

All right. As with many of these sprint, or marathon, episodes, we could go very deep into any of these topics, and the hope, of course, is to do that in future episodes. For now, we're just touching base, checking in on the state of play as far as digital versus paper administration. Hopefully this gives you a little more understanding, and if nothing else, serves as a refresher to check back in on your processes and make sure that you are providing the cleanest administration and data possible for your clients.

