540 Transcript

Dr. Jeremy Sharp | Transcripts

Dr. Jeremy Sharp (00:00)
Hey folks, welcome back to the podcast. We’re here talking about the NIH Toolbox today. My guest, Dr. Stephanie Ruth Young, is a licensed psychologist and assistant professor of medical social sciences at Northwestern University’s Feinberg School of Medicine. She was clinically trained as a bilingual pediatric neuropsychologist in Spanish and English. She earned her PhD from the University of Texas at Austin, completed her clinical internship at Dell Children’s Medical Center,

and her clinical fellowship at Children’s Hospital Colorado. Since joining Northwestern in 2021, her research has focused on advancing the accuracy and efficiency of psychological assessments for clinical workflows. To this end, her team has created and validated dozens of assessments of various domains of functioning designed for the youngest infants to the oldest adults. She serves as multiple principal investigator or co-investigator on

several large NIH-funded projects primarily focused on tracking cognitive trajectories and early detection of cognitive impairment. So, like I said, we are talking about the NIH Toolbox here today. My guess is that many of you have heard of the NIH Toolbox. If you haven’t, then there’s a lot to take away from this episode. But even if you have, like I have,

I still think that there is some fantastic content here as we just kind of take a deep dive into the NIH Toolbox. We do an overview, of course: what it is and the suite of tools that are involved in the NIH Toolbox. We talk about some misconceptions around it, and around quote-unquote brief measures in general. We talk a lot about computer adaptive testing. We talk about the clinical utility. We talk about validation and norming in

the context of the NIH Toolbox and its tools. And then we take a little detour toward the end and just have a really fun, kind of engaging conversation around the future of testing, AI, technology, our role as psychologists, and so forth. So there’s a lot to sink your teeth into in this episode, and I think you will enjoy it, especially if you are looking for alternative tools that may

be less expensive and easier to administer. As I mentioned last time on the podcast, I am launching a membership experience for testing psychologists in January 2026. It’s called Craft, and it’s all about doing and not just ingesting the information that’s out there. So this is really meant for all of you folks out there who’ve been maybe long-time listeners and long-time

deliberators, maybe, or folks who just want some extra accountability and support in putting some of these ideas into practice. That is the theme, the core principle of Craft. So this is a community just for testing folks who want to take action and have the support of others to move forward with your practice. So if that sounds interesting, you can go to thetestingpsychologist.com/craft

and get on the waitlist. We’re going to do a launch in early January, and there’s a special founders’ rate and some additional perks for those of you who join this initial launch cohort. So go check that out. There will be more information coming over the next several weeks leading up to that January launch. For now, let’s get to my conversation with Dr. Stephanie Ruth Young.

Dr. Jeremy Sharp (03:44)
Stephanie, hey, welcome to the podcast.

Dr. Stephanie Ruth Young (03:47)
Hi Jeremy, thanks for having me.

Dr. Jeremy Sharp (03:49)
Yeah, thanks for being here. I know that you have a lot going on, and I’m super grateful when folks take time out of their day. And I’m very excited to talk about this topic. So yeah, again, plenty of thanks for being here and taking some time.

So I usually start with this question of why this is important to you. All of us can spend our time and energy and feelings on any number of things. And so I’m curious, for you, why this in particular?

Dr. Stephanie Ruth Young (04:22)
Yeah, I was really excited to come on, primarily because I am currently in a research role, and I can talk about that a little bit more. But I come from a clinical background and a practice-based background, and I don’t practice as much anymore. That’s not the focus of what I do, but I really have a strong heart for that: practicing psychologists, and especially

sort of the process, the methods, the workflows that we do as professional psychologists. That’s really important to me and I think that sort of gets overlooked in the research world a lot. And so I’m happy to kind of have an opportunity to call that out.

Dr. Jeremy Sharp (05:07)
Yeah, I love that. I’m going to make a lot of people mad by saying this, I think. But yeah, it’s always been a belief of mine that practitioners get forgotten a little bit. Like, it’s really easy to research things, but applying them is kind of a different story. So I like where you’re coming from here. So thank you. Thank you. And this is going to be an interesting conversation, I think, as we sort of bridge both of those worlds. Like, you know, the NIH Toolbox is theoretically a research instrument.

Dr. Stephanie Ruth Young (05:21)
Right.

Dr. Jeremy Sharp (05:37)
But I’m thinking we can talk about how it can be more applied as well and work in our practice.

Dr. Stephanie Ruth Young (05:45)
Yeah, absolutely. Yeah, go ahead.

Dr. Jeremy Sharp (05:50)
I totally lost it. That’s okay. Yeah.

Dr. Stephanie Ruth Young (05:54)
I was going to say, I’m happy to

dive in and just talk a little bit about the NIH toolbox and kind of like that perception of it as a research tool versus its kind of broader applications.

Dr. Jeremy Sharp (06:07)
Cool. Yeah, let’s just sit and be quiet for a second for editing purposes, and then I’ll jump in and ask a question that lets you go there.

So let’s jump into it. For people who may not be familiar with the NIH Toolbox, let’s just lay some groundwork. What is it? How does it fit into this broader landscape of practice and research, et cetera?

Dr. Stephanie Ruth Young (06:36)
Yeah, happy to chat about that. That’s sort of the flagship tool that my group at Northwestern works on. It’s been in production for over a decade, funded by the NIH. And it’s a comprehensive tablet-based neuropsychological battery, so one of the first tablet-based batteries. And it’s in its third version now.

So it was designed to be a common currency for measuring health outcomes, so psychological outcomes and health outcomes more broadly, in large-scale research studies. So I think that’s why people sort of associate it with a research tool, because it has been used in so many different research studies over the years. But it, you know, has many of the same features as tools that we think of as clinical tools. So it is validated and normed on a

nationally representative sample, for children as young as age three up to 85 years plus. It only requires a single tablet, and you do need a trained examiner. So it’s designed to be given by a trained examiner in person, but we do have remote options and different features that allow the examiner to be in a different place. And it’s very efficient. So most tests are

one to three minutes long and use computer adaptive testing. So it’s adjusting to the participant’s ability as you go along, which really shortens the battery. It makes it less burdensome on your patient as well as the examiner. And the full battery typically doesn’t take more than 30 minutes. So it assesses

cognition, so kind of what we would think of in a general cognition battery. There’s executive function, attention, memory, language, processing speed, those types of domains. But it also has really comprehensive emotional assessment, motor function assessment, and sensation assessment. And it’s used for both clinical and research purposes. So there have been many studies published about using

the NIH Toolbox in clinical samples. So neurological samples, psychological, cancer, cardiovascular, stroke, TBI, many different populations where this has been used. And so it does have good evidence for use in clinical samples. So that is, like I mentioned, sort of our flagship measure.

Now, there have been other measures that have come out of that, and we can talk a little bit more about those today. But we do have a developmental assessment called the Baby Toolbox, which works for infants and toddlers as young as 16 days of life. So that sort of extends the NIH Toolbox downward. And then we have the Mobile Toolbox, which is remote assessment on a smartphone,

Dr. Jeremy Sharp (09:23)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (09:38)
as well as MyCog and MyCog Mobile, which are actually designed to be clinical measures and are used in primary care.

Dr. Jeremy Sharp (09:45)
Fantastic. Yeah, there’s so much to get into here. Um, hopefully I’m channeling my audience in asking a question like, why have I not heard more about this? I feel like, you know (and I have a little bit of a behind-the-scenes, behind-the-curtain kind of look), it’s very popular in what I perceive to be pretty small circles, and not widely used clinically. First of all, is that accurate? And then if so,

Why are people not hearing more about it?

Dr. Stephanie Ruth Young (10:18)
Yeah, I think it depends on what circles you’re in, because, you know, if you go to these big conferences, people are using them in their studies, and they’re using them in these large national studies. But I think that comes down to that bridge between research and practice, where we’re kind of in our own little worlds and we’re not sharing these different tools. Like,

Dr. Jeremy Sharp (10:22)
Hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (10:40)
you know, I’m going to really try not to call out any commercial tools specifically. I might mention some just for comparison, but this is not an attack on commercial tools, which are also, you know, well-normed and well-established. But just to say, there are some differences, and there are reasons why those are not used in big studies, right? Because if you have to give thousands of assessments,

Dr. Jeremy Sharp (10:56)
Of course.

Dr. Stephanie Ruth Young (11:08)
you need something that’s much more efficient, much less expensive per assessment. Those were the things that the NIH Toolbox was designed for, but at the same time, we have to keep the same clinical rigor and psychometric validation evidence and things like that. I think some of those things, where we’re designing this to be more well-suited to researchers, also serve

clinicians as well. You want a shorter battery, right? You want something that’s less expensive and easier to give. And so I think it really is just that kind of mindset. Like, we tend to use the tools that we used in grad school and what we learned how to give in grad school, especially because the barrier to entry to a lot of those tools is very high. So when you learn how to give, you know, a WAIS, or especially, I was originally trained as a pediatric

psychologist, so those kid measures, right? The developmental measures and the WISC, and, you know, it’s very, very difficult to learn how to give those correctly, right? You take a whole year of just learning how to give assessments. So when you’ve invested so much time in learning these tools, you know, you tend to stick with those, right? And use them and trust them more than switching over to a new tool. And I think that might be another reason why people kind of just go with what they know and

haven’t branched out into this sort of thing.

Dr. Jeremy Sharp (12:40)
Yeah, I think that’s fair. I also think sometimes we have a quantity bias, especially in private practice, by which I mean quantity equals quality. And that extends to, like, you know, doing lots and lots and lots of tests that are really long and have lots of different subtests, you know, to get at all these different things. And then we write super long reports, and, you know, it’s like this idea that we just need more and more and more to sort of

validate our existence or something, or our expertise, I don’t know. And so I wonder if people have some concern about moving to or using a brief measure, you know, and that sort of core question of, well, if it only takes one to three minutes, how am I ever going to get the data that I need? Do you run into that? How do you tackle that?

Dr. Stephanie Ruth Young (13:31)
Yeah, you know, when you say brief measure, I tend to think of a screener or something like that. They’re not necessarily brief. They are shorter to give, but they’re not brief measures, in that they are measuring really specific domains. And I think the point that you’re bringing up is actually one of the reasons I like the NIH Toolbox, or the Baby Toolbox:

Dr. Jeremy Sharp (13:37)
For sure, for sure.

Mm-hmm.

Dr. Stephanie Ruth Young (13:59)
it’s because of the domain coverage. So if you wanna do a full comprehensive assessment, typically I see psychologists having to use lots of different products, right? So it’s like, well, I need to give a WAIS, and I need to give the CBCL, and I need to give all of these parent measures, and you need to give all of these different things. And that means you have to buy all of those different products. You have to have all those different products,

which tend to be physical kits in the room, even if they’re digital, right? You might have an iPad and a physical kit, or, no, I have to do the CPT on a desktop. You know, you have all these different things that you’re managing in the room, which, you know, adds to the time, adds to kind of that patient cognitive load and your cognitive load. And then afterwards you’re having to synthesize all of these different things into one place. And so I think that having it

Dr. Jeremy Sharp (14:30)
Mm-hmm.

Dr. Stephanie Ruth Young (14:57)
on the tablet, and having that different domain coverage, is just streamlining everything in one place. To me, it just feels like it’s cutting out a lot of that busy work, so you can get more of that coverage without having to kind of switch around, you know?

Dr. Jeremy Sharp (15:19)
Yeah, yeah, yeah, absolutely. I think, just speaking personally for me (and I’m guessing other people maybe feel this way), this is a true kind of head-heart problem, where in my head, I think I know that more targeted, more circumscribed measures, you know, fewer measures that are really good at doing what they do, are better than lots and lots and lots of tests, right? Like, research-wise, I think we know that. But

the heart thing is like, gosh, I feel like I’m going to be missing something. Like, I just don’t understand, you know, how it can do better, you know, when there are fewer questions or whatever. You know, I’m simplifying.

Dr. Stephanie Ruth Young (16:00)
And I don’t even think

it’s fewer tests, just to clarify. I think it’s just all on one iPad. So it’s like, I am giving the Child Behavior Checklist, but it’s all within the same platform. I am giving, you know, the analog to a CPT, but I don’t have to pull out the computer to do the CPT or whatever. So you’re covering the same domains,

Dr. Jeremy Sharp (16:13)
Mm-hmm.

Mm-hmm. Mm-hmm.

Dr. Stephanie Ruth Young (16:29)
but it just feels like it’s not quite as much testing.

Dr. Jeremy Sharp (16:33)
Yeah, yeah. Really, thank you, first of all, for pushing back on some of these things and clarifying. I think that’s super important. And I will say any number of things that are either correct or incorrect. So thank you, thank you, thank you. We’ll find our way through it together here. Yeah, so maybe I can clarify. Maybe what I’m thinking of is the time. You said, like, most of the tests

Dr. Stephanie Ruth Young (16:43)
Hahaha

Same here.

Dr. Jeremy Sharp (17:02)
are one to three minutes, is that right? Which sounds amazing, and I’m like, how does that happen? How do you do that? You know, when we’re used to these hour, hour-and-fifteen-minute, you know, comprehensive quote-unquote batteries and whatnot. Maybe that’s what I’m getting at.

Dr. Stephanie Ruth Young (17:04)
Mm-hmm. Mm-hmm. Yeah. Yeah.

The magic of CATs. So, computer adaptive tests, right?

Dr. Jeremy Sharp (17:25)
So talk about the little, let’s do a quick detour and just give me a definition of that so if anybody is not aware of what that is, why that’s different and why it matters here.

Dr. Stephanie Ruth Young (17:34)
Sure, yeah. So we are trying to find the patient’s true ability, right? And you can kind of think of it this way: I think maybe your audience might be familiar with basal and ceiling rules, right? So that’s sort of a really crude, rudimentary CAT, where you are adapting. You’re saying, okay, if you get these right,

then I know your ability is good enough to get the ones above it. And I don’t need to give all of these items that are low, that are just not gonna measure your true ability, right? We know your ability is higher here, and I’m pretty confident you’re gonna get all those underneath, right? Same with the ceiling, right? You’ve gotten this many in a row wrong, you’re not gonna get the hardest one. The odds of you actually getting the hardest one right without guessing are really low. It’s unlikely

Dr. Jeremy Sharp (18:13)
Mm-hmm.

Hmm?

Dr. Stephanie Ruth Young (18:31)
for you to do that. So we don’t wanna give everybody the full battery. So what the CAT is actually doing is saying, okay, we’re gonna kind of start everybody in the middle, and we’re gonna iteratively adjust to your ability level, instead of doing it in that block where, well, I have to get five in a row wrong. And we’ve all been there, where it’s like, oh, come on, you know. It’s like you’re torturing

your client or your examinee or your patient, and you’re torturing yourself. You know, you don’t want to add to the burden, especially in a really, really long testing day. And so it’s doing that every single time: the computer is saying, what are the odds of you getting the next one right, based on what you’ve done so far? And so it’s actually a lot more precise than those traditional types of paper-and-pencil testing.
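[Editor's note: the adaptive loop described above can be sketched in code. This is a minimal illustration only, not the NIH Toolbox's actual algorithm; it assumes a simple one-parameter (Rasch) IRT model, and the item bank, step size, and ability-update rule here are all hypothetical.]

```python
import math

# Minimal sketch of a computer adaptive test (CAT) item-selection loop.
# Start everyone "in the middle," then after each response re-estimate
# ability and pick the next most informative item.

def p_correct(ability, difficulty):
    """Rasch (1-parameter IRT) model: P(correct | ability, difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_cat(item_difficulties, respond, n_items=5):
    ability = 0.0  # start in the middle of the ability scale
    remaining = dict(enumerate(item_difficulties))
    for _ in range(n_items):
        # Most informative item under the Rasch model: the one whose
        # difficulty is closest to the current ability estimate.
        idx = min(remaining, key=lambda i: abs(remaining[i] - ability))
        difficulty = remaining.pop(idx)
        correct = respond(idx)
        # Crude stochastic-approximation update: nudge the estimate toward
        # the observed response (a real CAT would use maximum-likelihood
        # or Bayesian ability estimation here).
        ability += 0.7 * ((1.0 if correct else 0.0) - p_correct(ability, difficulty))
    return ability

# Simulated examinee: answers correctly whenever the item's difficulty
# is at or below their true ability of 1.5.
bank = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
est = run_cat(bank, respond=lambda i: 1.5 >= bank[i])
```

Note how this mirrors basal and ceiling rules, which skip low-information items in blocks; the CAT does the same skipping, but one item at a time, after every response.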

Dr. Jeremy Sharp (19:25)
Yeah, yeah, I love that. I love that. It is truly adaptive. I feel like I’ve wanted that for so long, I think, in our measures, and many of us have. Thanks for going into that just a bit. So, OK, we’ve talked about the general picture. The NIH Toolbox is kind of the broad suite of tools. The Baby Toolbox is a thing within that, is that fair? And then the Mobile Toolbox is also a thing within it.

Dr. Stephanie Ruth Young (19:54)
Yes,

I’d say I think of it as sort of an umbrella. We call it a suite of tools, a family of assessments, right? Yeah, so the Baby Toolbox is its own tool. It’s a single iPad with an examiner; obviously, you have to have a highly trained examiner. But you can think of it as sort of an analog to a Bayley or a developmental test

Dr. Jeremy Sharp (20:09)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (20:23)
that you might give, right? So, different domains. It goes down to 16 days of life, which is typically parent report and examiner observation, but everything is on that iPad, right? So you can kind of use the same tests. And one of the things I really like about the Baby Toolbox is that it was developed in conjunction with the NIH Toolbox, which starts at three. So the Baby Toolbox goes up to three and a half,

Dr. Jeremy Sharp (20:25)
Mm-hmm.

Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (20:49)
and the NIH Toolbox starts at three. And so you have that overlap for folks who, you know, do serial assessment over the years, which a lot of the time we are going to do in peds. And if you’re working with little ones, you’re going to want to see them again, and you have that point of comparison and that bridge.

Dr. Jeremy Sharp (21:02)
Mm-hmm.

Sure, sure. So tell me a little bit more. I’m fascinated by this. I think I mentioned in our pre-recording conversation that we are kind of desperately searching for something for little, little kids as an alternative to the Bayley or the Mullen. So how does this actually compare in terms of the tasks and the materials, the setup, length, all that kind of stuff? Let’s just dive into the Baby Toolbox.

Dr. Stephanie Ruth Young (21:36)
Yeah.

And you know, I don’t want to say, go run out and get this instead of a Bayley, right? Because I think everybody needs to kind of evaluate their needs. And there are some areas where, you know, it’s not a one-to-one with the Bayley, right? But it is covering a lot of the same domains, and it does use a lot of different technologies. And I’m pretty familiar with the Bayley because I used to give it

Dr. Jeremy Sharp (21:44)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (22:03)
during internship and fellowship and things like that, so I just tend to use that as a point of comparison. But I know a lot of developmental measures work like this, where you have a specialized kit, and those tests are very difficult to learn how to give in a standardized way, just because when you are working with little kids, you’re doing so much behavior management, and the clock is ticking, and

Dr. Jeremy Sharp (22:29)
Mm-hmm. Mm-hmm.

Dr. Stephanie Ruth Young (22:32)
you have less time with them, and less patience from your examinee, than you do with an adult. And it’s a higher cognitive load: it’s harder to score, it’s harder to give. There are more moving parts in the kit. Like, literally, you have a very specialized teddy bear, and now give this tool, now pull this out. And the cognitive load for the examiner in giving that is extremely high.

Dr. Jeremy Sharp (22:50)
Yeah.

Dr. Stephanie Ruth Young (23:00)
Whereas the Baby Toolbox was kind of designed with that in mind, to make it really accessible for the examiner. So first of all, there’s no paper. I think that’s just a huge thing that it seems like we shouldn’t have to be talking about in 2025, but we still are, so I will point it out: it’s automatically scoring those things. Some of the tests are CATs, so it goes shorter. It’s automatically

getting the score, and you don’t have to worry about recording things like that. I think one of the coolest parts about it is that it uses eye-gaze tracking. So that is really great for little ones, for measuring things like receptive language and habituation. So the old Bayley used to have something like this, where you actually had to time how long the child was looking at, you know, the target stimulus versus

Dr. Jeremy Sharp (23:41)
Mm-hmm.

Dr. Stephanie Ruth Young (23:58)
the new, novel stimulus, and write all that down. And it’s like, if you’re trying to time where this baby is looking, you know, there’s so much stuff going on in the room, and the parent’s there, and it’s just really, really hard to do that accurately, even if you’re the best psychologist in the world. And so I think that, you know, that is really cool. It’s the first time that that’s ever been used outside of a lab setting. And yeah, I just think

Dr. Jeremy Sharp (24:08)
Sure.

my gosh.

Mm-hmm.

Dr. Stephanie Ruth Young (24:26)
that’s pretty neat, and something that you can’t find anywhere else. It also has video analysis of all the motor tasks. So it’s recording everything and timestamping things. It has visual guidance for scoring on the screen. So it allows you to set up the task with the child, and you have to be very present about how it’s set up and things like that in doing these motor tasks.

Dr. Jeremy Sharp (24:34)
Mm-hmm.

Dr. Stephanie Ruth Young (24:54)
And then afterwards, you can review it and score, and have that graphical scoring guide with you. Yeah, so that’s the Baby Toolbox. It’s recently been released and is available now for anyone to use. And it’s pretty affordable. So these are all NIH-funded projects;

some of them have a cost associated with using them, and that’s just to support the cost of maintaining them, because once the grant is over, we still have to maintain the app so that people can use it. So it tends to be a lot less than a lot of the other commercial tests out there. And then you’re also getting that broad scope of different domains, right? Because within the Baby Toolbox, you have all of the parent measures, all of the

Dr. Jeremy Sharp (25:35)
Of course.

Dr. Stephanie Ruth Young (25:49)
social observation measures, things that you would gather for, you know, screening for autism or things like that. So you don’t have to buy lots of different things; it’s sort of a more comprehensive platform, and then just a low-cost subscription. Just the NIH Toolbox and the Baby Toolbox have those fees associated; the Mobile Toolbox is actually free.

Dr. Jeremy Sharp (26:13)
Yeah, I’m sure people are excited to hear that. There’s a lot of talk these days, especially about the price of assessment, and it keeps going up and up. So that’s pretty attractive. Yeah. Before we totally leave the Baby Toolbox: you’ve mentioned that it’s not necessarily in all cases a replacement for the Bayley or Mullen or something like that. Can you help us understand for which kids or cases we might utilize one versus the other? Or is it a both

Dr. Stephanie Ruth Young (26:22)
Yeah.

Dr. Jeremy Sharp (26:42)
situation? How does all that relate?

Dr. Stephanie Ruth Young (26:46)
I think it’s really hard to say, because, you know, every examiner needs to kind of look through the test and decide what they need for the types of assessment that they want to do. It is testing lots of different domains, the same domains, like language; it’s doing early numeracy,

Dr. Jeremy Sharp (26:57)
Hmm.

Dr. Stephanie Ruth Young (27:11)
some executive functioning, or kind of early executive-functioning-type tasks, and then social-emotional functioning, which, you know, tends to be parent report. It also has a social observation domain. I want to make a comparison to an ADOS, but again, I don’t think that it’s a replacement for an ADOS. I think that it’s a good tool,

Dr. Jeremy Sharp (27:22)
Mm-hmm.

Hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (27:41)
but I do think it can pick up signs of autism, right? And so it’s trying to tap on those same things. And the reason I’m kind of hesitant, and I’m not trying to be cagey or anything, is that this was just released this year. And so, you know, it is well validated and normed and things like that, but I think,

Dr. Jeremy Sharp (27:50)
Mm-hmm.

Sure,

Mm-hmm.

Dr. Stephanie Ruth Young (28:08)
as we come out with more clinical studies, you can kind of see what it is really, really good at doing, especially in a practice population. And that’s an area I’d love more research on, right? I think we do need more research on practicing psychologists and what they’re actually doing and the tools that they’re using and what they need.

Dr. Jeremy Sharp (28:29)
Yeah, for sure. For sure. Yeah, that might be a nice bridge into the practice area, and sort of the adoption and, you know, use outside of research settings. Now, I will likely get this wrong somehow, but my understanding is that people have historically kind of struggled with these tools, you know, applying them in practice,

because, again, they’re quote-unquote research tools, and the research has been maybe, I don’t know, mixed as far as clinical utility and that kind of thing. Could we dive into that for a little while? I would love to hear your perspective on it.

Dr. Stephanie Ruth Young (29:16)
Sure. And, you know, I don’t know if you’re citing any particular study or things like that. Yeah, yeah. I think it’s, you know, lots of these tools.

Dr. Jeremy Sharp (29:23)
I mean, none that I can recall right now, of course. It’s in my notes. It’s in my notes.

Dr. Stephanie Ruth Young (29:38)
They’re validated in the same ways. And the ways that we validate the tools tend to be against an existing gold standard. But what makes a tool an existing gold standard, right? So I think that you really have to look at the evidence and the type of information that you’re getting from the test, rather than just, well, it didn’t correlate well with the WAIS,

right, or something like that. And I think it actually probably does correlate well with the WAIS; I’m just throwing out something like that. But, you know, if you’re saying, well, it didn’t correlate like we expected with this test that was developed a hundred years ago, well, does that really mean that it doesn’t work? Or what is that saying something about? Which test is that commenting on, right? Who’s to say that that gold standard is really still the gold standard, particularly as we have

Dr. Jeremy Sharp (30:31)
Mm-hmm.

Dr. Stephanie Ruth Young (30:34)
technology advances, as our understanding of these paradigms advances and we add in new features and things like that. And Jeremy, you’re breaking up a teeny bit. I don’t know if I’m breaking up? Your video is just breaking up.

Dr. Jeremy Sharp (30:51)
Yeah, I’m sorry about that. I usually warn people ahead of time. The video sometimes gets a little grainy, and, yeah, people often get alarmed about that. But it’s just some kind of buffering thing with the platform. It actually should look fine on the final, the actual recording. Can you still hear me okay?

Dr. Stephanie Ruth Young (31:06)
Okay.

Yeah,

I can hear you. Yeah, I just want to make sure I wasn’t like breaking up.

Dr. Jeremy Sharp (31:15)
Yeah, yeah, yeah, no. We are good. Yeah, sorry. I always forget something in that pre-recording spiel, and this was the thing. Yeah. OK, so let’s see. We can just pause for a second. Then, yeah, I’ll ask you about the research that’s out there, so you can just talk a little bit more about the reliability and validity and that kind of thing. Kind of make the case. I really want people to

Dr. Stephanie Ruth Young (31:22)
That was good.

Dr. Jeremy Sharp (31:44)
get into this.

Dr. Stephanie Ruth Young (31:45)
Yeah,

I mean, that’s the thing: there are hundreds of studies on the NIH Toolbox specifically, hundreds and hundreds of studies. It is, you know, very well researched and very well validated and very rigorously normed. And, you know, it’s normed by some of the best psychometricians in the country, who are here at

Medical Social Sciences at Northwestern. So it’s definitely on par with any of the evidence that you would see in, like, Pearson products or Riverside products. And we do work with many of those people, right? Like, people have come from here and gone to there. So it’s held to those same standards of norming and validation, and then also just in terms of use, where it’s been used in different studies.

So, you know, to speak to any one study using it would be hard; there are so many out there with the Toolbox specifically. But I can talk about our newer measures, if you want, which have new evidence coming out, if that would be interesting.

Dr. Jeremy Sharp (33:02)
Yeah, yeah, I love that. Yeah.

Dr. Stephanie Ruth Young (33:04)
Yeah. So Baby Toolbox has norming papers that have just come out, which is very exciting, and, you know, its technical manual and all of that stuff is available. The Mobile Toolbox, I think, is very interesting too, just because it is more novel: it is a smartphone-based platform.

Dr. Jeremy Sharp (33:27)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (33:31)
It’s smartphone assessments, smartphone versions of the NIH Toolbox tests that are designed to be self-administered on a personal smartphone. So they’re given remotely, it’s completely remote, it’s completely on your own, and it’s on your own smartphone. And the data is sent back to the examiner, and they can review it remotely as well. So, excitingly, those validation papers have all been published.

And they show good correlations, good evidence compared to gold standards. And that’s an area where you really do want to compare it to a gold standard, right? An in-person assessment, because you want to make sure that you’re capturing the same thing. And there are all these different factors that can come up when something’s given remotely, right? If you don’t have an examiner in the room: distractions or noise, or just all types of confounders. And so we did find that it does compare well

with gold standards. They’re highly reliable; they have good test-retest reliability. They’re designed for ages 18 and up, but they can be completed by kids as young as seven, and we published a paper about using them with kids. So that, I think, is just a really exciting area. We’re in the process of norming it right now. But it’s great because it’s just an easy way to capture a lot of data

Dr. Jeremy Sharp (34:52)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (34:57)
remotely,

and then it’s also a good way for practicing psychologists, I think, to measure patients over time or get some basic data, screening data before they come in. Being able to monitor patients remotely, I think, is something that hasn’t really been tapped into for practicing psychologists, but this could be a good tool for that.

Dr. Jeremy Sharp (35:21)
Yeah, yeah, I totally agree. I was going to ask explicitly what you think are the ideal use cases for the mobile toolbox.

Dr. Stephanie Ruth Young (35:29)
Yeah, I think the ideal use case for the Mobile Toolbox is research, because that’s what it was designed for: to integrate these tools into research studies more easily, and to be able to measure cognition remotely in really large, diverse samples. But I do think that information can also be used clinically.

Dr. Jeremy Sharp (35:55)
Hmm.

Dr. Stephanie Ruth Young (35:55)
You know, like I said, it’s not normed right now, so you can’t do anything with that. But if I were in practice still, I would probably be curious to use it. Sometimes you have a really long wait list, right? And people are chomping at the bit to do something; they really want to get in. So it’s a way to give them something to work on before they come in, and also to get a little snapshot of what might be going on,

and then follow them over time. Again, that’s going to be easier and more meaningful when the norms are published, but I do think it’s a way to track someone’s personal progression over time. And then you can also put in different patient-reported outcomes, you know, quality of life or personality scales, mood scales, different things like that. So being able to combine all that data in one place, I think, is really helpful.

Dr. Jeremy Sharp (36:54)
Yeah, yeah, absolutely. You know, as we talk through all these things, I wonder if we could do a quick detour and discuss the relationship between validation studies and something being normed, and how a tool can be quote-unquote useful in research but not necessarily appropriate for clinical practice. Can we wade through that for a bit? Hopefully this is helpful for some folks.

Dr. Stephanie Ruth Young (37:19)
Sure.

Yeah, absolutely. Okay, so the validation studies that we have conducted so far tend to cover the basic psychometrics of the measure. So is it internally consistent, meaning reliability? Does it have test-retest reliability, meaning is it stable over time? And then what we think of as construct validity: is it measuring what we expect

it to measure. If I made a memory measure, I want to make sure that it measures memory and it doesn’t measure something else. And the way that we tend to do that is to say, well, I know that the WMS measures memory, so I’m going to compare it to the WMS. And if we see both of these things correlating highly, we can trust that this probably measures memory similarly to the WMS, right? Conversely, you want to compare it to something like

Dr. Jeremy Sharp (37:57)
Mm-hmm.

Dr. Stephanie Ruth Young (38:18)
maybe just a verbal test; you want to differentiate it from pure language. So I’d give a PPVT or something like that. And I’d want to see that those things correlate, because all of your cognitive abilities tend to correlate, but not quite as highly as with another memory measure. So it’s discriminating: it’s specific to that construct. So those are our validation studies. And then we’ll tend to look at things like correlations with age and

Dr. Jeremy Sharp (38:38)
Mm-hmm.

Dr. Stephanie Ruth Young (38:46)
things like that. But it’s not normed. For norming, you’re going to want to get a really large, nationally representative sample, and get enough people to take it in each stratified age band, or whatever demographic variable you want to norm on, right? So that I can say, based on the score that you had, this is

where we would expect you to be based on your age or your demographics, or you’re significantly lower, or you’re significantly higher. So it’s used to compare individual scores to a population norm. And that’s why norming tends to be more important in clinical use, because that’s what you’re really trying to get at: are you where we would expect you to be, or are we noticing a pattern of a true

weakness, right, or true strength.
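The logic Dr. Young walks through here, convergent and discriminant validity checks plus norm-referenced scoring, can be sketched in a few lines of Python. This is a hypothetical illustration with entirely invented numbers, not actual NIH Toolbox data, norms, or methods:

```python
from statistics import NormalDist, mean, pstdev

def pearson(x, y):
    """Pearson correlation, the basic tool behind convergent/discriminant checks."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

# Invented scores for six examinees: a new memory measure, an established
# memory measure, and a vocabulary measure.
new_memory = [10, 12, 14, 16, 18, 20]
est_memory = [11, 13, 13, 17, 19, 21]
vocabulary = [14, 12, 15, 16, 15, 18]

convergent = pearson(new_memory, est_memory)    # should be high
discriminant = pearson(new_memory, vocabulary)  # positive, but lower

# Invented norming table: age band -> (population mean, SD) of raw scores,
# the kind of stratified estimate a large representative sample yields.
NORMS = {(8, 10): (22.0, 4.0), (11, 13): (27.0, 4.5), (14, 17): (31.0, 5.0)}

def norm_referenced(raw, age):
    """Locate an individual raw score relative to the norm for their age band."""
    for (lo, hi), (mu, sd) in NORMS.items():
        if lo <= age <= hi:
            z = (raw - mu) / sd                  # SD units from the expected score
            return z, NormalDist().cdf(z) * 100  # and the percentile rank
    raise ValueError("no norm band covers this age")

z, pct = norm_referenced(raw=18.0, age=12)       # z = -2.0, roughly the 2nd percentile
```

Here `convergent` comes out well above `discriminant`, which is the pattern she describes: the new measure should track an established memory test more tightly than a language test. And the norming step is what turns a raw score into a statement like "significantly lower than expected for your age."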

Dr. Jeremy Sharp (39:47)
Right.

Right. Yes. Thank you so much for doing that. Like I said, super basic, but hopefully that’s useful for folks just to sort everything out. And then, to bring it back home to our discussion: you said that the NIH Toolbox is good, there are norms, it’s validated, et cetera. And Baby Toolbox just published some norming papers, or a norming paper.

Dr. Stephanie Ruth Young (39:52)
Yeah, it’s good.

Yeah.

It’s

normed. It’s normed. It’s validated. It’s out there in the world and ready to use. Yes.

Dr. Jeremy Sharp (40:18)
Nice. Mobile toolbox, though. Validated, but not normed yet. In the process. OK. OK. Perfect. Perfect. Perfect.

Dr. Stephanie Ruth Young (40:23)
in the process of being normed. I mean, these studies take a long time, and they’re very expensive. That’s part of the reason your Pearson products are so expensive: they have to do these really expensive norming studies to make sure that everything is scientifically accurate.

Dr. Jeremy Sharp (40:31)
For sure.

Yeah, yeah, absolutely. Time and money, they go together. Is there a database component or pooled-data component to these tools as well? Or am I totally off base on that?

Dr. Stephanie Ruth Young (40:47)
Mm-hmm.

You can pool your own data; if you were to get a subscription to the NIH Toolbox or the Baby Toolbox, you own your data. So you could pool your own data and look at all of your patients, see what’s going on with your patients, and run your own little research study on them, which I encourage everybody to do. But you own it, so it’s not

Dr. Jeremy Sharp (41:02)
Okay.

Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (41:27)
shared back to a central database or anything like that, and that’s for patient privacy. We do encourage people, even practicing psychologists, to collect their own data. And if your patients want to, and you have consent and you have an IRB and things like that, people can conduct their own research studies, because I think that can be very valuable, even though I know how difficult that is.

Dr. Jeremy Sharp (41:56)
Sure. Yeah. I mean, that’s exciting. We’ve tried to do, well, not tried, we are collecting data in our practice and have used it in different ways, but it does feel clunky. You know, we’ve had to build our own tools, databases and that kind of thing. And I would imagine it’s really attractive to some folks out there in private practice who really miss the research world to have something that’s a little more tailor-made to capture this data in a

Dr. Stephanie Ruth Young (41:56)
Yeah.

Yeah.

Dr. Jeremy Sharp (42:27)
predictable, uniform way.

Dr. Stephanie Ruth Young (42:29)
Absolutely. I think that comes back to not having 10 different products in the testing room, right? All the data is in one place, it’s easy to harmonize, and you’re not trying to put everything in a spreadsheet. I always harp on this, but I actually published a paper about this in Professional Psychology

Dr. Jeremy Sharp (42:45)
Hmm.

Dr. Stephanie Ruth Young (42:59)
Research and Practice, I can’t remember the exact name of the journal. I can share it out if people are interested. It’s about just how many errors you can make in the whole process of testing. If you have to take all of your data, like you were saying, to make a database, you have to transcribe it from this sheet to that sheet and then enter it in here; every point of data entry or writing something down is an opportunity to make a mistake.

And so it’s just nice to have everything streamlined and foolproof.

Dr. Jeremy Sharp (43:32)
Yeah, yeah, absolutely. Now you’ve completely piqued my curiosity about the errors that we make in assessment. Can you talk about that? I know this is a little off topic, but then again, maybe it really isn’t. Yeah.

Dr. Stephanie Ruth Young (43:42)
No, I’d love to. Yeah,

this is sort of a pet project of mine, not to plug my own paper here. I did have one in 2022 with some of my colleagues from grad school, in Professional Psychology: Research and Practice, the journal, that might be interesting to your reviewers. Or sorry, not reviewers, your audience. And

Dr. Jeremy Sharp (43:54)
Yeah, go for it.

Dr. Stephanie Ruth Young (44:12)
Yeah, it’s just about the benefits of switching to computerized assessment. I spent my whole career when I was practicing, throughout grad school, throughout my training, which really wasn’t that long ago in the grand scheme of things, right? And we were still using paper and pencil for pretty much everything, even though, you know, Q-interactive was out and some of those things were emerging.

Dr. Jeremy Sharp (44:27)
Mm-hmm. Mm-hmm.

Dr. Stephanie Ruth Young (44:40)
People were really resistant to change, and there were some products, but it wasn’t really catching on quite as much. And so I wanted to write sort of an op-ed about why we should be embracing this computer-enhanced practice, as we were calling it. And it

Dr. Jeremy Sharp (44:43)
Yes.

Dr. Stephanie Ruth Young (45:04)
turns out there are lots of large studies out there looking at the errors that psychologists make in testing. And maybe your audience knows about that, or maybe you’ve already talked about it on the podcast, but errors in testing are the norm, not the exception. Lots of studies have gone through protocols and looked at the different errors that people have made.

Dr. Jeremy Sharp (45:17)
No.

Dr. Stephanie Ruth Young (45:31)
And they compared graduate students to professional psychologists who had been in their careers longer, and they found that the more experienced people actually made more errors, because they had deviated from their training. And the numbers are really, really high. It kind of depends on what you count as an error, but

Dr. Jeremy Sharp (45:42)
Hmm.

Dr. Stephanie Ruth Young (45:58)
to break standardized procedure, right? That’s an error, and we’re doing that all the time. I think you can drift, and people have different ideas about what’s standardized and what’s not. And so you just take the rote parts out of the process. For example, a highly common error was just an addition error.

Right? Because if you’re looking at a protocol and adding up ones, maybe you just add up wrong. Things like that. Or you get a number, and then you’re looking through a tiny table with lots of different numbers to find which one it goes with, and then you have to put that score in somewhere else. So lots of opportunities for errors there. But they found that if you did it with tablets, those types of errors go way down.
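The two rote steps she singles out, adding up item scores by hand and hunting through a conversion table, are exactly what a tablet removes. Here is a minimal sketch of that automation, with a completely invented score table rather than anything from a real test manual:

```python
# Item scores as a tablet would record them (0-2 points per item; invented data).
item_scores = [1, 1, 0, 2, 1, 2, 0, 1]

# Raw-score bands mapped to scaled scores (invented, not from any real manual).
SCALED_TABLE = [
    (range(0, 4), 5),
    (range(4, 8), 8),
    (range(8, 12), 10),
    (range(12, 17), 13),
]

def score(items):
    raw = sum(items)                  # no hand addition, so no addition errors
    for band, scaled in SCALED_TABLE:
        if raw in band:               # no squinting at a tiny conversion table
            return raw, scaled
    raise ValueError("raw score falls outside the table")

raw, scaled = score(item_scores)      # raw = 8, scaled = 10 with these invented values
```

Both failure modes she cites, the addition slip and the table-lookup slip, simply cannot occur here, which is the "offload the rote parts" argument in miniature.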

And so my argument, my thesis, is about the cognitive load of the examiner. When you’re in the room with a patient, with a client (I’m not sure what terminology your audience uses), you’re focusing on so many different things, and in your head you’re trying to remember all of these things and collect this data. Really, you need to be focusing on the patient as much as possible. And so

Dr. Jeremy Sharp (46:58)
Mm-hmm.

Dr. Stephanie Ruth Young (47:19)
anything that you can offload, anything that the computer can do, it should do, because there’s just so much that the computer cannot do, and that’s what you need to be doing. That’s why I’ve really pushed for a lot of these advances and for automating a lot of this stuff: otherwise it just takes away our time with the patient.

Dr. Jeremy Sharp (47:39)
I could not agree with you more. I think a lot about the cognitive load of clinicians, like you said, everything that we’re managing. I think about it a lot in the context of report writing as well, and of course AI and its role there and that kind of thing. But I totally agree with you. I think we’re managing a lot in the room. Just thinking about those moments, we’ve all had them, probably

daily, where you’ve got a clipboard and then a timer and maybe an iPad or a computer in front of you, and you’re trying to watch the client and do things with one hand. That’s crazy. That is not our job, you know, or the best use of our brains. So I’m right with you.

Dr. Stephanie Ruth Young (48:23)
Yeah,

if I can just turn one person on to using tablets, you know.

Dr. Jeremy Sharp (48:30)
Yes.

Yes. Well, I think we’re headed that direction, but I’m probably not the best person to talk about that. This really leads very nicely into the next question, which is: where are we headed? Where do you think we’re headed over the next three, five, eight years as far as computer-administered tests and technology in assessment and that kind of thing?

Dr. Stephanie Ruth Young (48:55)
Yeah, that’s a great question. And I will answer just as me, Stephanie Young, and not on behalf of Medical Social Sciences or Northwestern University or anything like that; it’s just my opinion. I think it’s going to have a really huge role, because, you know, AI, as we’ve all seen, makes lots of errors and it can’t do everything,

Dr. Jeremy Sharp (49:10)
Yeah.

Dr. Stephanie Ruth Young (49:24)
but it can do a lot of those rote things, right? And I think we spend a lot of our time doing really rote things that the computer can do so much better than us. I also think it’s going to change report writing a lot. I think it will get us to focus more on

Dr. Jeremy Sharp (49:33)
Yes.

Dr. Stephanie Ruth Young (49:51)
what the report is actually saying, and not so much on fixing a typo here, or writing "this is a personal strength and this is a personal weakness" and just repeating it; the AI will spit all of that out for you. And if it can do that, then that was never

Dr. Jeremy Sharp (49:54)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (50:18)
the value of your role as a psychologist in the first place. And so, I’m sort of an optimist about this stuff, and my hope is that it’s going to take a lot of that off of you and really have you focus on: what is my role as a psychologist? What information did I get here that I think is really important to communicate? Not based on some template, not based on how my supervisor did a report, or any of that,

Dr. Jeremy Sharp (50:22)
Hmm.

Yeah, same.

Dr. Stephanie Ruth Young (50:46)
just on what do I need to actually help my patient right now? And I think what we’re probably going to see is some sort of data output, because I think it’s a patient’s right to see that, with the explanations and things like that, and that will be output. But then a briefer summary or briefer communication, whatever that is, that is fully you.

You know, it almost allows you to be even more human. So maybe that’s sort of pie in the sky, and there are lots of challenges that are going to come up from this too, but that’s sort of my hopeful expectation.

Dr. Jeremy Sharp (51:19)
Mm-hmm.

Yeah, yeah. Again, I really can’t do anything but agree with you. I feel like we spend a lot of time on minutiae, and that’s not where we can best spend our skills. I personally want to do the conceptualization and write really nice recommendations that are personal and meaningful, versus regurgitating a history or transcribing scores into a table and telling people what’s average and what a percentile is.

Dr. Stephanie Ruth Young (52:01)
running it.

Dr. Jeremy Sharp (52:03)
That’s not where we need to spend our brain power at all. Yeah. You know, I talked to, gosh, when was this? Maybe a year ago, I was doing a podcast with Cecil Reynolds and a few other folks, and he was alluding to the idea that we might get to the place eventually where we’re administering measures and the technology is doing some of, or

Dr. Stephanie Ruth Young (52:05)
You

yeah.

Dr. Jeremy Sharp (52:31)
more of what you’re already talking about, as far as eye tracking and, you know, gaze tracking and things like that, where a lot of even the behavioral observations are somehow happening through the assessment, through technology. Are you seeing anything like that on the horizon, or is it still sort of futuristic wishful thinking?

Dr. Stephanie Ruth Young (52:55)
No, I think that

definitely, you know, with things like ML video analysis, they’re already starting to use that for things like gait and different behaviors. I think that’s definitely on the horizon as well. And then there are things we’ve looked into a lot, and this is something I wrote up in the paper as well: the computer interface can

Dr. Jeremy Sharp (53:02)
Hmm.

Dr. Stephanie Ruth Young (53:23)
capture so much more nuanced data than you can just by observing. Like the variability in touch speed across the length of a measure, things like that. Really fine-grained stuff that I think is going to be really fruitful. And then hopefully it will shorten the testing, you know.

Dr. Jeremy Sharp (53:28)
Mm-hmm.

Yeah, yeah.

Dr. Stephanie Ruth Young (53:51)
One of the things I think about is getting closer to the construct that you’re actually trying to measure, making the tests less culturally loaded, and doing less testing, right? Getting more to the heart of it. And I think it will be interesting, as time goes on and these advance, how much more ecological validity we can get out of them as well.

Dr. Jeremy Sharp (54:03)
Hmm.

Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (54:21)
that will be interesting too.

Dr. Jeremy Sharp (54:23)
Absolutely. Yeah. It’s almost like a distilled, purer version of assessment or something. Yeah. You know, I think my head these days, just from where I’m at in my life, is loaded with existential crises, it feels like. And so it just makes me think: what are we going to do as psychologists when so many of these quote-unquote important tasks that we do are taken away and

Dr. Stephanie Ruth Young (54:29)
Mm-hmm.

Dr. Jeremy Sharp (54:53)
done better? We don’t have to answer that now.

Dr. Stephanie Ruth Young (54:56)
Yeah, no, but like

I do, I think about that too. And I feel like I heard someone say the only people whose jobs are safe are psychologists and comedians, and I couldn’t agree more. Because, like you said, when it does all the important tasks... I kind of think that it should do every task it can do, because it’s never going to do the heart of

Dr. Jeremy Sharp (55:08)
⁓ okay.

Dr. Stephanie Ruth Young (55:26)
the job of being a psychologist, which really is using your humanity to understand and synthesize and observe all this information about the patient, and then put it together in a way that helps them. That’s not what a computer can do, and it probably won’t be able to, definitely not in our lifetime, I would imagine. So even if it could do the entire battery, we should let it,

Dr. Jeremy Sharp (55:40)
Mm-hmm.

Mm-hmm.

Dr. Stephanie Ruth Young (55:55)
because it can’t give us that human review of the data, analysis of the data, synthesis of the data, working with the patient. It’s just not ever going to be able to capture all of that information and understand it in the way that you can. So I just see these tests as thermometers, right? If you made a better thermometer, you would want that.

Dr. Jeremy Sharp (56:21)
Hmm.

Dr. Stephanie Ruth Young (56:25)
And they’re just measurements, you know?

Dr. Jeremy Sharp (56:28)
Yeah. Yeah. Well said. I think this is the best version of AI optimism in assessment, or tech optimism.

Dr. Stephanie Ruth Young (56:33)
I mean, I don’t know, like it

could be terrible as well. But that’s what I hope for, I guess, in the near term, you know.

Dr. Jeremy Sharp (56:45)
Yeah, yeah, for sure. So we’ve covered a lot. This has been tons of fun and very informative too, but let me start to land the plane here and just ask: for folks who are really interested in the NIH Toolbox and, you know, Baby Toolbox and Mobile Toolbox, all these things, what’s a good first step to start to use them, investigate them, play with them a bit? Where do people go?

Dr. Stephanie Ruth Young (57:10)
Yeah, go to our website: it’s nihtoolbox.com, and you can also email help@nihtoolbox.com; we’re pretty responsive, and that’s for all of the measures. We have all of our measurement systems on that website, and if you reach out with a question, we’ll get back to you; it can be about any of those things. And I really encourage people to check those out and see if they could work in your practice.

And then also, you know, for practicing psychologists who want to get involved in research, especially if they have a patient population that they’re working with that they have a research question about, and they think the Toolbox could be useful for that: reach out to us, because we do collaborate a lot with different folks. For example, I did my fellowship at

Dr. Jeremy Sharp (57:59)
Hmm.

Dr. Stephanie Ruth Young (58:07)
Children’s Hospital Colorado, and I still have a relationship with them where we collect clinical data from the patients they see. And it’s slow going, right, when you’re actually in practice, but it’s really, really fruitful, because you’re getting so much rich information about how it’s used in real life and how these measures can actually help people. So we’re definitely interested in that; feel free to reach out.

Dr. Jeremy Sharp (58:33)
Yeah, that’s great. I love that. And again, very basic questions; I like to get super concrete sometimes. So people can go to the website. Is this an app? Do you just download an app to an iPad?

Dr. Stephanie Ruth Young (58:46)
Yes, they’re in the App Store; you would download the app for Baby Toolbox and NIH Toolbox. The Mobile Toolbox is actually free for the life of the grant, but you do need to have access to REDCap. So for your audience: I know a lot of psychologists still have an association or affiliation with a university system or a larger healthcare system, and those tend to have REDCap.

Dr. Jeremy Sharp (58:53)
Mm-hmm.

Mm-hmm. Mm-hmm. Mm-hmm.

Dr. Stephanie Ruth Young (59:14)
So REDCap is a research platform. You go on and can design your battery that way, and you can access the Mobile Toolbox through that. If you don’t have REDCap, you can email that help desk and we’ll talk about ways to access it. And then hopefully in the future it’ll be broadly available; we’ll see where it goes, depending on how the grants work out and things like that. Yeah.

Dr. Jeremy Sharp (59:40)
Sure, sure. That

sounds great. Yeah. Thank you so much for coming on and having this conversation. It was fascinating. We took some unexpected turns, but really fun ones, and I had a great time. Thanks for being here.

Dr. Stephanie Ruth Young (59:53)
Yeah, thank you. I have a mind that’s all over the place, and I tend to go on lots of tangents, so I really appreciate you going on the journey with me.

Dr. Jeremy Sharp (1:00:03)
Yeah, absolutely. It sounds like maybe there are more conversations in the future, but it’s a good place to end for now. Yeah.

Dr. Stephanie Ruth Young (1:00:06)
What?

Perfect. Yeah. Thank you so much,

Jeremy.

