Hey everybody. Welcome to another episode of The Testing Psychologist podcast. I’m Dr. Jeremy Sharp. I am really excited to be talking with our guests today. I have with me Dustin Wahlstrom and James Henke. They are going to be talking with me all about Q-interactive.
I’m sure a lot of you have heard of Q-interactive. It’s the digital platform for administering a lot of the tests that we use. And like I said, I’m so excited to have these two guys here so that we can talk some ins and outs of the platform, what it’s all about, where it’s headed in the future, and [00:01:00] all sorts of cool stuff. So welcome to the podcast guys.
Dustin: Hey, good morning. Thanks for having us.
James: Thanks so much. We’re looking forward to it.
Dr. Sharp: Oh, good. Same here. Let me do a quick introduction for each of you and then we can dive into it because I think there’s a lot to talk about this morning.
Dustin Wahlstrom is the product owner for the Q-interactive project at Pearson. He has a PhD in clinical psychology from the University of Minnesota and completed his clinical internship at Children’s Hospital of Minnesota. He joined Pearson in 2009 and was a research director for the WPPSI-IV and WISC-V prior to working on Q-interactive.
James Henke is the Q-interactive Product Specialist and National Trainer for the digital system. James’s background is in education, graduating from the College of Education and Human Development from the University of Minnesota in 2001. Prior to joining Pearson, he worked in Japan teaching English and also as a 3rd grade [00:02:00] teacher for the Minneapolis Public Schools. James lives in Minneapolis with his wife and two children.
Welcome, guys.
James: Hi.
Dr. Sharp: Did y’all know each other at the University of Minnesota?
Dustin: We did not. No, I don’t think we were even there at the same time.
James: We’ve never actually had that conversation before.
Dr. Sharp: Oh, okay. So just a coincidence, it sounds like.
Dustin: Yes. Pretty much everyone in the state goes to the University of Minnesota at some point in their lives, I think.
Dr. Sharp: Got you. That sounds good. One of my professors in graduate school, Bryan Dik, was at the University of Minnesota for graduate school. He does a lot of vocational research, meaning and that kind of thing. I know there’s a great program up there.
So glad to have y’all on the podcast. Between the two of you, it sounds like you cover a lot of the goings-on with Q-interactive. I [00:03:00] think we have a lot to cover today. So I’m just going to jump into it.
My history with Q-interactive, I feel like, has been a long one. As someone who loves technology and tends to be an early adopter of technology, I was so pumped when I found out that we were going to be able to do psychological testing using iPads.
I remember that day, I called my wife and I was like, I really need to buy two iPads for the business, and she’s like, okay, can you explain this a little bit? And then it’s like, no, of course, this is the real deal. We’re able to do tests on the iPad. That was a really exciting moment for me back then.
Dustin: It is a great excuse to buy iPads. I will give it that.
Dr. Sharp: It’s great and always a good reason to have technology in your practice. I find that the kids really like it. We test a lot of kids and they love touching things on the iPad. It’s a nice [00:04:00] transition for them.
Anyway, long history with y’all and you’ve been helpful over the years with getting things set up and troubleshooting some issues here and there so I wanted to give a little bit of an opportunity for others to learn more about it now that things have come along and matured in the platform.
Dustin: It’s funny you say that because this year is going to be the 5-year anniversary of when we initially launched our beta. We would have launched the beta for people at APA in 2012. It’s already been 5 years. It’s amazing how fast time flies.
Dr. Sharp: Oh, I’m sure. Do you just feel like you blinked and that time was gone?
Dustin: Yeah.
Dr. Sharp: I know it’s been busy for y’all.
James: It’s also been nice in that Dustin and I have been together on Q-interactive for almost all five of those years, give or take. By and large, [00:05:00] we have a lot of consistency on our end among the people working on it, on the back end and, to a certain extent, on the sales and marketing side. It’s a nice group, it’s a nice working environment. We’re all excited about how this product has grown and where it’s going to go.
Dr. Sharp: I’ve definitely noticed that from my side too. Your names have been very consistent as some of my points of contact over the years. And that’s honestly surprising. That’s not always the case with a lot of companies and so it’s been really nice from the user side as well.
Just to get started, could y’all maybe just talk about what is Q-interactive exactly, how’d that come about and what’s some background on that?
James: Sure. [00:06:00] Q-interactive is basically iPad-administered testing. The tests we’re talking about are one-to-one administered instruments like the WISC, the WAIS, the WPPSI, achievement tests, some speech tests, and neuropsychological tests.
The one thing they all have in common is the one-to-one testing environment. We’re not talking about group-administered tests here. One iPad is in the practitioner’s hands and the other iPad is in front of the client, and that’s where the visual stimuli appear.
On a test like Matrix Reasoning, for example, you’ll have the picture show up, and the options show up on the bottom half of the iPad screen, and the client [00:07:00] touches their response, it shows up on your side instantaneously as the practitioner. Automatic scoring is integrated into the system. You’re able to move through the content in a very seamless way. There’s a nice flow as you move through your batteries.
From a more technical perspective, Q-interactive has two fundamental pieces: the iPads, where you administer the tests and capture the data, and a website, where your data is organized and stored long term. The website is where you’re able to generate your reports and manage your account.
You have a website where data is stored, you have your iPads, you’re mobile, and you’re able to administer these tests in any location. Q-interactive doesn’t require [00:08:00] you to have a Wi-Fi connection when giving these tests. Even if there’s a power outage, your testing appointments aren’t at risk; you can still keep them.
There are a lot of great features within the system that take the experience to the next level. For example, iPads have microphones, so we’re able to make audio recordings of the client’s responses to aid you in reviewing the data post-administration and ensuring accuracy, that type of thing.
Dr. Sharp: That’s been really helpful.
James: You’re able to instantly get your scaled or standard scores, your index scores. As soon as you swipe off the last item in your battery, you have access to a comprehensive score report [00:09:00] for whatever test you’re giving immediately.
We can talk a little bit about time savings. We’re not changing the actual duration of the tests themselves, because we’re still maintaining equivalency with our paper counterparts. But when we talk about the work you do setting up the test session, or in particular after the test session is over, there can be a significant savings in time. You’re not doing as much math; you’re mostly taking the data that’s been calculated and working with it, interpreting it, and applying it to your diagnoses and your practice.
It’s basically two fundamental pieces, a website and an app that’s on two iPads, and that’s where you’re able to administer the tests.
Dr. Sharp: That’s fantastic. Were either of you around in the beginning when the idea for Q-interactive was being developed? Can you speak to [00:10:00] that at all?
Dustin: Yeah. I think that’s an interesting question because the origin of Q-interactive wasn’t really about how to digitize tests. The goal wasn’t just to come up with a way to computerize the WISC or anything like that. The charter was: what can we do to transform assessment and make the practice of assessment better for all of our customers?
There are a lot of things on the table for that and not all of them were digital necessarily. So it wasn’t the sort of thing where we wanted to do technology for technology’s sake, just because it was being made available.
As the original team, and neither of us was on that original team, went through the various options available to them, especially with iPads becoming available at that time, what [00:11:00] ended up becoming Q-interactive bubbled up as the idea because of all the possible benefits it could provide in terms of accuracy, efficiency, better results, and ultimately better outcomes for people.
Dr. Sharp: That’s wild. Can you remember, this is just out of my curiosity, what some of the other options for making testing easier were that got cut or didn’t get pursued?
Dustin: That’s a good question. I don’t know what they were because neither of us was around at the time.
Dr. Sharp: Got you. Okay.
James: Tablets really arrived when the first iPad came out in 2010. When we look back to those initial stages, when Q-interactive was just an idea, there weren’t a lot of hardware options out there. Laptops were pervasive, but in terms of a test-taking tool, it was really when [00:12:00] iPads came out that it started to gel and make a lot of sense.
Dr. Sharp: I think that’s a good example of when preparation maybe meets opportunity, like you were ready, the iPads were ready and just going for it.
James: That’s a very good way to put it.
Dr. Sharp: Sure. A lot of the discussion I’ve heard around Q-interactive is about how you translate the paper-and-pencil version to a digital version; on the surface, those seem very different. I’m curious about the development, the research, and the standardization that went into translating these tests from paper and pencil over to a digital platform.
Dustin: There are two ways to think about that. One is all of the initial work that was done when we were getting Q-interactive up and running, and [00:13:00] the other is what we’re doing now as part of our standard development process, now that it’s been around for a while and is incorporated into everything we do.
I don’t know the exact figures, but when we started, we did on the order of hundreds of interviews and spent a lot of time talking with psychologists before any test was designed, before any prototype or concept was created, before a single line of code was written.
I think one of the most important things the Q-interactive team did in the development cycle was get out there and talk to people, and not just talk to them but watch them do their work, whether it be the testing itself, setting up an assessment session, analyzing their data afterward, or converting that data into reports. We wanted to really understand that whole workflow so that we could ensure that whatever we [00:14:00] built, and how we built it, would map onto it.
Once we did all of those interviews, we came out with a list of things that we thought Q-interactive was going to have to be. So it was going to have to be design-focused. It was going to have to make people more efficient. There are various pain points that we saw watching people do their work that we knew had to be addressed by the system.
It needed to be consistent. So if you know how to give Block Design on the WISC, you know how to give it on the WAIS. If you know how to give Vocabulary, you should know how to give Comprehension, because while the content differs, the mechanism for administering those tests is the same.
Once we had all of those design goals, there was a lot of iterative designing and prototyping. So we’d come up with concepts, [00:15:00] we would go out and test those concepts, based on that feedback, we would go back and redesign and so on and so forth.
A good example that we talk about a lot is the CVLT. For people familiar with the CVLT, it’s a list-learning task. It’s relatively difficult to administer on paper because you need to write down verbatim all of the examinee’s responses as fast as you can, and some people provide those responses very quickly, so it’s hard to keep up.
Dr. Sharp: I remember those days, that was a nightmare.
Dustin: It is. And so you create a workaround, so you write the first three letters of the word or whatever it is, so you can keep up. And so initially we created a design where you simply just use buttons but that doesn’t really work because you need to be able to capture intrusion errors.
And those intrusion errors are important because there’s scores that are dependent on them and there’s scores that are based on the [00:16:00] ordering of the words. And so we then had people just hand write on the iPad. That was a nightmare. No one could keep up.
It was very obvious very quickly that wasn’t going to work. So we went back and we did a hybrid approach where you had buttons and a little area to handwrite. We even noticed there that it was hard to keep up and go fast enough.
Eventually, what we did was a hybrid where you have the buttons and a dual handwriting area, where you can jump from box to box to write. Instead of hitting a return button to make a word go up in the list, you simply switch boxes and the system puts the word up in the list for you. It was finally with that piece of the interface that we were able to obtain the necessary speed.
All that work was done before a single data point was collected on that test at all. We have stories similar to [00:17:00] that for a lot of the subtests that we ended up creating for the platform.
James: One thing I could add to that, Dustin, is that the design elements we developed for, say, the CVLT, we were able to apply to several other tests, as Dustin alluded to. This helps significantly from a training and learning perspective, meaning that as you become familiar with one instrument from top to bottom, you’re also becoming familiar with other instruments in your library, ones you maybe haven’t seen before or haven’t administered in a long time.
So you don’t necessarily need to spend a ton of time learning every single test over and over again, because of these consistent design elements, and a lot of that groundwork was laid in [00:18:00] the early stages. It helps from a training perspective, and that benefits our customers in that they’re able to pick things up faster than they maybe initially thought.
Dustin: That’s a good point. At no point were we really designing a single subtest; we never really designed WISC-V Vocabulary in isolation. The first step in the process was to group all of our subtests that had similar response demands for both the examinee and the examiner, then look across those and, to the best of our ability, design a single interface that could support all of those different subtests.
Dr. Sharp: I definitely noticed that with the WPPSI. The WPPSI just came out. I think it was one of the more recent tests to go to Q-interactive. We’ve given the WISC a lot and then the transition of the WPPSI was seamless. The first time through it was certainly doable. [00:19:00] I think that speaks to the consistency that y’all have been talking about working to develop. So that makes sense.
Dustin: And all of that leads us up to the point where we started collecting equivalence data. It’s important to note, as we start to talk about equivalence, that it wasn’t the case that we went out and did all of these designs, then crossed our fingers at the end, ran an equivalence study, and hoped to God the scores came out the same.
We started that design process with equivalence in mind. Knowing that was where we were going to go, we made design decisions accordingly, and in that respect, the equivalence studies at the end really became a confirmation of what we were trying to do the entire time, as opposed to studies that help us figure out what’s going on in terms of differences after the fact.
And so you see [00:20:00] those equivalence-driven decisions throughout the platform. Block Design, for example, still uses the physical blocks. You can imagine creating a completely digitized version of that task, but it’s obviously not going to be equivalent, and it’s not going to be measuring the same thing either. I don’t know exactly what it would be measuring or how it would differ, but I think it’s fair to assume it wouldn’t be exactly the same construct.
Dr. Sharp: I don’t want to get bogged down in the statistics, the standardization, and all of that, because I would largely end up sounding like a fool if I asked those questions.
Dustin: Me too, potentially.
Dr. Sharp: I am curious, I think that’s the main question is, can we trust that the scores we’re getting and the performance we get from the Q-interactive interface is the same as what we would get using paper and pencil with these tests? Can you speak to that?
Dustin: Yes. At a high level, the [00:21:00] goal of equivalence is to allow us to use the paper-pencil norms. So for the most part, none of the tests on Q-interactive have been normed independently using the Q-interactive version. And so what we’re doing is using the paper norms but then establishing equivalence through these studies that then allows them to be applied.
And so you’re right, I won’t bore you with the different methodologies. There are a lot of different types we’ve used, such as test-retest or equivalent-groups designs, depending on the test and the construct being measured.
The high-level idea is that we set an a priori threshold for equivalence at the outset. For us, that’s an effect size of 0.20, so 1/5 of a standard deviation, which ends up being about half of a scaled score point. Essentially, we’re saying that when we do these studies, if the paper-digital [00:22:00] format effect is less than 0.20, we’ll assume the tests are equivalent and that the norms can be used interchangeably.
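For readers who want to check the arithmetic behind that 0.20 cutoff, here is a minimal sketch. It assumes the conventional Wechsler score metrics (subtest scaled scores: mean 10, SD 3; index/composite scores: mean 100, SD 15), which are standard scoring conventions rather than figures stated in the episode:

```python
# Convert the a priori equivalence cutoff (an effect size, i.e. a fraction
# of a standard deviation) into points on a given score metric.
# Assumed metrics: Wechsler scaled scores (SD = 3), composites (SD = 15).

EFFECT_SIZE_THRESHOLD = 0.20  # a priori paper-digital equivalence cutoff

def threshold_in_score_units(sd: float) -> float:
    """Express the effect-size cutoff in points on a score metric with the given SD."""
    return EFFECT_SIZE_THRESHOLD * sd

if __name__ == "__main__":
    scaled = threshold_in_score_units(sd=3)       # subtest scaled scores
    composite = threshold_in_score_units(sd=15)   # index/composite scores
    print(f"Scaled-score units: {scaled:.1f}")    # 0.6 of a point
    print(f"Composite units: {composite:.1f}")    # 3.0 points
```

At SD = 3, the cutoff works out to 0.6 of a scaled score point, which is the "about half of a scaled score point" figure Dustin mentions.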
I think there are maybe two interesting things to point out about the studies that aren’t necessarily statistical but can help give people some confidence in them. One is examiner training. A huge thing for us was making sure the examiners were trained properly on Q-interactive prior to doing any of the data collection.
The reason, obviously, is that if we found a format effect, we wanted it to be due to the interface itself, not to the fact that the examiners felt much more comfortable in one format than the other and thus did a better job on paper than they would have on digital, which could happen. People have decades of experience administering the WISC on paper, so you could [00:23:00] imagine that that familiarity alone could introduce some format effect.
The second was the importance of video recording. We recorded most of the equivalence testing sessions, I won’t say every single one. The reason was that there were instances where we found nonequivalence on a subtest.
And in those cases, the first step would be to go back to the data, and then the videos, to see if we couldn’t identify what was introducing the nonequivalence. Once we did that, to the best of our ability, we would change the interface and then retest until we could establish equivalence.
And so, through all of those processes, if you were to go look at our equivalence tech reports, which we have published on our website, helloq.com, you’ll see that almost all of the tests within Q-interactive are under that 0.20 threshold. [00:24:00] There were two subtests early on in the WISC-IV, I want to say Matrix Reasoning and Picture Concepts, that had effect sizes slightly above 0.20, maybe 0.22 and 0.28, both in favor of Q-interactive, so scores were coming out a little bit higher on digital.
Through the videos and everything else, we looked at the data and were unable to ascertain exactly what was causing those effects. So we don’t know for sure. We think it could potentially just be engagement: those were two of the only tests on the WISC, for example, that required the child to really interact with the tablet device. But that’s an educated guess, so we don’t know for sure.
It is interesting to note that when we redid the study for the WISC-V, which has both of those subtests in it, we weren’t able to replicate those results. In the WISC-V study, without there [00:25:00] being much change to the design at all, both subtests came back with effect sizes below our 0.20 threshold.
People can go check out the research. I think they’d find that the methodology is very thorough. It’s a pretty impressive research program. Almost 100 subtests have been tested now with thousands of examinees. So the database is getting pretty big for us.
Dr. Sharp: That’s great. That’s fantastic. I’ll ask one more quick question then move to a little bit of a different topic. Are there plans to do standardization with the iPads?
Dustin: Yes, some of that stuff is already in progress. For example, hopefully, very soon, we’ll be releasing the WISC-V Spanish. [00:26:00] The WISC-V Spanish will be, I believe, completely standardized with the iPad version. And so we’ll see more and more of that going forward and perhaps the equivalent studies will then equate that standardization back to a paper form as opposed to what we did originally.
Dr. Sharp: Right. Oh, that’s fantastic.
Dustin: One other maybe interesting thing for people is that just because we establish equivalence in a normative sample doesn’t mean necessarily that the results apply to an individual with some sort of clinical condition. I think the next phase after equivalence is establishing the validity of Q-interactive through some of the clinical studies that we typically publish with our paper tests.
We’ve started to do that with the WISC-V, for example. We have tech reports on our website that show the performance on Q-interactive for various clinical samples; [00:27:00] ADHD, learning disabilities, intellectual disability, autism, things like that.
What we’ve found so far is that the patterns of performance are very similar. I don’t think that’s much of a surprise for any of us, because we purposely tried to keep the test content and the interfaces as similar to paper as possible throughout the development process in order to support that construct equivalence.
Dr. Sharp: Sure. That’s interesting. I like that we’re talking about this a bit, because the main thing I would think about is motivation. Kids, maybe individuals with ADHD, I don’t know, I’m just guessing, might be more motivated when there’s an electronic device present. So the fact that y’all are looking at that and trying to account for it, I think, is important.
Let me switch [00:28:00] just a little bit and maybe talk more about day-to-day practical uses for clinicians. I’d be curious who do you think clinician-wise would be a good fit for Q-interactive? When should somebody start to consider using Q-interactive instead of paper and pencil? What are some benefits of switching, that kind of stuff?
Dustin: If we start with maybe the benefits and then go into who would maybe be the ideal type of person to switch. One of the things you just mentioned around motivation and ADHD, for example, points to what I think is one of the best benefits of Q-interactive and that’s the engagement piece. We all knew when we started the project that people, especially kids are probably going to like iPads more than paper but we’ve been pleasantly surprised by how big of a deal this has been to people.
[00:29:00] I think it’s really important for two different reasons. One is test validity. One of the major threats to the validity of something like an IQ test is going to be motivation. We don’t want Susie getting a low score on the WISC because she doesn’t feel like completing the items; we want her to be able to demonstrate her best ability. When kids are engaged and find the test exciting, people can be more confident that that’s happening.

The other is that I think it decreases effort on the part of the examiner. Anyone who does testing with kids knows that, and maybe I was just a bad clinician and that’s why I’m here at Pearson now, but I remember being under the table, especially with the WPPSI, or bribing someone with an M&M to get them to finish off a subtest. That can be exhausting emotionally when you’re doing it for two hours.
I think [00:30:00] there’s a benefit to the clinician doing the work if the person is easier to manage and they’re having more fun. And then I think there’s something about making the whole experience with the psychologist more positive.
I remember an anecdote from someone lately who was working in a school who said, I’ve had kids stop me in the hallway and ask if they can come back to my office to play with the iPads and that never happened before. I think kids can have legitimate test anxiety and so to make the whole idea of coming to see the psychologist and playing these games or whatever a more positive experience is a good thing.
James: Let me piggyback off of that a little bit and relate another anecdote. Earlier this spring, I was talking to someone who’s been using the system for two years. [00:31:00] He said something I hadn’t heard before, which was that when he would give a WISC in paper form to one of his students, he’d have to schedule the test session around two or three different breaks, because of not just test anxiety but test fatigue. A WISC can put your head through the wringer as you move through the content when you’re the one being administered it.
After adopting and using Q-interactive, he said that he doesn’t schedule breaks anymore when he sets up the test time because the kids aren’t having that fatigue, and that engagement with the iPads has kept their energy up as they move through it. You’re having a more engaged client in the testing [00:32:00] process versus paper.
That’s just anecdotal. I’m not referencing any studies along those lines, but in general, that engagement gives you a more motivated client, which you would think might yield more solid data.
Dr. Sharp: Yeah, I could see that. That’s great. What are some other benefits, do you think, to using Q-interactive? Are there any others that come to mind?
Dustin: Two of the other big ones that resonate with people who are using it right now are efficiency and time savings. A lot of that comes, as James said earlier, from the scoring piece. It may be that, because of engagement like we were just talking about, or the ease of swiping through items, you save some time while you’re actually doing the testing. But not having to score a protocol, enter that data into a [00:33:00] scoring program to get your report, et cetera, saves a lot of time.
We’ve done some data collection with some customers around how much time they’re saving, and we see, on average, around 30%. Some people report more than that and some less, obviously, but I think a lot of it comes from the activities that happen outside the actual test administration.
Another big one is portability. You’re not carrying around a bunch of kits. We have a lot of people, especially in the school psychology space, and this was news to me having only worked in clinics, who configure the trunks of their cars into storage units so they can fit all of their kits, because they’re going from school to school, and then you’ve got to take all those kits out and bring them back to your car, and so on and so forth. So the idea of putting two iPads, the blocks, and [00:34:00] a couple of response booklets in your bag instead is a much better thing.
We had someone in New York tell us that they rented space to keep all of their kits. By being able to get rid of those, they were saving money on the rent they’d been paying for storage. We hear funny anecdotes like that all the time.
One person, and this was a really good story, was in Alaska and would fly out in a tiny little prop plane to do testing. She was on the coast and would fly inland to these various places. Those planes are so small, and they need to carry emergency materials like food in case they crash, I don’t know, and they have a weight limit. [00:35:00] She said, gosh, to the extent that I don’t have to bring all these kits and can replace them with two iPads, I have more weight allowance for a coat and food. That’s a little dramatic, but saving lives.
Dr. Sharp: Yeah, Q-interactive can save lives.
Dustin: That’s right.
Dr. Sharp: Oh, that’s great. We’ve definitely noticed all those advantages here in our practice. It does make a big difference storage-wise. I remember those days in graduate school, I worked for this neuropsychologist and he had three graduate students and we would have to trade these materials between testing days. And so we’re lugging around these two rolling carts and having to coordinate tradeoffs. It was terrible.
Dustin: Or if you want to change what you’re doing in the middle of a test battery and you’re all sharing materials, that can be really hard to do. So now I want to give something from the WIAT but James is upstairs [00:36:00] using the WIAT with someone else.
In Q-interactive, one of the great things is that you’re getting scores as soon as the subtest is over. Even if you’re testing without Wi-Fi, you have the ability to add in new subtests or take subtests out of your battery at any time right there from the iPad. So you’re able to use real-time data to make good decisions as you’re testing and then seamlessly add in new tests so that you can really do a better job of personalizing in an efficient way, the tests that are being given to your examinee,
James: Which also means that, let’s say you were administering a test in a clinic to a client in paper, once that’s done and you take a look at that data, you may have to schedule a second test session with that client a few days later whenever scheduling works out.
[00:37:00] So when you’re in the process of doing an evaluation of a client in private practice, you could be looking at this over several test sessions, spread over a week or however long. But with the ability to act in real time on this data and pivot on the fly, it could mean you don’t need that second or third test session, with that interim of trying to figure out what the scores mean.

I think that ability to pivot quickly, that time efficiency, is a key piece of using it effectively. A lot of our “power” users tout that flexibility piece repeatedly, which is great.
Dr. Sharp: An example of that just for us here very simply is that, say we have two subtests on one of the [00:38:00] WISC indexes that are vastly different and we want to give one of those supplementals to try to clarify things a little bit, the ability to circle back around and add that additional subtest helps a lot, whereas before it’d be the end of the day, we’d be scoring and then it’d be like, oh, wait a second, we need to do another test and have to bring them back. So just a very specific example. It’s been helpful.
One thing that y’all haven’t mentioned is cost. I would imagine that it is potentially beneficial for folks who maybe don’t do a ton of testing to jump in with Q-interactive so that they don’t have to buy a full test kit and commit to that. Is that right, or do you have a different take?
James: Absolutely. If you were looking to hang your own shingle, open up your own practice, for example, money’s tight. If [00:39:00] you want to open up your doors and offer some testing services, the out-the-door cost of Q-interactive can be as low as basically $200, and that can give you access to a multitude of tests. On day 1, you can offer achievement testing services or cognitive testing services or whatever arena you want to go into, so it’s a very low upfront cost but allows you to span a wide range of services.
If you were to try to do the same thing in a paper environment, I’d have to pull the raw numbers in terms of what each individual kit costs, but you’re talking thousands and thousands of dollars to [00:40:00] offer a similar array of tests. So that upfront cost for people in private practice is very low.
And then as business picks up, you’d be basically invoiced based upon your prior month’s usage. Once the money starts to move a little bit, that becomes a lot easier to manage. So that out-the-door cost can be really low for people in private practice, and it allows them to offer the full services that they want to, which in a previous lifetime could have taken them several months or even longer to amass the funds to purchase all of those materials. I think it allows people to get started faster, wouldn’t you say, Dustin?
Dustin: I would agree. I think one assumption people probably make is that it’s mostly young people who already have iPads and iPhones and are starting their practice who are going to be the ones most [00:41:00] excited about this sort of thing. That may be true in general but we’ve found people at the end of their careers who see a similar benefit.
For example, a new test comes out and they know they’re only going to be practicing for maybe another 3 to 4 years and they don’t want to buy that test kit and make that huge investment because they know they’re not going to be using it for that much longer. And so they see Q-interactive as a cheaper way to get access to that content and be a more cost-efficient option for them at the tail end of their career.
Dr. Sharp: Sure. Could y’all speak just briefly to how the cost actually breaks down; if someone wanted to just get started, what would they be paying upfront and then ongoing? How’s that work?
James: Sure. Yes. Basically, there’s two components to purchasing Q-interactive. There’s a license piece, if you purchase a license to Q-interactive, [00:42:00] it’s 12 months of access to the system. That can range from as low as around $150 up to maybe $250, give or take. I’m just going from memory here, based upon the tests that you want access to. So there’s a license fee.
And then there’s the usage piece. So how much are you using? When we say using, we’re talking about it on a subtest-by-subtest basis. Billing isn’t going to be based around how many WISCs you give, but more how many subtests within the WISC are you giving.
Typically, if you wanted to get an FSIQ off of a WISC, that would be the first 7 subtests of the instrument. You’d be paying for those 7 subtests, and then for additional subtests to gain additional [00:43:00] indices. Whatever subtests you administer, that’s what you’d be billed off of. Generally speaking, you have a license fee that covers 12 months, and then it’s how much you use, on a subtest-by-subtest basis.
Dr. Sharp: And how much is each subtest?
James: That can vary a little bit. Not all subtests are created the same. Our achievement tests, the KTEA and the WIAT-III, are priced at $0.75 a subtest. Your cognitive instruments that have a lot of subtests within them, like a WISC or a CELF, are $1.50 per subtest.
And then we have some instruments that are large, single beasts like a PPVT, for example, or a Goldman-Fristoe 3, and those are priced at $4.50 per administration. That’s because those aren’t made up of [00:44:00] component subtests. They’re single instruments. So $0.75, $1.50 or $4.50.
That is probably what would resonate if you were looking at this from a private practice perspective. Alternatively, if you anticipate being a heavy tester doing a really healthy volume, you can also purchase a volume of subtests upfront. The cost per subtest can get lower the more subtests that you buy. Volume purchases will have discounts applied to them; otherwise, it’s $0.75, $1.50, or $4.50. Does that make sense?
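To make the billing model concrete, here is a minimal sketch of an annual cost estimate using only the per-subtest prices quoted in this interview ($0.75 achievement, $1.50 cognitive, $4.50 per single-instrument administration) plus a license fee in the $150–$250 range. The function name and category labels are illustrative, and real Pearson pricing, volume discounts, and booklet costs may differ.

```python
# Per-subtest prices as quoted in the interview (illustrative; actual
# Pearson pricing and volume discounts may differ).
PER_SUBTEST = {
    "achievement": 0.75,  # e.g., KTEA, WIAT-III subtests
    "cognitive": 1.50,    # e.g., WISC, CELF subtests
    "single": 4.50,       # e.g., PPVT, GFTA-3 (priced per administration)
}

def annual_cost(license_fee, usage):
    """Estimate a year's Q-interactive spend.

    license_fee: annual license price in dollars (roughly $150-$250).
    usage: dict mapping a price category to the number of
           subtests/administrations given that year.
    """
    usage_cost = sum(PER_SUBTEST[category] * count
                     for category, count in usage.items())
    return license_fee + usage_cost

# Example: one WISC FSIQ is the first 7 cognitive subtests,
# so usage is 7 * $1.50 = $10.50 on top of the license.
print(annual_cost(200, {"cognitive": 7}))  # 210.5
```

The takeaway from the sketch matches James’s point: the license fee dominates at low volume, and usage only becomes a significant line item as testing picks up.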
Dr. Sharp: It does. I’ve run into at least one clinician who said that they talked with folks at Pearson and they were willing to do a cost [00:45:00] analysis of paper versus Q-interactive and when it would become beneficial. Is that something that’s widely available, or did she somehow just work the system a little bit?
James: That type of cost analysis varies from person to person because some people may already have their WISC kits but not their KTEA or not their WIAT and vice versa. We will definitely work with frankly, anybody to help break down what the true costs are.
Q-interactive doesn’t have any hidden costs. It’s how much you use and your license fee. That’s pretty much it. Periodically, you might need to buy a handful of more response booklets if you’re given a lot of WIATs or KTEAs but by and large, that’s it.
[00:46:00] Some people come to the table with different tests already in their bags. So that cost analysis varies on a customer or person-by-person basis but if you just reach out to our sales staff, any one of us would be more than willing to help break that down a little bit and give a better sense of what those true costs would be annually.
Dr. Sharp: Oh, that’s fantastic. That sounds great. I’m curious, it sounds like there are a ton of benefits. We’ve certainly seen benefits here in the practice. I am curious, from y’all’s perspective, is there anyone who might not benefit from Q-interactive or that you would encourage to stay away from it? I guess that could include disadvantages.
James: I don’t know how to put it; iPads aren’t necessarily for everyone. My brother, for example, does [00:47:00] some training in a different company. Sometimes his training sessions boil down to: here’s how you right-click, and here’s what that allows. iPads are really easy to use once you embrace them a little bit, but for people of a certain generation, sometimes they get so frustrated with typing in usernames, passwords, and this technological revolution.
Generally speaking, some people can be tech-averse regardless of what the tech is. When it comes to all the different tablets that are out there, in my opinion, iPads are probably the easiest to use. Apple does a really good job of making things really [00:48:00] intuitive and how Q-interactive is designed is really intuitive.
So if you just give it a shot, the system is fundamentally sound. And so overcoming that anxiety piece is usually not as big of a hill when you look at it in hindsight, as opposed to thinking about how scary it may be.
We did an interview earlier this year with a customer who was talking about anxiety. She was a little bit on the older side of things. The biggest thing that she was really nervous about when it comes to using digital tools like this centered around passwords.
And so this is not just a Q-interactive concern that she has, but something across her entire life. Everything that she does, her bank, [00:49:00] her credit card, her testing with Q-interactive, and other things, requires a different password, and just keeping all of that stuff organized and situated is her biggest concern. So it’s not a Q-interactive issue; it’s how do I use all these tools and make sure I have continued access to them through login credentials and that kind of thing.
Sometimes it’s not the tech itself, but it’s just the ways in which you access it, usernames and passwords. No one ever said that the tech revolution would require 50 different passwords.
Dustin: That does highlight some of the other things that you do need to be cognizant of if you’re going to use something like Q-interactive instead of paper. I think in the paper world, if at the last minute someone were to walk in, you could grab a record form and just [00:50:00] write the name down and go.
And so there are some other things you have to do to stay on top of things in Q-interactive. You have to set the person up ahead of time, which doesn’t take a long time, but it is a step. We make updates to the platform relatively frequently, and so you need to make sure you know how to do those updates, stay on top of them, and take the time to do it.
I think that’s a strength to flip it around. I think our ability to make continuous updates as we go allows us to or at least we try to stay very close to our customers, get feedback from them, incorporate suggestions and improvements whenever we can, and then push those out immediately rather than having to wait for the next revision of a test but that does mean [00:51:00] that people have to do updates and stay on top of that maintenance.
We talked about engagement earlier, and I think engagement can cut both ways. For every kid who’s super engaged and loves the iPad, there might be one who’s touching the buttons too frequently because they’re too excited. Overall, the engagement story is one of being a big benefit, but I do know that there are people who have reported back to us that in younger kids, especially, for example, they’re so excited by the technology that you have to rein in that excitement.
As psychologists, we know how to do that. You are, as I talked about earlier, always managing the assessment session, and so this just becomes a piece of that but it is something that’s probably different than in the paper world.
Dr. Sharp: Got you. That makes sense. That sounds good. Gosh, [00:52:00] we all shared a lot of information with this. I know we’re getting a little bit close time-wise. I wanted to transition a little bit and maybe do something that’s different here for the podcast, but maybe do a little bit of a rapid-fire question-answer thing.
I crowdsource some questions from The Testing Psychologist Facebook group and those folks have some very specific, but I think important questions. So I’m wondering if I could just throw those at you before we wrap up?
Dustin: Yeah, let’s do it.
Dr. Sharp: Okay. So the first one, one person asked, will the WISC-V be available in Spanish? And you addressed that earlier. So that sounds great.
James: Yes. That release is imminent coming this summer.
Dr. Sharp: This summer. Okay. That is imminent. Fantastic. Another question, will the personality measures ever be available in Q-interactive to do the input right on the iPad?
Dustin: Never say [00:53:00] never, but right now there’s no concrete plans to do that. So those are available on-screen within our Q-global platform so that you can email links to people and have them do it that way using a standard laptop or a tablet device. So because they’re available over there right now, there’s no plan to have them on Q-interactive but that could always change.
Dr. Sharp: Okay. Sounds good. Do you have any plans for the entire NEPSY to be added to Q-interactive?
Dustin: I think we’d like to have the rest of the NEPSY complete but there’s no concrete date that I could give you about when those subtests would be complete and they’re not in active development right now.
Dr. Sharp: Okay.
James: In general, there’s a broader question embedded in there. So I want to say this briefly here, which is, tests, they have a life cycle, maybe it’s every 10 years, give or take. When it comes to [00:54:00] making decisions about what’s going to come and when it’s going to come, we have to look at a lot of different factors; where is the current instrument and when is it up for revision?
For example, we wouldn’t be bringing a test that’s on its last legs to Q-interactive; we would look at the revision or the next generation of that test. Does that make sense? We have to look at where things are in the life cycle, how popular a test is, and so on. In general, we want to bring as much content to the platform as we can. We have to juggle the resources appropriately.
Dr. Sharp: That makes sense. Another question, will the full interpretive reports that are available on Q-global ever be available on Q-interactive?
[00:55:00] Dustin: Yes. The WISC-V already is. So yes, if you’re a WISC-V user right now, you can get the interpretive report. What you need to do is, for any WISC-V administration that you give, when you go and generate your report, there will be an option there to click on a box and get the interpretive report. It’s an extra usage, or $1.50, depending on whether you’re paying upfront or getting billed as you go, but that is available right now. And then based on how much interest there is in that and how many people are using it, we’ll look at putting some of the other stuff like the WAIS or the WPPSI on in the future.
Dr. Sharp: Okay. That sounds great. And then last question, will it ever be possible to only run Q-interactive from the iPad without having to log on to the website as well on a computer?
Dustin: Ah, good question. I can’t tell you when, but I think what people are essentially asking is, can I create a client, for example, from the actual app [00:56:00] itself? I would say, yes, that’ll be in our plans somewhere. I couldn’t give you an exact date of when that’s going to release, but we do hear that request a lot from people and it’ll be something that we work on.
Dr. Sharp: Cool. Okay. Just before we close, I know that y’all spoke about the long-term vision for Q- interactive and where you see it going. I would love to hear your thoughts on that as we wrap up. I think that’s a nice note to close on.
Dustin: I think you mentioned early on, we were talking about how long James and I have been on the project that maybe that’s rare that you had the same point of contact for so long, and not speaking for James, but I can say me personally, I’m invested in the platform in large part because of all the things that we’re going to be able to do with it in the future.
We talked a lot about equivalence and how right now we’re making purposeful design [00:57:00] decisions that allow us to test the same constructs that we’re testing in paper. I think as you look forward into a future where we’re developing tests specifically for the iPad, you can imagine all sorts of real excitement in terms of us being able to create brand new tests that aren’t possible in paper, measure scores that aren’t really possible in paper.
A basic example of that might be really fine-grained reaction time data. There’s information out there in the basic science literature around reaction time variability being related to frontal striatal circuits and its importance in things like ADHD. You can’t measure fine-grained reaction time or reaction time variability at all, or at least not very easily, in paper.
Those sorts of things start to open us up to all [00:58:00] sorts of things that could change the field of psychology. If you think of Paul Meehl and all the construct validity work, our understanding of a lot of these constructs is based in large part on the measures that we’re using to tap into them. And so being able to use technology to make those measures better can have a huge impact on the field.
And that’s what’s really exciting to all of us. We have the ability to change those tests, to get new types of data, to partner with customers such as yourself, to share that data, and to provide that data back in order for you to make better judgments and have better insights into people. I think there’s all sorts of really good opportunities that technology is going to afford us in the future. It’s really exciting.
James: It’s also exciting to [00:59:00] see, just earlier this week, Apple had this big keynote address talking about some of the new technologies that are coming on the Apple products, with iPads, for example. You look at an iPad from 7 years ago and an iPad now, and they’re getting more and more sophisticated. Their computing power is getting broader and grander, touch sensitivity and touch ID, and all these different things.
There’s a certain unknown here too, where is the hardware going to take us as well? So we definitely want to think about how can we best take advantage of the tool that we’re using to capture this data as well. There’s things that we can do on the content side, but also maybe there’s ways that we can take advantage of the hardware, the iPad itself in ways that try to make you a more efficient or effective practitioner.
[01:00:00] Dr. Sharp: These are great points. That is exciting. Like I said in the beginning, as someone who loves technology, it’s really cool to hear y’all talk about that. I’m looking forward to seeing what happens next there at Q-interactive.
Dustin: The tech changes so fast too. Apple comes out with new ideas every single year. And so yeah, it’s a lot of fun to be in such an active environment like that working with tech like this.
Dr. Sharp: Yeah. Thank you guys so much. I feel like we packed a lot of really good information into this hour and hopefully, folks will take away a pretty good idea of what Q-interactive is, what it looks like, how much it costs, how to get started, and all those different pieces.
If anyone is interested in learning more, what’s the best way to learn more about Q-interactive or get in touch with someone there, how should they do that?
Dustin: I’d [01:01:00] say the best way is through our website which is helloq.com. When you come there, you can come to the Q-interactive page where we have free trials, we have a constant stream of webinars giving people a good idea of how the system works and contact our sales staff or anyone within the Q-interactive team, and we’d be more than happy to talk further about how the system works.
Dr. Sharp: Okay. That sounds great. We’ll have links to that in the show notes, of course.
Dustin: Great.
Dr. Sharp: Dustin, James, thank you guys so much for spending the time with me and being willing to talk through all these different pieces of Q-interactive. I appreciate it.
James: My pleasure.
Dustin: Thank you. We appreciate it.
Dr. Sharp: Of course. All right, guys. Take care. Hey, y’all. I hope you enjoyed that interview with the guys from Q-interactive. I was so appreciative that they were willing to take so much time to talk [01:02:00] with me about the ins and outs of that platform.
Like I said, I’ve been working with Q-interactive for many years at this point and have really seen it grow from where they started there in the beginning and have just seen them add so many measures and tweak things interface-wise and software-wise. I think it’s super helpful and we use it a lot here in the practice these days.
So if you have any questions, like they said, you can go to the website, which is helloq.com. We have that in the show notes as well. You can learn more about pricing, availability, and things like that to see if it is appropriate for your practice.
As I mentioned the last two weeks here, I have another cool interview coming up with the author of Feedback That Sticks, which is a wonderful book about doing hard feedback sessions. I’ll be talking with Dr. Karen Postal here this coming week [01:03:00] and I should have the podcast released next Monday with her. So that’ll be fantastic.
In the meantime, if you’d like more information or to check out past episodes, you can go to the website, which is thetestingpsychologist.com. You can also join us in some good discussions about testing-related topics in our Facebook community, which is The Testing Psychologist Facebook community. Pretty obvious there. You can search for that in the bar at the top of Facebook and it should pop up.
Note that there is a page for The Testing Psychologist, which is the business page, but then there’s also The Testing Psychologist community, which is the group where we talk about all things testing.
Hope you’re having a great summer. Summer is fantastic. Goodness. Like I said, I’m definitely a summer person and just got back from a great week-long trip to the beach on the East Coast in South Carolina. I’m ready to [01:04:00] hit the ground running and continue on with building things here in the business and the podcast and keep bringing some excellent testing information to you. I hope you’re enjoying your summer and take care until next week. Thanks. Bye bye.