Blind Bargains

#CSUNATC22 Audio: Finding Layers Of Context With The Dynamic Tactile Device From APH


We began our 2022 CSUN coverage with a discussion about some of the upcoming initiatives from APH. But hearing about the Dynamic Tactile Device Project over Zoom wasn't enough for J.J.'s curiosity. That's why he managed to brave the conference labyrinth and find Greg Stilson, Director of Global Innovation for APH, and lay his hands upon the DTD himself. In this demo you will learn more about the current prototype unit while J.J. navigates through images, floor plans and other examples. Greg then goes on to explain more about EBRF, how the unit differs from the DotPad and why he is optimistic about how the DTD will bring a whole new layer of context when it comes to reading textbooks in the classroom. To learn more about the project, and to see if there will be a demonstration coming to your area, visit the Dynamic Tactile Device Project site.

Blind Bargains Virtual Exhibit Hall coverage is Brought to you by AFB AccessWorld.

For the latest news and accessibility information on mainstream and access technology; Apple, Google, Microsoft, and Amazon offerings; access technology book reviews and mobile apps; and how they can enhance entertainment, education, and employment, log on to AccessWorld, the American Foundation for the Blind's free, monthly, online technology magazine. Visit https://www.afb.org/aw

Transcript

We strive to provide an accurate transcription, though errors may occur.

Transcribed by Grecia Ramirez

Directly, and actually in person, from Anaheim, it’s blindbargains.com coverage of CSUN 2022. Brought to you by AFB AccessWorld.
For the latest news and accessibility information on mainstream and access technology; Apple, Google, Microsoft, and Amazon offerings; access technology book reviews and mobile apps; and how they can enhance entertainment, education, and employment, log onto AccessWorld. The American Foundation for the Blind’s free online technology magazine. Www.afb.org/aw.
Now, here’s J.J. Meddaugh.
J.J. MEDDAUGH: I am here in the APH presidential labyrinth at CSUN 2022. Greg Stilson, once again, and we have, in front of me, the Dynamic Tactile Device. And I’m ready to see what it’s all about, or feel what it’s all about.
Greg, welcome back.
GREG STILSON: Hey. Thanks for having us back on, J.J. Yeah.
So what’s in front of you right now is a proof of concept. It’s a technology proof of concept. The whole point here is that you will feel the technology that will be used to display both tactile graphics and Braille on the final DTD product. So in front of you is a very prototypy type of box.
JM: Right.
GS: And it’s got the equivalent of ten lines of 20 Braille cells. So 200 Braille characters can go on this screen.
JM: But you can do text or graphics on it.
GS: You can do text or graphics, but – yeah. If you’re trying to measure it, it’s 200 Braille characters is basically what it is. But it’s – remember – if you remember, the pins are in an equidistant pin array. So what I’m going to do first is just show you what ten lines of 20 characters of Braille look like.
JM: So we will be quiet.
(Braille scrolling can be heard.)
GS: All right. So that’s ten lines of 20 characters of Braille.
JM: So let’s see what we have here.
GS: So we’re –
JM: “Big Bear and Little Bear. Big Bear is the big bear, and Little Bear is the little bear. They played all day in the bright sunlight. When night came and the sun went down, Big Bear took Little Bear home to the Bear Cave. Big” –
And there. That’s about where it cuts off, but all of that --
GS: And that’s what – all.
JM: -- was on here.
GS: That was ten lines of – and what I’m expecting – it – if you remember, when you’re reading a single-line Braille Display, it makes the tracking method where your left hand should be going to the next line down and tracking as your right hand is finishing.
JM: Uh-huh.
GS: This -- I believe you are using that kind of method as you were trying to read this. You were reading it like it was a –
JM: I believe so.
GS: -- it was – I don’t know.
JM: Braille is always so subconscious, so –
GS: Yeah.
JM: -- a couple things that I would notice to, like, I guess, explain to listeners. So since this does both text and graphics, the line spacing, I think, is slightly less because you have, what, three dots on, one dot off; right?
GS: Something like that, yeah.
JM: Something like that. So maybe a tiny bit less, but not too much that it stopped me from reading it at my normal speed. And then, going horizontally, you have, I think that would be two dots on for the cell and then one dot off.
GS: Um-hmm.
JM: So the Braille characters are spaced a little wider, but not too wide to be off-putting. It certainly was easy to read. There’s just a tiny bit more space between each cell, but each Braille cell feels just like Braille. It’s very easy to read Braille and – as you could tell, we can read kids’ books all day. We should have gotten more of those.
GS: Yeah. And so – and really, where I think you may notice a difference – I don’t really notice it as much within a word. Once in a while, as you – if -- let’s say the word ends in an L, like something in the left side of the cell, and then the next word may start with, like, something in the right side of the cell, sometimes then, it looks like the words are a little bit spaced further apart.
JM: I see. Little Bear. I didn’t even really notice it though.
GS: Um-hmm. Yup.
JM: Honestly, like, you breeze through it so fast –
GS: Yeah.
JM: -- with your fingers that you just – it might be –
GS: Yeah.
JM: -- an extra quarter of an inch –
GS: Yeah.
JM: -- but you don’t notice it.
GS: That’s great to hear. Yup. Because we spent a lot of time working on that Braille spacing, and that was, you know, one of the very – I would say, the very small compromises that we had to do. And hopefully, you’ll agree that having the graphics and the Braille on the same tactile array kind of outweighs that small thing that we’re doing here so –
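For technically minded readers, the cell spacing J.J. and Greg describe can be modeled in a few lines of code. This is an illustrative sketch only, not APH's implementation: the 2-dots-on/1-off horizontal and 3-on/1-off vertical layout comes from the conversation above, while the function names and grid dimensions are assumptions.

```python
# Sketch: placing 6-dot Braille cells on an equidistant pin grid.
# Each cell uses a 2x3 block of pins, with one blank pin column
# between cells and one blank pin row between lines, per the
# spacing described in the interview. Dot numbering follows the
# standard Braille convention (1-2-3 down the left column,
# 4-5-6 down the right).

CELL_COLS, CELL_ROWS = 2, 3      # pins per cell
GAP_COLS, GAP_ROWS = 1, 1        # blank pins between cells/lines

def place_cell(grid, line, col, dots):
    """Raise the pins for one Braille cell.

    grid  -- 2-D list of 0/1 pin states
    line  -- Braille line index (0-based)
    col   -- cell index within the line (0-based)
    dots  -- set of raised dot numbers, e.g. {1, 3, 5}
    """
    top = line * (CELL_ROWS + GAP_ROWS)
    left = col * (CELL_COLS + GAP_COLS)
    for d in dots:
        r = (d - 1) % 3          # dots 1-3 left column, 4-6 right
        c = (d - 1) // 3
        grid[top + r][left + c] = 1

# A 10-line x 20-cell page like the prototype works out to:
rows = 10 * (CELL_ROWS + GAP_ROWS)       # 40 pin rows
cols = 20 * (CELL_COLS + GAP_COLS)       # 60 pin columns
grid = [[0] * cols for _ in range(rows)]

place_cell(grid, 0, 0, {1})              # letter "a" at top left
```

Under this layout, ten lines of 20 cells come out to a 40-by-60 array, a couple of thousand pins, which squares with the numbers mentioned later in the conversation.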
JM: Well, and that was probably one of the big challenges, looking at your previous work with Graphiti and other things like that is those were not Braille dots. Those were just huge dots –
GS: Um-hmm. Yup. You got it.
JM: -- and trying to put actual Braille – it looked and felt different.
GS: So that was – that was, I think, one of the longest-running processes that Humanware and Dot put into this was that they really were honing in on that original dot Braille character, that Braille cell. If you remember the Dot Watch, for example, the characters were really close together, people were having, kind of, some negative feedback about those actual dots.
JM: Right.
GS: And so there was significant work being done by the – that partnership to just make it look like a standard Braille dot. You know, well, you tell the listeners – I mean, do you feel like it’s a –
JM: Yeah.
GS: -- standard Braille character Braille dot?
JM: The Braille feels just fine. The dots feel a little rounded, so it’s not, like, that sharp signage Braille that some of the displays have. It’s very comfortable to read. And if I had to read a textbook or something else on this format, I would have no problem. I wouldn’t feel any fatigue, I don’t think, by reading it on here.
GS: That’s great to hear. Because that’s the number one thing that we’re trying to solve is getting textbooks on these devices, so –
JM: Well, you’re also doing lots of graphics.
GS: We are. So that’s what I’m going to show you next –
JM: All right.
GS: -- is I’m going to pull up something that you’ll be able to see here.
JM: All right. So right now, this is a prototype, so you’re just sending this over from a computer.
GS: Exactly. Yeah. These are canned graphics.
JM: Let’s see what we have here. This feels like a graph. So on the left side, there are numbers from bottom to top: Zero, 2, 4, 6, 8, 10, 12. Across the bottom, we have cat, pig, ant, dog, and those bars go up in various lengths. So I can feel that cat goes up to 4; pig is 8; ant, I assume, is around 6; and dog is all the way up to the top at 12. So you have nice, solid, vertical bars. So – yeah. It’s the same pitch as a Braille cell, but this time you have – you don’t have spaces between each line because it’s contiguous, and the bars go straight up. So it feels like a graphic that you would feel on a textbook page.
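The bar graph J.J. just read can be reproduced as a simple rasterization exercise. The data values (cat 4, pig 8, ant 6, dog 12) come from the demo itself; the grid height, bar width, and gaps below are invented for illustration and are not the DTD's actual geometry.

```python
# Sketch: rasterizing the demo's bar graph onto a binary pin array.
# Bars are contiguous solid blocks growing upward from the bottom
# row, as J.J. describes feeling them.

HEIGHT, BAR_W, GAP = 12, 3, 2    # pin rows, pins per bar, gap pins

def bar_graph(values):
    width = len(values) * (BAR_W + GAP)
    grid = [[0] * width for _ in range(HEIGHT)]
    for i, v in enumerate(values):
        left = i * (BAR_W + GAP)
        for row in range(HEIGHT - v, HEIGHT):   # bars grow upward
            for c in range(left, left + BAR_W):
                grid[row][c] = 1
    return grid

data = {"cat": 4, "pig": 8, "ant": 6, "dog": 12}
grid = bar_graph(list(data.values()))
```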
GS: You got it. And this is a sample of how we can – how can we create texture; right? So, you know, this is just one example of being able to pull up a bar graph. What I’m going to send over now is an example of – let’s do this. So I’m going to show you – if you look – so this is a Cartesian plane.
JM: All right.
GS: So you should be able to see that.
JM: Ooh.
GS: You have your quadrants –
JM: Ooh.
GS: -- and then you should be able to see where –
JM: So there are two – oh gosh. Right. You try to explain this on a single-line display. So you have – it wouldn’t work. So you have – it’s kind of like a – is that a cross or a plus sign?
GS: Yeah. It’s like a –
JM: I am not a –
GS: It’s like – it’s like it’s like a big –
JM: I am not a math major.
GS: -- like a big – yeah. It’s a big, kind of, cross.
JM: There are two plot points. I assume that’s what those are --
GS: Um-hmm.
JM: -- which are represented by, looks like a 3x3 array of Braille dots on the bottom left quadrant. What’s amazing to me, though, is how quickly you can pick up tactile information. You know, this is the type of thing that you’d, before, have to find an embosser and then print it out and emboss it. And now, you know, somebody could send this stuff down, and you could have it to -- you know, anywhere in the world and pick up on this thing right away.
GS: You got it. And so what I like to simulate here is, you know, the flexibility of being able to zoom into -- for example, if you wanted to zoom into, let’s say, quadrant 3, which is your bottom left quadrant –
JM: Okay.
GS: -- and see exactly where those points are plotted.
JM: All right.
GS: So I’ll go ahead and zoom in if you want to take your hand off that.
JM: All right.
GS: And let’s do that.
JM: That’s right. That is still true with this technology is you have to keep your hands off for the second that it’s doing the refresh.
GS: You do. Yup. Yup. Yup. That is another – I would say, small compromise, and we’re going to – that -- as we’re looking at the user experience challenges that we’re going to be looking to solve with this, coming up with noninvasive techniques where we can tell the user, hey, we’re going – something’s going to refresh. Let’s lift your hands up for just a split second.
So now, you should be seeing this zoomed in –
JM: Yup.
GS: -- in the bottom left-hand quadrant. And now, what this simulates is the idea that you’ve zoomed in, and now, you’ve revealed an extra layer of information, so you should be seeing –
JM: Numbers.
GS: -- numbers. Exactly.
JM: So now, you could trace these numbers across to the points on the X axis.
GS: And the Y axis.
JM: Or the Y axis. See? I don’t – I’m not a math guy.
GS: Um-hmm. That’s all right. That’s all right.
JM: I always mix that up. But yes. There’s numbers that – and then there’s those same 3x3 little dots that are at various numbers on this. You know, and what I noticed too -- and just, same thing with the graph -- is you have the graphics, but then you also have text labels at the same time.
GS: Um-hmm.
JM: And you’re able to commingle those very easily with each other.
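The zoom-to-reveal behavior demonstrated above, where labels appear only once the view is small enough to fit them, can be sketched as a level-of-detail renderer. Everything here (function names, the threshold value) is hypothetical and only illustrates the idea.

```python
# Sketch: a renderer that plots points always, but attaches labels
# only when the viewport is zoomed in far enough for them to fit.

def render(points, viewport, label_threshold=5.0):
    """Return (point, label) pairs for points inside the viewport.

    points   -- list of (x, y) tuples
    viewport -- (xmin, xmax, ymin, ymax)
    Labels are included only when the viewport width is at or
    below label_threshold; otherwise label is None.
    """
    xmin, xmax, ymin, ymax = viewport
    visible = [(x, y) for (x, y) in points
               if xmin <= x <= xmax and ymin <= y <= ymax]
    zoomed_in = (xmax - xmin) <= label_threshold
    if zoomed_in:
        return [((x, y), f"({x}, {y})") for (x, y) in visible]
    return [((x, y), None) for (x, y) in visible]

points = [(-3, -2), (-1, -4)]            # two points in quadrant III
full = render(points, (-10, 10, -10, 10))  # whole plane: no labels
zoom = render(points, (-5, 0, -5, 0))      # quadrant III: labels on
```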
GS: And that’s really what we were so passionate about is when -- you know, tactile graphics for blind people are hard in general for – if you gave me a random tactile graphic – and I’m not going to generalize. So I know there are some people that are way better at tactile graphics than I am, but –
JM: Um-hmm.
GS: If you give me a tactile graphic that I don’t have any context on, it’s going to take me a significant amount of time to understand what it is. Labels allow you to create contextual understanding; right?
JM: Yeah.
GS: And so the Braille labels were crucial in us developing this. But then, in addition, one of the things that the final product will have on it is speakers and text-to-speech, so being able to also layer a multimodal approach where you can have audio, Braille, tactile in the same surface.
JM: Yeah. I noticed that one of the samples that I was able to feel the other day was a bicycle, and without the label of “bicycle,” I wouldn’t know what it was. And that’s no fault to the device.
GS: Um-hmm.
JM: It’s just, you know, certain things, tactilely, you’re just not used to feeling them.
GS: Exactly.
JM: You know, maybe as a kid, you’ve felt one once or twice, but, I mean, that could have been anything with wheels --
GS: You got it.
JM: -- or circles or whatever.
GS: Absolutely. Yup. So this one – I really like this example. So this is an example of a floor plan of a – let’s say an office building.
JM: Yes.
GS: And if you can go ahead and check that out.
JM: I did feel this one the other day, and I like it. So you have different labels. So actually, you have rooms represented by squares, and some have openings where there would be doorways. There’s a couple openings in the outside where there would be exits. There are a whole bunch of little Braille abbreviations. “OF,” which I assume is office, and “RE” for restroom. And then there’s, like, a little X in the middle, which I’m not sure if that’s, like, the “you are here” thing or maybe that means something else and –
GS: Well, so – what I’m going to show you now is –
JM: Cool.
GS: So imagine that you’re looking at a textbook; right?
JM: Yeah.
GS: And oftentimes, with graphics on a textbook or with maps and things like that, you’ll have the graphic. And this -- actually, this graphic came directly from the TGIL, the Tactile Graphics Image Library; right?
JM: Yeah.
GS: And so the Tactile Graphics Image Library was our baseline; right? So these are graphics that are created by tactile graphics artists that are meant to be easily consumed by a blind or low-vision person; right?
JM: Sure.
GS: So if we couldn’t at least show this, then we had a lot of work to do. And so, you know, there was no filtering necessary, no custom things. We just basically took this from the TGIL and threw it on the tactile display.
So, having said that, the very common thing that you deal with in a textbook is you have the graphic or the map, and then you, on the other page –
JM: Have the key?
GS: -- you have the key. Exactly. So that’s what I’m going to load up now is the floor plan key.
JM: But this is also one of those things that could maybe be multimodal; right? You could have the graphic on there and maybe have the key –
GS: Um-hmm.
JM: -- in audio available as you’re scrolling through –
GS: -- as you’re –
JM: -- on a computer or phone or whatever.
GS: -- as you’re touching it too. That’s another option, so –
JM: So there we go. There we go. It looks like that sign – that’s an elevator was what I – the X.
GS: Um-hmm.
JM: Now I know. “Elevator,” “Storage,” “Stairs,” which are kind of – just a whole bunch of vertical lines stacked. That makes sense.
GS: I love the symbol for the stairs.
JM: That’s a really cool –
GS: Isn’t that cool?
JM: Yeah. It’s really cool. It’s just like a bunch of, like, vertical rectangles.
GS: It looks like a set of stairs going from left to right or right to left. It’s –
JM: Yeah. It’s really cool. Just a box for “Office” and then the circle is the entrance. And what is that little triangle? Oh. “Women’s restroom.” Sorry. Just the women’s restroom. And then another restroom.
But yeah. That’s really cool to see the key. So maybe, actually, my mention of audio for this thing, it actually does make sense to have a tactile key because not everything is just an abbreviation.
GS: Um-hmm.
JM: There are – a lot of these are just symbols and things like that that you do need to feel to really understand what they are.
GS: You got it. And – but you notice here is, because we’re dealing with an equidistant pin array, we’re no longer restricted to the characters that are in the Braille cell. So you’re – you’re now seeing new symbols and characters that can be created to represent texture, to represent items, icons, things like that that are now – you know, you can use a 3x3 or a 4x4 symbol that was never possible just using, you know, the individual Braille cells. So –
JM: So I know there’s going to be a lot of stuff for textbooks and things like that. That’s certainly one of the areas of focus. But just looking at, say, graphics that are available online, how much of that stuff is going to be able to be rendered in a way that someone might be able to feel it, or how much of it is going to take specialized markup to make it actually understandable?
GS: That’s a good question. So there’s two types of graphics that we expect to see on this. Number one is graphics that are created by a tactile graphics artist. And that’s what you’ve been looking at here -- is these are graphics that would be found in a regular textbook. A graphic artist would create it. Let’s say if you bought, like, an atlas or something like that –
JM: Right.
GS: -- a tactile atlas; right? Those are created by a well-trained person who knows how to abide by tactile graphic standards and create graphics like that. Then, you have what I call the impromptu learning graphics, and those are the graphics that I – honestly, if I’m most excited about something in this field, it’s that, because today, if a teacher of the visually impaired is not given a graphic that’s provided in the classroom, or if you, as a blind professional, are not given a graphic that somebody’s referencing in a PowerPoint ahead of time, you get nothing; right? You’re just going off of –
JM: Sure.
GS: -- their visual description maybe, or somebody’s done some really cool impromptu mechanism, which, I know teachers of the visually impaired –
JM: Right.
GS: -- are so good at -- to recreate that representation of that graphic; right? But with this, our dream is to have a tactile monitor application on this device that I could connect to an iPad.
And that’s one example of what Dot is doing today, is they’re working with Apple on this tactile API to try to replicate the screen on a tactile surface. Now, with that, there’s going to need to be a lot of work done in this space to create filters that will allow those graphics. Like, if you give me something from Google Images, I’m going to have to do a lot of filtering to understand, number one, what’s the focal point of that image? What’s the main target that I’m supposed to be looking at? And then how do I get rid of any useless, or not useful information to understand that specific one item in the photo? Does that make sense?
JM: Yeah. Absolutely.
GS: Yeah.
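The filtering problem Greg describes, getting an arbitrary image down to something a pin array can show, can be hinted at with a minimal downsample-and-threshold step. A real pipeline would also need focal-point detection and edge extraction, as he notes; the sketch below shows only the final reduction, with made-up sizes.

```python
# Sketch: reducing a grayscale image to a binary pin pattern by
# block-averaging (downsampling) and thresholding. Illustrative
# only; not APH's or Dot's actual filtering.

def to_pins(image, out_rows, out_cols, threshold=128):
    """Downsample a grayscale image (list of rows of 0-255 values)
    to an out_rows x out_cols grid of 0/1 pin states by averaging
    each block and comparing against the threshold."""
    in_rows, in_cols = len(image), len(image[0])
    rh, cw = in_rows // out_rows, in_cols // out_cols
    pins = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            block = [image[r * rh + i][c * cw + j]
                     for i in range(rh) for j in range(cw)]
            avg = sum(block) / len(block)
            row.append(1 if avg >= threshold else 0)
        pins.append(row)
    return pins

# Toy image: left half dark, right half bright.
img = [[0] * 8 + [255] * 8 for _ in range(8)]
pins = to_pins(img, 4, 4)
```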
JM: And I know Dot also has an API which is meant for developers to kind of create some of these experiences.
GS: Um-hmm.
JM: And one would assume that hopefully, that stuff would also work with whatever –
GS: Absolutely.
JM: -- tablet that you can work with.
GS: Yeah. Absolutely. They’ve laid the groundwork here, and so we’re not going to reinvent the wheel, you know. You know, we commend them for the work that they’ve done with Apple, and I’ve had the pleasure of working with Apple, and it’s a great partnership. In this case, you know, we’re targeting this impromptu learning situation, and that work that, you know, the – iPads are all over the classrooms, right? So our expectation here is that you’re going to be able to use the Dynamic Tactile Device for your textbooks, your classroom activities. But then if you need to cast an image to it, if you need to connect it to your computer and get your computer screen layout, any of that kind of stuff, all that, you know, work is going to be underway, and, you know, Dot’s already got a great start on that. So that’s fantastic.
JM: Sure. APH, through the quota program and, you know, just because of history, has mostly targeted, or has done a lot of targeting the K through 12 over the years. Is that where you see this going most, or do you see this being used just as much by professionals?
GS: I think there’s a lot of learning advantages. And, you know, APH being one of the largest providers of textbooks today – and I always throw these numbers out that, you know, a textbook that we produced last year – that was a math book, and it took 13 months to produce and cost over 30 grand to produce; right?
JM: Wow.
GS: So – and it’s not just the amount of money, but it’s the amount of time that it takes for these kids to get their textbooks. So I immediately see time-to-fingertips sort of advantage to doing this in the classroom. It doesn’t mean that it’s not going to be popular among professionals. I mean, I’ve gotten so much feedback from the coding world, blind coders who are excited to try something like this because they haven’t -- they haven’t had much experience with developing visual UI; right? And understanding how things are laid out and actually being able to tactilely feel what you’re creating and maybe – maybe that hamburger menu doesn’t look right in that corner. You need to move it in a different area.
And so there’s a lot of benefit there. But I – you know, as APH, we’re naturally going to start in the education space.
JM: Yeah. Screen readers are starting to recognize the need for dealing with multiline Braille Displays and other similar products. And everything before was pretty much single-line. What work are you doing in that area to, you know, perhaps have JAWS or NVDA be able to render ten lines of text or, you know, render a spreadsheet or anything else like that?
GS: Um-hmm. Yeah. No. It’s all work that’s underway. We’re already starting those conversations. It’s – the other piece of the puzzle is coming up with a standard that’s going to work across the board; right? And so you don’t – if you’re using JAWS or NVDA or VoiceOver or TalkBack or any of that, you want to try to not have to do something different in every screen reader. So, you know, trying to keep a consistent interface in the same fashion that, you know, this Braille HID standard was created where you can plug in any device – and this is a standard that we want to be relevant to any multiline display, whether it’s ten lines or two lines or four lines; right? So the work that we do now – and that’s -- I think I mentioned to you in the earlier podcast that we did – we’re doing the work on the EBRF –
JM: Right.
GS: -- which is the electronic Braille standard that we’re trying to put forward. It’s the same thing. APH doesn’t want to own that. That’s a standard that we want the field to have and to be able to – any multiline or single-line or whatever display to be able to take advantage of that. And the same thing with the screen reader work that we need to do as well.
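The one-interface-for-many-displays goal Greg describes can be illustrated with a toy abstraction. To be clear, this is not the Braille HID protocol, EBRF, or any real screen-reader API; every class and method name here is hypothetical.

```python
# Sketch: a screen-reader-facing abstraction where the same write
# call drives displays of any line count, in the spirit of the
# "consistent interface across screen readers and devices" goal.

class MultilineDisplay:
    def __init__(self, lines, cells_per_line):
        self.lines = lines
        self.cells_per_line = cells_per_line
        self.frame = [""] * lines

    def write(self, text_lines):
        """Fill the display top-down, padding or truncating each
        row so the same call works on a 2-, 4-, or 10-line
        device."""
        for i in range(self.lines):
            row = text_lines[i] if i < len(text_lines) else ""
            self.frame[i] = row[: self.cells_per_line].ljust(
                self.cells_per_line)

# The same rendering call drives very different hardware:
dtd = MultilineDisplay(lines=10, cells_per_line=20)
two_line = MultilineDisplay(lines=2, cells_per_line=40)
for d in (dtd, two_line):
    d.write(["Big Bear and Little Bear", "played all day."])
```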
JM: Another area that jumps out at me that could be really interesting is maps and spatial information, perhaps working with TMAPs or something like that --
GS: Um-hmm.
JM: -- is another area, of course, that you can explore, or –
GS: Not yet. But I can tell you that when we look at this, there’s so much potential with map – you know, just as you zoomed in on that quadrant of the Cartesian plane there and saw more information exposed; right?
JM: Um-hmm.
GS: Like – you know, I think of those maps that have, like, population density or pollution level or, you know – and you can basically turn on and off layers of data. It’s all stuff that we’ve never had access to before. And it’s visually available. You can go on – I think you can go on Google Maps, and you can, you know, turn on and off different layers of things. And so being able to do that in a tactile form, to say, okay. Now, all of a sudden, you see the city, and you can see where -- textures show you where the population density increases or decreases and things like that. Like, that’s all stuff that keeps me up at night going, all right. What else can we do with this? But –
JM: Sure.
GS: -- you know, we always get back to our north star, which is -- what gets this funded by the federal government is the textbook scenario. We have to do that right. If we don’t do that right, then none of the other benefits come to it. So that’s really our north star.
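The layer-toggling idea Greg describes, with population density, pollution, and so on as separate tactile overlays, maps naturally onto OR-ing pin bitmaps together. The layer names and tiny grids below are purely illustrative.

```python
# Sketch: togglable tactile map layers. Each layer is a 0/1 pin
# bitmap; the displayed frame is the bitwise OR of the enabled
# layers, so data layers can be turned on and off independently.

def compose(layers, enabled):
    """OR together the pin grids of the enabled layers."""
    first = next(iter(layers.values()))
    rows, cols = len(first), len(first[0])
    frame = [[0] * cols for _ in range(rows)]
    for name in enabled:
        for r in range(rows):
            for c in range(cols):
                frame[r][c] |= layers[name][r][c]
    return frame

streets = [[1, 0], [0, 1]]
density = [[0, 1], [0, 0]]
layers = {"streets": streets, "density": density}

base = compose(layers, ["streets"])            # streets only
both = compose(layers, ["streets", "density"])  # overlay added
```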
JM: That makes sense. So we’ve been by the Dot booth, and also looked at the Dot Pad, which is using the same underlying cell technology --
GS: Um-hmm.
JM: -- but is a different device. How do you see what they’re doing – like, how does that fit into what you’re doing? Their Pad might be coming out within the next year or so. So how do you see this all fitting and complementing each other?
GS: So the Dot Pad itself is designed as a developer tool. And whether that makes a mainstream interface with sighted developers developing apps for people with visual impairments or not, we don’t know. But that is a -- strictly a developer kit to showcase the power of that API; right? And they’re working with Apple to optimize that API. And I’m really excited about the potential that that has because, I mean, you think about, like, all these apps that build in accessibility for VoiceOver, and now, you have the capability to actually create a tactile rendering of an app or of a graphic or things like that.
JM: Yeah.
GS: So there’s a ton of potential there. And as I mentioned before, it’s not stuff that is just going to be exclusive to their Pad. That’s something that this device and any other future device is going to be able to take advantage of. So I tell people now, like, if you’re at CSUN, go touch their Pad. That’s what the dynamic tactile technology feels like; right? And so you’ll understand right away that it feels really quality.
What’s different on that Pad and what this device does is their device is not showing multiline Braille. Their device is showing tactile graphics –
JM: Right.
GS: -- and then beneath it is a 20-cell refreshable Braille Display, whereas –
JM: Yeah.
GS: -- our mission here is multiline Braille and graphics on the same surface.
JM: Yeah. I think the analogy that pops into my head is the first version of Google Glass that was really designed just for developers to try to figure out what to do with the technology –
GS: Um-hmm.
JM: -- in -- you know, it just lasted – it was really targeted at developers, and, you know, wasn’t ever targeted at consumers.
GS: Um-hmm.
JM: And, you know, that’s evolved over time. And now, it’s being used in more products, including stuff here. So it’s almost like you have the same thing going on, and that’s an early way – if someone’s really wanting to jump in now, they can do that –
GS: Um-hmm.
JM: -- and then this is going to be out. What are you thinking? Time line?
GS: Our hope is to have field testing going on at the end of 2023. So to actually have this in the hands of kids at the end of 2023. That’s our hope. We have to run through all of the federal quota and the federal government hoops that we have to go through and all that kind of stuff. But, you know, supply chain –
JM: Oh. There’s that word again.
GS: -- willing. Supply chain willing –
JM: Yup.
GS: -- that’s where we hope to be, so – man. I mean, to think that there’s a possibility that kids could be reading a textbook in a multiline Braille Display by the end of 2023 is, like, mind-boggling right now.
JM: I will say, at this CSUN, you know, we’ve gone through, now, several years of prototypes and early prototypes of multiline. And this is the first year where I -- you know, feeling this and the technology behind it, it’s like, oh, my gosh. We’re actually close now. This is actually something that is working. It’s not just some one-off demo doing one thing.
GS: Um-hmm.
JM: It’s, you know, live, doing images and doing text. And I definitely feel like we’re pretty – getting pretty close.
GS: I’m really happy to hear you say that. That actually gave me chills. And the reason I say that is because – and I think I mentioned this to you before -- there’s been so many promises made in this space that you get these start-ups coming in saying, hey. We solved the problem. The problem’s been solved, blah, blah, blah. I felt the same way. I said this is now – we’re very close. This is working.
And that’s really why this box is sitting in front of you – is because this is our prove-it tour that, you know, all these promises have been made. Now it’s time that we believe this technology is ready, and now we need to get people to experience it, to get their buy-in, to prove to them that this is – is ready to go, and the tech is now there.
Now, it’s a matter of building that infrastructure that can support a multiline device like this. Because right now, like I mentioned before, those EBRFs and the textbook navigation capability, marked up files, they don’t exist yet. And so ensuring all of that is ready –
JM: Sure.
GS: -- it’s a big undertaking, but I’m fired up about it.
JM: A lot of moving pieces. In fact, a couple thousand of them in here.
GS: That’s right. Absolutely.
JM: Any idea – might be a little early to talk about what you think this might cost when it finally comes out, but do you have any idea?
GS: I don’t. I know that it’s not going to be cheap. You know, 10 thousand plus potentially.
JM: Right.
GS: However, we – and you know, that was the other side of the thing that we were talking about; right? Is – the federal government is also a big moving part in this, and that’s something that we want to emphasize that we are working with the government to -- you know, federal quota’s going to cover some of this for the education sector, but we’re also looking at, you know, working with the – or, not looking, but we already are working with the federal -- Department of Education to see if we can expand.
JM: Sure.
GS: Because what we’re doing here is providing a new mechanism for delivering of textbooks and delivering of content. So yeah --
JM: And I feel like once you get to version one -- that’d be – that’s the really hard one.
GS: Yeah.
JM: Then a lot of these features, and people talk about doing different dot heights or expanding the graphic –
GS: Um-hmm.
JM: -- you know, the capabilities – version two ends up being a lot easier. Then, you know – once you knock off all these big things --
GS: Yeah. Yeah. Once version one is there and we solved all the UX challenges, the user experience challenges of how do you pan ten lines of Braille? How do you navigate around? You know, those are all – you know, I was doing a UX session with some folks earlier this week, and no – the person I was working with had never used the pinch-to-zoom gesture before.
So you’re teaching people things that are visual concepts that they had never actually used before. And so, you know, once we’ve got those UX challenges solved, then it’s a matter, like you said, of reducing the cost, making it faster, creating multiple heights, all that kind of stuff, so –
JM: Do you feel there will be a time where having touch sensitivity would be built into this?
GS: Yeah. Absolutely. I think that that’s -- that’s one of our requirements, actually, is that, especially as you’re navigating around, you want to be able to position, you know, let’s say a zoom point or position a cursor. You know, you need to be able to do that, and touch is the most natural way to do it.
JM: Absolutely. I know people are going to want to feel this. Are you going to be doing a bit more of a tour? Are you going to be at the conventions this summer, or –
GS: Absolutely. Yeah. So we will be doing regional UX sessions, so we’ll be – you know, I tell people, if you want to get added into any of these – if you’re interested and things like that, send an Email to dtd@aph.org. We are going to be working with our regional partners to host regional user experience testing, so we’ll work with them to get the information out to their regions, and bring folks in and, you know, put them through some simulations and give us feedback. I think the feedback is the biggest thing. You know, the -- what I’ve shown you today are very scripted demos; right?
JM: Yup.
GS: And I can’t wait to get something that’s more wide open that actually is running software in the hands of somebody because we are going to learn things that we never even thought of at all. So it’s going to be exciting and terrifying at the same time.
JM: I don’t think I can overstate how thrilled I am to finally feel this, and I’m really excited to see where this goes. Thank you so much for sharing it.
GS: Hey. Thanks so much, J.J. Great seeing you in person.
JM: Yes. Absolutely.
For more exclusive coverage, visit blindbargains.com or download the Blind Bargains app for your iOS or Android device.
Blind Bargains audio coverage is presented by the A T Guys, online at atguys.com.
This has been another Blind Bargains audio podcast. Visit blindbargains.com for the latest deals, news, and exclusive content.
This podcast may not be retransmitted, sold, or reproduced without the express written permission of A T Guys.
Copyright 2022.


Listen to the File


File size: 33.4MB
Length: 28:24


This content is the property of Blind Bargains and may not be redistributed without permission. If you wish to link to this content, please do not link to the audio files directly.

Category: Shows


Joe Steinkamp is no stranger to the world of technology, having been a user of video magnification and blindness-related electronic devices since 1979. Joe has worked in radio, retail management, and Vocational Rehabilitation for blind and low vision individuals in Texas. He has been writing about the A.T. industry for 15 years and podcasting about it for almost a decade.


Copyright 2006-2024, A T Guys, LLC.