Blind Bargains

#CSUNATC20 Audio: Expanding VR And AR Into XR With The Smith-Kettlewell Eye Research Institute


As J.J. says early on during his interview with Brandon Biggs, an engineer with the Smith-Kettlewell Eye Research Institute, "There's a lot to unpack here". The pair even commandeered an entire Platinum Ballroom (okay, CSUN wasn't using it, honest) to talk about indoor navigation and mapping techniques. Tune in, or read the transcript below, to hear how audio games like "A Hero's Call" can pioneer a tested user experience that provides an effective way to perform indoor navigation tasks. Also, hear a demo and learn how the term XR came about for describing hybrid reality experiences. To learn more about this technology, visit the Smith-Kettlewell Eye Research Institute website.

CSUN 2020 coverage is brought to you by AFB AccessWorld.

For the latest news and accessibility information on mainstream and access technology; Apple, Google, Microsoft, and Amazon offerings; access technology book reviews and mobile apps and how they can enhance entertainment, education, and employment, log on to AccessWorld, the American Foundation for the Blind's free, monthly, online technology magazine. Visit www.afb.org/aw.

Transcript

We strive to provide an accurate transcription, though errors may occur.

Transcribed by Grecia Ramirez

Direct from Anaheim, it’s blindbargains.com coverage of CSUN 2020, brought to you by AFB AccessWorld.
For the latest news and accessibility information on mainstream and access technology; Apple, Google, Microsoft, and Amazon offerings; access technology book reviews and mobile apps and how they can enhance entertainment, education, and employment, log onto AccessWorld, the American Foundation for the Blind’s free monthly online technology magazine, www.afb.org/aw.
Now, here’s J.J. Meddaugh.
J.J. MEDDAUGH: CSUN 2020. The conference is over. We’re in Platinum 4, and we’ve just taken over the room so I could talk to Brandon Biggs, who is an engineer at the Smith-Kettlewell Eye Research Institute. Smith-Kettlewell, always doing all sorts of cool stuff, and Brandon’s here to talk about it.
Welcome to the podcast.
BRANDON BIGGS: Hey. Thanks for having me.
JM: I feel very grandiose. We – hello. Welcome to Platinum 4. I don’t remember – you did a presentation here, you said?
BB: Yeah. Did two presentations here, yeah.
JM: I hope it wasn’t this empty.
BB: No.
JM: Good. We’ve commandeered a couple chairs, and we’ve just taken over. Anyway, you’ve done several different things. A lot of – they’re pretty much related in a thread of helping people locate things or places or, you know, information around them. Lots of very cool stuff. So why don’t you go ahead and start out telling us what you’re doing.
BB: So I did three different presentations at this conference, and I’m working on three different projects and one project that combines all of them together.
So first, we are releasing a beta test of CamIO, which is short for Camera Input-Output. And it’s basically an app on your phone that allows you to annotate 3D objects. And then, we’re doing another project called Audiom, which is basically Google Maps, but completely in audio. And then, we have another project, which is an indoor wayfinding app that uses computer vision combined with odometry to help figure out where you are. Anyway, it’s accurate to about half a meter. And then, I’m doing a project with the Magical Bridge Playground in Palo Alto to combine all those technologies in an XR experience.
JM: XR. I’ve heard of VR and AR. What is XR?
BB: XR is both VR and AR together.
JM: Oh. Oh. There you go.
BB: So we just decided it was too difficult to say VR and AR and MR because it’s mixed reality, so rather than saying VR, AR, MR, we say XR.
JM: Very cool. Let’s start with CamIO. That’s been around a little bit longer.
BB: Yup.
JM: We’ve watched it develop for a minute, so go ahead and tell us more about that.
BB: Yeah. So it’s been in development since two thousand – I think it was 2008 or ‘6 or – it’s been in development for a while. But we’re finally getting it to the point where it’s good enough to release. And we’ve got a beta test, so if you want to Email CamIO, camio@ski.org, we can add you to the Testflight app, and you could download it on your phone.
JM: So iOS at the moment; correct?
BB: Correct. It’s just for iOS.
JM: Okay. So if someone gets on this beta, how might they use the app?
BB: So basically, we are – it’ll change because it’s still in beta. But basically, we will send you an invite code. You will download the app to your phone, and then we will send you instructions on two sheets of paper. One of them will just be a full sheet of paper, and you use that as the board. And so you can cut it out and place it around objects that you want to annotate. Like, if it’s a microwave, you can place it around the edge of the microwave. It doesn’t really matter which direction you put it. You can also print out a stylus, and we have a method to fold that stylus. And unfortunately, we’re still trying to figure out how to do that completely nonvisually. Right now, I would recommend using something like Aira or Be My Eyes to cut that out. Or we can send one to you.
JM: So when you say print out, are we talking 3D printed or regular –
BB: No.
JM: -- regular printer?
BB: No. No. No. No. On a regular paper printer. So –
JM: Okay.
BB: -- you can print out the board, on any paper. And I’d recommend printing out the stylus on card stock because it’s sturdier.
JM: So the board is a piece of paper with -- annotations or – describe a little more what you mean by that.
BB: So they’re basically – both are QR – they’re similar to QR codes. They’re called ArUco markers, and they are – how the system works is it looks at the board and says – and it sees the stylus and the board at the same time, and it says, okay. Where is the board in relationship to the stylus at this point in time? And then, that’s the annotation. So there’s really nothing about the object. It’s all about the relationship between the board and the stylus. So it does allow you, though, to annotate with very good accuracy, different objects in 3D space.
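For the technically curious, the board-and-stylus relationship Brandon describes can be sketched with OpenCV's ArUco module in Python. This is a planar simplification of the idea, not CamIO's actual code: the marker IDs, the board layout, and the use of a homography instead of full 3D pose estimation are all illustrative assumptions.

```python
# Rough sketch of the board-and-stylus idea, assuming OpenCV 4.7+ (cv2.aruco).
# Marker IDs and the board layout are made-up examples, not CamIO's values.
import cv2
import numpy as np

DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_250)
DETECTOR = cv2.aruco.ArucoDetector(DICTIONARY)

BOARD_IDS = {0, 1, 2, 3}                       # codes printed on the paper board
STYLUS_ID = 42                                 # code pasted on the stylus
BOARD_LAYOUT = {0: (0.0, 0.0), 1: (5.0, 0.0),  # where each board code sits on
                2: (0.0, 5.0), 3: (5.0, 5.0)}  # the printed sheet, in cm

def stylus_in_board_coords(frame):
    """Return the stylus position in the board's paper coordinates, or None."""
    corners, ids, _ = DETECTOR.detectMarkers(frame)
    if ids is None:
        return None                            # no codes in view: play the cricket
    img_pts, board_pts, stylus_px = [], [], None
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        center = marker_corners[0].mean(axis=0)
        if marker_id in BOARD_IDS:
            img_pts.append(center)
            board_pts.append(BOARD_LAYOUT[int(marker_id)])
        elif marker_id == STYLUS_ID:
            stylus_px = center
    if stylus_px is None or len(img_pts) < 4:
        return None                            # need the stylus plus enough board codes
    # Map image pixels onto the board's plane, then project the stylus through it.
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(board_pts))
    pt = cv2.perspectiveTransform(np.float32([[stylus_px]]), H)
    return tuple(pt[0, 0])                     # (x, y) in cm; look up the annotation here
```

Each saved annotation is then just one of these board-relative positions paired with a label, which is why, as Brandon says, nothing about the object itself needs to be stored.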
JM: What’s the density of that? So how closely can you put two marks together?
BB: About a centimeter apart.
JM: Okay. So you would kind of diagram, say -- take a microwave as a simple example.
BB: Uh-huh.
JM: You would diagram the buttons on a board on a piece of paper; right?
BB: No.
JM: Okay.
BB: So what you would do –
JM: I’ll have you explain it, not me.
BB: Sorry. The board – the board is literally just a piece of paper. You can cut that piece of paper out and glue it or tape it –
JM: Oh.
BB: -- to whatever you want to tape it to.
JM: It’s a page of these codes?
BB: Yes. Exactly.
JM: Got you. And you can cut them out. Can you print that on a tactile embosser, so when -- a blind person could feel the codes?
BB: It needs to have ink. So it needs to be black and white.
JM: Okay.
BB: You could, I guess, put it on a PIAF, and then you can feel that. But most of the time, it’s easiest if you just print it out on a printer, and then the app will actually tell you if it sees the side of the page that has the codes. So you just point it – the camera – at the board, and if it’s – it makes like, a cricket sound if it’s not seeing the codes. And if it sees the codes, it will be quiet.
JM: Okay. So a sheet of these contains a whole bunch of these –
BB: Yes.
JM: -- codes? I’m assuming little squares or rectangles or whatever they are.
BB: And just cut it out, paste it on the edge of your microwave. And you shouldn’t cut it out too small, because the more codes, the more accurate it is and the easier it is to aim the phone.
JM: So it’s not one code per location. It’s actually like a spatial representation to an extent; right?
BB: No. It’s just a bunch of codes. It’s literally like a really dense page of codes. And what the camera does is it looks and sees if it sees one of those codes, then – or two of those codes, then it – and it also sees the stylus at the same time, then it can interpolate between which code it’s seeing and the stylus. So that’s all.
JM: Okay. So excuse me while I’m unpacking this, but I think I’m figuring it out. Maybe. But we’re doing this for our listeners as well.
BB: Yeah. Yeah.
JM: So you cut out a square or rectangle with these codes or markers, whatever you want –
BB: Uh-huh.
JM: -- to the size of, what, like the touchscreen of the microwave?
BB: Yeah. So – no –
JM: Does one –
BB: No. No – well, you could do that, totally.
JM: Okay.
BB: But you can also do – even easier – if you want to allow for sighted people to use it, you just put it around – you cut out like a D shape – Braille D shape and put that around the corner of the microwave.
JM: Oh. Okay.
BB: And then that way, there’s code sticking off the edge – you know, the edges, but it still leaves the keypad free.
JM: So how much distance can there be between the codes and the stylus that you’re pointing?
BB: As much as the camera can see. So it can really be any distance, just as long as the camera can see both the stylus and the codes at the same time.
JM: Fair enough. Okay. So you are holding your phone with one hand and the stylus with the other hand? Is that how you would do this then, or –
BB: Yeah. So we’re still working out exactly what it would be, but right now, that’s how I do it. And then – you can also use a Google Cardboard that has, like, a headband strap on it, or you could also use a chest mount. So if you have a chest mount –
JM: -- lanyard.
BB: -- lanyard, then you can use that as well. Or you can use a tripod. I have a gooseneck tripod that works really well, and you can just point that at whatever you want, clamp it down, and it can see really well.
JM: So explain to our listeners the advantage of why it was chosen to use a stylus plus camera as opposed to just using the camera.
BB: So we tried using the finger – to recognize where the finger is, but the cameras – sorry. Computers are not smart enough to figure out what is actually a point. So if, say, for example, somebody had their hand in an open position and they were touching someplace on the microwave, and the camera was able to see that there were two fingers pointing somewhere, it couldn’t tell which one –
JM: True.
BB: -- you were actually pointing to. And so we tried using a marker on the finger that was pointing to differentiate it, but it wasn’t as effective as a stylus. And also, if you have mobility impairments, there’s no way for you to put, you know, that marker on your finger. It’s much more difficult to do. And so, with the stylus, you can do – you can strap it to your hand, you can do all kinds of stuff to make it more accessible to you to hold. And so that’s why we’re doing it.
JM: Could one use an actual stylus as opposed to the one that’s printed out?
BB: Absolutely. So we have little one-inch dowels that we have carved into a pointy end, or a pointy shape that are about six inches long, and – big enough for my hand. And we paste the QR codes on there. And basically, when we fold the card stock, you could totally just put a dowel in there, but instead, we – we’re just doing it with air. And so it works, but it’s not the most sturdy.
JM: Sure. So we talked a lot about microwaves. But what are some of the interesting things that you have found or other testers have found that can really benefit from this?
BB: So any sort of – if you have a TMAP, you can label different areas on there. If you have somebody, and you want to remember where the different restaurants are, you can label those spots.
JM: Let’s mention quickly. The TMAPs, for those who might not be familiar. These are the maps that you can print out or order from the San Francisco lighthouse that are for – you put in an address, and they’ll give you a map of the nearby area in a tactile format.
BB: Yeah. lighthouse-sf.org/tmaps.
JM: Yes.
BB: You can order them. So – yeah. So that’s also a very useful thing. If you have any appliance, really. So, like, for example, in the hotel room, there was a thermostat, and I didn’t know what all those buttons were. So if I wanted to, you know, get somebody sighted to help me label those, I could do that really easily, and I could just paste up some – a board there on that thermostat and then label those buttons. And then I can go back later and remember – I don’t need to remember where the buttons are. So that’s another thing.
If you are in a classroom and have, like, a cell model, for example, you can label all the different parts of that cell model. If you have, like, a mixing board or something with a bunch of different knobs on it, you can label what each one of those knobs does and how – the kind of information you want when you’re feeling those. If you have something with a bunch of abbreviations, for example, in Braille, then you can expand those abbreviations with a CamIO annotation.
So there’s tons of stuff you can do. It just, you know, once you start thinking about "What would I like to make interactive?”, then you start thinking of a ton of different things in your life that could be made interactive.
Coffee machines are a big one –
JM: Yes.
BB: -- for people who are working in offices.
JM: And let’s not overlook – this is 3D in nature as well; right?
BB: Yes.
JM: It’s not just flat surfaces.
BB: Exactly. So yeah. You can totally do -- like, in our study -- we performed a study, and we had a little car, and basically, we can do inside the car. So not just outside, but you can also do internal things, just as long as the stylus can still be seen by the camera, you can annotate anywhere. So you can even annotate inside something, and you can annotate, you know, underneath something.
JM: Very cool. So again, that’s available in beta now, or there’s a beta test starting –
BB: Yup.
JM: -- so you can Email camio@ski.org.
BB: -- dot org. Yup.
JM: But wait. There’s more.
So I’m really interested in these next two as well. Not that I wasn’t interested in the first thing, but -- I’ve, for years, been trying to find ways to make Google Maps more accessible, and it’s one of those things. The interface for Google Maps is really just not very usable for a lot of different reasons. So tell us what you’re doing to try to solve that problem.
BB: Yeah. So I haven’t quite integrated with the Google Maps API. But for people who don’t understand what I’m doing, it’s basically, I’m creating an interface – so very similar to if you find a Google Map on a page, there is a graphic there of the map. And so there are specific tools that help render the map data. And so that’s what I created. I created a renderer for the data.
JM: So are you using that, or are you using OpenStreetMap?
BB: Right now, the data that I’m using is custom made for a generic JSON format. So it’s basically generic – in a generic format, and I have the frontend. And then I can hook up any backend to that that I want. So I just need a couple days to hook up, like, OpenStreetMaps. So it’s just a matter of timing.
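The interview doesn't spell out that generic JSON schema, but a renderer-agnostic map format along these lines gives the flavor: named features with geometry and sounds attached, which any backend (OpenStreetMap included) could be translated into. The field names below are hypothetical.

```python
# Hypothetical example of a generic JSON map format; the real Audiom schema
# is not described in the interview, so names and fields here are guesses.
import json

playground_map = {
    "name": "Magical Bridge Playground",
    "units": "meters",
    "features": [
        {
            "name": "Long ramp",
            "geometry": [[20, 38], [30, 38], [30, 41], [20, 41]],
            "sound": "footsteps_wood.ogg",
        },
        {
            "name": "Laser Harp",
            "geometry": [[25, 44], [28, 44], [28, 47], [25, 47]],
            "sound": "laser_harp_loop.ogg",
            "description": "A harp played by breaking beams of light.",
        },
    ],
}

print(json.dumps(playground_map, indent=2))  # what a backend would hand the renderer
```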
JM: Okay. So if I came –
BB: That’s –
JM: Sure. So if I came to CSUN, and I wanted to explore the area, say, before I went out somewhere, how will this represent my surroundings? How can I use this tool?
BB: Yeah. So it is very – I took a lot of the interface conventions from audio games. So basically, if you’ve ever played A Hero’s Call, I took a lot of those conventions. So basically, you have a grid view where you can go and maneuver around using a grid interface. So everything is locked to a north position, and then you’re moving around at a particular zoom level. So default is by meter. And so you can just go meter by meter and figure out what is there.
So depending on how detailed the data gets, you could even see, you know, where the toilet in the bathroom is, because it allows you to go that zoomed in. But the data might not necessarily be there. So it has the potential to be extremely detailed. Or you can zoom it out and just see, you know, here’s the Marriott Hotel, and here’s the, you know, different pieces of the road.
So basically, there’s three different views in there. The first one is a grid view, and so it’s very similar to what I just said and –
JM: Yup.
BB: -- it’s similar to maneuvering in Tactical Battle or a spreadsheet. So that’s the first view. The second view is first-person view, and that’s very similar to Swamp or maneuvering around, like, if you’re actually walking there. And the textures change as you move around on different polygons and different shapes on – like, different rooms in the building.
And then the next one is a tree interface. So it’s very similar to what you have right now with Nearby Explorer or if you’ve played a game like any of the web games. So it’s basically, you have a list of areas, and then you can zoom in. So they could say Marriott Hotel, and then I’m going to go to the Elite 3 room. And there’s a list of all the rooms in the Marriott Hotel, and you just click on Elite 3 and then click go, and then your character will go there.
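A toy version of the grid view Brandon describes might look like the sketch below: a cursor locked to north that moves one meter per arrow press and announces whichever named polygon contains the new cell. The feature shapes and the announcement format are invented, patterned on the demo later in this interview.

```python
# Toy sketch of the grid view: move meter by meter, announce what is there.
# Feature shapes and names are invented for illustration.
from matplotlib.path import Path  # used only for the point-in-polygon test

FEATURES = [
    {"name": "Laser Harp", "coords": [(25, 44), (28, 44), (28, 47), (25, 47)]},
    {"name": "Disk spinner", "coords": [(24, 22), (27, 22), (27, 25), (24, 25)]},
]

def announce(x, y):
    """Speak the feature under the cursor, spreadsheet-style."""
    for feature in FEATURES:
        if Path(feature["coords"]).contains_point((x, y)):
            return f"{feature['name']}: {x},{y}"
    return f"Playground walkway: {x},{y}"

x, y = 26, 44
for _ in range(2):         # press the up arrow twice: one meter per press
    y += 1
    print(announce(x, y))  # Laser Harp: 26,45 / Laser Harp: 26,46
```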
JM: So all dependent on data. You probably have a much better shot in the short-term of making this work for outdoor navigation; correct?
BB: Probably. But I’ve decided that I’m starting with indoor navigation –
JM: Uh-huh.
BB: -- because it’s a lot more difficult, and if I can make an interface that’ll work for that –
JM: Right.
BB: -- then it’s going to be easy to just scale it out to the easier one. So yes. Outdoor navigation is going to be easier. In fact, there’s already an application – I can’t remember what it’s called off the top of my head, but it already does this with OpenStreetMap. And so they do that in a very good way for OpenStreetMap. But mine does indoor spaces, primarily.
And so – yeah. That’s what –
JM: So these audio modes, to me, are the most interesting. How are you – when you’re walking around the Marriott virtually, how does it represent what you’re passing or what’s near you?
BB: Yeah. So it’s still a little bit – we’re still figuring out exactly how we want to do it, but I’ve been taking a lot of inspiration from different audio games like Manamon. For example, the doors are represented with, like, an opening-closing door sound. And so if you can go through a doorway, that’s very reminiscent of what we’re trying to do. And if there’s, like, a hallway – I’m trying to figure out how I want to do that. But one way is to do it how Shades of Doom does it: with, like, a wind sound. And then, if there’s, like, an elevator, we could do it with, you know, the ding of the elevator, very similar to how Swamp does it.
Anyway, so I’ve got a lot of different ideas based off of different audio games that I’ve played, and I’ve played a lot, so – yeah. That’s – it’s – I’m still trying to figure out all the little, you know, details for each data set, and each building is a little bit different. So I’m still trying to scale it up. But I’ve done about five or six different maps with it, and so – yeah. So some of them can be – yeah. As I do more maps, it’ll get more generalized. So that’s –
JM: And you’re totally blind, so creating these maps too, are you able to do them yourself or –
BB: Yeah. So right now, I do it completely using data, and then I’m able to view that data in realtime. So that’s very useful. But I am also building an editor so that I can do what-you-see-is-what-you-get, kind of the -- without doing any code because coding is slow. For those who know, it’s not the fastest thing on earth, so if you can do it with, like, a list of menus and stuff like that, then it’s faster. And so that’s what I’m going to start building pretty soon, because I got a lot of people who want these maps. So I need to make them faster.
JM: I think you said you had a bit of an audio clip demo that you created; correct?
BB: Yes, I did. Uh-huh.
JM: Well, let’s put that here so people can hear a little more.
(Audio clip.)
BB: So I’m here in the playground. And you can hear on the left, there are some bells, and on the right, there are some kids playing. I’m going to go and walk forward.
Long ramp.
You can hear the footsteps on the ground, and as I entered the new object, the long ramp, you could hear the sounds change. I’m going to hit the left arrow and go to the left.
Ava’s Bridge.
And we entered a new object, Ava’s bridge. I’m going to keep going to the left.
Ground carousel.
Now, we’re in the ground carousel. You can hear the kids playing on the ground carousel around us. And I’m going to hit enter.
COMPUTERIZED VOICE: TT. Object menu: Bench.
BB: And I’m going to down-arrow till I hear the Laser Harp, which is where I want to go.
COMPUTERIZED VOICE: Ava’s bridge, long – creek bridge, rock benches, performance area, chess table, chess table, chess table, play house – climbing – climbing – rocking horse – mini bench, mini slide, cozy corner, walking -- exercise – bench – bench – bucket – disk – swing -- roller table -- Laser Harp.
BB: All right. I’m going to hit enter.
COMPUTERIZED VOICE: Options for Laser Harp: Go.
BB: And I could go to that object --
COMPUTERIZED VOICE: Listen.
BB: -- listen to the sound of the object --
COMPUTERIZED VOICE: Description.
BB: -- hear the description of the object --
COMPUTERIZED VOICE: Directions.
BB: -- get directions to the object. I’m going to get the direction to the Laser Harp.
COMPUTERIZED VOICE: Laser harp is very far off, ahead and to the right. The nearest point is at 27,44.5.
BB: Perfect.
COMPUTERIZED VOICE: Press D to repeat the directions.
BB: So now, we got the directions to the Laser Harp, and I’m going to go ahead and start walking forward.
Disk spinner.
COMPUTERIZED VOICE: Laser Harp is far off, ahead and to the right.
BB: All right.
Laser Harp.
So now, we’re in the Laser Harp. And I’m going to switch to the grid view.
COMPUTERIZED VOICE: Grid view. Laser Harp: 20 -- Laser Harp: 25,46. Playground walkway: 25,45.
BB: And as I move through this Laser Harp, I can hear --
COMPUTERIZED VOICE: Laser Harp: 25, forty--
BB: -- the Laser Harp. And I’m going to go back down.
COMPUTERIZED VOICE: Playground walkway, 25,45.
BB: And I’m going to move back to where we came.
COMPUTERIZED VOICE: Playground – playground walkway, 25,45. Playground walkway –
BB: I’m just hitting down arrow.
COMPUTERIZED VOICE: Playground – playground – disk spinner, 25,24. Disk – disk – ground – ground carousel. Ava’s Bridge: 30,17.
BB: And I can get a list of all the objects around me by hitting S.
COMPUTERIZED VOICE: Ava’s Bridge is nearby, behind and to the left. Long ramp is nearby, behind and to the right. Ground carousel is nearby to the left. Bench is nearby, behind and to the left. Chess table is nearby, ahead and to the right. Disk spinner is nearby, ahead and to the left. Kindness Wall is far off, ahead and to the right. Chess table is far off, ahead and to the right.
BB: So I have a list of all the objects around me.
(Audio clip ended.)
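The "nearby, behind and to the left" phrasing heard in that demo can be generated from nothing more than the player's position, heading, and the target's coordinates. Here is one plausible way to do it; the distance thresholds and wording are guesses, not Audiom's actual values.

```python
# Sketch of turning a position and heading into spoken relative directions.
# Thresholds and phrasing are illustrative guesses.
import math

def describe(player, heading_deg, target):
    """Phrase a target relative to the player, e.g. 'nearby, ahead and to the right'."""
    dx, dy = target[0] - player[0], target[1] - player[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))  # 0 degrees = north (+y)
    rel = math.radians(bearing - heading_deg)   # angle relative to heading
    forward, side = math.cos(rel), math.sin(rel)
    parts = []
    if forward > 0.38:
        parts.append("ahead")
    elif forward < -0.38:
        parts.append("behind")
    if side > 0.38:
        parts.append("to the right")
    elif side < -0.38:
        parts.append("to the left")
    range_word = "nearby" if distance < 15 else "far off"
    return f"{range_word}, {' and '.join(parts)}"

# Player at 25,45 facing north; the Laser Harp's nearest point is at 27,44.5.
print("Laser Harp is " + describe((25, 45), 0, (27, 44.5)) + ".")
```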
JM: This whole thing fascinates me. I mean, who -- you know – audio games; right? That’s one of those things that there’s so many people that have played many, many, many hours of audio games, including myself. And so it’s really an interface that might be more familiar than many for people; right?
BB: Yeah. Absolutely. But more importantly, interfaces in audio games have gone through rigorous testing by user testers and beta testers. And if an interface is terrible, the game is not going to succeed.
JM: Right.
BB: And so the whole idea of looking at audio games rather than doing some sort of interface that was dreamed up by a researcher and tested by a researcher was that all the testing’s been done already. We’ve got good interfaces in the most popular games, which is Swamp and A Hero’s Call and Tactical Battle, which are the most -- you know, some of the most popular games out there -- audio games out there. And the reason – one of the reasons why they’re really popular is because they’re very pleasant to play. And so that’s why – that’s one reason why they’re popular. And so my theory behind choosing audio games was that if I went to, like, an older person or somebody who’s never played an audio game, they’re going to find that interface much easier to use than, say, something that I create and, you know, just dream up out of thin air.
And so I’m getting ready to do a study, actually, to test this because the sighted people in my lab don’t believe that the interface is very intuitive, and they want proof. And so they – I need to show them that it’s actually true. So hopefully, it’s true. I haven’t tested it yet. But that’s the theory behind it: that blind people built these interfaces for blind people, and blind people have tested them very extensively, and they will be the easiest to understand and use.
JM: Absolutely. I mean, even the potential implications for O&M instruction as well, being able to let people explore places before they go --
BB: Yeah.
JM: -- visit them in person. There’s so many ways you can go with this, so it’s really – it has a lot of potential for sure.
BB: Absolutely.
JM: So where is the project at? You’re about to do some user testing? So where are you at in the release cycle for this?
BB: Yeah. So I’ve released one for the Magical Bridge Playground in Palo Alto, and I think they’re trying to put it on their website. It’s hard for sighted users to put these on their website because it’s literally just a button, and they look at that and say, well, what -- what is that? Just like we look at a graphic and say, what is that?
JM: -- what is that?
BB: Exactly. So it’s a little bit hard for them to understand that, yes. Within that button, there is a whole world to explore, so –
JM: So it’s a web app?
BB: Yeah. It’s a web component, yes. So it’s very similar to – if you embed a YouTube video into your webpage or if you embed a Google Map into your webpage, it’s exactly the same technique.
JM: Sure. Okay.
BB: So, I mean – now, I’m looking for different places to partner with to start making maps of, like, indoor spaces and outdoor spaces. So right now, I’m just at the point where I’m making more maps and I’m building the platform to scale it, you know, adding all the boring stuff like user roles and all that.
JM: Uh-huh.
BB: And –
JM: Awesome.
BB: Yeah. So that’s it.
JM: Very cool. And there’s even another – so there’s a third thing that you’re also working on that’s kind of related to all this, or –
BB: Yeah. So there’s a turn-by-turn navigation system that we’re working on as well that uses a mixture of computer vision and inertial odometry, which basically figures out how you’re stepping and where you’re stepping to tell exactly where you are in space. And so it’s what a lot of VR and AR applications use to determine positioning. But the problem with it is that when you start navigating a lot of different spaces, it becomes less accurate over time. And so what we’re using computer vision for is updating the location of the user in space every so often so that you remain very accurate. And so we got it down to about half a meter accurate.
And so now, we can navigate around Smith-Kettlewell very well. And so now, we’re going to scale it out to different spaces.
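In miniature, the fusion Brandon describes works like the sketch below: integrate step estimates (which drift), and whenever the camera recognizes a landmark with a known position on the map, such as an exit sign, pull the estimate back toward it. The step sizes, landmark position, and blend weight are all invented for illustration.

```python
# Toy sketch of inertial odometry corrected by computer-vision landmark fixes.
# Step sizes, the landmark position, and the blend weight are all made up.
import numpy as np

class Tracker:
    def __init__(self, start):
        self.pos = np.array(start, dtype=float)

    def step(self, step_vector):
        """Inertial odometry: integrate estimated steps; error accumulates."""
        self.pos += step_vector

    def landmark_fix(self, known_pos, weight=0.8):
        """Vision fix: blend toward a landmark's surveyed map position."""
        self.pos = (1 - weight) * self.pos + weight * np.asarray(known_pos, float)

tracker = Tracker((0.0, 0.0))
for _ in range(20):                      # twenty slightly misjudged steps
    tracker.step(np.array([0.7, 0.02]))  # true motion is (0.7, 0.0) per step
print(tracker.pos)                       # [14.   0.4]  -- drifted off the hallway
tracker.landmark_fix((14.0, 0.0))        # exit sign surveyed at (14, 0)
print(tracker.pos)                       # [14.   0.08] -- back under half a meter
```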
JM: Yeah. There’s been a lot of work with indoor navigation. A lot of different systems. But I guess the accuracy, to me, is the one thing that jumps out as why you feel this one has promise over some of the other things that are out there.
BB: Yeah. It’s definitely very, very accurate. And I think part of the reason is the type of data that we are using to figure out where you are in space. So right now, I think we’re using exit signs, and almost every single building has exit signs. So it’s a very broad, generalizable feature that most buildings have.
JM: Interesting. So the cost to implement – you don’t anticipate having to buy a whole bunch of beacons or other technology to make this work; right?
BB: No. No. You would just need to have some sort of basic map of the building, I believe, to have that. And so Access Explorers, actually, that’s what they’re doing. They’re building maps of buildings and then using that for these types of apps to use.
JM: Somebody basically runs into one of these buildings, uses software to scan it as an image; right? And turns it into a certain format, and then I –
BB: Something like that.
JM: That’s very simplified. I realize.
BB: Yeah. Yeah.
JM: We’re not a research podcast, but you know -- try to break it down and --
BB: Yeah. Yeah. And even if you have, like, a -- what do you call it? Like, some sort of map to help first responders and – or blueprints of a building from architects, you can use that as well, just as long as it’s in a data format, like, either pixels -- and we can figure out what’s going on through pixels or even polygons. Polygons would be better, but – yeah.
JM: As I said, yeah. We could spend all day. And I’m sure people will have a – some people will have very technical questions and –
BB: Yeah.
JM: -- want to learn a lot more. This is very interesting stuff, though. I definitely appreciate you coming on and talking about it. Go ahead and let people know how they can get in contact with you or if they want to become part of the beta one more time, or –
BB: Yeah. Absolutely.
JM: -- have ideas for the stuff.
BB: Yeah. Absolutely. So if you’re blind and you want to be a postdoc at Smith-Kettlewell, please apply. Or if you know anybody who wants to do a postdoc, please apply because we need more blind people there. Right now, I think we have two, and that’s not good. So we need more blind people who want to build cool apps like this.
So you can get in contact with us by either going to ski.org and going to the Apply for Postdocs link. Or you can Email me at Brandon.biggs@ski.org. Or you can get on the CamIO beta by Emailing camio@ski.org. So it’s camio@ski.org.
JM: Thank you so much. I guess we can give back this Platinum 4 room now.
BB: Not that they’re going to use it for anything, but –
JM: That’s true. Thank you so much, Brandon.
BB: Yeah. Thank you.
For more exclusive audio coverage, visit blindbargains.com or download the Blind Bargains app for your iOS or Android device. Blind Bargains audio coverage is presented by the A T Guys, online at atguys.com.
This has been another Blind Bargains audio podcast. Visit blindbargains.com for the latest deals, news, and exclusive content. This podcast may not be retransmitted, sold, or reproduced without the express written permission of A T Guys.
Copyright 2020.


Listen to the File


File size: 44.5MB
Length: 31:12

Check out our audio index for more exclusive content
Blind Bargains Audio RSS Feed

This content is the property of Blind Bargains and may not be redistributed without permission. If you wish to link to this content, please do not link to the audio files directly.

Category: Shows


Joe Steinkamp is no stranger to the world of technology, having been a user of video magnification and blindness related electronic devices since 1979. Joe has worked in radio, retail management and Vocational Rehabilitation for blind and low vision individuals in Texas. He has been writing about the A.T. Industry for 15 years and podcasting about it for almost a decade.


Copyright 2006-2024, A T Guys, LLC.