Blind Bargains

#CSUNATC22 Audio: Envision AI's Glasses Are A New Ally For Viewing The World Around You


Shelly brought us our first look at the Envision Glasses almost two years ago at CSUNATC20. A lot has happened since the team debuted that initial pair, so J.J. caught up with Karthik Kannan, Cofounder and Chief Engineer of Envision AI, to find out more. In this interview you will learn about the expansion of the platform, how the new Envision Ally app works, and what upcoming features are on the horizon for the Glasses and the Envision AI app. To learn more about the products mentioned in this interview, including the ability to schedule a virtual demonstration, visit the Envision AI website.

Blind Bargains Virtual Exhibit Hall coverage is brought to you by AFB AccessWorld.

For the latest news and accessibility information on mainstream and access technology, Apple, Google, Microsoft, and Amazon offerings, access technology book reviews and mobile apps, and how they can enhance entertainment, education, and employment, log on to AccessWorld, the American Foundation for the Blind's free, monthly, online technology magazine. Visit https://www.afb.org/aw.

Transcript

We strive to provide an accurate transcription, though errors may occur.

Transcribed by Grecia Ramirez

Directly, and actually in person, from Anaheim, it’s blindbargains.com coverage of CSUN 2022. Brought to you by AFB AccessWorld.
For the latest news and accessibility information on mainstream and access technology; Apple, Google, Microsoft, and Amazon offerings; access technology book reviews and mobile apps; and how they can enhance entertainment, education, and employment, log onto AccessWorld, the American Foundation for the Blind’s free online technology magazine: www.afb.org/aw.
Now, here’s J.J. Meddaugh.
J.J. Meddaugh: CSUN 2022 in the Blind Bargains room with Karthik Kannan, CEO and cofounder of Envision, here to talk about new features with the Envision glasses.
Welcome back to the podcast.
KARTHIK KANNAN: Hey. Thanks so much, Jason. It’s really, really nice to be back at CSUN, here again after two straight years.
JM: Yes. Absolutely. It’s really good to be back and good to see new features and a lot of updates with the glasses. So why don’t you go ahead and bring people up to speed a little bit on what the glasses are, and then we’ll get into what some of the new features are.
KK: Sure. Sure. Yeah. And, you know, it’s been two years since we launched the glasses here at CSUN, and, you know, we’ve been hard at work on a lot of things. But for folks who are not aware of what the Envision glasses are, they’re basically smart glasses that can help you read text from pretty much any surface in over 60 different languages, and can help you recognize faces of friends and family members. You can even make video calls directly from the glasses to a friend or a family member if, at any point in time, you want some help and you want some trusted member of your friend or family group to just be able to help you with it. So Envision offers a lot more, and we’re going to be diving into all of that today, I hope.
JM: Great. So one of the big things that is new that I noticed from the press release is a lot of new OCR features, both online and offline. So tell us about some of the new features in that area.
KK: Sure. Yeah. I mean, when we launched the glasses, you know, the OCR, our – you know, being able to read text was one of the most asked-for features. It is the most used feature on the glasses as well. So we worked quite hard to improve both the online and offline text-recognition capabilities of the glasses. So what we did on the offline side was – so we have this feature called Instant Text on the glasses, where you can read short pieces of text instantly. We increased the number of languages that are now supported offline. So we, earlier, used to support, like, Latin script-based languages, so, for example, English or Dutch or German. We also now support Japanese, Hindi, Chinese, Korean. So that’s been one huge update.
The second thing that we’ve also added is that Instant Text can now read regardless of the orientation of the image. So you could be holding the image upside down or, you know, right-side-up or whatever it is, and Envision is able to read the text out to you properly. And we’ve also improved the speed of the Instant Text feature dramatically as well. So we have actually increased the speed by about 40 percent, so you’d be able to, you know, read text a lot faster with the new Instant Text than before.
We also put a significant amount of work into the online side of things. The online text-recognition capabilities are mostly used for scanning documents and books and things like that. So we have two big features that we have come out with. The first one is document guidance. So a lot of users, when they use the Envision glasses, or, you know, any other kind of wearable, the common feedback that we got from them was they kind of sometimes find it hard to align the glasses, or the camera of the glasses, with the document or the book that they’re trying to read, and they’d like to have some guidance on how to move the document in order to be able to have the glasses fully capture the document in one go. And that’s what we’ve introduced. So the glasses will now give you audio feedback on how to move the document or your head, if you’re, you know, more comfortable with that. So you can capture the entire image. The second thing that users have been asking for a lot is being able to understand the layout of the document. So if you’re trying to read a document, are there any headings in the document? Can you, like, skip through the headings in the document? Are you – if there is, like – if you’re trying to read a newspaper article or a magazine article, and if there are columns in the document, are the glasses going to be able to, like, understand the columns and be able to read them out? So that’s the big feature that we have introduced. So with the new addition – new update to the glasses, users will be able to also, you know, get the layout of the document.
And later this year, what we’re planning to do is we’re also planning to add the ability to recognize tables with the glasses. So that is one thing. So if you’re trying to read, like, a bank statement or a menu card or anything that has tables in them, the glasses would be able to do that. We’re going to come out with this update a couple of months down the line. And -- yeah. So these are the two big things that we’ve added on the OCR or text-recognition side of things.
JM: You brought some demo material with you.
KK: Yeah.
JM: Could we get a demo of how – see how fast the new features are or –
KK: Sure thing. So what I have with me right now is, like, a document. It’s just –
JM: Okay.
KK: -- a document that talks about the features of the Envision glasses. So I’m – what I’m going to first demonstrate is the Document Guidance feature. And what users will now hear is, as soon as I turn on the document scanning feature, it tells me how to move the document and as soon as I go ahead and have the entire document in frame, it’s going to go ahead and capture the document automatically for me. And as soon as it captures the document, it’s going to be able to go ahead and – like, it sends the document over to our secure servers, it will read the – we extract the information from the document, the layout and so on, and then read them out to you.
So I’ve got a beta with me here, just to give a fair bit of warning. So if it does throw in an occasional error message, you guys will have to bear with us a little bit on this one.
JM: Okay.
KK: But – yeah. When – by the time this podcast goes out, we’re going to be having this into production, so – yeah. I’m just going to go ahead and turn them on right now.
COMPUTERIZED VOICE: Home.
KK: Yeah. So I’m at home right now. I’m just going to navigate to Scan Text.
JM: Um-hmm.
COMPUTERIZED VOICE: Read.
KK: And then start it. And then as soon as I start it, it’s going to give me the audio feedback. And as soon as I have the audio feedback, I’ll be able to move the document around. And when the document is entirely in frame, the glasses basically captures the document automatically and then extracts information. From there, it starts speaking it out. So that’s what I’m going to do right now.
COMPUTERIZED VOICE: Read. Scan Text. Move document up or your head down. (Small click. Slight beeps.)
KK: So the document is basically processing. Right now, it’s taken a picture.
JM: Okay.
COMPUTERIZED VOICE: We were not able to capture your document. Make sure to not block any borders –
JM: You want to redo it? You want to redo this.
COMPUTERIZED VOICE: -- of the document on arm-length distance from your right eye. To practice capturing a document, open the document guidance practice – reader. The Envision glasses are a wearable assistive device that allows users to independently access the world around them in a way that –
JM: Oh.
COMPUTERIZED VOICE: -- is convenient, hands-free, and discreet. Heading.
JM: So what happened there? I thought we were about to retake this, and then you were able to –
COMPUTERIZED VOICE: -- dense and complex pieces of text.
KK: Yeah.
COMPUTERIZED VOICE: This feature – Instant Text reads short pieces of text around you -- Batch Scan: Read multiple pages of text in one sitting. This feature allows you to scan multiple images – reader. To pause or play the text, do a one-finger single tap. Scroll through the text by doing a one-finger swipe forward or back.
KK: So I’m just going to quickly scroll –
COMPUTERIZED VOICE: -- finger single-tap to export the text to the Envision app.
KK: So I’m just going to quickly --
COMPUTERIZED VOICE: Envision glasses feature September 2021. One of four.
KK: Yeah. I’m just going to – you know, maybe scroll through the headings again.
JM: Okay.
KK: So, for example, if I want to get to the read features, which is a heading that it detected, I just, you know, scroll –
COMPUTERIZED VOICE: Envision – back – in – scan to – read features: Scan text reads – Instant Text reads.
KK: So it’s –
COMPUTERIZED VOICE: Batch Scan: Read multiple -- home.
JM: -- some text that was larger or more bold or something, and it realized it was a heading.
KK: Yeah.
JM: And then allowed you to – you can move be-- forward and back through those headings.
KK: Yeah. Exactly. So it basically picks up headings based on – primarily, basically, how they look, but also what that particular heading is in relation to the other text on the page; right? For example, you might have headings that are not really bigger than other text on the page, but they might be bold; right? So the AI is able to identify the headings based on, like, a number of factors, and then it just, you know, reads out all the headings that are there in front of you, and you’re able to, like, easily jump through the headings that way.
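[Editor's note: To make the layout idea above concrete, here is a minimal sketch, in TypeScript, of the kind of heuristic Karthik describes: treating an OCR line as a heading when it stands out from the surrounding body text by size or boldness. This is purely illustrative and is not Envision's actual model; the OcrLine shape and the thresholds are assumptions.]

// Illustrative only: a toy heading heuristic in the spirit of the discussion above.
// Real products use learned layout models; this just combines a couple of visual cues.
interface OcrLine {
  text: string;
  heightPx: number; // average glyph height reported by a hypothetical OCR engine
  isBold: boolean;  // whether that engine flagged the line as bold
}

function findHeadings(lines: OcrLine[]): OcrLine[] {
  // Use the median line height on the page as the typical body-text size.
  const heights = lines.map((l) => l.heightPx).sort((a, b) => a - b);
  const bodyHeight = heights[Math.floor(heights.length / 2)] ?? 0;

  return lines.filter((line) => {
    const muchLarger = line.heightPx > bodyHeight * 1.3;
    const boldAndShort = line.isBold && line.text.split(" ").length <= 10;
    // A heading can stand out by size, or by boldness even at body-text size.
    return muchLarger || boldAndShort;
  });
}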
JM: So you have online recognition, and you have offline recognition, and you have documents, and you have quick text. What is the best use case for these, and how easy is it to switch between those modes?
KK: Oh. It’s really easy to switch between those modes. If you want to read short pieces of text really quickly – let’s say you’re at a train station. You want to read what’s on the display at the train station, I would recommend using the offline Instant Text. Or if you get mail at home, and you want to sort through the mail really quickly, you could use the Instant Text feature on the glasses to do that.
If you’re, for example, at, like, let’s say, if you’re, you know, at work, and you want to read documents that are there, like printed documents on a table, you could use the Scan Text feature to do that. And we’ve basically built a really simple menu system, so if you have a list of favorite features that you use on a regular basis, you could just go ahead and put all of them in the Favorites menu, and then, with just two swipes, get to them and use them; right? So it’s really easy to switch between these features, and each of these features has, like, a recommended use. Like, Instant Text is more for short pieces of text, really quick stuff you want to read, and Scan Text is for when you have a book in your hand or if you have, like, a magazine article, and you just want to scan that and read that printed material. So you could use Scan Text to do that. And you also have a Batch Scan mode with the glasses, where you’ll be able to go ahead and scan multiple documents at once in one go. So if you’re trying to read, for example, a book, or if you’re trying to read, like, a contract or a document or even a letter and want to read both sides, you could just use Batch Scan to really quickly scan multiple documents – multiple pages – at once and then have the glasses read them out to you in sequential order.
JM: You mentioned that the quick text is 40 percent, or up to 40 percent, faster. That’s why I’m kind of reaching around to see if we just have any random pieces of print that –
KK: Oh. Yeah.
JM: That, I think, is Braille. What – is that a –
KK: So I do have, like, a – like, for example, a recipe for a brownie in front of me. So that’s what –
JM: Yeah. We could – we could use one of yours. I was also trying to see if we could find something else that you didn’t bring, you know.
KK: Oh. Okay. Yeah. Sure. I mean –
JM: Go ahead and do that first, and then –
KK: Sure thing.
JM: -- we can pull something out.
KK: So if I want to read – like, if I have this document in front of me – or wait. Let me go ahead and get my CSUN presenter badge. That’s something that –
JM: Okay.
KK: -- you know. So let’s say –
JM: -- you haven’t read.
KK: -- you know, you’re at CSUN, and you want to read what’s on the – on the badge of a presenter. So you could just go ahead and turn on Instant Text, like what I’m going to do right now. And just, you know, roughly look at the direction of the text itself, and then it’s going to be able to speak out everything that’s on there; right? And it’s really quick that way. Let me show you.
JM: Okay.
COMPUTERIZED VOICE: Home. Read. Scan Text. Instant Text. Thirty-seventh annual CSUN assistive technology conference, PRS37, PCA, cc, Envision, founder and CEO. Presenter.
JM: Wow. It pretty much got everything.
COMPUTERIZED VOICE: Cc. Presenter. Save the dates. 38th annual CSUN assistive technology conference, March 13-17, 2023, Anaheim Marriott. Call for presentations, pre-conference workshops, August 18-September 20 – Instant Text.
KK: So that’s how quick it is. I just turned it on right now, and the moment I turned it on, it just started to read everything that was there in front of me.
JM: You know, what’s interesting about that too is a blind person just might not realize how much information is packed into a name badge, and especially if you’re wearing something, it’d be nice to know what exactly you are wearing. And that actually had quite a bit of information about –
KK: Yeah.
JM: -- you and –
KK: And, in fact, like, what I did when I was trying to read the presenter badge the second time around is I was holding it upside down, you know. And I was trying to read it upside down. And, like, you know, the Instant Text – like I mentioned, like the first version that we had just worked with – you had to hold it in the right orientation. If you didn’t, then it wouldn’t really be able to pick it up offline. But this time, I hold – I just, you know, held it in an incorrect way, and it was able to pick it up very easily.
JM: So it would get upside down and sideways?
KK: Yes. Upside down and sideways.
JM: What about at angles?
KK: At angles as well. Correct. So –
JM: Okay.
KK: -- it works in pretty much any orientation completely offline.
JM: Is there a way with Quick Text or documents to have it tell you the orientation? Say you have a job where you need to sort papers and have them all facing the same way.
KK: Hmm. That’s a good point. So we are working on something like that, but we don’t have it in this version right now. But yeah. Largely, what we’ve noticed with people is that they’re not super interested in knowing what the orientation is, as long as they’re just able to read it correctly. But we could, in the future, also add an option to say, “This is the orientation,” so if they want to set it up correctly, they can.
JM: Okay. So a lot going on with OCR, but there are other features that are coming, or are already available, or, I guess in the next version as well, including an Ally feature. Is that new, or is that coming in the next version?
KK: No. So it’s – Ally feature’s always been there since the first –
JM: Okay.
KK: -- version, but we’ve really improved the Ally feature quite a bit. So the improvements are, one, you know, now you’d be able to use Ally even more, you know -- like, you’d be able to go ahead and use it in situations where your network might not be perfect.
JM: Let’s back up for a second. So Ally feature. Go ahead and explain that for people.
KK: Sure. So Ally is basically a feature on the glasses where you can make video calls directly from the glasses to a friend or family member, and they can see everything from your perspective. So since the glasses come with a camera, a speaker, and a microphone, they’ll be able to see and hear what you see and hear – like, what the camera sees, and, you know, what you’re speaking to them. And you’ll be able to hear them from the other side. So it's like making a video call, but then the only thing is you don’t have to point the phone around or -- you know, and stuff like that. They would just see stuff directly from the glasses.
So this is actually -- after the text recognition capability in the glasses, this is the most used feature as well. And, you know, ever since we launched the glasses at CSUN in March 2020, you know, we’ve had more than 30 thousand minutes of Ally calls being made using the Envision glasses. So people use it for different reasons; right? Some people use it to pick the right outfit, if they’re going out on a date, or, you know, some people might need help at work, so they call their colleagues to help them out with stuff. So it’s really convenient to make a video call without having – and having your hands free so you can do stuff, and it’s also convenient for the other person because they can see more because the glasses have a wider field of view than a regular smartphone camera; right?
JM: How does the other person receive the call? Do they download an app?
KK: Yes. So they have a – we’ve built a special app. It’s called the Envision Ally app. It’s completely free. It’s available on both iOS and Android. We’re also looking to bring that to more platforms this year. But yes. It’s an app they have to download. And the moment you make a call from the glasses, they get a notification on their phone, they just open the notification, and then they’re able to see a video stream of what your glasses see. Yeah.
JM: So you have one-way video and two-way audio.
KK: Yes. You have one-way video and two-way audio. So it’s one-way video from the glasses to your, you know, to your Ally, and then it’s two-way audio so you can hear what they’re saying and they can hear what you’re saying.
JM: And I assume there’s some sort of process that you would connect to an Ally or approve certain people, or they would approve receiving calls from you.
KK: That’s correct. So we do have, like a way to really – a really easy way to add Allies using the Envision app. So, you know, you can just go ahead and directly add them, and then once you add them, they’re notified they’ve been added as an Ally for so-and-so, and, you know, you can go ahead and reach out to them any time you need help from them or someone else.
JM: There’s more in the new version, and things are on the road as well, including some third-party integrations, I guess, starting with currency and perhaps more to come. Tell me about that.
KK: Yes. So the main idea from the beginning with the Envision glasses was to always have this as a platform. So we wanted to make this into, like, an assistive tech tool, which is basically able to bring in other assistive technology apps that people really love and use. The first one that we’ve had since we launched the Envision platform was the Cash Reader app. So it’s an app that I think a lot of people in the community love and use. And so when you get the Envision glasses, you also get, like, the Cash Reader app on the Envision glasses, through which you can recognize currency, you know, over a hundred different currencies. We’re also in talks – you know, I mean, it’s a little bit early to, you know, really reveal them, but we’re in talks with some of the big names in this assistive tech space. You know, some really popular names, we’re in talks with. So we can also bring them on to the glasses this year. Hopefully, I’ll be back on the podcast to give you, like, a breaking exclusive on this, hopefully with the other parties as well. But, yes. Our main aim is to ensure that the glasses can also have other apps that people love and use on them in a way that is really easy for them to access.
So we’re starting with the Cash Reader app, and we’re going to keep adding more apps. And if there are listeners out there, and if you guys want a particular app that you love and use on your smartphone, on the Envision glasses, do let us know about it, and – yeah.
JM: Yeah. I can certainly envision – no pun intended, although I’m sure you’ve heard that a lot – a lot of different categories that would make sense in this space perhaps. Navigation apps, visual assistants, maybe there’s others. Are there certain categories that you think would work best?
KK: Yeah. Like, visual assistance is one category that we’re super, super excited about. Wink, wink, nudge, nudge. I think we’re definitely excited about visual assistants and all the big names in that category. We’re also talking to a few of the navigation apps at the moment. So navigation and visual assistants are the two big ones that we have in the pipeline, but – yeah. We’re open to more suggestions. You know, we really want to know what the community loves and bring them on to the glasses.
JM: Do you anticipate that all of the third-party apps would be free, aside from -- perhaps, if it were Aira, it would be a subscription, of course. Or do you anticipate there might be some that would be a paid add-on?
KK: It might be the case with some apps that, you know – if they have a different business model than ours, then, yes. You know, or if they have, like, a paid business model, we’re going to respect that paid business model, and we’re going to bring them on to the glasses and then help them keep that same business model. We’re not going to, you know, make it free just because it’s on the glasses. But I suspect that once the Envision platform becomes more popular, you could have a lot of free apps, completely free apps, coming on to the platform as well. But for now, some of the apps that we’re in talks with, yeah. They do work on a subscription model, so we’re going to continue to keep respecting that. But their subscribers should be able to very easily – they wouldn’t have to pay extra just because it’s on the Envision glasses. So if they’re users of the Envision glasses, they basically get those apps on to the glasses itself without any additional cost, but they might have to pay a subscription to use that service. But they can just, like, take the regular subscription they have already and just use it on the glasses.
JM: Great. So people have downloaded and been using the Envision apps on iOS and Android, of course. There were a couple of really big sales early on to try to build up users, and you have a lot of people on those platforms. If you buy the glasses, it’s my understanding you can get the apps, and you get them for free as well. What do you think the advantage is of using the glasses versus people that use primarily the apps on their smartphone?
KK: I think it’s – that’s something that when people actually use the glasses, they realize that it’s a very different user experience than having to use your phone. Because with the phone, you always need to hold your phone in one hand and then you have a cane or a guide dog, and you have to keep pointing your phone around. Some people are comfortable with it, but I suspect, you know, a lot of users who are not very tech savvy aren’t really comfortable with that. And the second thing is that on the glasses, you have really exclusive features like the Envision Ally, which only makes sense on glasses because, again, we can’t put video calling on a smartphone, and we might in the future as well, but when you think about it, the whole idea of just wearing the glasses and just moving your head around and having the other person automatically go ahead and see what you’re seeing, or what the camera sees, I think that makes a huge difference. So I would say it’s predominantly the user experience and the fact that using wearables, we can now have very different or unique experiences that we have not been able to have with smartphones; right? I think that’s the big thing. And, of course, we are having some features that are exclusive to the Envision glasses. Like, for example, if we do onboard someone like Aira, it’s not going to be coming on the Envision app on your phone, but it’s going to be on the Envision glasses because, again, wearables and visual assistants, that’s, like, a category that goes hand in hand.
JM: Absolutely. It makes a lot of sense. Any other new features that we didn’t mention or that are coming in the near future that we should talk about?
KK: No. We have a huge list of features this year. Like, we’ve been working really hard on the AI side of things, so you can expect some big, big stuff that have been in the works for the past year or two. So – yeah. This year is definitely going to be the biggest year for Envision, and I hope everyone tries out the Envision app. Also the Envision glasses is something that, you know, you can – what we’ve done – what we’ve started doing recently is we started doing virtual demonstrations of the Envision glasses, so anyone listening to this podcast anywhere in the world can request a free demonstration of the Envision glasses, and then we’d be happy to give you a virtual demonstration. Or we can also – since the world is starting to open up a little bit, we can also arrange for physical demonstrations in your city or in your country, depending on whether we have a distributor there. For example, here in the U.S., we have a lot of distributors in almost all major areas, so we might be able to go ahead and organize an in-person demo. If not, we’d be more than happy to give you a virtual demonstration, so I will definitely urge everyone listening to the podcast to go ahead and get your free virtual demonstration with the glasses.
JM: Great. Before we end, I do want to talk a little bit about how you are handling privacy since you are collecting so much information, or using so much information, both online and offline. You are doing online text-recognition, you’re doing video chat. How much of that data is saved and used, and how do you use that data? And how do customers maintain privacy?
KK: Sure. So I think one thing is that we don’t store user data without explicit consent. And when we say, “explicit consent,” it is definitely, like, users – if, for example, users are getting in touch with us saying they’re having an issue, we request them to share the data, for example, the document that they’re trying to scan and things like that. So that’s the only time where we actually capture and store user data. So whenever we do store user data, it’s always with consent.
And the second thing is that all of the AI features that are there, both online and offline, they’re stateless AI features. Stateless in the sense – what actually happens is the AI never really stores the data once it’s done processing. So you send an image to our server, the server processes it, and then once the server is done processing it, it’s out and it’s not stored on the servers at all; right? So that way, we don’t really need the user data, or we don’t really store user data anywhere. And even if we do store it, it’s only with explicit consent. So that’s one thing.
So consent is the main factor for us here, and we’re always going to be putting consent first because I know a lot of people trust us with, like, information that they’re reading. And so it’s our responsibility to ensure that – yeah. Whenever we do collect user data, it’s with consent. The second thing is that our AI really improves a lot, not by, like, you know, using the user data, per se, but we also rely a lot on open data sets. We also rely on other techniques to go ahead and ensure that whatever data goes through the AI, it does help improve the AI. But we don’t necessarily need to look into the data to improve the AI itself; right?
So there are a lot of techniques that are coming up in today’s AI research that allow an AI model to really improve itself without having to necessarily store user data for long periods of time to be able to learn from it; right?
JM: Right.
KK: So these are some of the techniques that we use.
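[Editor's note: As an illustration of the "stateless" processing described above, here is a minimal sketch of a server endpoint that recognizes text from an uploaded image entirely in memory and keeps nothing afterward. It uses Express only as generic HTTP plumbing; the runOcr helper is a hypothetical stand-in for a recognition engine, and none of this is Envision's actual backend.]

// Illustrative only: a stateless recognition endpoint. The image lives in memory
// for the duration of the request and is never written to disk or a database.
import express from "express";

// Hypothetical stand-in for an OCR engine; returns recognized text.
async function runOcr(image: Buffer): Promise<string> {
  return `(${image.length} bytes of image data would be recognized here)`;
}

const app = express();
app.use(express.raw({ type: "image/*", limit: "10mb" }));

app.post("/recognize", async (req, res) => {
  const image = req.body as Buffer; // held only in memory
  const text = await runOcr(image); // process
  res.json({ text });               // reply; the buffer is then garbage-collected
});

app.listen(3000);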
JM: And what about the video chats? Are those stored in any way?
KK: Oh. No. Absolutely not. It basically – it’s a peer-to-peer connection --
JM: Yeah.
KK: -- which ensures that – yeah. Like, whatever stuff is coming out of your glasses goes directly into the phone or the iPad of the Ally, and whatever audio comes out from the Ally is then streamed directly to your glasses. So usually, with video chat applications, there is, like, an intermediate server that it goes to, and then the server, you know, relays the data back to the participant in the room. Here, what we have done is we have built a completely peer-to-peer system where your device itself acts as its own media server. So whatever stuff you get – whatever happens on the glasses doesn’t go through a third-party server like Envision’s. It just directly goes to the participant’s device, and then audio from the participant’s device – the Ally’s device – comes over to the glasses. And we really don’t have any need for storing video chat data at all, you know. It doesn’t really help improve our AI in any way. And so we’re never going to be having to store video chats. Yeah.
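[Editor's note: To illustrate the call topology Karthik describes, one-way video plus two-way audio over a direct peer-to-peer link, here is a minimal sketch using the standard browser WebRTC API in TypeScript. It is not Envision's code; the sendSignal callback is a hypothetical stand-in for whatever signaling channel the apps use to exchange session descriptions and ICE candidates.]

// Illustrative only: wearer-side setup for a one-way-video, two-way-audio call.
// Once ICE finds a route, media flows peer to peer; only small signaling
// messages (offer/answer/candidates) pass through a server.
async function startWearerCall(sendSignal: (msg: string) => void): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

  // Send the wearer's camera and microphone to the ally.
  const local = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  local.getTracks().forEach((track) => pc.addTrack(track, local));

  // Expect only audio back from the ally; play it as it arrives.
  pc.ontrack = (event) => {
    if (event.track.kind === "audio") {
      const player = new Audio();
      player.srcObject = event.streams[0];
      void player.play();
    }
  };

  // Relay ICE candidates and the offer through the signaling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendSignal(JSON.stringify(event.candidate));
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal(JSON.stringify(pc.localDescription));

  return pc;
}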
JM: Great. Before we go, of course, let’s talk about the pricing of the glasses currently and – if existing users – do they have to pay for this update, or is it free?
KK: No. The update that’s going to be coming on the glasses, that’s entirely free for everybody. So they don’t really have to go ahead and pay additionally for updates to the glasses. And for users who want to make a purchase, or listeners who want to make a purchase of the Envision glasses, they can go ahead and make a purchase of the glasses today for $3,500. You can go on our website, letsenvision.com and make a purchase of the glasses. And if you want to get a demonstration of the glasses, either in person or virtually, you can go to our website, letsenvision.com as well, and then you can go ahead and request for a demo, where you just have to fill out a really simple form where you put a little bit information about you so we can connect you either to a local distributor who can do an in-person demo, or we can arrange for a virtual demonstration. So we would love for you guys to try out the glasses virtually or in person, so please do go ahead and check us out at letsenvision.com, and you can make a purchase of the glasses there as well. So thank you.
JM: Thanks, Karthik. So great to learn about everything new at Envision and great to be back at CSUN.
KK: Yeah. Thanks so much. It’s been two years, and it’s super nostalgic because we launched the glasses here, and at that time, it was still in beta. And it’s great to see – it’s come really far in the last couple of years. We now have, you know, hundreds of users of the glasses across the world and also here in the U.S. And really great to be back here. And we’ve been having a really enthusiastic audience so far, so – yeah.
JM: Maybe you could take a trip to San Diego.
KK: Oh. Yeah. Yeah. We are actually planning a trip, you know, across the U.S. So once CSUN is over, we’re going to be moving – going to San Jose, San Francisco, meeting up with the – you know, our community over there. We’re also going to be traveling, you know, a little bit, you know, throughout the United States all the way to New York to meet with different distributors in different regions, local communities, lighthouses, veterans’ associations. So we’re going to be doing a little road trip after CSUN, so – yeah.
JM: Sounds great. Thanks so much.
KK: Thanks so much, Jason.
For more exclusive coverage, visit blindbargains.com or download the Blind Bargains app for your iOS or Android device.
Blind Bargains audio coverage is presented by the A T Guys, online at atguys.com.
This has been another Blind Bargains audio podcast. Visit blindbargains.com for the latest deals, news, and exclusive content.
This podcast may not be retransmitted, sold, or reproduced without the express written permission of A T Guys.
Copyright 2022.

KK: I don’t know what’s going wrong with my 5G. It’s really horrible today.

Listen to the File


File size: 19.5MB
Length: 27:23

Check out our audio index for more exclusive content
Blind Bargains Audio RSS Feed

This content is the property of Blind Bargains and may not be redistributed without permission. If you wish to link to this content, please do not link to the audio files directly.

Category: Shows


Joe Steinkamp is no stranger to the world of technology, having been a user of video magnification and blindness-related electronic devices since 1979. Joe has worked in radio, retail management, and Vocational Rehabilitation for blind and low-vision individuals in Texas. He has been writing about the A.T. Industry for 15 years and podcasting about it for almost a decade.


Copyright 2006-2024, A T Guys, LLC.