On Tech & Vision Podcast

New Approaches in Access: Smart Tools for Indoor Navigation and Information Transfer

Artifacts from Blackbeard’s sunken pirate ship are on display at the North Carolina Maritime Museum in Beaufort, North Carolina. But now they are also accessible to visitors who are blind, thanks to the efforts of Peter Cromley, who spearheads the Beaufort Blind Project. In this episode, we ask: how can new technology help make sites like these as accessible to people who are blind as they are to sighted people? We profile three companies pairing new technologies with smartphone capabilities to make strides in indoor navigation, orientation, and information transfer. We speak with Idan Meir, co-founder of RightHear, which uses Apple’s iBeacon technology to make visual signage dynamic and accessible via audio descriptions. We check in with Javier Pita, CEO of NaviLens, whose QR-code technology we profiled in our first season, to see what his team has been developing in the last two years. Rather than iBeacons or QR codes, GoodMaps uses LiDAR and geocoding to map the interior of a space; we speak with Mike May, its Chief Evangelist. Thanks to Peter Cromley, the North Carolina Maritime Museum is fully outfitted with GoodMaps, and will soon have NaviLens as well. As the prices of these tools come down, the key will be getting them into all the buildings, organizations, and sites of information transfer that people who are blind need to access – which is all of them.

Podcast Transcription

Roberts: Ever wonder what happened to the pirate Blackbeard?

Cromley: I think it was like ’95. They discovered the ship, and of course, it took them about 10 years to bring everything up.

Roberts: For a long time, people thought Blackbeard wrecked the Queen Anne’s Revenge off the coast of Beaufort, North Carolina in 1718 when the town was just nine years old.

Cromley: All the literature pointed to a lot of things that kind of indicated where it might be, and, of course, once it was found, how do you really verify? Well, pirates being pirates, they don’t like to put the names of their vessels on their vessels. So, there is no plate or any information that has come up saying that this might have been Queen Anne’s Revenge. But that’s where the historical science comes into play. In the cannons that were recovered, the cannon fodder was actually pages out of a book, and the museum was able, I think it was down in Texas, they found the book. They have a copy of the 300-year-old book under glass, and the same page that came out of the cannon as cannon fodder.

So, they were able to date it. It is the right date. You go back and start looking at what was found, and piece it all together.

Roberts: In 2011, after they examined all the weapons and loot the divers had surfaced, experts confirmed that the wreck off the coast of North Carolina was in fact Queen Anne’s Revenge.

Cromley: So all the artifacts that came in, all of that has come to the Beaufort Maritime Museum. My name is Peter Cromley and I’m a retired physical scientist from NOAA, the National Oceanic and Atmospheric Administration. I live in the town of Beaufort, NC. We’re on the coast here. It’s a unique town, a 300-plus-year-old town, and we’re very fortunate to have the North Carolina Maritime Museum here in this town.

The key to this museum is that it tries to host all the maritime history of the North Carolina coast.

Roberts: Until they were surfaced, the mysteries of Blackbeard’s ship languished underwater for nearly 300 years. Similarly, so much of our world remains inaccessible to people who are blind. Peter’s trying to change that.

Cromley: I came up with what I’m calling the Beaufort Blind Project to bring accessibility to the town for the blind. I made a relationship with the American Printing House for the Blind, about five years ago now, and then went to the museum and proposed bringing an interactive experience for the blind. And while they were excited about that, they realized that’s going to be hard to do. It might be costly, and therefore it was like, well, that would be great, you know, but we don’t get that many blind people through here. And I say, well, it’s going to be coming from a blind perspective philosophy. So, not only will it work for me…

Roberts: Peter lost his vision in 2013.

Cromley: …it will work for anyone, totally blind to fully sighted, to give them an interactive experience. The museum is really run by volunteer staff in a lot of ways. So, there is not staff to just take people through the museum and give them the stories and the artifacts that are there.

Roberts: Surfacing the artifacts on Blackbeard’s ship has offered a wealth of new knowledge to maritime experts and enthusiasts, and making the Blackbeard exhibits in the North Carolina Maritime Museum accessible to visitors who are blind shares that wealth.

I’m Dr. Cal Roberts, and this is On Tech & Vision. Until now, people who are blind have not had reliable solutions to help them navigate indoor spaces or interact with venues in the same way sighted people can. But now, thanks to the cell phones we all carry in our pockets, game-changing technologies are right around the corner. In some cases, they may have already arrived. To me, the big idea is: how can you substitute the cues that a sighted person gets about their environment with other cues that are delivered either by audio or by touch?

This episode is unique in that we feature three developers working toward this goal, each from a different part of the world. Each developer has chosen a different source of information to process into navigation cues. Clearly there’s not just one way of doing this. Each approach is novel, technologically advanced, and innovative. This should be fun.

Meir: Specifically, here in the US, there are over 100 million braille signs wherever you go. It’s in elevators, restrooms, meeting rooms, etc., over 100 million of them. Over 90%, even 95%, of the blind community cannot read braille at all. So the 5% or 10% who can read braille don’t necessarily know where the braille sign is, right, to be able to touch it and therefore know where they are. And those who are lucky enough to find a sign don’t necessarily want to touch the sign, because of COVID, because of hygiene, whatever reason. And those who are brave enough to even touch it a lot of times get a very limited amount of information, like “restrooms.” Good luck with that. You probably need a little bit more information than that going into a public restroom.

So we asked ourselves: what if these signs could actually speak and tell us a little bit more about where we are, what is there, what’s around us? And that’s what led us to the development of the system. I’m Idan Meir. I’m the co-founder and CEO of RightHear, and we are a startup company headquartered in Israel, also with offices here in the US.

We actually started with a different concept at the time. It was around couponing and shopping experiences in retail, and we were basically providing coupons to users who came into different stores. You get to a store, you get a 50% discount as long as you’re inside the store; the moment you step out of it, it vanishes for good. So to be able to provide such a cool experience, we needed to know precisely where you are, and obviously in indoor environments there is no GPS. iBeacon was really new at the time.

Roberts: Apple introduced iBeacon technology in 2013. An iBeacon device uses Bluetooth Low Energy to transmit its unique identification number within its local area, and your iPhone can pick up the signal coming from that beacon and then perform certain tasks, like pushing you a notification or, in Idan’s case, giving you a coupon so you can get a discount on your purchase.
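
For readers curious how that works in code, here is a minimal sketch of iBeacon ranging using Apple’s CoreLocation framework. The UUID is a made-up placeholder, and the lookup from beacon IDs to notifications or audio descriptions is only stubbed out; RightHear’s actual implementation is not public.

```swift
import CoreLocation

// Minimal iBeacon ranging sketch. The UUID below is hypothetical; each
// real deployment broadcasts its own identifier.
class BeaconListener: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let venueUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        // In production you would wait for the authorization callback first.
        manager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: venueUUID))
    }

    func locationManager(_ manager: CLLocationManager,
                         didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        // Each beacon reports major/minor IDs plus a coarse distance estimate;
        // an app keys its notification or audio description off those IDs.
        for beacon in beacons where beacon.proximity != .unknown {
            print("Beacon \(beacon.major)/\(beacon.minor) is about \(beacon.accuracy) m away")
        }
    }
}
```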

Meir: At the time, we were looking to know whether you’re inside the store. Then we realized that we can do so much more than that, have so much more impact with this technology, and that’s how we pivoted to RightHear.

The system basically has three main components. One is the app; it’s a free mobile app. It always has been free. That’s important to us, because we don’t think that our users need to pay for the world to be accessible. It’s our duty as a society. The second component is the beacons, the small Bluetooth devices that we install at the facility. And the third is the online dashboard, the cloud, which allows us and the facility to control, edit, and manage all the audio descriptions that our users hear.

But basically, the app allows our users to first learn about the places they want to go to, then actually go there. We do have a GPS experience within the app, so all the way to the destination, whatever that may be. And then, especially, we shine at the facility itself. We give the whole indoor orientation experience.

To give an example, you are at the main entrance to McDonald’s.

Roberts: RightHear is available in all the McDonald’s locations in Israel.

RightHear: Welcome to McDonald’s. You are next to the entrance door. To the counter, continue in this direction for 20 feet. The seating area is on your right and on your left. You are next to the counter. To the seating area, continue in this direction for 10 feet and turn left.

Meir: The open hours are Monday to Friday, or whatever, whatever they want to welcome you with there. Then, no matter where you point with your smartphone, 360 degrees around you, it will let you know what is there and at what distance.

RightHear: To the restrooms, continue in this direction for 10 feet and then turn right.

Meir: Overall, the idea is to provide full independence to our users wherever they go. The beacon is the size of an AirPods case, I would say. Really small. And it has batteries inside, so you don’t need electricity in the facility, or an outlet, or even Wi-Fi, basically nothing. It’s just a plug-and-play type of sticker you stick on the wall or on the ceiling.

And all the information, all the audio descriptions that are heard once the user is in proximity, is controlled in the cloud; it’s text-to-speech. So, we basically describe the environment based on the layout and our understanding of the facility, together with the facility manager: what exactly is the information you want structured into it. And that’s it.

McDonald’s is right here.

Roberts: Idan stresses that RightHear orients a user in a space rather than navigating them through it. The subtle distinction is that orienting lets the user know what’s around them, and lets them navigate to areas of interest on their own.

Another technology that brings signs to life? NaviLens.

Pita: I have here another example of a sign inside a train in Germany at the beginning of the pandemic. And this sign is in German. And my question is, do you speak German? No? Me neither. OK, and the point is that this information is invisible for the blind and it’s invisible for you because you don’t understand German.

With the NaviLens code that is in the sign, if I scan the sign with the NaviLens app, I will obtain this.

NaviLens: Stop the spread of the coronavirus: One, wash your hands regularly for at least 20 seconds. Two, when coughing and sneezing, cover your mouth or nose with your elbow…

Roberts: We profiled the NaviLens system of four-color QR-like codes in this podcast in the spring of 2021, so we know that the codes are inexpensive, simple to store information on, and, thanks to a special computer-vision algorithm for the iPhone camera, easy for a blind user to access with their phone.

Since then, we’ve installed NaviLens codes all over Lighthouse Guild and tried them for ourselves, with great results. We checked in with CEO Javier Pita on how the technology and the business have developed in the 18 months since we last spoke.

Pita: Some significant things are the new 360 vision, a new feature that is incredible, especially for transit, and the use of NaviLens codes on consumer goods products like Kellogg’s, or Pantene from Procter & Gamble. That is really interesting and very important for the visually impaired community.

Two years ago we started to collaborate with the MTA, and we are very, very excited to communicate the two pilots that we have done with them. The first one was the subway station at Jay Street in Brooklyn, and the second was all the bus stops across 23rd Street. Both projects were very successful, with very good and very positive comments from all the users that tested the technology there.

One of the things that we are very proud of in the developments of the last two years is the new 360 vision, the magnet feature. The magnet feature was born in New York City. It was born when we saw the bus poles in New York City from the Department of Transportation. Working with the MTA and the DOT of New York City, we thought about a new kind of technology that would make it easier to detect exactly where a bus stop pole is. And that was the reason for the new 360 vision.

But what happens if you are walking across New York City, you see the code at a bus stop pole, and then the camera of the mobile phone loses sight of the code? So we developed a combination of the cutting-edge NaviLens code with augmented reality. But instead of a visual experience, it uses sounds to communicate to the user where the NaviLens code is, even if the camera has lost sight of it. So once the camera detects the NaviLens code, it fixes where the code is.

NaviLens: Activating 360 vision, 11 feet away, bus stop.

Pita: So, I know exactly where it is. And if I move to the other place, on the right.

NaviLens: Behind. On your right. Ahead.

Pita: So if I move ahead, I would say I wouldn’t know exactly the distance. Ten feet? Nine?

NaviLens: Seven feet.

Pita: Seven feet.

NaviLens: Six feet.

Pita: Six.

NaviLens: Four feet. Arriving. Bus stop on 23 St and Park Ave South. Next arrivals: the next 23 SBS bus to Eastside Ave C is one minute and 15 minutes away.

Pita: The point is, we have solved the last few yards of the wayfinding problem, which is super important for a blind user. This was born in New York City with the collaboration of the MTA and the Department of Transportation of New York City, and we’re very proud of that.
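
The demo gives the shape of the interaction: a distance plus a relative direction, spoken as the camera tracks the code. As a rough illustration only (NaviLens has not published its internals), the cue logic might look something like this:

```swift
import Foundation

// A rough sketch of turning a detected code's position into the kind of
// spoken cue heard in the demo ("ahead", "on your right", "behind").
// `bearing` is in degrees relative to the phone's forward axis, -180...180.
func audioCue(bearing: Double, distanceFeet: Double) -> String {
    let direction: String
    switch bearing {
    case -45...45:       direction = "ahead"
    case 45...135:       direction = "on your right"
    case (-135)...(-45): direction = "on your left"
    default:             direction = "behind"
    }
    return "\(Int(distanceFeet.rounded())) feet away, \(direction)"
}

print(audioCue(bearing: 8, distanceFeet: 11))  // "11 feet away, ahead"
print(audioCue(bearing: 95, distanceFeet: 7))  // "7 feet away, on your right"
```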

Roberts: They’re also partnering with Kellogg’s to put NaviLens codes on products across Europe and now North America.

Pita: So, imagine that you are at a supermarket and you are in front of products with NaviLens codes. NaviLens can detect several codes at the same time. To make a demonstration, here I have three Kellogg’s products, more or less 10 or 12 feet away from me. I’m going to open the NaviLens app, more or less point to that area, and automatically I will receive the information.

So, I open NaviLens.

NaviLens: Kellogg’s Special K Oats and Honey cereal, 420 grams. Kellogg’s Special K original cereal, 375 grams.

Pita: So, the user can navigate through the list in order to select which product they want to pick up. The user can select, for example, in this case, Kellogg’s Special K Original cereal, tap on the filter, and locate this code.

NaviLens: Filtering and locating Kellogg’s Special K original cereal 375 grams.

Pita: So, if it sees another product that is not the Kellogg’s Special K, the user will hear something like this note. And the moment that NaviLens sees the Kellogg’s product, automatically we show with an arrow exactly where the product is, with incredible accuracy. At this moment it is informing me the product is 7 feet away. So I move forward. Down.

(beeping noises get faster as he gets nearer to the product)

I’m receiving, additionally, haptic vibration exactly when I am very close to the product. Imagine if this technology were on all products; we would solve the problem of accessible packaging for all users. And one more thing. You are hearing the information in English, but this packaging that I’m showing is in Spanish. Depending on the language of your mobile phone, you will receive the information in your own language. And this is amazing, because the same code can deliver information in 34 languages.

And one more thing. The good news is that what was launched in Europe in 2020 and 2022 is going to be available very soon across all of North America, because Kellogg’s North America has decided to launch some Kellogg’s products with the NaviLens code. So any user across the United States will be able to try this experience very soon, and I hope it will be good for everybody.

Roberts: GoodMaps is another wayfinding and navigation technology. Instead of using beacons like RightHear does, or QR-like codes like NaviLens does, GoodMaps uses LiDAR technology to map an interior space. Mike May is Chief Evangelist for GoodMaps.

What I love about GoodMaps is that much of the technology, the LiDAR technology that I’ll ask Mike to tell us more about, has been around for maybe 50 years, and now finds a new and wonderful life helping people who are blind and visually impaired. So, explain what LiDAR is and how it works.

May: Most people are familiar with the Google cars running around on the street, imaging the buildings and the curbs and the whole environment outdoors. Those were big vans with spinning LiDAR sensors and cameras. What we’re talking about today is much smaller, cheaper technology, as tends to happen with these things, in a handheld form, where we can now afford to go into a building and do the same sort of laser range-finding to map the building.

Roberts: So, LiDAR works by shining a laser at an object and measuring the time it takes for the light of that laser to come back. It’s often described as a pulsing laser that measures the distance between the device and the surrounding objects: the distance is calculated from the round-trip time of each pulse and the velocity of light, halved because the light travels out and back.

May: Yeah, that’s right. And of course, light is pretty fast, so we’re talking about minuscule details in terms of timing. Those millions of laser beams are shooting out there, and they’re geocoding every single one of those pulses. So, you’re mapping whatever returns to the laser in this very dense 3D point cloud.
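
The timing arithmetic May alludes to is simple enough to sketch. This is just the textbook time-of-flight formula, not GoodMaps code:

```swift
import Foundation

/// Range from a single LiDAR pulse, given the measured interval (seconds)
/// between emitting the pulse and detecting its return. The division by 2
/// accounts for the light traveling out to the target and back.
func lidarDistance(roundTripTime: Double) -> Double {
    let speedOfLight = 299_792_458.0 // meters per second
    return speedOfLight * roundTripTime / 2.0
}

// A target 10 meters away returns the pulse in roughly 67 nanoseconds:
print(lidarDistance(roundTripTime: 66.7e-9)) // ≈ 10.0 meters
```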

Roberts: When they’re mapping a new space, a GoodMaps representative walks through the indoor area with a LiDAR sensor. That sensor takes 360-degree images, laser measurements, and video footage, all of which will be used to map the space.

May: It’s the distance, and it’s the geocode that counts.

Roberts: The geocode?

May: So, every square meter on Earth has an XY coordinate that’s calculated by latitude and longitude. Those are six- to nine-digit numbers that change every time you move.

Roberts: So, every laser beam that returns to the LiDAR device gets mapped in XY coordinates. Every place that GoodMaps’ LiDAR identifies literally gets put on the map.

May: So, you can map that wall or that roof or floor in much the same way that the streets are mapped outside with XY coordinates.
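
As a back-of-the-envelope illustration of that geocoding idea (the coordinates below are made up), latitude and longitude can be converted into local offsets in meters, which is what lets a wall or a floor be mapped the way streets are:

```swift
import Foundation

// Convert latitude/longitude (degrees) into local X/Y offsets in meters
// from a reference point, using an equirectangular approximation that is
// plenty accurate over the footprint of a single building.
func localXY(lat: Double, lon: Double,
             refLat: Double, refLon: Double) -> (x: Double, y: Double) {
    let metersPerDegreeLat = 111_320.0 // roughly constant everywhere
    let metersPerDegreeLon = 111_320.0 * cos(refLat * .pi / 180) // shrinks toward the poles
    return (x: (lon - refLon) * metersPerDegreeLon,
            y: (lat - refLat) * metersPerDegreeLat)
}

// Two hypothetical points about 15 meters apart near Beaufort, NC:
let offset = localXY(lat: 34.71814, lon: -76.66460,
                     refLat: 34.71800, refLon: -76.66460)
print(offset) // (x: 0.0, y: ≈15.6) — every returned pulse gets coordinates like these
```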

Roberts: Mike became blind at age 3 in a calcium carbide explosion. He’s dedicated his career to building accessible wayfinding tools. As part of that, he founded the Sendero Group in 1999, where he and his team developed the first accessible GPS and talking map software.

May: I was working on indoor navigation technology in 1995-96. There was always a huge interest, so a lot of funding from DARPA, from the military, and elsewhere to conquer indoor navigation, and it was always based on dead reckoning. Meaning that if I know my starting point and I have sensors to track my movements, sophisticated pedometers, let’s say, then you could do some sort of indoor mapping and navigation. That was the Holy Grail, because you didn’t have to have any infrastructure if you were dead reckoning.

But, there were lots of efforts made and billions of dollars spent trying to resolve this and the dead reckoning never really came to fruition.

Roberts: Dead reckoning is an old maritime term. From their initial position, sailors would calculate their speed, their direction, and the amount of time they’d been traveling to figure out their boat’s new position. It’s quite possible Blackbeard got around this way.
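
The same arithmetic works on foot. Below is a minimal dead-reckoning step under simplified assumptions (a flat local frame, heading measured clockwise from north); it also hints at why the approach stalled, since the small error in every step compounds:

```swift
import Foundation

struct Position { var x: Double; var y: Double } // meters in a local frame

/// One dead-reckoning update: advance a known position by speed (m/s),
/// heading (radians, clockwise from north), and elapsed time (seconds).
func deadReckon(from p: Position, speed: Double,
                heading: Double, dt: Double) -> Position {
    Position(x: p.x + speed * dt * sin(heading),
             y: p.y + speed * dt * cos(heading))
}

// Walking 1.4 m/s due east (heading = π/2) for 10 seconds from the origin:
let next = deadReckon(from: Position(x: 0, y: 0),
                      speed: 1.4, heading: .pi / 2, dt: 10)
print(next) // x ≈ 14, y ≈ 0; small sensor errors in every step accumulate
```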

May: So, the next stepping stone was beacons, the iBeacons from Apple. The cool thing about those was that they actually made indoor navigation viable enough that a number of companies got into the game and started doing indoor mapping and positioning. But Bluetooth was not designed for this purpose, so it didn’t, in the end, turn out to be something that was very accurate. It was a stepping stone, though, to being able to use other technologies like LiDAR.

Roberts: Give me more information about how GoodMaps specifically uses LiDAR.

May: We’ve just, by the way, evolved in the last two months from a backpack rig (if you look on the Goodmaps.com website, you’ll probably still see a Ghostbusters backpack that carries all the equipment) to a handheld device that’s, let’s say, five pounds. That’s a huge jump forward in terms of convenience.

So, our mapper walks into a building, and in the time that it takes them to walk around, let’s say they’re mapping an airport, it might take them four hours to walk up and down all the corridors and nooks and crannies and create that initial 3D point cloud. That’s how we get started. And then, of course, there’s processing, labeling, referencing, quality control, lots of things you have to do to those maps before you can turn them into something that’s useful. Useful meaning I can now come back into that airport with my phone camera, hold it out in front of me just as if I were taking a picture, and that camera picks up the environment, compares it with that point cloud, and says: aha, based on this particular image that you’re streaming to the cloud, I see that you are near the Starbucks, or you’re at Gate 27. So, it’s using those georeferenced laser points as picked up by your phone camera. You don’t have to have LiDAR on your phone; you just need a camera in order to close the loop and use this navigation technology.
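
Conceptually, the client side of that loop is small. Everything in this sketch is a hypothetical stand-in (GoodMaps’ cloud API isn’t described beyond what Mike says here), but it shows the shape of the idea: send a frame, get back a fix matched against the point cloud, announce a landmark:

```swift
import Foundation

// All names here are hypothetical stand-ins for the idea May describes.
struct PositionFix {
    let x: Double              // meters east of the venue origin
    let y: Double              // meters north of the venue origin
    let nearestLandmark: String
}

// Stand-in for the cloud service that matches a camera frame against the
// venue's georeferenced 3D point cloud and returns a position.
func matchFrameToPointCloud(_ frame: Data) -> PositionFix? {
    PositionFix(x: 12.4, y: 3.1, nearestLandmark: "Gate 27")
}

func announcePosition(for frame: Data) -> String {
    guard let fix = matchFrameToPointCloud(frame) else {
        return "No match yet. Pan the camera slowly."
    }
    return "You are near \(fix.nearestLandmark)."
}

print(announcePosition(for: Data())) // "You are near Gate 27."
```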

Roberts: Great. So, now the person who wants to use it, they have an app on their phone that they download and is there a charge for the app or is the app provided to users for free?

May: The app is free. The venue owner is the one that pays the freight. So our business model is to go to the venue, whether it’s an airport, an airline, a hospital, or a train station, and get them to pay for the mapping. Then the map is free.

Roberts: So then, for someone who is visually impaired, the information is then conveyed to them by audio so that the app tells them what they’re seeing.

May: Right. And so things like points of interest that are names, that’s pretty straightforward. It says Starbucks in text, and you read it or I listen to it with VoiceOver. But the harder thing is direction. So, you might have an arrow that points which way to go. How do I get an auditory version of that arrow? That’s where we have to have a little bit of magic to say turn left, turn right.

Roberts: So, we’ve had this discussion with other innovators before about what’s the right amount of sensory information. When does overload occur? When is information not enough? Should it be auditory? Should it be tactile? Should it be a combination of it? Tell me how Goodmaps thinks about this topic.

May: This is something I’ve been working on and contemplating for the 25 years I’ve been dealing with accessible technology and getting feedback from users. And there’s a trade-off in giving people options, because that’s the logical way to deal with this.

Not everybody’s the same. Some people want high verbosity, some want low verbosity. So let’s just put in a verbosity switch.

But every time you add a setting, you have more complexity to deal with. So, the hope is that you design for 80% of the people, 80% of the time. Typically, it’s better to design for too little information than too much, because you’d rather have somebody begging for more than throwing up their arms going, oh my gosh, shut this thing off.

Roberts: And so does GoodMaps just use auditory cues, or do you use haptics?

May: Yeah, we use haptics and auditory. Haptics are great because they can give you an easier way to detect something without overloading your brain. And with auditory, there’s really three levels. There’s words, so “turn left.” There could be a tone, say “dee dee” for left and one “dee” for right. Or you could have a tactile signal that goes “buzz buzz.” And all of that is something that we really try to test and get feedback from users on.

Roberts: So, we are looking forward to having GoodMaps here at Lighthouse Guild. That’s a hint; you promised. So, once we are mapped and someone comes into our office and they want to find out how to get to my office, how would they do that?

May: It depends on the setting, but having been to the Lighthouse Guild, I know it’s pretty straightforward. You come in, as you do with many buildings, and you have a lobby or a receptionist or somebody that clears you to go into the building. And rather than having to have somebody accompany me, I could put your office number into my app. I search for it, and the app tries to be smart and tell you some of the high-priority things like the elevator and restrooms, and you can just pick from that list, or you can do a search, find the office number you’re going to, and then, when you click on it, it’s going to give you turn-by-turn directions, including what you need to do in terms of take the elevator, take the stairs.

There’s a step-free option, so if there are stairs involved and somebody is in a wheelchair, it’s not going to use the stairs. Other people want to take the stairs for exercise reasons. So the app tries to offer options, to make it customizable depending on your needs.

Roberts: So why, with all the technology he’s seen, did Mike want to be Chief Evangelist at GoodMaps?

May: Well, I’m always skeptical, because I’ve been working on indoor navigation since that first effort I mentioned in 1995, and so many things have seemed promising. Then there’s some impediment, and then there’s something better. So, I’m cautiously optimistic that this is the real deal, because in the last five years some techniques have become truly affordable, where we can see the scaling happening. And it’s not just going to be a pilot location at a university or at the Lighthouse or at the GoodMaps headquarters; it’s something that’s going to be in Penn Station and other places where it really has significant value.

And of course, that applies to everybody, to sighted people as well. You go into locations like that, they’re super complicated, and people get stressed out about travel when they can’t find things and they’re going to miss their train and they need directions. We can all benefit from that. So, I think that’s what has me excited: hey, maybe this is finally the real deal.

Roberts: As part of the Beaufort Blind Project, Peter Cromley worked with the North Carolina Maritime Museum to make the exhibits accessible to people who are blind, using tools like the ones we’ve profiled today. It’s been an evolving process.

Cromley: From a technology standpoint, we’re pushing the envelope here for GoodMaps a little bit, in the fact that this is a very, very complicated venue to do this in, unlike, say, an airport terminal, which is a straight tube where you have items on the right and items on the left. That is not the case here.

So, that was the premise of bringing that kind of technology to the museum, so a blind person could actually walk through the museum on their own. Also, the technology is improving, and they’re coming up with all sorts of routes and other functionality that will hopefully really improve the apps as we move forward.

So, I was thinking this would be the navigation component, but we’d need an information transfer component. And just by happenstance one day I was listening to the Blind Abilities podcast, and Dr. Calvin Roberts of Lighthouse Guild was on there talking about this new app called NaviLens. And when I heard that podcast I said, aha! That is it.

After that I was able to make a relationship with Lighthouse Guild, and they connected me with NaviLens directly, which led to a relationship with NaviLens. And we now have the process started to go ahead and code out the museum. We have a small demonstration project set up in one space, with five codes that Lighthouse Guild graciously loaned us, so we could show people what it would be like before we actually purchased codes.

And I’m very excited about that, because NaviLens provides not just information transfer but mobility. The plan is to go ahead and code the interiors of all the spaces, so literally you can just use the NaviLens codes to leapfrog through the facility. When you get into a space, you can scan around and find those codes.

We are putting a lot of information in some of these codes and this will allow the untold stories of the artifacts to be told.

Roberts: When it comes to accessibility, Peter is passionate. His Beaufort Blind Project goes beyond universal design to a blind perspective philosophy: designing for blind people first and letting everyone else benefit.

Cromley: There’s a new generation of blind people like myself. I’m a little older, but I’m definitely one of them. And we use technology. We don’t take “no way” or “no” for an answer. We try to be more independent. We know we can be, with a little help, and without barriers and obstructions put in front of us. But the advocacy is so important. That’s one of the things this project here at the museum, working with NaviLens, working with GoodMaps, is trying to do: work directly, one-on-one, with developers as a blind user to guide them to develop their apps, to change their apps, to add features to make the app more approachable from a user standpoint when you’re actually interfacing with it, to make the app better and make it work in the way that a blind person really needs it to work.

May: Usually apps are designed for sighted people, and then blind people figure out how to use them. This app was the other way around: we started with an app that’s fully accessible to people who are blind or visually impaired, using voice on their phone. Now we are evolving it to be more sighted-friendly. So we’re putting in a map and a blue dot and all those kinds of things. Within the next couple of months we’ll have an app that’s more visually attractive, but the sighted customer wasn’t our initial market.

Roberts: Beta testers who are blind were crucial for developing RightHear’s usability.

Meir: Tally actually was one of our first users who tried it. It was important, and kind of obvious for us from very early on: nothing about us without us. It was clear that we had to involve users in the process. So Tally, she’s from my city in Israel, I knew her just before that: hey, we have this new idea, do you mind giving us some feedback? Ever since then she’s been part of our community, part of our beta group. By the way, I invite those listening now who are interested to check out our newest developments and join this beta group. Provide us with feedback. We’re always looking to have more and more.

Roberts: Javier Pita and his team have also worked closely with users who are blind, including Peter, to improve NaviLens usability.

Pita: So, we are at the disposal of the users at any time.

Roberts: Each developer we profiled today harnesses the increasing power and functionality of smartphone technology to propel development and to get these tools into the hands of people who need them. That smartphones are becoming ever more dynamic and powerful only bodes well for the future of orientation, navigation and accessibility for people who are blind and visually impaired.

In this episode, we profiled three developers from three parts of the world, using distinct protocols and relying on unique underlying technologies. But they’re all working toward the same goal of bringing full-scale indoor autonomous accessibility to users who are blind. Momentum is gathering to make the world truly and dynamically accessible. The next step is to distribute it.

Peter Cromley does heroic work with the Beaufort Blind Project, outfitting Beaufort, NC, including its Maritime Museum, with accessibility options for people who are blind. His advocacy work is a safety line for those who will visit the Maritime Museum after him. The key will be getting these tools into all the buildings, organizations and sites of information transfer that people who are blind need to access, which is all of them.

Did this episode spark ideas for you? Let us know at podcasts@lighthouseguild.org. And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.

I’m Dr. Cal Roberts. On Tech & Vision is produced by Lighthouse Guild. For more information visit www.lighthouseguild.org. On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn. My thanks to Podfly for their production support.

Join our Mission

Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.