On Tech & Vision Podcast
Robotic Guidance Technology
The white cane and guide dogs are long-established foundational tools used by people with vision impairment to navigate. Although it would be difficult to replace the 35,000 years of bonding between humans and dogs, researchers are working on robotic technologies that can replicate many of the same functions of a guide dog.
One such project, called LYSA, is being developed by Vix Labs in Brazil. LYSA rolls on wheels ahead of the user, who holds a handle as it leads the way. It’s capable of identifying obstacles and guiding users to saved destinations. And while hurdles such as outdoor navigation remain, LYSA could someday be a promising alternative for people who either don’t have access to guide dogs or aren’t interested in having one.
In a similar vein, Dr. Cang Ye and his team at Virginia Commonwealth University are developing a robotic white cane that augments the familiar white cane experience for people with vision loss. Like the LYSA, the robotic white cane has a sophisticated computer vision system that allows it to identify obstacles and help the user navigate around them, using a roller tip at its base. Although it faces challenges of its own, the robotic guide cane is another incredible example of how robotics can help improve the lives of people who are blind or visually impaired.
It may be a while until these technologies are widely available, and guide dogs and traditional canes will always be extremely useful for people who are blind or visually impaired. But given how quickly robotics is advancing, it may not be long until viable robotic alternatives are available.
Podcast Transcription
Panek: I have a guide dog named Blaze. The primary role of a guide dog is to help a person navigate around obstacles and to ensure that the person is clearing those obstacles. And so, we could think of a guide dog as a navigation tool. They avoid obstacles. They don’t find obstacles like a cane might.
Roberts: That’s Thomas Panek, President and CEO of Guiding Eyes for the Blind, an organization that trains guide dogs for people who are blind or visually impaired. He’s been a guest of this podcast before to talk about his work with Project Guideline for Google, but today he’s here to talk about guide dogs.
Panek: So, think about going through a door. Whereas you might find the opening with your cane and establish where that opening is, the guide dog’s job is to get you through that door without really even knowing it’s there. So, it’s different in terms of mobility and orientation, but really is a navigation aid.
Roberts: I’m Doctor Cal Roberts and this is On Tech and Vision. Today’s big idea is the major impact robots could have on people with vision impairment. How will blind and visually impaired people interact with robots? What’s the potential? When we first started developing this episode, we thought we’d be discussing robotic guide dogs like the four-legged, animal-like versions that are starting to be deployed by police and firefighters, but it’s much bigger than that.
Let’s start with navigation. Biological guide dogs and the white cane are foundational tools for navigation. But can robots do the same thing? Let’s find out.
So, recently there have been tech developers who want to talk about robotic guide dogs as an alternative to having a live animal, and, to me, I can’t see this as an either/or – potentially the best combination is technology with the dog. So, talk to me a little bit about how technology can help a guide dog user.
Panek: Yeah, that’s a really great point about it not being an either/or. There are robot dogs out there already performing important work, including police work and rescue work, and going into situations where perhaps it’s more beneficial to have a robot than a dog.
For a guide dog user, the guide dog provides not only safety and navigation but also, you know, for many people, a companion. You also have the guide dog in tune with things that artificial intelligence still isn’t quite able to handle. We think about autonomous vehicles and how we’ve been promised for many, many years that we’re going to have self-driving cars, and then they always seem to be 10 years away, because there are complexities to how travel happens, and humans are very, very good at driving, phenomenally good relative to our autonomous counterparts.
And so, the guide dog is kind of that same way – they’re phenomenally good at navigating with humans. We’ve had our relationship with dogs for 35,000 years, and a relationship with robots for maybe, you know, 50 years. So, the ability for a robot to take over that task is a ways off, but technology is moving quickly. Using a navigation aid, using something to help with the detection of overhead obstacles – it’s something that I worked on. Being able to have augmentation, I’m going to call it dog augmentation, to help the guide dog with things that might otherwise be a challenge, I think is going to happen first, and it’s happening now.
Roberts: Researchers around the world are working on ways to innovate and augment the guide dog and white cane. One of these projects is being developed by Vix Systems in Brazil. It’s called Lysa, and it’s described as a robotic guide dog, but not in the way you might initially imagine. I talked to Kyle Ribeiro, one of the project’s researchers, and asked him to tell me more about it.
Ribeiro: So, here in Brazil we developed a robot called Lysa. It’s a guide robot made to help visually impaired and blind people. Lysa is like a traveling bag. It goes in front of the user, and it has a handle that the user can grab with their hands. Through an application that is accessible for blind people, you can select the destination, and Lysa will start guiding the user through the path. So, it can bring a person from point A to point B autonomously and avoid and dodge obstacles in the path.
Roberts: So, when I look at this on the videos, it looks something like I’m pushing around a vacuum cleaner or maybe a small lawn mower. I want to give our listeners kind of an idea of what this looks like. And so, the base on the ground is relatively small and then has long handles so that the user can hold it without having to lean over. Am I being fair?
Ribeiro: Yeah, it’s like a traveling bag with wheels on the ground that can move and guide the user.
Roberts: How much does Lysa weigh?
Ribeiro: 4 kilograms.
Roberts: Then when I get some place, it folds up or something so that I can put it someplace.
Ribeiro: Yeah, we created this new version with the idea that we can bring Lysa wherever the user needs to go. So, when the outdoor version is released, the user can maybe fold the handle, store it in a backpack, and bring it onto the bus or onto the train, I don’t know. But we have it in mind that this can be accessible for everyone.
Roberts: Right. So, let’s talk about the unit itself first, and then we’ll talk about its functionality. So is it propelled? Is there a battery that’s moving the wheels forward, or does the user actually push the unit?
Ribeiro: There’s a battery, and it’s a proper robot. There’s a battery that can last for, I think, six hours. And we have sensors in the robot that can detect obstacles, and we have 3D cameras that can sense them and help with the obstacle avoidance. And on the handle we have a couple of buttons that the user can press, like the volume button and the start and stop button, so you can control it from the handle.
Roberts: And so now, first you’ll program it so that it knows where you want to go, and then as you’re going along – tell me what obstacles it will be able to detect?
Ribeiro: So, it can detect holes in the ground and obstacles in the air. If there’s something in the way, like a person, it can detect it and tell you, oh, there’s a person in front of you, or maybe a chair or a table, and it can dodge around it and end up at the final destination.
Roberts: So how does Lysa know the difference between a person and a chair?
Ribeiro: We have some code running on the camera doing image detection with artificial intelligence. So, we teach Lysa about the things that we can see, and Lysa detects those things and tells the user.
Roberts: Other developers have used LIDAR in order to map out the area so that they can understand it three-dimensionally. Is your technology somewhat similar to that?
Ribeiro: Yeah, we have LIDARs that can see 360 degrees. We have stereo cameras that can see as well. And we have sensors that are pointing up into the air and down to the ground, too.
Roberts: So, now how does the user know? Is this verbal? Does Lysa speak to them, or are there haptics? Does it give vibrations or other sensory cues? Explain how the user gets the feedback.
Ribeiro: Yeah, in the handle we have some vibration motors that give this haptic feedback, and it can also work with audio, because you can plug your earphones into the tablet that’s attached to Lysa, and you can also use the speaker that is built into Lysa.
So, we always want to give the maximum feedback to the user. When we find some obstacle, Lysa stops and tells the user about this obstacle. When it finds a hole in the ground, the vibration on the handle alerts the user. We can detect 40 or maybe 60 objects.
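To make that feedback flow concrete, here is a minimal, hypothetical Python sketch of how detections like these might be routed to the handle’s vibration motors and the built-in speaker. The class, callbacks, and thresholds are illustrative assumptions, not Vix’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "chair", "hole"
    distance_m: float  # estimated distance from the robot, in meters

class FeedbackRouter:
    """Illustrative sketch: route detections to haptic and audio channels."""

    def __init__(self, vibrate, speak, stop_driving):
        # vibrate(intensity), speak(text), stop_driving() are callbacks
        # supplied by the (hypothetical) robot platform.
        self.vibrate = vibrate
        self.speak = speak
        self.stop_driving = stop_driving

    def handle(self, det: Detection) -> None:
        if det.label == "hole":
            # Drop-offs are urgent: stop and give a strong handle vibration.
            self.stop_driving()
            self.vibrate(1.0)
        elif det.distance_m < 1.5:
            # Obstacle directly ahead: stop and announce what it is.
            self.stop_driving()
            self.speak(f"{det.label} ahead, {det.distance_m:.1f} meters")
        else:
            # Distant object: gentle vibration only.
            self.vibrate(0.3)
```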
Roberts: So how does a user tell Lysa where they want to go? Is it verbal? Do you have to program it on a smartphone or an app?
Ribeiro: We have a tablet attached to Lysa’s handle, and we developed a smartphone application. We have partnerships with establishments like shopping malls or airports, and we leave Lysa available in those places for users to use. So, we do the mapping process, et cetera, and leave Lysa there available for use. The users can connect to Lysa’s Wi-Fi, the application does the setup and shows the points, and the user can select the points using the TalkBack functionality of the smartphone.
Roberts: Lysa’s abilities mirror a lot of what a biological guide dog can do, like taking its owner on pre-planned routes in busy areas. Thomas Panek discussed what his dog, Blaze, can do in this respect.
Panek: There’s a big difference. When you’re navigating indoors with a guide dog in a familiar space, there’s almost no real net benefit if it’s your house. My dog is a pet at home. But if you’re in an office environment where you do have a lot of obstacles, you might be familiar with getting from one office to the next, or from a cube to the next office. The benefit of the guide dog there is in case somebody puts something in your path, or if something changes – there’s a real benefit to using the dog for indoor travel.
Airports – incredibly useful. I love having the guide dog in the airport getting me from the gate to the Uber taxi stand. And because I travel frequently, my guide dog actually has patterned the routes from the terminal to the taxi area in several airports. Outdoors is where they really excel now, where you’re not familiar with the path. So if I was to land in another city and I need to go to my hotel and the Uber drops me off at some location, the guide dog will be able to get me there with my navigation aid, whereas I sometimes struggle without the guide dog, getting there on my own. I know some people are really proficient at that; I’m just not that good at it. So, they excel in outdoor environments. I mean, look, dogs are outdoor animals. They love the outdoors, and I think that’s where their strength is.
Roberts: In comparison, robotic technology like Lysa has a harder time navigating outdoor environments. Kyle Ribeiro told me about the challenges his team has faced with that.
Ribeiro: Outdoor navigation is a whole new world, because if you go out on the streets it could be dangerous. You have to be very careful because you are driving a person, driving a human being. You cannot create a map of the whole city, because the size of the map would be enormous. So what we’re trying to do is use artificial intelligence to maybe detect where the street is, so it only goes on the sidewalk. We need to detect the traffic signal to see if it’s a red light or a green light. And there’s a lot of trouble because the sunlight gets into the sensors and gives us false information. So, we have a lot of things to do before going outdoors.
Roberts: So, the analogy to a service dog has been made for Lysa. And Lysa is smarter than a dog. But the dog does certain things really, really well, like object detection and the ability to sense things that are low to the ground. And so, if a dog saw a pothole in the street, the dog would walk around the pothole and lead the person. How about Lysa?
Ribeiro: Lysa probably will do the same thing. Lysa will try to avoid as many obstacles as she can. If it detects a hole in the ground with the sensors, it will try to create a new path around the hole.
Roberts: So, compare this to a smart cane.
Ribeiro: A smart cane cannot move the user, you know, and Lysa can guide the user. You trust Lysa, and Lysa will try to find another way to avoid that obstacle, so you don’t have to be worried about it.
Roberts: Kyle’s point about smart canes is an interesting one and something that Dr. Cang Ye and his team at Virginia Commonwealth University have been working on with their robotic guide cane.
One of the benefits of the traditional white cane is that people who are blind or visually impaired can use it to independently explore their surroundings. Dr. Ye and his colleagues are seeking to improve on that ability by creating a white cane that can interpret the user’s environment and, like Lysa, help guide them to a destination.
Ye: In the beginning – the white cane has been used for a century, right? And I think every school that teaches a mobility class actually uses it. People have been trying many different ways to replace it, without any success. I don’t know the reason, but basically it’s the symbol, and it also can be effective, right? And it gives people a feeling that the user him or herself actually is doing the drawing, not the tool. But at any particular time point you only have one point of contact, right? You only know that point, and to form a larger picture of the environment you need to tap, or basically touch the cane to the environment, many times. And that’s not efficient.
And another thing is that there’s no real guiding. So, if we want people to turn, for example, 15 degrees – that’s not one o’clock or two o’clock – and following instructions like that, whether people use a vibrator or audio, following that accurately is very difficult. And then we thought that by integrating robotic technology we could address that problem, so you don’t need to use audio. Audio also has problems: in a noisy environment it will not work right, and it can cause interference for the pedestrians around you. But by using a robotic technique like our robot cane, it can actually guide you precisely toward a direction without using that kind of audio or vibration.
Roberts: In order to accurately guide the user, the robot cane uses sophisticated imaging technology to create a virtual map of the surrounding environment.
Ye: Its ranging device actually can generate what we call a depth image. Basically, it’s an image, but each pixel in that image has depth information, and by processing that you can detect a target object – whatever object you want to detect, right? You’ve trained the system; it can tell you what is surrounding you, and it can also process that data to generate navigational guiding instructions, for example, where to turn. That’s the computer vision enhancement. And another thing, as I mentioned, is the guiding. You know that every white cane has a roller tip, right? We put in something like a motor and also a clutch, and then you can actually engage and disengage that motor with the roller tip.
So, once that’s engaged, you actually turn the cane into a robot. You can steer the cane by using the motor inside – and that’s a very small motor, it’s very lightweight. You basically can turn it precisely and point the cane toward a direction, and the person just follows that. That actually eliminates the need for using the audio or a vibrator.
And also, our system actually can sense the human intent. Now, because we put this robot into the cane, there are two modes. One is the passive mode, which is the same as the white cane, except there’s a computer vision system there, so you can view it as a white cane with computer vision enhancement. The other mode is the guide cane mode, which means the robot actually takes over to steer the cane in a direction and guide it for you. Because there are two modes, it can cause some workload if you require the user to do the switching between them.
And we do have that manual mode to allow you to switch – you can override the system – but the system itself has some intelligence. It actually can sense your intent. So if you try, for example, to scan with the cane, then the cane’s movement will not be in compliance with the robot cane’s movement. That disagreement actually tells the system, oh, the user now wants to use white cane mode, not robot cane mode. The system can detect that and switch automatically into white cane mode, and when the condition is right, so that it detects motion compliance again, it can switch back into robot cane mode. That way the system switches back and forth without the intervention of the user. So, I think that’s three functions within the cane.
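As a rough illustration of that intent-sensing behavior, here is a hypothetical Python sketch that compares the steering the robot commands with the cane motion actually measured, and switches modes on sustained disagreement or sustained compliance. The thresholds and interfaces are assumptions for illustration, not the published design.

```python
class ModeSwitcher:
    """Sketch of compliance-based switching between robot cane and white cane
    modes: sustained disagreement between commanded and measured headings means
    the user is sweeping the cane themselves, so drop to white cane mode;
    sustained compliance switches back to robot cane mode."""

    def __init__(self, tol_deg: float = 20.0, window: int = 10):
        self.tol_deg = tol_deg    # allowed heading disagreement, degrees
        self.window = window      # consecutive samples before switching
        self.mismatches = 0
        self.matches = 0
        self.mode = "robot_cane"

    def update(self, commanded_deg: float, measured_deg: float) -> str:
        """Feed one (commanded, measured) heading pair; returns the active mode."""
        if abs(commanded_deg - measured_deg) > self.tol_deg:
            self.mismatches += 1
            self.matches = 0
        else:
            self.matches += 1
            self.mismatches = 0
        if self.mode == "robot_cane" and self.mismatches >= self.window:
            self.mode = "white_cane"   # user has taken over and is scanning
        elif self.mode == "white_cane" and self.matches >= self.window:
            self.mode = "robot_cane"   # motion is compliant again
        return self.mode
```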
Roberts: Being able to switch across multiple modes helps make the robotic guide cane an incredibly versatile tool. To improve that versatility, Dr. Ye and his team are working on the transition from indoor to outdoor environments.
Ye: In our newest version, we actually moved the entire system onto an iPhone. And you know, the iPhone has a camera module, so we just need one camera, and there’s also a LIDAR, and this LIDAR works outdoors. So, this actually makes the system good for both indoors and outdoors. Now, there are several things the system processes. One is that, with the advancement in deep learning now, we can use the image to pretty much locate an object. It will detect it very fast, in a matter of milliseconds, but it won’t tell you exactly where it is – you get a bounding box. Then with the LIDAR, we just need to process the LIDAR data within the bounding box, where you can analyze the shape and things like that. Deep learning based on the image can already tell you the object, and then we just need to use that 3D data to confirm.
So, that makes things very fast. As long as you train it with sufficient camera images, it actually can locate those things for you.
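The detection-plus-LIDAR idea can be sketched in a few lines: a fast 2D detector supplies a bounding box, and the depth data inside that box is used to confirm and localize the object. This hypothetical Python sketch assumes a depth map registered to the camera image and standard pinhole-camera intrinsics; it illustrates the approach rather than the team’s code.

```python
import numpy as np

def localize_detection(bbox, depth_m: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float):
    """Given a 2D bounding box (x0, y0, x1, y1) from an image detector and a
    registered depth map in meters, estimate the object's 3D position in the
    camera frame. fx, fy, cx, cy are camera intrinsics from calibration."""
    x0, y0, x1, y1 = bbox
    patch = depth_m[y0:y1, x0:x1]
    valid = patch[np.isfinite(patch) & (patch > 0)]
    if valid.size == 0:
        return None                      # no usable depth inside the box
    z = float(np.median(valid))          # robust distance to the object
    u = (x0 + x1) / 2.0                  # box center in pixels
    v = (y0 + y1) / 2.0
    x = (u - cx) * z / fx                # back-project pixel to 3D
    y = (v - cy) * z / fy
    return x, y, z
```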
Roberts: Even though the computer vision technology can create a sophisticated understanding of the environment, one challenge that is hard to overcome is guiding around fast-moving objects.
Ye: Right now we actually have a demonstration for indoors, and mainly it’s that environment. For dynamic environments, it can be a little more challenging. Sometimes the computer system is not that reliable, and then basically it can fail, so you lose track of your movement. At that point, usually we need to use computer vision techniques to detect a landmark, and once you detect that landmark, you restart your system.
But really, when you move in a real environment, there’s always a moving object, and you have to deal with that, and right now the system actually is not very good at that. So that needs more research, to enable the system to navigate well both indoors and outdoors with moving objects – like a shopping mall, or when you walk toward a subway station and things like that, you will need to walk around the street, right? You’ll have pedestrians in a shopping mall, you will probably have shopping carts and all those different things, and in that environment, how can you deal with that? I think the communication there is OK; it’s not that difficult. The most difficult thing actually is the guiding, because you have objects moving around, so you need to turn very fast, in time, and follow that guiding. That’s a difficult thing.
Roberts: As with any cutting-edge technology, there are many ways the robotic guide cane can still be improved. But for now, this kind of emerging tech shows that there are ways to augment traditional navigation devices. Thomas Panek even sees this as a possibility for guide dogs.
So you use the term augmentation. How can technology augment the benefits from the guide dog?
Panek: Yes, I think there are three ways that augmentation can help. The first is, you know, we have been working with MIT, and also I know IBM and others have been working on a device that uses technology – whether it’s a LIDAR sensor measuring distance, or computer vision as we call it – to establish where you are relative to stationary or moving objects. So if you think about color coding, or if you’re looking at kind of referencing a moving object and putting you in that space. The simplest way to put it is: helping you navigate where you are relative to objects around you and work around them.
For example, MIT came up with a belt that could tell you when you were passing a door frame, and you could actually move around in a closed office space, determining where the openings were. So that was kind of early stage.
The second part of augmentation would be not only this kind of navigation, but also specific object detection. So, if there’s something in your pathway – an overhead obstacle, for example – it warns you that you’re going to bump into it. And that’s complicated. For example, if you’re walking down the street and a car opens its door, how does the computer vision establish that the obstacle in your way is a door and tell you that it’s an obstacle?
So that’s kind of phase two that we’re working on now. And I know it takes a lot of computing power to do that. Everybody’s working on it, and none of these things really work well yet. They work well enough, but not well enough to put your safety in their hands, let’s say. You need computing power and battery power to do that. You need connectivity as well, and that’s all shrinking, and that’s great. But to have an on-device program that helps with navigation – what we would call a robot, or robot navigation – would be that third level of augmentation, I would call it, where really it’s doing just as much as the dog is doing.
So in this augmentation model, the dog is doing the navigating, the obstacle detection, and the thinking about where you need to be and what you’re going to bump into. Right now we’re able to navigate safely anywhere with the dog. The second level is where the computer is overcoming some of the dog’s limitations – overhead obstacles, things that the dog wouldn’t ordinarily look at, or maybe a block away there’s an impediment in the path.
And the third is where the dog can basically just walk along at your side and the technology will do it. There have been a couple of Level 3, as I call it, augmentations, where it’s a suitcase-style navigation aid with lots of sensors and bells and whistles, and it’s trying to help a person navigate. So we’re kind of at the cusp of that, and we’ll see who’s going to break open that space. And I think what we’re seeing is that it’s first happening with some of these organizations, like Boston Dynamics and others, that are creating what we might call the robot dog.
I think trust in technology is an interesting conversation, because how frustrated do we get when a device locks up or fails and we have to reboot it, and you sort of lose that trust in the device? And, you know, I think that we’ve all become accustomed to sort of trusting our devices, but not wholly trusting our devices. So, I think there is a trust component to having a guide dog. You know, as soon as I stand up after talking with you, my guide dog will stand up. I’m not even sure if I’ll have connectivity through the rest of this podcast. So, there is a trust factor that I think is different with technology. But I think there is a role for these robot dogs.
Roberts: So, the robot dogs that we’ve looked at so far have some limitations. They have a lot of trouble with stairs. They’re very good on a flat surface. The dog doesn’t think twice about going up and down the stairs; that’s just something animals are able to do. How do we overcome that with a robot, do you think?
Panek: First of all, we’ve come a long way in my household. I’ve got a busy household. I’ve got four kids, young adults. And yeah, we use a robot to vacuum. And that robot stops at the top of the stair and turns around and goes the other way. And you know, it is able to navigate around my house pretty successfully. The challenge is not one of sensors, it’s one of mobility. So we have the ability to detect that there are stairs there, even with your common robot vac. But the challenge is the actual navigation part. And as much as we think that we’re going to get robot navigation, you know, we’re complex organisms, and so are dogs.
Dogs have, you know, four feet. They are able to balance. They’re able to go up and down stairs and really navigate successfully, just like humans. The ability to get robots to do that is possible, and we’ve seen some success in that regard, but it’s actually soft robotics that are going to get us there. Soft robotics is using very human- or dog-like movement in what I would call a soft-shell robot, using pneumatics inside. And there’s been further development – there are robotic fish that swim this way. It’s going to be soft robotics that takes us there. It’s not that traditional cyborg robot we think about, stiff with joints and things of that nature. We really have to move into soft robotics to be able to accomplish the type of activity that a human or a dog can accomplish.
And while it’s in development, it’s a ways off, but it’s a very smart field. For anybody who is an engineering student and wants to get into soft robotics, I think that’s where the future will be, and how we’ll be able to accomplish that goal.
Roberts: Sometimes it seems like these possibilities are on the distant horizon. However, breakthroughs in technology can lead to exponential gains. Dr. Ye describes how far his research has come in just 10 years.
Ye: Your program also talked about the iPhone LIDAR, I believe, and that has been something we have been using, and it’s also very state of the art. It’s so small, so tiny, and it’s so affordable. When I started this robot cane, at the very beginning we called it a smart cane, because we didn’t have that robotic roller tip yet, and the first version of the camera we used was close to 500 grams. It only worked indoors. But now you think about something as small as that iPhone’s LIDAR, and it works outdoors, and you can see how fast the technology development actually is. It just took 10 years – a little bit more than 10 years.
Roberts: A decade may seem like a long time, but when it comes to developing technology, it can go by in the blink of an eye. And when there’s a major breakthrough, what we think of as impossible can become possible in an instant. Like the tech itself, innovations in robotics are moving fast. While guide dogs and white canes will always be reliable for navigation, this episode has shown us that robotic technology is augmenting these tried-and-true tools. So, will robot dogs be curling up by their owners’ feet anytime soon? Only time will tell. The question is: how long will it take?
Did this episode spark ideas for you? Let us know at podcasts@lighthouseguild.org. And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.
I’m Dr. Cal Roberts. On Tech & Vision is produced by Lighthouse Guild. For more information visit www.lighthouseguild.org. On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn. My thanks to Podfly for their production support.
Join our Mission
Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.