This podcast is about big ideas on how technology is making life better for people with vision loss.
When it comes to art, a common phrase is “look, don’t touch.” Many think of art as a purely visual medium, and that can make it difficult for people who are blind or visually impaired to engage with it. But in recent years, people have begun to reimagine what it means to experience and express art.
For this episode, Dr. Cal spoke to El-Deane Naude from Sony Electronics. El-Deane discussed the Retissa NeoViewer, a project developed with QD Laser that projects images taken on a camera directly onto the photographer’s retina. This technology allows people who are visually impaired to see their work much more clearly and with greater ease.
Dr. Cal also spoke with Bonnie Collura, a sculptor and professor at Penn State University about her project, “Together, Tacit.” Bonnie and her team developed a haptic glove that allows artists who are blind or visually impaired to sculpt with virtual clay. They work in conjunction with a sighted partner wearing a VR headset, allowing both to engage with each other and gain a new understanding of the artistic process.
This episode also includes an interview with Greta Sturm, who works for the State Tactile Omero Museum in Italy. Greta described how the museum’s founders created an experience solely centered around interacting with art through touch. Not only is it accessible for people who are blind or visually impaired, but it allows everyone to engage with the museum’s collection in a fascinating new way.
Finally, a painter and makeup artist named Emily Metauten described how useful accessible technology has been for her career. But she also discussed the challenges artists who are blind or visually impaired face when it comes to gaining access to this valuable technology.
Sturm: It’s the first sense that we use. It’s the sense with which we discover the world when we’re little kids. We want to touch everything. We want to keep everything in our mouth and like just go around and then soon after that we’re told no, no, no. You shouldn’t touch. You should look and not touch, and so it just becomes the reality and it becomes what you’re supposed to do.
Roberts: This is Greta Sturm, an operator of the State Tactile Omero Museum in Italy. This unique museum allows visitors to fully experience its artwork through touch.
Sturm: And so, our museum is structured this way because it was created by two blind people: Aldo Grassini and his wife Daniela Bottegoni. They were both very passionate about traveling and about the arts. They visited almost every country around the world, and every time they went to a different country, they wanted to go to museums and have an experience there that was full. And of course, it never happened, because they could only rely on what the person accompanying them was telling them.
And so that caused quite a bit of frustration. And for that reason, they came up with this idea: “Why don’t we create a space that is fully accessible in our city?” And the museum opened its doors in 1993, and at first it had a few copies of famous sculptures from other museums that they had managed to find and put together. And of course, the majority of the people going there were blind people or people who had some kind of visual impairment just because they were the ones who needed this kind of experience the most.
For the first time they could have this experience and create this new reality in a way. But their ultimate goal was to create a space that was really for everyone. And they slowly but surely managed to do that. So, you get in and you are told that you should touch everything. We have visitors, of course, who are blind and who have visual impairments, but we have sighted visitors who come as well. And we just really try to make it a place and a space that is fully accessible and where you can really regain familiarity with the sense of touch. They really have fun in the sense of like, have a full experience and have a new perspective and really relate to art in a different way.
Roberts: I’m Dr. Cal Roberts, and this is On Tech and Vision. Today’s big idea is how technology is reshaping how art is experienced and expressed. Much like the visionaries behind the Omero Museum transformed our interaction with art, developers in assistive technology are opening exciting avenues for artistic creation, impacting both the technical and experiential dimensions of art.
Today, we explore technology that allows artists who are visually impaired the freedom to create without outside influence or assistance. We’ll start with the world of photography and a partnership between Sony Electronics and a company called QD Laser.
Our guest today is El-Deane Naude from Sony Electronics, who’s here to tell us about some really extraordinary technology. So let me explain. If someone who is visually impaired looks at a beautiful landscape, even if they don’t have central vision, even if they can’t read, they can see the landscape because they can use their peripheral vision in order to see the whole picture. But now try taking a picture of that landscape, and now it becomes much harder. Because when you look at a camera, when you look through a viewfinder, when you look even at the small screen on your digital camera, you have to use the central vision. You have to use the same vision that you would need in order to read.
So, how does someone who doesn’t have central vision because they’re visually impaired, who has trouble reading, how do they use a camera in order to achieve that image that they want to be able to share? El-Deane, explain what Sony’s doing.
Naude: So first of all, great introduction. Thank you very much for having me on the show as well. I am actually one of those people. I have macular degeneration, I’m legally blind. I’m also a senior product manager in the imaging division at Sony, and I’ve been working with our cameras for about 19 years now. So I fall directly into that category that you just explained.
I love photography, I love everything about it, but I have a really, really hard time with the traditional methods of either framing or having a look at a menu, right? So whether it’s optical viewfinders that use pentaprisms to bounce the light into your eye, or the image generated off the sensor on OLED viewfinders or even the LCD on the back of the camera, even though our cameras have extremely high resolution and good magnification, I typically can’t see what’s going on there.
So, we’ve been working with a number of different companies and technologies. One of them is a company called QD Laser out of Japan. They make all types of laser devices for different fields and all different kinds of use cases, and they use a very low-emission laser which can be projected directly onto a person’s retina. But you need to be able to generate an image in order to do that. So we utilize a Sony camera, and it sends a signal through to the QD Laser projector, called the Retissa NeoViewer, and that basically scans a laser image directly onto a person’s retina.
So, you bypass any of the optics in front of the eye and it shines directly through and literally gives you a scanned image. If you remember in the old days, I say old days, maybe about 15, 20 years ago, when CRT TVs, old tube TVs started disappearing. And you started getting the projection TVs. Basically the same type of technology as that, but this is a laser projection. And now the modern day movie theater or home theater laser projectors do it the same way. It’s basically a very high speed scan with a single laser that essentially will scan the image across the screen. In this case the screen would be your retina.
Roberts: So now it’s going to scan this image across my retina, and now what is my retina going to see?
Naude: So, what will your retina see? Well, the human eye is very similar to a camera, right? So if you have a look at a camera, you have a lens which is there to focus the correct amount of light onto a sensor. The sensor then does what we call analog-to-digital conversion, right? It’s got these photosensitive diodes, each with a transistor attached to it. When the analog light waveform hits a diode, the photodiode measures the amount of light that hits it, converts it through the transistor, and sends an electronic signal to the processor, and the processor then stores that on the media card, on the memory.
Your eye is exactly the same as that. You have your optics in the front of it: your cornea, which is essentially like a filter in front of the camera lens; your lens, which then focuses the light onto the back of your eye; your iris, which is the same as an aperture, an iris on a camera, which opens and closes depending on how much light gets through. But when it hits the retina, think of the retina as the sensor of the eye. It converts light into an electronic signal, sends it down the optic nerve to your brain, and your brain processes and stores the memory of that image.
So that’s essentially what’s happening: the light is being transmitted with pinpoint accuracy directly onto the retina, so it doesn’t need any optics to decide how much light or where to focus it. That’s really the benefit of this type of viewfinder: anybody with poor optics could benefit from it, so that they could then see and take photos. And the same thing is, if you have damaged parts of your retina, you can tilt the camera forward, backward, or left and right and scan onto a different portion of the retina, being able to utilize the full peripheral field of view instead of damaged portions, like in central macular degeneration.
Roberts: Which means that even someone who doesn’t have the ability to read, because they have, as you say, macular degeneration or another condition that affects their ability to read, may still be able to see the image because the image is projected on the parts of their retina that can see. And so they can put that together to have an image and be able to take a great picture.
Naude: Correct. And actually, that’s where it really benefits somebody like myself, right? I cannot read any of the text on the screen, so whether I’m looking at the high resolution OLED viewfinder, if I’m looking at the LCD screen, there’s no way I can read what my camera settings are. I can’t see what my shutter, aperture or ISO is, that’s the exposure values of the camera. I can’t go into the setting menu and change any of those.
And then there’s also being able to frame up my image. It becomes extremely difficult for somebody like myself. I actually attach a larger monitor to my camera in order to be able to read that and zoom in on a larger screen. I carry a monocular, a little optical scope, like a magnifier, so that I can read the screen. But it becomes extremely difficult when you’re out and about and you’re trying to take photographs, trying to change your settings.
With this method, the direct laser projection, I can actually read the tiniest little settings, settings that would normally be too small for me to read on the LCD screen, and I can see them clearly. A note on that: although this sounds super fantastic, which it really is, it doesn’t benefit every single eye condition. There are people with varying eye conditions, and I’ve been at trade shows like CSUN, which is the assistive technology trade show that happens at the beginning of every year, standing next to somebody who has the same situation as me, the same conditions, who can maybe even see a little better than me, and this camera technology doesn’t necessarily work for them.
So it’s kind of hit and miss. We’re not 100% sure why it works better for some people versus others, but this is something that we’re still discovering. I’d say probably about 30% of the people we’ve tested with various eye conditions can pick up photography and videography to some degree. For me, obviously, it benefits a lot because I can actually see very clearly through it.
Roberts: What does a camera like this cost?
Naude: We actually bring this camera to market at almost one-third of what it costs us to produce, so it only costs $600. We want to make it accessible to everybody, so we actually bring this to market at quite a large loss to us. But we want to expand the ability for people to pursue photography and videography regardless of what situation they’re in.
Roberts: Of course, participation in the arts isn’t limited to photography and videography, and there’s all sorts of great technology that can assist in the creation of many art forms. Emily Metauten is a painter and makeup artist who is visually impaired. She spoke to us about how assistive technology has impacted her work.
Metauten: Artist-wise, I go by herminia blue. I am a blind, disabled fine artist, as well as a hair and makeup artist. I am a painter. I also do hair and makeup for film, photography, things of that nature. My visual condition is Stargardt’s disease, which is a rare genetic form of macular degeneration. So I have a complete blind spot in my central vision and use my remaining peripheral to see. I was diagnosed when I was 10 years old. So, it progressed fairly quickly and I was diagnosed at that age.
I did start doing art before that, before I could read or write honestly, I was doing art. That was always kind of how my parents kept me entertained. I always had art supplies around the house. And, growing up at such a young age, I didn’t really necessarily understand what my vision loss meant and the things that came along with that. So I always kind of used art as an outlet to express all the ranges of emotions that I had that I didn’t necessarily have the language for when I was a child.
I often paint faces and portraits because, I think this is partially because of my vision loss in that, in the everyday world, navigating the world, I don’t necessarily see facial features unless I am, you know, very close up to the person, which is the case when I’m doing their makeup, et cetera. But in terms of painting, I love creating my own faces. I often use themes of like the third eye as a way of seeing because, you know, these things don’t work too good. So my mind’s eye often has to step in and do some extra work for me.
Roberts: Assistive technology allows Emily to work much more comfortably, which in turn allows her to better express herself.
Metauten: So at the Lighthouse Guild, at the Technology Center, on the third floor, we love them very much, I test drove a variety of technologies. Some of which being the virtual reality headset glasses. I tried out a good amount of those. I believe it was the Iris Vision headset that I had found the most useful.
So the Iris Vision, it is a large set of glasses. They fit just like any normal pair of glasses would, and I was able to zoom in at a fairly large resolution. There are also settings to adjust the brightness, and other settings to adjust based on the environment that you’re in. When I was trying it out with my art, I also have back issues, so posture is an issue for me. So with the glasses I was able to sit up in a proper posture and do the little zoom in on the Iris and be able to get those details done.
The VR glasses definitely unlock an ability to see more details more easily for me because peripheral vision isn’t designed to see fine details. That’s what the central vision is responsible for. So that’s what I have trouble with. But it made what I was already doing easier, and it also did give me inspiration because we’re trying to unlock the greater things in life that aren’t just beyond the basics for people with vision loss, and I think that was a great way to experience that and it gave me more of an inclusive sort of motivation.
Roberts: And that’s what inclusivity is all about. Making sure everyone is able to participate on their own terms. It’s something especially important to Bonnie Collura, a sculptor and professor at Penn State University. Her project, Together, Tacit, utilizes exciting new technology in which artists with vision impairment use haptic gloves to sculpt virtual clay. And not just that; they can collaborate with sighted partners through a virtual reality headset.
Collura: I teach at Penn State and a partner museum is called the Palmer Museum, and they had an exhibition called Plastic Entanglements, and I had two pieces on view in the exhibition. In the summer, the museum did a workshop with the Sight Loss Support Group of Central Pennsylvania and asked members to come visit the show. And in doing so, they contacted the artists exhibiting and asked for permission, if their work could be handled. And I not only gave permission, I was local and I went.
And a woman who I would later learn is named Michelle McGinnis, felt my sculpture, and it was just a very profound moment for me as a maker and, you know, a human. I wanted to know how she interpreted my work and she was sharing what she imagined it to be, and I felt that I lacked the tool to understand her visualization. And, at that moment, I felt like we were two parallel sculptors. She could not see what I had made, but I could not see what she was making, and I really was interested in imagining what could bridge this.
Roberts: Right, so you have this idea of bridging the two. OK. So then what did you do?
Collura: Well, then there was a long gestation period because of the pandemic. And so, after about two years, really focusing on the place that I work in the sculpture department at Penn State, I had the opportunity with the Studio of Sustainability and Social Action to begin sort of funding the start of it. And I worked with Penn State College of Engineering’s Learning Factory Capstone program. You can propose a project and students from all the engineering disciplines can choose projects they’re interested in.
And I also reached out to the Sight Loss Support Group very early to discuss how virtual mark-making for a creative endeavor might manifest. Did people want to mark in space as if they were drawing with their body? And we got feedback that manipulating form would be much more desirable. And since I am a sculptor and I’m very familiar with working with oil-based clay and material handling, I really thought, well, why not virtual clay?
And what we were thinking of was that someone could shape something in virtual space and it would be limitless. And so, what I learned was I needed to develop the technology and the analog approach simultaneously. We started with a glove because people felt that working with the hand was natural. And we started working with small haptic motors that would create vibration. That led to the next year of development, and so the glove has been 3 semesters over 2 calendar years in the making, and the last semester’s team of students are phenomenal and really made the glove a functional tool that can be used. It’s awesome.
Roberts: You call your project Together, Tacit. Explain the name.
Collura: Tacit knowledge is often not explicit knowledge. It’s not knowledge that we read in a book. It’s the kind of knowledge that each of us possesses, often through the act of doing. When you have practiced something repeatedly, the procedural knowledge to fly fish, the procedural knowledge to do a pirouette, the procedural knowledge to arc weld turns into something that’s more automatic. Your muscle memory knows what to do, and after you’ve been told or shown something with a series of steps, not only do you do it automatically, you often change it or augment it so that your signature is involved.
Roberts: So, if a visually impaired person has experience sculpting in clay, now, if I understand what you want to do, is create that same experience for them, but without the clay.
Collura: Yes, I am interested in the numerous and minute sensations that come from handling, pushing, moving clay. I think that the range in which those sensations can occur through the glove really have to do with the haptic feedback. I’ll speak specifically about the spring ‘23 gloves because that’s the most developed product.
So, it has printed circuit boards. It has six motor drivers and four flex sensors. The flex sensors lie along the back of four fingers, running from the wrist, and of the six haptic motors, there’s one on each fingertip and one in the palm. The code can be altered to change the intensity of motor vibration based on the flex sensors and how they communicate with the code developed in Unity, which is gaming software. That communication establishes our unique Together, Tacit software. And that software provides a visual of the carving block to the sighted person, and it also allows us to save the rendering into a file that can be 3D printed.
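The mapping Bonnie describes, where flex-sensor readings modulate each motor's vibration intensity once the hand contacts the virtual material, can be sketched in a few lines. This is purely an illustrative sketch, not the actual Together, Tacit code (which runs against Unity); every function name, sensor range, and scale factor here is a hypothetical assumption:

```python
# Hypothetical sketch of flex-sensor-to-haptic-motor mapping.
# Assumptions (not from the real system): a 10-bit flex reading
# (0..1023), fingertip "depth" into the virtual clay in meters,
# and a motor duty cycle in [0.0, 1.0].

def vibration_intensity(bend, depth, max_bend=1023, max_depth=0.05):
    """Return a motor duty cycle in [0.0, 1.0].

    bend  -- raw flex-sensor reading (0..max_bend)
    depth -- fingertip penetration into the virtual clay, in meters;
             zero or less means the finger is outside the material.
    """
    if depth <= 0.0:
        return 0.0  # no contact, no vibration, as described in the interview
    # Clamp both factors to [0, 1] so noisy readings can't overdrive a motor.
    bend_factor = min(max(bend / max_bend, 0.0), 1.0)
    depth_factor = min(depth / max_depth, 1.0)
    # Floor of 0.2 so any contact is perceptible; scale the rest by bend x depth.
    return round(0.2 + 0.8 * bend_factor * depth_factor, 3)

# Example: a half-bent finger pressed 2.5 cm into the virtual clay.
print(vibration_intensity(bend=512, depth=0.025))
```

The design choice worth noting is the contact gate: the glove stays silent until the hand actually intersects the virtual material, which matches Bonnie's description that wearers feel nothing until they hit the clay in space.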
Roberts: And how does the virtual reality aspect of the technology work?
Collura: The virtual reality works through Unity. So a sighted person puts on an Oculus. They establish a boundary. And once that gets established, we link it to the Unity software. And then, basically, you’re in a grayed out room, and when you launch the Together, Tacit program that works with Unity, you’ll see a large orange sphere.
And then the visually impaired or blind person puts on the glove. There’s a wire that connects to the back of the laptop. And they don’t feel vibration in their palm until they actually hit the virtual material in space. And so once they do, then as they move through space, they’ll get a whole range of vibrations. And as long as the sighted person is wearing that Oculus, they can see a one-to-one, real-time correlation between what the visually impaired or blind person is doing as they’re moving and what the sighted person is seeing.
Roberts: Great. So we know that the sighted person uses the virtual reality headset so that they can follow along with what the visually impaired person is creating with the glove. Now, how do they function as a team to create the final product?
Collura: So, the sighted person is functioning mostly as a spatial assistant while the blind person is making the virtual sculpture. The visually impaired or blind person chooses whether to engage in the much larger collaborative idea, and they do that after the sculpture is printed. Then there is a whole series of discussions: now that we’ve got this thing, how do you think you might want it to translate? Because the hardware is making a lot of the decisions in terms of what it feels like; the range of colors we can choose and those things are options, and those could be customized to a point.
A lot of people ask me, when asking about the two people together, they say, “why don’t you just use real clay?” And it’s a good question, and I really want the object that’s getting made to be indeterminate to both people. And I’ve learned through teaching that if a visually impaired or blind person was to use real clay, and they would start to form it, with all good intention a sighted person would inevitably start to signify it in terms of what it can be called. Like, Oh, that’s really interesting. It’s starting to look like a bunny or it might look like a cloud and already immediately, that begins to change the power dynamic on how something is created.
And so, if the sighted person cannot feel it, then they are without a sense. And if the low vision or blind person cannot see it, they are without a sense. So what gets created is, I feel like it leans more towards inclusivity in terms of what is made. Then after what’s made, if it wants to be transformed into different materials, I think that can come through a whole range of possibilities.
Roberts: Which brings me to my next question, Professor Collura. What’s next?
Collura: Honestly, what’s next is really to develop this technology a bit further. I have all of these ideas about how the objects in museums could potentially be coded. There could be a code made for every single object in a museum collection, and then it could be felt by someone who is visually impaired or blind. There could be ways in which the architecture of spaces that house objects and/or art could be coded to deliver sensory information. Not to mention the architecture itself, in terms of how it’s built, right? It should be more inclusive and friendly.
I just want the people who are interested and feeling the agency of art to be able to do that. If that can happen through the product successfully, then I’m all for any kind of offshoots from the technology doing other benefits. A lot of people have said that the virtual reality and haptic merger could be used so that it could recreate a place of employment for a visually impaired or blind individual so that they can feel out the location of everything in their workspace and memorize that on their own time in their own private space as they work through their job. If it can bridge into those ways, I think that would be absolutely amazing.
Roberts: I asked the same question to El-Deane: what’s next?
Naude: This is our first one that we’ve put into the market. We started working with QD Laser about two years ago and did a lot of testing. It was a long process to figure out which cameras work best; we tried it on various different cameras. We found that the Ultra Compact HX99 is the ideal camera because of its size and weight. It also means that you can have a smaller overall device and carry it with you anywhere. But I can also see this technology developing further.
Roberts: So, is Sony thinking about how this technology could be applied to other products? How does this technology potentially help everyone?
Naude: So, I think it’s actually the other way around at the moment. We have great technology within our camera division. We put a stake in the ground about a decade ago saying that mirrorless was the future and we discovered that a lot of our competitors, especially the traditional competitors in the market, didn’t really believe that mirrorless was the future of camera technology.
They’ve all now abandoned DSLRs and they’re feverishly trying to catch up with us. So we are currently the market leaders in the camera space, and we’ve been the leaders in terms of product development with a lot of really key features, especially the development of AI technology that wasn’t really presented as AI technology in the past, but is, right? So, face detection technology evolved into eye autofocus technology, which will track a person’s eye for AF. Then it evolved to add pets and animals. Now it’s planes, trains, cars, insects, you name it. There’s AI technology for autofocus tracking, and that really helps everybody in the industry, whether you’re a professional photographer or an entry-level photographer.
And that’s just one example, right. So, that goes across the entire camera space. There’s lens technology that we’ve been developing. There’s sensor technology that we’ve been developing, but that’s really great for everybody who has access to be able to use those cameras. So I see this more of taking that technology that we’ve developed on our camera system that makes these cameras fantastic for everybody and now making them accessible for everybody.
Roberts: So, I’m looking at Sony’s mission statement, and it says: “Sony’s mission is to be a company that inspires and fulfills your curiosity. Our unlimited passion for technology, content and services creates new experiences for people around the world. We make what tomorrow holds, and together we can build a smarter future.” So El-Deane, what does that mean to you?
Naude: So, that really encompasses everything that Sony is about. We’re a creativity company, so everything around Sony is based on creating content and enjoying content, creating movies, creating pictures, creating music, but then also enjoying those. And we want to expand that to everybody. So, this is one of those devices that doesn’t basically look at limitations of a person, but it gives them the ability to go and create their own content as well. Being able to go and be creative. And this is really at the core of the philosophy around Sony is this artistic ability, everything is based around creating art, creating something that either you or other people could enjoy.
Roberts: And the challenge is making sure that technology is available to all, including independent artists like Emily.
Metauten: As an individual with vision loss pursuing a career in the arts, it is not always easy to obtain assistive technology and other resources. There are government-funded agencies that often help provide these sorts of materials, but the government as a whole hasn’t quite caught up with the times, in the sense that our social and financial climates are different than they used to be.
And people of all abilities, but particularly people with disabilities like myself, are starting to pursue careers that are outside of the box, beyond nine-to-fives. When I decided to freelance and pave the way for myself, the government did not necessarily see that as valid work, let’s put it that way. So, I would like to see people gain more access to technology, not only for work but also for recreational purposes. Assistive tech isn’t only there to help us in our careers, of course; it’s also there to just help us. Us, meaning people who are blind and visually impaired.
It is meant to help us enrich our lives in everyday aspects. Because, you know, the stereotype is that people who are blind don’t engage in things like watching movies and TV, doing visual art, doing their own makeup, or picking out their own clothes. There’s a lot of stigma around that, and the more we are able to continue developing even better technology and getting as much of it out there as we can, the better it will be for our society as a whole, particularly for people with disabilities, but for society in general.
Roberts: Emily said it perfectly. When everyone has the opportunity to participate in something, we all benefit, and that’s especially true in how we create and consume art. Whether it’s in photography, painting or sculpture, the development of assistive technology, and access to it, is changing our relationship with art and, in the process, with each other. At its essence, art is a pursuit that deepens our comprehension of the world around us. And now, the new technological advancements we heard about earlier offer remarkable avenues for achieving this understanding: breaking barriers, redefining how we engage with our environment, and fostering meaningful collaborations. And, ultimately, expanding our world beyond one dimension.
Did this episode spark ideas for you? Let us know at firstname.lastname@example.org and if you liked this episode, please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts.
I’m Dr. Cal Roberts. On Tech and Vision is produced by Lighthouse Guild. For more information, visit www.lighthouseguild.org. On Tech and Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jane Schmidt and Anne Marie O’Hearn. My thanks to Podfly for their production support.