Toilet seat design, slave harness, cyborg-like look: these are some of the words people have used to describe NOA, a mobility aid device whose developers call it "An AI mobility companion on one's shoulders".
Last week I visited SightCity, an assistive tech exhibition held every year in Frankfurt, Germany, and had the chance to test NOA for a second time. Below are the detailed impressions I was able to gather from a very short test drive.
What is NOA and what does it promise to users?
NOA is a mobility device with a vest form factor that is worn like a backpack. Its features include:
- navigation
- obstacle detection
- descriptions through AI.
I will talk about my impressions of how every feature functioned in a moment, but let's first focus on how the device looks in general.
Physical Description
The device itself consists of three parts: Imagine a vest made of tech components that sits on both shoulders and partly covers the area between the back of the neck and scapula. The part that falls on your right shoulder houses the computer, the part on your left shoulder sports the cameras and sensors, and right below your neck there is the battery pack that powers all the tech inside the device. There is also an adjustable strap that connects all three sections and enables you to position everything, especially the part carrying the camera, for best performance and according to your liking.
Let's address the elephant in the room early on. This device is huge. I'm not joking; it is probably as bulky as you think based on my descriptions alone. Moreover, it feels like a device straight out of a science fiction book or movie, one that makes you look like an alien. As a person with a big love for science fiction and nerdy tendencies, I find the whole package very cool, but for those with appearance-related concerns, this thing will confirm every fear you’ve ever had.
However, bulky as it is, it doesn't feel heavy despite weighing 1.2 kg, I believe because the weight is distributed very well across both shoulders and the back.
Regarding the "toilet seat" design criticism, I saw that they updated how the device looks and feels this year. Before, it was a wider, flatter block with an opaque finish; now the bulk is concentrated more tightly into an almost cubical shape, and the casing feels shinier. For what it's worth, they also told me the device comes in various colors. I asked whether they would keep tweaking the design, but they said this is the final one and wouldn't change much unless there was a great need.
Buttons and controls
As for controlling all the features, the device comes with 9 physical buttons situated on the right shoulder. The most important of these are clustered together in a 2-by-3 grid: two rows of 3 buttons, or equivalently 3 columns of 2 buttons, for 6 in total. These 6 buttons control the navigation, obstacle detection, and AI functions of the device.
When you place the index, middle, and ring fingers of your left hand on each two-button column, your index finger falls on the buttons that control navigation, your middle finger on obstacle detection, and your ring finger on AI. The top button of each column moves you through certain options, whereas the bottom ones are programmed to quickly take specific actions related to that feature category. With your hand positioned like this, your thumb comes around from behind and naturally falls on the “select” button, which confirms the selection you make with the top buttons I mentioned earlier.
Obstacle detection
Up until this point, I’ve tried to be as neutral and objective as possible, but what this piece of tech achieves with regard to obstacle detection makes me as excited as a kid about to go trick-or-treating. Here’s the main deal: the device has a camera with a 170-degree field of view, very close to a human’s, that can see up to 10 meters even at night and detect obstacles at foot, chest, and head level. In addition, it can measure depths from 30 cm to, I believe, 3 meters, so it can also alert you to holes and stairs.
But wait a minute, this is far from the end of what it can do. Once it detects obstacles, it alerts the user via sound through a pair of bone conduction headphones. However, we are not talking about ordinary beeping that only conveys binary information, i.e., whether there is an obstacle or the way is clear. What NOA does is exactly what I had been dreaming an obstacle detection system would do all along.
According to the developers, thanks to their collaboration with Honda research, their computer vision algorithm operates on a logic of preventing collisions, so it only warns users of the obstacles whose collision likelihood is highest. Unlike most solutions on the market, it doesn’t react to anything and everything.
We are not done, of course. Users are notified of obstacles in three ways. First, you hear a sound whose intensity increases or decreases based on the distance or collision probability. Second, directional stereo sound gives you an idea of where the obstacle is. Last but not least, the tone of the sound gives you a rough estimate of the obstacle’s height, whether it’s at chest or head level. What’s even cooler is that, in the app, you can set the device to only warn you about head- or chest-level obstacles if you believe your cane skills are good enough that you don’t need NOA’s help detecting what a regular white cane would.
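To make the three-channel feedback idea concrete, here is a minimal sketch of how such a mapping from obstacle position to audio parameters could work. This is purely my own illustration, not Biped's actual implementation; the function name, tone frequencies, and value ranges are my assumptions, loosely based on the specs mentioned above (10 m camera range, 30 cm minimum depth).

```python
def obstacle_to_audio(distance_m, angle_deg, height):
    """Illustrative mapping of an obstacle's position to audio parameters.

    distance_m: distance to the obstacle (closer -> louder)
    angle_deg:  lateral angle, -85 to +85 degrees (negative = left)
    height:     'foot', 'chest', or 'head' (higher -> higher tone)
    """
    # Volume ramps from 0 at 10 m (camera range) up to 1 at 0.3 m (min depth)
    volume = max(0.0, min(1.0, (10.0 - distance_m) / (10.0 - 0.3)))
    # Stereo pan: -1 = fully left ear, +1 = fully right ear
    pan = max(-1.0, min(1.0, angle_deg / 85.0))
    # Tone rises with obstacle height (frequencies are made up for illustration)
    tone_hz = {"foot": 300, "chest": 600, "head": 900}[height]
    return {"volume": round(volume, 2), "pan": round(pan, 2), "tone_hz": tone_hz}
```

So a head-level obstacle right in front of you at 30 cm would produce the loudest, highest-pitched, centered sound, while a foot-level one 10 meters off to your left would be quiet, low, and panned left.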
Personal impressions after testing obstacle detection
Oh. My. God. I think I would put it lightly if I said I was impressed. I was mind-blown. The obstacle detection worked like a charm, even in such a noisy indoor environment as an exhibition. Here’s how I set up the test:
I put on the bone conduction headphones, and the person giving the demo set the obstacle detection distance to 1 meter, which she said is ideal for indoors. Then I thought it would be more realistic to leave my white cane behind, even though they say this solution is meant to be used alongside a white cane or a guide dog. So I folded away my cane and found myself back in my childhood days, when I would roam freely within a limited space.
I started out slowly, and first tested if my above description of capabilities worked as advertised or not. Everything worked beautifully. The device gave me feedback about obstacles in three different aspects: The intensity of the sound increased when I approached an obstacle, I was able to make sense of the direction of an obstacle perfectly through stereo sound, and the tone of the sound changed based on the height of the obstacle.
As I became more comfortable with the way the device functioned, I increased my speed and almost started to jog. Dodging between booths and racing among people, I felt like I was literally inside an audio/video game. The reaction time was so good and the latency so low that I bumped into no one and nothing, even at speeds I estimated to be 6.5-7 km/h. Whether this was owing to luck or the device itself I can’t say with 100% certainty, but based on my limited observation, I strongly believe it was the device, the low latency specifically. I had asked about the latency before, and if I’m not mistaken, they said it was around 250 to 300 ms, an outstanding achievement made possible by processing the data locally on the computer side of the device.
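To put those numbers in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, using the latency and speed figures above) showing how far you travel before an alert can even arrive:

```python
speed_kmh = 7.0
speed_ms = speed_kmh / 3.6  # convert km/h to m/s (about 1.94 m/s)

# Distance covered during one round of processing at the claimed latencies
for latency_s in (0.25, 0.30):
    distance = speed_ms * latency_s
    print(f"{latency_s * 1000:.0f} ms latency -> {distance:.2f} m traveled before the alert")
```

At a near-jog, you cover roughly half a meter during a 250-300 ms delay, which is why keeping latency that low matters so much and why anything slower would make bumping into things at that pace almost unavoidable.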
AI features
Like every company nowadays, Biped also claims AI offerings that can enhance our mobility experience. But is that really the case? I did a few quick tests of the AI features and found them promising, though there is room for improvement.
NOA is capable of giving AI scene descriptions, finding stairs, recognizing bus stops, recognizing crosswalks, and detecting empty seats like on trains and buses. There is also a live AI feature, but it works a bit differently than the live AI we have been hearing about from OpenAI and Google Gemini. Now to my personal impressions:
You use the AI features through the column with the two AI buttons. When you press the bottom button, you get a very quick but detailed scene description, arriving about 3.5 seconds after the press. They use Gemini as the AI model, and the description I got was everything I would want to hear in a mobility context. It was apparent that it operated on some kind of special prompt, because the descriptions emphasize mobility-related aspects of a scene, such as the position of escalators relative to yours.
According to a sighted friend, the information the model gave was always accurate, but as with every language-model-based AI, it’s prone to hallucinations, so you should use your own judgment at all times.
I should also mention the feature they call “live AI”. They said it is still in beta, but what it ultimately does is capture 30 frames per second via the camera, process them, and alert users to important things in their surroundings, like stairs, escalators, elevators, and crosswalks. It’s not a live AI you can interact with and ask questions about your surroundings; it’s more like an always-aware mobility instructor that gives you the most relevant information for getting around.
I said there is room for improvement when it comes to AI mainly because it would be great if we could interact with it by asking questions. They say they’re working on this feature, but it’s also important to note that they are limited by the capabilities of LLMs open to developers, and as soon as a continuous live AI model becomes available, I’m sure this company has the ability and technology to capitalize on it quickly.
Navigation
This will be the weakest part of my quick review because, staying indoors the whole time, I unfortunately couldn’t try their GPS navigation feature and couldn’t see whether they had come up with a better way of providing GPS directions, or whether it offered more than apps already available on smartphones. However, I should give them credit, because they regularly organized demo sessions outside to showcase the navigation and crosswalk-finding features of the device. It’s just that I couldn’t make time to go outside and try it. To be very honest, I had already been so impressed by the obstacle detection and AI features that I skipped over testing the navigation part.
Anyway, with the disclaimer that this is not my personal experience with the feature, I can say that the device gives GPS directions, probably using the Google Maps or Apple Maps API, and the camera with AI comes in for the last 10 to 15 meters to tackle the “last 50 feet” problem and find the door for you. They have a video on their website demonstrating this door-finding part, so those who are interested can go watch it.
Battery life, water resistance, price
They mention on their website that the device can be used for 6 hours with the two included batteries, but I’m not clear whether each battery lasts up to 6 hours, or each lasts 3 hours for 6 hours in total.
As for water resistance, the user manual states that the device has an IP43 rating and can be used in light rain, while the FAQ on the website says it can be used in light and moderate rain. I assume camera performance would also be affected by rain, but I guess that’s a topic for another discussion.
Everything I talked about above comes at a hefty premium price. The device sells for a one-off payment of EUR 4990, or via subscription at EUR 2899 upfront plus EUR 49 monthly for 48 months in my country. As a broke student, there is no way I can pay that price anytime soon, and I suspect the same holds for the majority of the community. Nonetheless, this broke student was so impressed by what he saw that he set aside an afternoon and wrote this review.
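For the curious, a quick bit of arithmetic with the figures above shows what the subscription route adds up to over its full 48-month term:

```python
one_off = 4990                       # one-time payment, EUR
subscription_total = 2899 + 49 * 48  # upfront plus 48 monthly payments, EUR

print(subscription_total)             # 5251
print(subscription_total - one_off)   # 261
```

So spreading out the payments costs EUR 261 more in total than buying outright, which seems like a fairly modest financing premium, though it obviously doesn't make either option affordable.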
Last Words
Finally, I would like to say again that I’m really impressed by NOA and what it achieves with a camera and a computer. I couldn’t try Glide, because for some reason they’ve been missing the SightCity fair in Germany for the last two years, but I can confidently say that NOA is the best package among what I’ve seen so far, and it comes very close to what I want from a mobility device as an active blind person. The obstacle detection is very well implemented and leaves me nothing to complain about. The AI is getting there, but the company is constrained by what the tech providers offer through their APIs. I only hope the GPS navigation experience is as good as the rest of the device.
My criticisms concern the bulky design and the price, two things that can’t be easily fixed. The majority of the bulk comes from the processing unit, i.e., the computer, and the battery that powers all that hardware. Mael, the CEO of Biped AI, says they had to compromise on size because of the need to process images locally in order to reduce latency. I want NOA’s same instant reaction in a smaller form factor, but I guess you can’t have your cake and eat it too. I’m just imagining a future where 5G finally delivers those promised latencies of 5 ms, so that we could talk about less bulky options where all the processing happens in the cloud.
I tried to be as objective and unbiased as possible, but I was excited about this device, so that may have made me slightly biased. Feel free to ask me questions or ask for clarifications, and I will be happy to share all I know based on my short demo and my obsessive online research of the device. I hope I could at least give you an idea of what NOA looks like and what it can do.
Comments
Interesting
I came across this at Sight Village last year. I was able to try on the jacket, which mrs grieves said made me look like RoboCop, and hear some of the noises, but didn't really get to actually go anywhere with it. At the time it felt like there were a lot of fiddly buttons which felt a little awkward to use, but I didn't have long enough with it to really try them.
I think the two big problems it has are the price and the form factor. I don't really like the idea of getting dressed before I go on a walk - it's just another thing to have to put on.
But it's really interesting to hear your comments on the obstacle detection - I wish I had managed to try that out, as it's not that impressive if you're just standing still.
Did you happen to try WeWalk for a comparison? Personally the biped was bottom of my list of navigation aids - it didn't feel as revolutionary as Glide and I preferred having the cane with WeWalk.
I completely agree with…
I completely agree with everything anyone would say about the form factor; price aside, it's the biggest detractor of this device. It really feels like you're putting on a jacket, albeit a tech one.
Even though there is a possibility that I can get this through my insurance, I still haven't made up my mind because of the exact concern that I would get it with public money but let it rust somewhere in the closet after using it only a few times. For example, nowadays before I leave home, I've started to ask myself if I would take this along with me and every time I run this scenario in my mind, I realize that I would probably feel too lazy to deal with it and leave it home.
But in rare cases, I really want to have something like that, because you need it at that very specific moment. For me, in an ideal world, it would be great to have a pair of AI glasses giving me constant feedback along with a regular white cane. This setup would solve 95% of the problems specific to my use cases.
I was actually planning to compare it to WeWALK in the article, but then I thought it would be unfair, because one is an 800-euro device while the other costs an arm and a leg. If you really want my opinion, I would say NOA has way better hardware and, based on 15 minutes of interaction, better software that utilizes that hardware. Also, WeWALK's obstacle detection can't keep up with my pace at all; I need more responsive hardware. One question about WeWALK though: can you get richer obstacle detection feedback from WeWALK when you use it with headphones?
It was a frustrating experience for me
In fact, when I saw the device and tried it, it gave me a headache, and its annoying appearance contributed to that as well.
So, I didn’t inquire further and quickly moved on.
I admit that I’m sad because I didn’t give it a full try.
Thank you for writing
Your writing is good, detailed, and concise. I've never heard of SightCity, but it sounds interesting.
As for the device itself, even if it's bulky and cyborg-inspired, if it gets the job done, who cares?
I hope similar startups decide to grace the stalls at Purple-Fest Goa happening in October this year in India.