Description of App
HelpUSee – Be My Eyes, Offline
Helping those who can't see, to see the world.
HelpUSee is an AI-powered assistant designed for people with visual impairments or low vision. Just take a photo, and the app will instantly describe the scene using cutting-edge AI technology — clearly, concisely, and completely offline.
Key Features:
1. Instant Scene Descriptions
Capture any scene and get a short, meaningful explanation powered by a local AI model.
2. Offline & Private
All image processing and reasoning are done directly on your device — no internet, no cloud, no privacy concerns.
3. Multilingual Voice Support
In addition to English, HelpUSee can speak other languages using iOS offline voice packs.
4. Unlimited Usage
No subscription, no usage limits. Use it anytime, anywhere.
Setup & Device Requirements:
* One-time model download is required at first launch. A Wi-Fi connection is recommended.
* Best experience on iPhone 12 or newer (8GB RAM). Minimum supported: iPhones with 4GB RAM.
* For quick access, you can set a shortcut via Settings > Action Button > Shortcut > Open App and choose “HelpUSee”.
This app was created out of love — to help my mother, and now, to help many more around the world.
Comments
This app entry isn't complete.
I'd recommend posting the description from the App Store/iTunes Store.
Administrative Error - Now Resolved
Hi Brad,
This was an administrative error on my part; thanks for pointing it out. I had attempted to reformat the app description by re-copying it, but it didn't go through.
Ah ok.
Well, it's fixed now :)
It's interesting…
If you flip your phone upside down and use the back camera, it puts it into a kind of automatic mode. After so many seconds, it will snap a picture and then describe the image to you. That is pretty cool. The rest of the app? Meh.
I do love that they made it an offline app. However, for whatever reason, the AI is kind of slow; I actually get faster response times with Speakaboo. Very cool concept, though, and as I said above, I am digging the upside-down automatic mode. I don't think I will be using it to replace any of my current AI models/applications anytime soon, however.
Tell me more
@Brian the upside down thing is also how you hypnotize a chicken, just in case you need some unnecessary trivia.
It sounds like this does one shot at a time. I'm waiting for the one that looks over and over, like Andy's ducks, and that you can tell to watch for something. The camera app comes close, but you can't tell it things to refine what it is describing, like "tell me if the focus is sharp."
I was thrown off by Andy's Ducks
Then I remembered the Be My Eyes video. Moving forward, whenever I talk about what we truly need, a proactive AI, I am going to refer to Andy's Ducks.
That one
I couldn't remember which app it was, just that it was telling Andy what the ducks were doing because he asked something, and it was commenting on his dog in real time. I think that might be what I need. Though a single-shot app that you can tell to watch for something specific and give feedback each time you take a picture might also work. I'm trying to focus a telephoto lens, and it just isn't working with the current makeshift process of switching between camera and AI-describer apps.
ChatGPT 4o
That was supposed to be ChatGPT 4o with the live AI camera functionality.
I honestly do not know if they ever got that off the ground, however ...
@Brian
I remember something about people being disappointed in that.
I guess a single-picture AI, rather than a live one, might work for what I'm doing. The way I'm doing my project right now has too many apps and functions going on to work well, and I feel like I'm wasting time. I might be able to go directly through the AI app by using a Bluetooth keyboard to press the picture button without touching the phone. One of those things I just have to keep thinking about and working on.
Think outside the box
What you truly desire is an intelligent companion robot; thinking within the constraints of a smartphone will only frustrate you. The form factor of the hardware means there is only so much it can do, like what they used to say about some fancy computers: "well, it can't make coffee."
A robot can watch you constantly and tell you what you want to know. Struggling with a multi-level touch menu? Even with a sighted person beside you giving instructions, it's hard to manage unless the person holds your hand and does it for you. With a robot, you just tell her what you want.
So maybe we should pin our hopes on robots, and talk more about it so more people will put effort in that direction?
Coming back to the app at hand
I think this is a very impressive attempt with the use of on-device AI.
I wish the app allowed me to download the AI model using my mobile data anyway; I had to come to the office and download it over the office Wi-Fi, as my new home doesn't have Wi-Fi yet.
The app describes things fairly quickly, certainly faster than Be My Eyes, but lacks detail. The descriptions are short and to the point, which may be preferred in some situations. But in other situations, we might prefer the longer descriptions of Be My AI.
I don't use Speakaboo anymore because its free version puts some significant restrictions in place, so this slots in very nicely in my Tools folder in its place.
I am going to play with it more and see where it may be most useful.