When is Be My Eyes going to get the live feature that was demonstrated a year ago?

By Oliver, 25 April, 2025


Forgive me if I'm wrong here, but Be My Eyes still doesn't have live AI video feedback, as in holding up the phone and having a live back-and-forth chat with an AI about what is being seen?

This was demoed a year ago. I wondered where it is with the rollout, or is the back end proving too expensive?

Thanks.


Comments

By Chris on Friday, April 25, 2025 - 08:40

My guess is they're not confident enough to release it. Given that ChatGPT models can make things up, they want to make sure it provides information that is as accurate as possible. It'll be ready when it's ready. I'd much rather have a polished product than a buggy mess, or worse, something that frequently provides misleading information for such an important task.

By Devin Prater on Friday, April 25, 2025 - 11:40

It takes too much GPU power; they say they're already struggling for GPUs, and Google will probably get there first. ChatGPT's "live" mode is more like: it takes a pic, answers questions, waits for another question, takes another pic, and so on. It is *not* live video, and it is disappointing when reviewers and mainstream news sources call it that, let alone if OpenAI does too. So we'll have to wait longer. Probably much longer. I suspect that when it's in beta, let alone released, everyone will know and will be talking about it here.

By IPhoneski on Friday, April 25, 2025 - 13:40

I’m increasingly doubtful that this feature will ever be introduced. Right now, the image descriptions contain so many factual errors that I use this app only for funny pictures from the internet. But when I need reliable information, I turn to the GPT app or Google’s Gemini.

By Stephen on Friday, April 25, 2025 - 13:40

It does. Go into voice mode and turn on your camera. Once you do that, ChatGPT can see what your phone sees.

By Holger Fiallo on Friday, April 25, 2025 - 14:40

Maybe they did an Apple and it doesn't exist, similar to the new Siri.

By Gokul on Friday, April 25, 2025 - 17:40

I mean, it's with OpenAI. From what I've heard, the issue is exactly what @DevinPrater said.

By TheBlindGuy07 on Friday, April 25, 2025 - 23:41

Mike... the CEO of Be My Eyes, I think, had an interview with Double Tap where he said that if he had known this earlier he would never have done that video; they only did it because OpenAI asked them to, but now OpenAI just won't give access to those more powerful models, maybe or maybe not because of the reasons mentioned above.
I really want to have a Be My Eyes app on Mac, though. Can't you guys please just roll out the iPad version to macOS with Catalyst in the meantime?
At least it would be something... Just an opinion of course :)

By Callum Stoneman on Saturday, April 26, 2025 - 09:37

As said above, Mike Buckley has said several times now that if he'd known how long it would take for OpenAI to make this feature available, they wouldn't have released the video. The "live" mode we have in ChatGPT isn't the same thing.

Ultimately, Be My Eyes can't release the feature until OpenAI gives them the go-ahead, which they haven't done yet. Be My AI was in the same situation; we were just extremely fortunate that they let Be My Eyes get there first with that feature.

It will come eventually; we just need to wait it out, unfortunately. From Mike's tone in some of the interviews I've heard, I would be willing to bet that the Be My Eyes team are just as frustrated as we are.

By mr grieves on Saturday, April 26, 2025 - 12:04

It shouldn't be far away now. You probably already heard this if you were listening to Double Tap, but Mike was on there saying that the Mac version was a couple of months away, all being well. I've lost track of time a bit, but that was probably a few weeks ago, so it should be fairly close now.

I am also really looking forward to it.

By SSWFTW on Saturday, April 26, 2025 - 17:13

This will be great when it comes out, and I found it so frustrating for the first six months after he said it would be coming out within 8 to 12 weeks, or something like that. But I agree with most of the comments here that it just requires too much processing power to let people use it that much. Also, it's hard to know exactly how much the video mode in ChatGPT hallucinates, but I can confirm it definitely does quite a bit. I'm surprised that Be My AI's image recognition is so good; I guess it's the layer of instructions they put on top.

By peter on Saturday, April 26, 2025 - 21:30

If you are looking for an interactive speech and video tool, try Google's AIStudio tool at: https://www.AIStudio.google.com No, it isn't continuous video, but you can use your camera interactively with speech to ask the AI what it sees. From what I've observed using different chatbots, Google Gemini either has a lot more compute resources or is a much more efficient chatbot. Responses from Google's tools are generally much faster than from some of the other AI services.

Unfortunately, as people have pointed out, none of these services is perfect, and one does have to use them with care. One big advantage of a tool like Be My Eyes that is specifically geared to the blind is that the responses are usually much more descriptive and useful, with detailed information that is more appropriate for people who can't see.

Yes, continuous video would be a really nice feature. The Be My Eyes demo certainly seemed like that's what it was. But even a service that took a picture every second or two would be great and would use a lot less compute power. I'm sure these features will come some day, but developers have to be especially careful with the responses given to people who can't see, because they rely on those responses being correct. --Pete 🤞