Description of App
From Your Eyes (FYE) is an assistive technology that combines human and artificial intelligence to offer a fast, qualified visual-description service to visually impaired users. There are three hundred and thirty million visually impaired people worldwide; one million of them are in Turkey, and ten million are in the United States. FYE consists of three parts: visually impaired users, trainable artificial intelligence services, and describers. A visually impaired user can take a photo or select an image from their existing library and upload it to the application. The artificial intelligence delivers a draft description immediately after the image is uploaded. If the user thinks the draft should be improved, they can note in the description field which aspects they want detailed and send the request to the describers with a single tap.
This request reaches the describers as a notification. When a describer opens the application, they select an image from the request pool and improve the text by comparing the image with the draft description. The process ends when the improved description is sent back to the user. The underlying AI services learn from these improvements through machine learning.
Comments
Try to get it to describe a video
I tried to get it to describe a video, and waited, and waited, and waited. Eventually it was uploaded, but then it didn't give any description at all. It just said one photo and then gave me a bunch of choices, like how to help it improve, things like that. But as far as giving me any sort of description, it failed miserably. Maybe it wanted to give me a hope of a description?
Can you list those choices?
What is this bunch of choices that you were given?
LLM
Couldn't find this info in the description, what LLM is used to generate the responses? They all have their strengths and weaknesses and it would be interesting to know what to expect.
OpenAI/GPT
It utilizes OpenAI's language model, but has its own independent AI for processing submitted files. The CEO informed us they had been looking for other alternatives after the DDoS attacks OpenAI had faced earlier this month.
Thank you
Thank you for the fast reply. Do you know which of OpenAI's models is used, GPT-3.5 or GPT-4?
It said
After it showed that a video was uploaded, it said: Teach your artificial intelligence, Send this to a volunteer to be described, Your descriptions, and so on.
GPT 4 is now free for everyone, right?
Well, they should switch to GPT 4 soon, but I don't know what they're up to. All I know is that they were looking for some other alternative back when GPT was down.
As for video descriptions, I don't know what the exact cause is, but I would recommend waiting for the new AI trained with 15 million images. By the way, have you tried choosing the option to teach the AI on that screen? Or have you tried having FYE describe a photo instead, and compared the screens you get?
An Update
According to the company's founder, in a conversation I had with her, you can change the app's interface language, as well as the language of descriptions, by going to the My Profile option on the home/main screen and then selecting the My Preferences option. I also suggested that they make it possible to change the language from the device settings.