Hi all. Apologies in advance if I'm not posting in the right place. It turns out that even ChatGPT and Be My AI can be subject to oversights. I recently sent ChatGPT a screenshot of a retro video game password that I already knew, just to test it a little. The password had two letter N's in it, but GPT acted as though it only had one. When I pointed this out, it apologised for the oversight and gave me the correct one. As this was only a game, it's obviously not all that important, but I can see it becoming a problem in the future, should people send screenshots of things much more important than this. To report, or not to report? Your thoughts?
Comments
GPT 4 models can't count
Hi,
GPT 4 models can't count. If you ask GPT-4o how many Rs there are in the word 'strawberry', it'll tell you two. There are the new o1-mini and o1-preview models, which are apparently better at maths, science and coding in general, but you can't upload pictures or any files to them.
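Just to illustrate how trivial this task is for ordinary software (a quick Python sketch, nothing to do with how GPT works internally — the models read text in multi-character chunks called tokens, which is generally thought to be why letter counting trips them up):

```python
# Counting how many times a letter appears is a one-liner in plain code.
word = "strawberry"
print(word.count("r"))  # -> 3
```

So the failure isn't that counting is hard; it's that the model never really "sees" the individual letters.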
Counting is not the issue.
This is not a matter of being able to count. Rather, it's a matter of recognising what's there, and clearly, one N was missing. Let me show you exactly what I mean. The password I provided was BMNNIID63MF. GPT gave me BMNIID63MF. I'm sorry, but you'll have to read the two passwords character by character using the rotor to spot the difference. In any case, when I pointed out that there was a missing N, it apologised for the oversight and gave me the correct password.
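As an aside, a one-character slip like this is exactly what a simple text diff catches. Here's a quick Python sketch using the standard difflib module, with the two passwords from my example (obviously overkill for a game password, but handy if you ever need to double-check a long code an AI has read back):

```python
import difflib

correct = "BMNNIID63MF"   # the password in the screenshot
returned = "BMNIID63MF"   # what GPT read back

# Report every place where the two strings differ.
matcher = difflib.SequenceMatcher(None, correct, returned)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(f"{op}: {correct[i1:i2]!r} vs {returned[j1:j2]!r}")
```

This prints a single "delete" for the missing N, which is exactly the oversight GPT made.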
Come to think of it
I find it a little funny that it was able to recognise both I's, but not both N's.
Counting
Well, maybe it's not counting, but it does have issues with random sequences of letters and numbers. I've tried using it to solve inaccessible CAPTCHAs, the ones with just letters and numbers and no audio, but with no luck. I know CAPTCHAs are meant to be distorted and unclear, which doesn't help. The camera on my iPhone works much better for this: I pointed it at my computer screen to solve a CAPTCHA that Be My AI couldn't, and it got it right first time. Considering the way LLMs work, it doesn't surprise me that they can't deal accurately with random letters and numbers, because there's no pattern to follow; a password or CAPTCHA could contain absolutely any combination of characters. And yes, this is definitely a shortcoming of Be My AI, but given how LLMs work, there's probably not much they can do about it at the moment.
Well
I have decided to make my data available for model training, and I'll be feeding it many of these from now on. Hopefully, someone will see them and the issue will slowly be addressed.
Using AI
You should definitely report it, and hopefully it gets resolved soon. The thing is, though, when we're using artificial intelligence, even though independence is the main goal, we still have to keep in mind that, no matter how good these tools are, they're still going to be inaccurate at some point. I wouldn't recommend relying on them for extremely important documents; I think there's a disclaimer about this on the website, at least. We still have to use our best judgment, intuition, and common sense. Even when you're working with a live volunteer, you always have to be careful with sensitive information. I understand, though, that it's hard to find people you trust, if you have anyone around you at all. Once we depend on another person or a program to get a job done, there's always the possibility of something going wrong, but of course, you already know this.