Yet another shortcut to make getting descriptions of screenshots easy

By user26335377, 18 January, 2026

Forum: iOS and iPadOS

Hi all!

One day I realized that getting screenshot descriptions required too many steps: take a screenshot, manage to tap the "Share" button in time, select ChatGPT, write a prompt, wait for a response -- far too many steps just to look at a meme a friend sent you in a messenger :) Windows and NVDA have long had addons that perform the same task with a single keystroke. If I understand correctly, the problem on iOS is that a third-party application simply cannot read screen content, show windows on top of the current application, and the like, which is why all similar solutions involve sending the original image through the "Share" dialog. We are therefore limited to the capabilities of the iOS Shortcuts app.

Acknowledgements:

  • @aaron ramirez for the excellent Shortcut; studying it helped me understand that the capabilities of iOS Shortcuts are generally sufficient to solve this kind of problem;

  • Carter Temm for the great NVDA addon, which let me learn about the most popular models used for generating image descriptions.

Quick Start

For those who are not interested in reading long manuals, installation instructions and feature descriptions, here is a link to the Shortcut itself; its initial setup should be intuitive. Simply assign a VoiceOver command to run the Shortcut, which will then take a screenshot and present a menu of available options.

Current functionality

The Shortcut currently supports generating image descriptions using several popular models, and adding a new model with an OpenAI-compatible API is straightforward. The Shortcut's current functionality includes:

  • Getting image descriptions using a given model;

  • Optical character recognition (OCR) from images using the engine built into iOS;

  • Follow-up questions about the image whose description the model generated;

  • Using a screenshot as the input image, or sending your own image through the "Share" dialog;

  • Displaying lists, tables and other formatting elements in the generated image description;

  • Copying the last answer to the clipboard;

  • Optional sound notification upon completion of description generation;

  • The ability to get answers in any language supported by the model (due to technical limitations of Shortcuts, the language itself must be specified manually).

Setting things up

  1. Create an API key (or use an existing one) on one of the following platforms:

    • OpenAI Platform -- paid, supported models are 'GPT-5.2 Pro', 'GPT-5.2', 'GPT-5 Mini' and 'GPT-5 Nano';

    • Google AI Studio -- provides a free tier, supported models are 'Gemini 3 Pro', 'Gemini 3 Flash' and 'Gemini 2.5 Flash-Lite';

    • Anthropic Developer Platform -- paid, supported models are 'Claude Opus 4.6', 'Claude Sonnet 4.5' and 'Claude Haiku 4.5';

    • xAI Developer Console -- paid, supported models are 'Grok 4' and 'Grok 4 Fast';

    • Mistral AI Console -- provides a free tier, supported models are 'Mistral Large 3', 'Mistral Medium 3.1' and 'Mistral Small 3.2';

    • Pollinations AI -- provides some tokens available for free, supported model is 'Pollinations';

    • Groq Cloud Console -- provides a free tier for personal usage, supported model is 'Llama 4'.

  2. Install Shortcut by following the link.

  3. A dialog will appear asking you to configure a few settings:

    1. Play sound: determines whether to play a sound after description generation completes; enter 'y' or 'n';

    2. Description model: the model that will be used for generating image descriptions; choose one for which you have an API key;

    3. Model API key: the API key for the model chosen in the previous step;

    4. Description prompt: the prompt that will be sent to the model to get an image description; enter '/default' to use a preset prompt, or enter your own;

    5. Max tokens: the maximum number of tokens that may be used while executing the request;

    6. Language: the language in which the model will generate responses; enter the full name of the language, for example 'English'.

  4. Assign a VoiceOver command to run this Shortcut:

    1. Go to Settings --> Accessibility --> VoiceOver --> Commands --> All Commands --> Shortcuts --> CloudVision;

    2. Assign a gesture or a keyboard command to this Shortcut.
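For the curious: under the hood, a description request to an OpenAI-compatible endpoint (the first platform in the list above) is roughly shaped like the following Python sketch. The field names follow the public Chat Completions format; the model name and the helper function are only illustrative, not the Shortcut's actual implementation.

```python
import base64
import json

def build_vision_request(image_bytes, prompt, model="gpt-5-mini", max_tokens=1024):
    """Build an OpenAI-compatible chat request with an inline base64 image."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    }

# The resulting dictionary is what gets serialized into the POST body.
body = json.dumps(build_vision_request(b"\x89PNG...", "Describe this image."))
```

The other providers in the list accept the same message shape or a close variant of it, which is why adding OpenAI-compatible models later is easy.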

Usage instructions

  1. Perform the VoiceOver command, or select CloudVision in the "Share" dialog, to get an image description.

  2. A menu will appear with the following options:

    1. Describe image: get an image description using the model selected in the Shortcut settings;

    2. Recognize text: recognize text from the image using the OCR engine built into the system;

    3. Cancel: quit.

  3. After selecting one of the options, it will take some time to receive a response (description generation can take about ten seconds, while text recognition is almost instantaneous).

  4. The results of the image analysis will appear in a separate window. Once you have read them, you can close the window using the button in the upper right corner.

  5. After viewing the generated description, a menu will appear with the following options:

    1. Chat with model: ask an additional question about the image being analyzed;

    2. Copy last answer: Copy the model's last answer to the clipboard;

    3. Cancel: quit.

  6. After selecting the "Chat with model" option, a dialog will appear asking you to enter your question.

  7. Similar to the original description, the generated response will appear in a separate window after a while.

  8. After viewing the response, you can continue asking follow-up questions or end your interaction with the Shortcut.

Adding your own models

The following instructions involve extensive interaction with the Shortcut implementation. If necessary, detailed instructions for creating Shortcuts can be found here.

  1. Open the Shortcuts app, find the CloudVision Shortcut and, using the VoiceOver rotor, select the "Edit" action.

  2. Find the description_model variable; in the text field located right before it, enter the human-readable name of the model you are adding.

  3. Find the description_model_api_key variable; in the text field located right before it, enter your API key (if needed).

  4. Find the description_models variable; using the "Add New Item" button located right before it, create an entry for your model, selecting Dictionary as the value type.

  5. For the key, enter the model name specified in step 2.

  6. Click on a value to go to the dictionary editing screen.

  7. Create the following entries with parameters for your model:

    1. Required, type: text, key: 'url', value: the URL to which requests will be sent, for example 'https://api.openai.com/v1/chat/completions';

    2. Optional, type: text, key: 'user_agent', value: the User-Agent header with which requests to the model will be sent; if omitted, the default is 'curl/8.4.0';

    3. Required, type: text, key: 'model_name', value: the value of the model field in the request;

    4. Optional, type: text, key: 'request_messages_key', value: the request key that holds the array of messages; if omitted, the default is 'messages';

    5. Optional, type: text, key: 'request_tokens_key', value: the request key that holds the maximum number of tokens as an integer; if omitted, the default is 'max_tokens';

    6. Optional, type: dictionary, key: 'additional_parameters', value: a dictionary whose elements will be added to the request as-is; can be used to specify parameters such as 'max_tokens' or 'temperature'; if omitted, the default is an empty dictionary, i.e. no additional parameters are added;

    7. Optional, type: text, key: 'response_messages_key', value: the key (or path) at which the response contains the text of the answer; if omitted, the default is 'choices.1.message.content';

    8. Optional, type: text, key: 'response_error_key', value: the key at which the response contains a possible error message; if omitted, the default is 'error'.

    Note: To omit any field marked as optional, simply do not create a dictionary entry for it.

  8. After filling in all the specified fields, you can complete editing the Shortcut.

  9. To switch between already added models, simply assign the description_model variable a value that matches the corresponding key in the description_models dictionary.
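To make the required/optional split concrete, here is a small Python sketch of how the optional fields from step 7 resolve to their documented defaults. The merge helper is my illustration, not the Shortcut's actual logic; note that 'choices.1.message.content' uses 1-based array indexing, as Shortcuts key paths do.

```python
# Documented defaults for the optional model-configuration fields.
DEFAULTS = {
    "user_agent": "curl/8.4.0",
    "request_messages_key": "messages",
    "request_tokens_key": "max_tokens",
    "additional_parameters": {},
    # Shortcuts key paths index arrays from 1, so this is the first choice.
    "response_messages_key": "choices.1.message.content",
    "response_error_key": "error",
}

def resolve_model_config(entry):
    """Merge a user-supplied model dictionary with the defaults (sketch only)."""
    config = dict(DEFAULTS)
    config.update(entry)
    return config

# Example entry for a hypothetical OpenAI-compatible model;
# only 'url' and 'model_name' are required.
example = resolve_model_config({
    "url": "https://api.openai.com/v1/chat/completions",
    "model_name": "gpt-5-mini",
})
```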

In conclusion

I hope you find this Shortcut useful. Feel free to leave questions, bug reports and suggestions in the comments below.

Comments

By Peter Holdstock on Sunday, January 18, 2026 - 20:24

For those with a device supporting Apple Intelligence, you can simply ask Siri what is on the screen. Siri will then ask if you are happy for it to send a screenshot to ChatGPT. You can then ask questions. As usual, any responses from ChatGPT will include a copy button. The response is always very quick. No need to save a screenshot or remember gestures. Just ask Siri.

By user26335377 on Sunday, January 18, 2026 - 21:16

Unfortunately, Apple Intelligence is not available in all regions.
Also, I personally prefer the classic interface to voice assistants. And, in practice, asking Siri takes longer: the VoiceOver gesture is performed almost instantly, and then everything happens automatically - there is no need to save the screenshot somewhere, et cetera, et cetera.

By Brian on Monday, January 19, 2026 - 01:43

So I downloaded and installed the shortcut. Assigned a VoiceOver gesture to it. For me, I chose a triple back tap gesture. When I activate it, I get the following error:
Error: Description model should be specified

By user26335377 on Monday, January 19, 2026 - 11:37

Unfortunately, there was a bug in the original version that prevented the model name from being given a default value. The updated Shortcut can be installed from this link.

I also updated the original post, and at the same time slightly simplified the instructions for adding new models by making some fields optional.

PS. Apple has quite specific behavior for Shortcuts imported onto other devices; for example, all top-level text fields are cleared. Unfortunately, I cannot test how the Shortcut behaves when imported to another device, so I would be glad if you pointed out any errors you find during testing.

By Guilherme on Monday, January 19, 2026 - 13:18

I installed the new shortcut you added by accessing the new link, and it still shows the same error saying that the model must be specified.

By user26335377 on Monday, January 19, 2026 - 13:42

I've updated the Shortcut again, sorry for the inconvenience. New link

Original post also updated.

It seems the Import Questions were bound to the incorrect fields; this should be fixed now.

By Guilherme on Monday, January 19, 2026 - 14:07

When I leave the shortcut configured in English, it works normally. However, when I configure the shortcut and set the language to Portuguese, it stops working.

I am Brazilian and I would really like the shortcut to be created and to work in Portuguese. I have already tried setting Portuguese in all possible ways: with the first letter uppercase and lowercase, with and without accents, but the result is always the same.

Whenever I set the shortcut language to Portuguese and try to use it, an error appears saying that the language was not specified.

By user26335377 on Monday, January 19, 2026 - 14:28

I just tried specifying 'Portuguese' as the language and everything seems to work fine -- the Shortcut provides the description in the specified language.
Please note that the language must be specified exactly as written (Portuguese), without accented letters or similar. The Shortcut looks up the language code in a dictionary whose keys are language names, so the name must match exactly.
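In other words, the lookup behaves like this toy Python sketch (the real table lives inside the Shortcut and covers far more languages; the entries below are examples):

```python
# Toy model of the exact-match language lookup described above.
LANGUAGES = {"English": "en", "Portuguese": "pt", "Russian": "ru"}

def lookup_language(name):
    # Exact match only: 'portuguese' or 'Português' will not be found.
    return LANGUAGES.get(name)
```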

By Ash Rein on Monday, January 19, 2026 - 15:23

Or, we could just wait a couple months and then we’ll be able to ask Siri directly what’s on the screen

By user26335377 on Monday, January 19, 2026 - 16:05

As I already stated, there are two main issues:
1) Apple Intelligence is only available in a fairly limited number of regions.
2) Personally -- and possibly many others feel the same -- I don't really like voice assistants as a way to interact with a smartphone.
The ideal solution would be to integrate image descriptions directly into VO. Considering that VO can already read text from images and describe what is happening (the Surrounding Recognition function), the technical part seems to be implemented already; it would be enough to allow descriptions not only from the device's camera but also directly from the screen. However, since VoiceOver does not provide an API for writing custom scripts, we are limited to the capabilities of the Shortcuts app.

By Lee on Monday, January 19, 2026 - 16:11

Hi,

Never really used Shortcuts, so I may be being thick here. I want to change VO to off as suggested so that Screen Curtain doesn't need to be turned off first. I didn't pick 'y' when setting up and I can't figure out how to change this to yes instead of no. Any help appreciated.

By Guilherme on Monday, January 19, 2026 - 16:38

The shortcut is working, and the issue was the word “Portuguese,” which I had written incorrectly. However, when I set the shortcut to turn VoiceOver off and then turn it back on, it only turns VoiceOver off and does not activate it again.

By user26335377 on Monday, January 19, 2026 - 17:07

You can reconfigure any setting at any time by simply going to the Shortcuts app, finding the CloudVision shortcut, selecting the Edit action in VO Rotor, clicking the Info button at the bottom of the screen, then Import Questions, and finally using the Setup Shortcut button at the bottom of the screen.

By user26335377 on Monday, January 19, 2026 - 17:18

That's exactly why I called this feature experimental and disabled it by default. The problem is that the screen curtain must turn itself off while the screenshot is being taken. It does this if you use the physical buttons, but a programmatic call to the equivalent function in the Shortcuts app does not take the screen curtain's state into account at all. Trying to temporarily disable VoiceOver is essentially a dirty hack. Initially I tried to do this without any delay, but it turned out that by the time VoiceOver tried to turn back on, it had not yet been completely turned off; as a result, the system simply ignored the action to turn it on. I then added a one-second delay before re-enabling VoiceOver, which worked fine on my device, but it seems to vary by model. Of course, I could update the Shortcut to increase the delay to, say, two seconds, but that seems to negate any benefit of automatically turning off VoiceOver -- performing a gesture to turn off the screen curtain is definitely faster than waiting for the system to allow VoiceOver to re-enable.

By Brooke on Tuesday, January 20, 2026 - 03:47

It's working here. I kept the default setting because changing it caused VoiceOver to turn off but not back on. I'm enjoying using this shortcut!

By user26335377 on Tuesday, January 20, 2026 - 09:40

The main issue with the temporary VoiceOver-off feature was that, when invoked by VoiceOver, the Shortcut would stop working immediately after VoiceOver itself was disabled. You can get around this by adding an intermediate Shortcut whose only job is to run the main one. Because two Shortcuts execute instead of one, this approach increases the delay before the initial menu appears, but given that you already have to wait for VoiceOver to restart, this may be acceptable. Link to the intermediate Shortcut; just install it next to the main one and reassign the VoiceOver command to run CloudVisionExecutor instead of CloudVision.

The original post has also been updated. Again, I highly recommend using the old Shortcut unless you need to temporarily disable VoiceOver, because while the additional Shortcut will still work, it will have a delay in showing the initial menu.

By Guilherme on Tuesday, January 20, 2026 - 10:53

Everything is working, including using the shortcut to turn VoiceOver off, turn it back on, and run the main shortcut. The only thing that is happening is that when I run the VoiceOver command, it opens the Shortcuts app, but that is not a problem for me. Congratulations on the shortcut — great work!

By user26335377 on Tuesday, January 20, 2026 - 11:09

Thank you for your feedback.
It seems that this is an inevitable consequence of using a chain of two Shortcuts: the Shortcuts app, rather than VoiceOver, runs the main command, so disabling the latter does not interrupt the Shortcut's execution. Apple doesn't provide an API for managing active apps within a Shortcut, so we can't automatically switch back to the previously opened app. Personally, this method still seems less convenient to me than simply turning off the screen curtain before requesting an image description, though perhaps for some this compromise will be acceptable.

By Guilherme on Tuesday, January 20, 2026 - 11:30

I’m going back to the old shortcut because when I run the Cloud Vision Executor, it opens the Shortcuts app window and takes a screenshot with the Shortcuts app open, instead of the image I want to describe. The only issue with going back to the old shortcut is that I often forget to turn Screen Curtain back on after the screenshot is taken, but there is no other way.

By user26335377 on Tuesday, January 20, 2026 - 15:09

I've completely removed the ability to temporarily disable VoiceOver because it doesn't work correctly. The updated Shortcut can be found at this link and in the original post.

The main issue is that we have a closed loop: to take a screenshot, you need to turn VoiceOver off; the only way to keep a Shortcut running after VoiceOver is turned off is to start a new Shortcut before VoiceOver is disabled, because the currently executing Shortcut is terminated immediately; and any Shortcut launched from another Shortcut will open the Shortcuts app. As far as I know, the Shortcuts capabilities in iOS provide no technical way to bypass these restrictions.

By Brian on Tuesday, January 20, 2026 - 21:48

I downloaded the latest shortcut and installed it. I reconfigured the VoiceOver gesture, just to ensure that it was pointing to the correct shortcut. Then I tried to activate the gesture. I got as far as choosing image description or text description (tried both), and then got the following message:
"SHORTCUTS, now, CloudVision , Please choose a value for each parameter in this action."

Just a note, when I installed the shortcut, I did not change any settings. I just ran through the initial setup and launched.

By Prateek Dujari. on Wednesday, January 21, 2026 - 02:02

https://www.icloud.com/shortcuts/eceada7194c249bfb41e285170f01fdd
The default Siri verbal command to trigger this shortcut is “TempShot share”. just say this phrase to Siri and within a couple of seconds voice over focus will be on a share sheet where the screenshot is already added. Now double tap on your favorite AI such as ChatGPT or AIRA AI or seeing AI for example. you’ll have the description and then you can ask further questions to your selected AI. The execution of this shortcut never adds the image/screen shot to your Photos library so that remains clean and it simply puts it temporarily on the clipboard and then from the clipboard on the share sheet automatically as mentioned above. HTH.

By user26335377 on Wednesday, January 21, 2026 - 09:25

I tried re-entering different values several times during the initial setup, but the Shortcut continues to work correctly for me.
Could you please run the Shortcut from the Shortcuts app by simply double tapping CloudVision? When the error appears there will be a "Show" button; clicking it opens the Shortcut's editing screen, where a VO-focusable question mark icon appears next to the failing action. Write down which action it is located near; you can use the names of the variables declared next to it as a guide. Unfortunately, I don't know how to test the iOS Shortcut more efficiently, since I can't reproduce the scenario of importing a Shortcut from a link on my own device.

By user26335377 on Wednesday, January 21, 2026 - 15:35

I've updated Shortcut, you can install it from this link, or in the original post.

As an experiment, I added alternative models that can be selected during the initial setup. PiccyBot is still offered by default, but in addition to it you can choose Pollinations (uses Open AI models), GPT-5.2 and Grok 4. Please note that if you select GPT-5.2 or Grok 4, you will also need to specify your API key; for the other models, leave this field empty.

By Lee on Wednesday, January 21, 2026 - 16:23

OK, never used this AI, so I swapped it out in the update. However, the no-key option isn't working for me. I tried a Facebook photo just as an example, and all I get is a statement that if I want it to describe a photo, I should upload one. The old way just did the description, so I'm not sure what is going on. I may change back to PiccyBot.

By user26335377 on Wednesday, January 21, 2026 - 17:27

I took this model from the AI Content Describer NVDA addon. It seems that it doesn't work in the NVDA addon either. I even tried to make a similar request manually with curl and a Python script -- it always says there's no image. Probably a model issue.

By Brian on Wednesday, January 21, 2026 - 17:52

I used to use this NVDA add-on. Pollinations used to work very, very well. Sad this is no longer the case.

By user26335377 on Wednesday, January 21, 2026 - 18:04

What about the error you were getting? Did you manage to fix it? If not, try following the steps I described in the comment above; based on those results I can try to figure out the cause of the issue.
PS. With NVDA I'm using CloudVision instead of AI Content Describer; it also works quite well.

By Brian on Wednesday, January 21, 2026 - 18:11

Hi,

I downloaded your latest shortcut from the link you provided above. I went through the setup process and chose GPT-5.2, along with my OpenAI key. Everything seems to work until it gets to the part where it needs the actual AI key, and then I get a message about an invalid password. So, stupid human question for you: how do I add my OpenAI password to this?

By user26335377 on Wednesday, January 21, 2026 - 18:37

Hmmmm, the only thing that happens with the API key is that it is sent as a header in the HTTP request, something like `Authorization: Bearer <api_key>`. All further responses are generated by the server, i.e. the error occurred on the Open AI side, which is what the message was about. I don't have a GPT-5.2 API key to test with, but the request matches the API documentation, so I don't quite understand the source of the problem.
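For illustration, the key handling amounts to something like the following Python sketch (the header names are the standard HTTP ones; the helper itself is hypothetical, not part of the Shortcut):

```python
def build_headers(api_key, user_agent="curl/8.4.0"):
    """The API key only ever becomes a Bearer token in the Authorization header."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "User-Agent": user_agent,
    }
```

Everything after that -- accepting or rejecting the key -- happens on the provider's server.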

By Brian on Wednesday, January 21, 2026 - 18:43

When I click the show button on the latest shortcut, the punctuation mark is next to the following message within the code.
Displaying the result of the description request
This is with PiccyBot as the AI choice by the way.

HTH.

By user26335377 on Wednesday, January 21, 2026 - 19:06

1) Can you please try selecting 'n' for Play Sound? I don't understand what could be causing the error, but literally the next action after the point you specified checks whether the sound needs to be played and plays it.
2) Regarding the API key: according to information on the Internet, Open AI generates this error if a Project API key is used. This Shortcut calls an API endpoint that only supports legacy unrestricted API keys.

By user26335377 on Wednesday, January 21, 2026 - 20:10

I've updated the Shortcut, it can be installed from the following link or from the original post.

Pollinations AI restricted its free usage to text input only, so it now also requires an API key.

By Brian on Wednesday, January 21, 2026 - 21:50

With the latest shortcut I'm still getting that weird error. It only seems to happen when I use PiccyBot. Oh, and I did try setting it up with sounds disabled. No luck.
I'll have to try to get a ChatGPT 5.2 API key, but from what I have been reading, I need to create a developer account to do this. It doesn't seem too complicated; I'm just a little burned out on this shortcut after messing with it for the past few days. If nothing else, I can look into Pollinations or Grok, but from what I hear, getting a Grok API key is a pain.

By user26335377 on Wednesday, January 21, 2026 - 22:19

I don't think so: first of all, the issue happens even if you choose simply to recognize text, and that item does not use any GPT API at all. Please try this test version and report the results. In case of an error, follow the same steps to launch the Shortcut from the Shortcuts app and report which action is marked with the error.

Regarding the API key: yes, it involves creating a separate account, but there are quite detailed instructions on the Internet for doing so. I had no experience with Open AI, but getting an API key from Pollinations turned out to be quite simple; you just need a GitHub account to register there.

PS. What device are you using? iPhone/iPad/mac?

By Brian on Friday, January 23, 2026 - 01:38

OK I downloaded that test shortcut. I'm getting the same error. The text that the punctuation is next to when I click on the show button is:
Displaying the result of the description request

I chose no for playing sounds. I left it on the default AI, and did not change any other settings.

I am using an iPhone SE 2022, running iOS 18.7.2. I do have a test iPhone, that currently has iOS 26.2 installed on it. I could try your shortcut on that device, and see if it works better there. Maybe it's an iOS issue?

By Brian on Friday, January 23, 2026 - 01:46

So I tried doing things the old-fashioned way. With the shortcut installed and setup, I took a screenshot. I double tapped on the screenshot button. I double tapped on the share button. I navigated over to the cloud vision option in the share sheet. Then I chose to describe image. I did not get any errors, however I did not get anything at all.
Just wanted to share that with the class. 😣

By user26335377 on Friday, January 23, 2026 - 08:49

Quite a strange bug: technically, the next step simply checks the play_sound variable to determine whether the sound should be played now. Here is another test Shortcut; try it and report the results as before.

Unfortunately, I cannot reproduce this problem myself in order to simplify the testing process and immediately suggest a working variant.

By Brian on Friday, January 23, 2026 - 18:35

This one actually works. I tried it from the following:
*the shortcuts app.
*Assigning a VO touch gesture shortcut.
*I even took a manual screenshot, and shared it with cloud vision through the share sheet.

All is actually working as intended now. Finally! Not sure what the issue was, but good job on the shortcut overall. When it works, it definitely saves a little bit of time when you just want a description of something real quick.

By user26335377 on Friday, January 23, 2026 - 19:11

It's great that everything works now!

If you have some time, could you please check out this version? The only difference between the version that works for you and all earlier ones is that it did not use variables of type bool, which seems strange, because it is unlikely that only Shortcuts on iOS 26 supports them. To finally verify this hypothesis, in the Shortcut at the link I replaced text with bool; that is the only change relative to the version that works for you. If this Shortcut does not work for you either, I will correct the version in the original post to avoid the bool type. Note that during initial setup the value for Play Sound must be exactly 'y' or 'n' (one lowercase Latin letter). The entered value is validated, but if the check fails for some reason, the variable the condition later reads may indeed be left uninitialized.
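For clarity, the Play Sound validation described above behaves roughly like this Python sketch (illustrative only; the Shortcut implements it with Shortcuts actions, not code):

```python
def parse_play_sound(value):
    """Accept exactly 'y' or 'n' -- one lowercase Latin letter."""
    if value not in ("y", "n"):
        raise ValueError("Play Sound must be exactly 'y' or 'n'")
    return value == "y"
```

Anything else ('Y', 'yes', a stray space) is rejected before the variable is ever read.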

By burak on Sunday, January 25, 2026 - 12:58

Thank you for this shortcut. I have been needing something like this.
However, when I click Get Text, Shortcuts sends this notification:
Please choose a parameter for each value in this action.
How should I proceed?

By Brian on Sunday, January 25, 2026 - 22:47

Hi,

So I gave that latest link a try. Once again, I get that error, when I click on the show button, I get the following:
'Displaying the result of the description request'

I did not change any of the settings, whatsoever. Just kept clicking next to setup the shortcut.

By user26335377 on Monday, January 26, 2026 - 23:30

Based on the latest comments, I conclude that on some devices the Shortcut does not work correctly if variables of type bool are used inside it. I cannot reproduce this problem myself, so I cannot pin down its source in more detail. However, for better compatibility, I updated the Shortcut to remove all use of bool variables. The fixed version can be installed from this link, or from the original post.

I recommend that everyone who previously experienced an error like “Please choose a parameter for each value in this action” try the new version.

By Brian on Tuesday, January 27, 2026 - 16:18

I can confirm the update works on an iPhone SE 2022, running iOS 18.7.2. In fact, even the little alert tone works now. Tested using the shortcuts application, an assigned touch gesture through VO settings, and even using the share sheet. Everything seems to be working as intended. Thanks so much! 😊

By Quinton Williams on Wednesday, January 28, 2026 - 05:12

I can't figure out how to do this.
I keep getting errors similar to this one.
Notification SHORTCUTS, now, CloudVision, No Key Provided , No key was provided to the Get Dictionary Value action.
I've even added the text "/default" back, yet it continues to fail.

By user26335377 on Wednesday, January 28, 2026 - 14:06

It seems that the previously used default model has been deprecated by its owner. The updated Shortcut uses a different model instead, but it seems to be slower and may shut down altogether at some point. Anyone who is not satisfied with the quality of descriptions from the default model should use one of the paid models with their own API key. The updated Shortcut can be installed from this link or from the original post.

By user26335377 on Wednesday, January 28, 2026 - 14:13

I checked several times; the prompt changes to your own without problems. During the initial setup, instead of '/default', write any text that you want to use as a prompt. Perhaps some of the variables were damaged during editing; try completely removing the Shortcut and installing it again from this link or from the original post.

If the problem persists, open the Shortcuts application and launch CloudVision by double tapping its name. The error will then appear not as a notification but as a separate pop-up window; click the "Show" button, and the Shortcut editing screen will open with the failing action marked by a question mark, which is a separate element focusable by VoiceOver. Report the action next to this question mark, i.e. the one you reach by swiping right once from the question mark.

By Apple-fan01 on Saturday, January 31, 2026 - 00:44

A Shortcut like this should definitely be available on Mac as well, since Macs with Apple silicon also have the ChatGPT extension.
Yesterday I was asked to try placing a picture into my PowerPoint slides just to show sighted people that inserting images is possible for VI users, and I realized how difficult that was.
So, is this available on Mac, since it's available on iPhone?

By user26335377 on Saturday, January 31, 2026 - 08:11

Unfortunately, I have no way to test this Shortcut on macOS. However, I have not limited it to iOS devices in any way, so you are welcome to test it and report your results.

By Morgan on Tuesday, February 3, 2026 - 19:39

Hi, I started using this shortcut a couple weeks ago and loved it! It worked flawlessly up until a few days ago.
Now, whenever I run the shortcut (I have a custom gesture set), I receive the following error:
“Unknown model.”
I haven't changed anything in the shortcut, so I was wondering if anyone else has experienced the same problem?