Is there a way to link prompts to each other into a workflow? Suppose I want to run Prompt A, then use its output to run Prompt B and Prompt C, then use the outputs from Prompts B and C to run Prompt D.
Welcome to the forum. Yes, this is absolutely possible; it is the basis for all agent-based solutions (agents being separate LLMs or custom AI models that receive their prompts from a “main” or “manager” LLM optimized for human interaction). Some cookbooks include such solutions, for example cookbook/examples/Search_Wikipedia_using_ReAct.ipynb at main · google-gemini/cookbook · GitHub (the agent aspect comes into play when the LLM interacting with the user in a chat session calls another copy of itself via generateContent to perform a specific function, outside the scope of the chat).
It is worthwhile to explore the cookbooks to get an initial idea of how the interactions between the main and helper agents are structured; that will give you a better sense of what to look for in an agent framework (several competing frameworks are available).
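The A → (B, C) → D wiring from the question can be sketched in a few lines. Here `run_prompt` is a hypothetical stand-in for one real model call (e.g. `generate_content` in the Gemini SDK); it just echoes its inputs so the data flow between steps is visible.

```python
def run_prompt(name: str, *inputs: str) -> str:
    """Placeholder for one LLM call: wraps its inputs in a labeled prompt.
    A real implementation would send the prompt to the model and return
    the model's text instead of echoing."""
    return f"[{name}] " + " | ".join(inputs)

out_a = run_prompt("Prompt A", "user question")
out_b = run_prompt("Prompt B", out_a)          # B and C both consume A's output
out_c = run_prompt("Prompt C", out_a)
out_d = run_prompt("Prompt D", out_b, out_c)   # D merges the outputs of B and C
```

The only “workflow engine” needed is ordinary control flow in your own code; each arrow in the diagram is just passing one call's return value into the next call's prompt.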
While processing language through a workflow of prompts would not be hard (just programming), the challenge is separating the “data” (which you wrap in a prompt) from the “chat”. You don’t want a “summarize” task to end up summarizing “Sure, I’m glad to help you” written by an AI.
AI models like to talk about what they are going to do, so you need structured output to force an input/output task to be completed in a fixed format.
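One simple guard is to accept a step's reply only if it parses as pure JSON, so chatty preambles never leak into the next step's input. This is a minimal sketch, assuming the model was instructed to reply with JSON only (e.g. via structured-output settings in the API):

```python
import json

def extract_json(model_text: str):
    """Accept the model reply only if it is JSON-only; reject chat filler."""
    try:
        return json.loads(model_text)
    except json.JSONDecodeError:
        raise ValueError("model reply was not JSON-only: " + model_text[:40])

extract_json('{"summary": "ok"}')          # parses fine, safe to pass downstream
# extract_json("Sure, I'm glad to help!")  # would raise ValueError
```

A rejected reply can then trigger a retry rather than silently polluting the workflow.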
Besides calling the model again and again, you can also give multiple steps to be performed directly in the instructions, guiding the total language production of a turn; it just takes some prompt engineering to find the limit of what the model can follow:
Perform these automated tasks, producing each of these outputs in your JSON-only response, separated by linefeed only:
List 10 celebrities with a sciences degree, JSON array: objects with keys “name”, “degree”, “institution”
From that list, produce three celebrities that would work best with Bill Nye, JSON array: objects with keys “name”, “justification”
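If the model honors the instruction above, the reply is two JSON arrays separated by a single linefeed, which your code can split and parse separately. The `reply` string here is a hypothetical, minimal example of a compliant response:

```python
import json

# Hypothetical model reply following the "JSON arrays separated by linefeed" format.
reply = (
    '[{"name": "A", "degree": "BSc", "institution": "X"}]\n'
    '[{"name": "A", "justification": "science background"}]'
)

# One json.loads per line recovers the two lists from the single turn.
celebrities, shortlist = (json.loads(line) for line in reply.split("\n"))
```

This is the payoff of the single-prompt approach: one API call, but the outputs of both “steps” are still machine-readable individually.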
I do not think writing a single prompt that includes sequential instructions would work in this case. My prompts are already very long, with multiple steps. Further, I would like to improve the individual parts (sub-prompts) separately by submitting few-shot examples, and a single combined prompt would make managing the whole process super clunky. I will try the cookbooks. I was thinking there might be a way to call a sub-prompt from within a prompt.
You’re going to want to make sure that the output of the previous prompt conforms to the required input of the next prompt. To do this, you can use system instructions in the API call: ask the model to execute the prompt only if the previous output supplied as input conforms to your requirements, and to return an error otherwise.
This can be used in combination with function calling to ensure that the output of the current prompt conforms to your requirements.
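The gating idea above can also live in your own code, in front of the API call: check the previous step's output before spending a request on the next prompt. A minimal sketch with a hand-rolled `validate` (a real system might instead declare the schema via function calling or structured output):

```python
def validate(step_output: dict, required_keys: set) -> bool:
    """True if the previous step produced every field the next prompt needs."""
    return required_keys <= step_output.keys()

def run_next_step(prev_output: dict) -> str:
    # Gate: refuse to build the next prompt from malformed upstream output.
    if not validate(prev_output, {"name", "degree"}):
        return "ERROR: upstream output missing required fields"
    # ...here a real implementation would build the next prompt
    # around prev_output and send it to the model...
    return "ok"
```

Failing fast in code keeps a malformed intermediate result from cascading through the rest of the workflow.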