My intent with this question is to improve my own workflow(s) and potentially those of others interested in this topic. Thank you!
I just prompt as I go and find that the "cost" of prompting again to get a better output is lower than the cost of having some system for cataloging, maintaining, and versioning my prompts.
I might be wrong but I'm getting good results out of LLMs.
Or a personal assistant. I have a text-based workflow, but explaining it to an LLM takes yet another 1,000-word essay. I would benefit from a workflow that lets me reuse and version prompts.
These are two very different questions.
Keeping prompts organized as a software startup is a completely different use case, as we need:

- a way to dynamically fill the prompts (we use Mustache)

- a way to store the prompts that enables versioning (so they live in git like the rest of our code)

- a way to allow non-technical users (e.g. the Product team) to revise a prompt, so they are stored as JSON objects.
So our prompts are basically objects that encapsulate the OpenAI-style parameters, plus additional in-house parameters such as fallback model, risk profile, etc.
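A minimal sketch of what that could look like, assuming the `chevron` package for Mustache rendering; field names like `fallback_model` and `risk_profile` follow the description above, but the exact schema, file contents, and helper names here are illustrative guesses, not their actual setup:

    # Sketch only: one prompt stored in git as a JSON object, with
    # OpenAI-style parameters, in-house metadata, and Mustache placeholders.
    import json
    import chevron  # pip install chevron (a Mustache implementation for Python)

    prompt_json = """
    {
      "model": "gpt-4o",
      "fallback_model": "gpt-4o-mini",
      "risk_profile": "low",
      "temperature": 0.2,
      "messages": [
        {"role": "system",
         "content": "You are a support assistant for {{product_name}}."},
        {"role": "user",
         "content": "Summarize this ticket: {{ticket_body}}"}
      ]
    }
    """

    def render_prompt(raw: str, variables: dict) -> dict:
        """Parse a stored prompt and fill its Mustache placeholders."""
        prompt = json.loads(raw)
        for message in prompt["messages"]:
            message["content"] = chevron.render(message["content"], variables)
        return prompt

    request = render_prompt(prompt_json, {
        "product_name": "Acme CRM",
        "ticket_body": "Login fails after password reset.",
    })
    # `request` now holds the filled messages plus model parameters; the
    # in-house keys (fallback_model, risk_profile) would be stripped before
    # the actual API call and used for routing/fallback logic instead.

Because the JSON file is the single source of truth, non-technical teammates can edit the text while the git history still versions every change alongside the code.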
Ask HN: How do you manage your prompts in ChatGPT? https://news.ycombinator.com/item?id=41479189
I'm curious to see how people's workflows have changed.
As for us, we manually catalog them in well-named Markdown files and folders and store them in a git repo. I would like a more taxonomic approach.
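For concreteness, a hypothetical layout in that spirit (the category and file names are illustrative, not our actual taxonomy):

    prompts/
      writing/
        blog-outline.md
        release-notes.md
      code/
        refactor-review.md
        test-generation.md
      research/
        literature-summary.md
      README.md   (naming and versioning conventions)

Each Markdown file holds one prompt, and git history doubles as the version log; the open question is how to pick categories that stay stable as the collection grows.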