
# Recording Templates

You might find it helpful to add the following templates to your workflow library to speed up the repetitive data entry that can otherwise bog down your process.

## Prompt storage template (markdown) - basic parameters

Here's an example template that you can use to store your prompts in a library.

I add these as code snippets in VS Code so that I can quickly insert them as required.

```markdown
# Prompt Title

## Prompt ID

{Unique ID for the prompt to correlate it with outputs}

## Category

## Instructions

## Example Usage
```

## Output storage template

When recording outputs, I find it particularly helpful to include the following:

- The date (because models and information both change rapidly)
- The model used (i.e., the specific variant of, say, GPT)
- The platform (because using an LLM via an API versus a web UI often produces surprisingly different results!)

## Detailed output storage template

```yaml
# Metadata about the LLM output
output_id: "unique_output_id"
timestamp: "2024-11-20T14:00:00Z"
model_used: "gpt-4"
prompt_template: "summarize_text"
input_type: "text"
tags:
  - "summary"
  - "research"
  - "AI"

# Input data provided to the LLM
input_data:
  input_text: |
    This is the text that was provided to the model for summarization or other tasks.
  additional_context: |
    Any additional context or system instructions provided to the model.

# Output generated by the LLM
output_data:
  structured_output: |
    This is where the actual output generated by the model is stored.
  format: "text"  # Format of the output (e.g., text, JSON, XML)
  length_in_tokens: 250

# File storage information
file_storage:
  file_path: "/path/to/library/output_id.yaml"  # Path where this YAML file is stored

# Version control
version_control:
  current_version: "v1.0"
  previous_versions:
    - version_id: "v0.9"
      timestamp: "2024-11-19T10:00:00Z"
      changes_made: "Initial draft"

# Access control and permissions
permissions:
  owner: "user_id_123"
  access_level: "read-write"  # Options: read-only, read-write, etc.
```
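
Because the record is plain YAML, it's easy to read back into scripts later. Here's a minimal sketch, assuming the PyYAML package and the hypothetical file path used in the template above:

```python
import yaml

# Load one stored output record (the path is the hypothetical one from the template above)
with open("/path/to/library/output_id.yaml", "r", encoding="utf-8") as f:
    record = yaml.safe_load(f)

# Pull out a few fields for a quick report
print(record["output_id"], record["model_used"], record["timestamp"])
print("Tags:", ", ".join(record["tags"]))
print(record["output_data"]["structured_output"].strip())
```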

## Simpler template for storing outputs

```markdown
## Date

## Model

## Platform

## Output Text
```

## Template for storing outputs with review markers

You can also use markdown checkboxes and then filter on them programmatically (although by the time you're doing things like this, I think a proper GUI is the way to go!).

For example:

```markdown
## Date

## Model

## Platform

## Output Text

---

### Review Checklist

- [x] Needs QA
- [x] Needs Fact Check
- [ ] Review Grammar
- [ ] Verify Sources
- [ ] Proofread for Consistency
- [ ] Final Approval
```
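
As a rough sketch of the programmatic filtering mentioned above (the `outputs` folder name and one-file-per-output layout are assumptions, not a requirement of the template):

```python
import re
from pathlib import Path

# Hypothetical folder of markdown output files using the checklist template above
OUTPUT_DIR = Path("outputs")

def needs_qa(markdown_text: str) -> bool:
    """Return True if the 'Needs QA' checkbox is ticked in the review checklist."""
    return re.search(r"- \[x\] Needs QA", markdown_text, re.IGNORECASE) is not None

# List every output file still awaiting QA
for path in sorted(OUTPUT_DIR.glob("*.md")):
    if needs_qa(path.read_text(encoding="utf-8")):
        print(path.name)
```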

## Prompt and output - combined template

You might be wondering: why not just use one template?

Here's what I personally do:

- Prompts I "engineer" get drafted in a prompt library.
- Those I write ad hoc get written into outputs and then extracted programmatically. Many are discarded because they were not detailed enough to be of future use; some are retained.

If you use this approach and stick to a consistent template, you can extract the prompts programmatically with a script (which is also something I do).

A template can be as simple as:

```markdown
# Title

{A summary title for this prompt and output pair, or just use the filename}

# Prompt

{Full text of the prompt you used}

# Date

{The date and timestamp, depending upon how exact you wish to be}

# Model and Platform

E.g. GPT-4o via ChatGPT

# Output

{The output you received from the model}
```

Then, use Python to parse the text between (in this example) `# Prompt` and `# Date` and write those sections off to a separate folder in your library.
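
Here's a minimal sketch of that extraction step, assuming the combined files live in a hypothetical `library/` folder and the extracted prompts are written to `prompts/`:

```python
import re
from pathlib import Path

# Hypothetical folders; adjust to wherever your library actually lives
LIBRARY_DIR = Path("library")
PROMPTS_DIR = Path("prompts")
PROMPTS_DIR.mkdir(exist_ok=True)

# Capture everything between the "# Prompt" heading and the next "# Date" heading
PROMPT_PATTERN = re.compile(r"^# Prompt\s*\n(.*?)(?=^# Date\s*$)", re.DOTALL | re.MULTILINE)

for path in sorted(LIBRARY_DIR.glob("*.md")):
    match = PROMPT_PATTERN.search(path.read_text(encoding="utf-8"))
    if match:
        # Write the prompt text to its own file, named after the source document
        (PROMPTS_DIR / path.name).write_text(match.group(1).strip() + "\n", encoding="utf-8")
```

Running this over the library leaves the originals untouched and gives you a prompts folder you can review and prune.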