
Prompt

Let's come up with a 20 part taxonomy for classifying typical LLM interactions. Let's divide them according to whether they come from the user or the LLM. Create 10 for each.

Output

Here's a 20-part taxonomy for classifying typical LLM (Large Language Model) interactions, divided into user-driven and LLM-driven categories:

User-Driven Interactions

  1. Prompt: The initial query or instruction given by the user to the LLM.
  2. Clarification Request: The user asks for more detailed information or clarification on a previous response.
  3. Follow-up Question: A related question that builds on the LLM’s previous response.
  4. Feedback: Explicit feedback from the user, such as agreeing, disagreeing, or providing corrections.
  5. Contextual Information: Additional background or context provided by the user to guide the LLM’s response.
  6. Task Instruction: A command or directive given by the user, such as "Translate this" or "Summarize the text."
  7. Preference Indication: The user specifies a preference, such as tone, format, or style for the response.
  8. Error Report: The user indicates that the LLM made an error, prompting correction or revision.
  9. Example Provision: The user provides an example to help the LLM understand the task or question better.
  10. Session Closure: The user indicates that the interaction is complete, signaling the end of the session.

LLM-Driven Interactions

  1. Initial Response: The LLM’s first answer or output generated in response to the user’s prompt.
  2. Clarification Query: The LLM asks the user for more information to better understand the prompt.
  3. Contextual Continuation: The LLM maintains context across multiple turns in a conversation, continuing from previous responses.
  4. Suggestion: The LLM proposes ideas, alternative approaches, or additional information that might be relevant.
  5. Error Acknowledgment: The LLM recognizes a mistake in its response and offers a correction.
  6. Response Expansion: The LLM provides an elaborated or extended response, going beyond what was asked.
  7. Hypothetical Scenarios: The LLM generates hypothetical examples or scenarios to illustrate a point or answer a question.
  8. Assumption Making: The LLM makes assumptions based on incomplete information and responds accordingly.
  9. Tone Adaptation: The LLM adjusts the tone, style, or formality of its response based on the user’s input or preferences.
  10. Session Continuation: The LLM prompts the user to continue the conversation or explore related topics.

This taxonomy helps classify and analyze interactions, providing a structured way to understand the dynamics of communication between users and LLMs.
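
As a minimal sketch (not part of the original output), the 20 categories could be encoded as two enums for annotating conversation transcripts. The `Turn` dataclass, category names, and sample transcript below are illustrative assumptions, not an established schema:

```python
from __future__ import annotations

from dataclasses import dataclass
from enum import Enum, auto


class UserDriven(Enum):
    """User-driven interaction types from the taxonomy."""
    PROMPT = auto()
    CLARIFICATION_REQUEST = auto()
    FOLLOW_UP_QUESTION = auto()
    FEEDBACK = auto()
    CONTEXTUAL_INFORMATION = auto()
    TASK_INSTRUCTION = auto()
    PREFERENCE_INDICATION = auto()
    ERROR_REPORT = auto()
    EXAMPLE_PROVISION = auto()
    SESSION_CLOSURE = auto()


class LLMDriven(Enum):
    """LLM-driven interaction types from the taxonomy."""
    INITIAL_RESPONSE = auto()
    CLARIFICATION_QUERY = auto()
    CONTEXTUAL_CONTINUATION = auto()
    SUGGESTION = auto()
    ERROR_ACKNOWLEDGMENT = auto()
    RESPONSE_EXPANSION = auto()
    HYPOTHETICAL_SCENARIO = auto()
    ASSUMPTION_MAKING = auto()
    TONE_ADAPTATION = auto()
    SESSION_CONTINUATION = auto()


@dataclass
class Turn:
    """A single labeled turn in a conversation transcript (hypothetical structure)."""
    speaker: str                     # "user" or "llm"
    text: str
    label: UserDriven | LLMDriven    # one taxonomy category per turn


# Example: annotating a short exchange with taxonomy labels.
transcript = [
    Turn("user", "Summarize this article in three bullet points.",
         UserDriven.TASK_INSTRUCTION),
    Turn("llm", "Before I summarize, which audience is this for?",
         LLMDriven.CLARIFICATION_QUERY),
    Turn("user", "A general audience; keep the tone informal.",
         UserDriven.PREFERENCE_INDICATION),
]

for turn in transcript:
    print(f"{turn.speaker}: {turn.label.name}")
```

One possible use: applying a single label per turn like this makes it easy to count category frequencies across a corpus of conversations, though real turns may fit more than one category.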

Extracted Prompts

Prompts/LLM Interaction Classification.md#Prompt 1