Prompt chaining is a technique in which a complex task is broken into smaller subtasks, and the output of one subtask is used as the input for the next, sequentially guiding a large language model (LLM) to solve the problem step-by-step. This method is particularly useful for tasks that require multiple reasoning steps.
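As a rough sketch, the sequential hand-off at the heart of prompt chaining can be illustrated with a small Python loop. The `call_llm` function below is a hypothetical stand-in for a real model invocation (for example, via Amazon Bedrock's `InvokeModel` API); only the chaining loop itself is the point of the example.

```python
def call_llm(prompt):
    # Hypothetical stand-in for a real LLM call (e.g., Amazon Bedrock).
    # Here it just echoes the prompt so the example is self-contained.
    return f"[model output for: {prompt}]"

def prompt_chain(task, step_templates):
    """Run a sequence of subtask prompts, feeding each subtask's
    output into the next subtask's prompt."""
    context = task
    for template in step_templates:
        prompt = template.format(input=context)
        context = call_llm(prompt)  # this output becomes the next input
    return context

result = prompt_chain(
    "Summarize this report and draft an email about it.",
    [
        "Extract the key facts from: {input}",
        "Summarize these facts in two sentences: {input}",
        "Draft a short email based on this summary: {input}",
    ],
)
```

Each template receives the previous step's output as `{input}`, which is the defining property of prompt chaining: subtasks are solved one at a time, in order, rather than in a single monolithic prompt.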
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"Prompt chaining involves breaking a complex task into smaller subtasks and sequentially passing the output of one subtask as input to the next, enabling large language models to handle intricate problems by solving them step-by-step."
(Source: AWS Bedrock User Guide, Prompt Engineering Techniques)
Detailed Explanation:
Option A: One-shot prompting
One-shot prompting provides a single example to guide the LLM, but it does not break tasks into smaller subtasks or handle sequential processing.
Option B: Prompt chaining
This is the correct answer. Prompt chaining divides a complex task into smaller, manageable subtasks and solves them sequentially with the LLM, as described.
Option C: Tree of thoughts
Tree of thoughts involves exploring multiple reasoning paths simultaneously, not breaking tasks into sequential subtasks.
Option D: Retrieval Augmented Generation (RAG)
RAG retrieves external information to augment LLM responses but does not specifically break tasks into sequential subtasks.
References:
AWS Bedrock User Guide: Prompt Engineering Techniques (https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering.html)
AWS AI Practitioner Learning Path: Module on Generative AI Prompting
Amazon Bedrock Developer Guide: Advanced Prompting Strategies (https://aws.amazon.com/bedrock/)