Providing examples of text passages with corresponding positive or negative labels in the prompt, followed by the new text passage to be classified, is the correct prompt engineering strategy for sentiment analysis with a large language model (LLM) on Amazon Bedrock.
Example-Driven Prompts:
This strategy, known as few-shot learning, involves giving the model examples of input-output pairs (e.g., text passages with their sentiment labels) to help it understand the task context.
The model infers the pattern from these examples and applies it to classify new text passages correctly, without any retraining.
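The few-shot structure described above can be sketched as a small prompt-building helper. The function name, example texts, and labels below are illustrative, not from the source; the resulting prompt string is what would be sent to a Bedrock model (e.g., via the `InvokeModel` API through boto3's `bedrock-runtime` client).

```python
# Build a few-shot sentiment prompt: labeled examples precede the new passage.
# Helper name and sample data are hypothetical, chosen for illustration.

def build_few_shot_prompt(examples, new_text):
    """Format (text, label) pairs, then the unlabeled passage to classify."""
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}\n")
    # The final entry leaves "Sentiment:" open for the model to complete.
    lines.append(f"Text: {new_text}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("The checkout process was fast and painless.", "Positive"),
    ("My order arrived damaged and support never replied.", "Negative"),
]

prompt = build_few_shot_prompt(examples, "Great product, will buy again!")
print(prompt)
```

Because the prompt ends with an open `Sentiment:` field, the model's most likely completion is one of the labels demonstrated in the examples, which is what makes this pattern effective for classification.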
Why Option A is Correct:
Guides the Model: Providing labeled examples teaches the model how to perform sentiment analysis effectively, increasing accuracy.
Contextual Relevance: Aligns the model's responses to the specific task of classifying sentiment.
Why Other Options are Incorrect:
B. Detailed explanation of sentiment analysis: Unnecessary; the model benefits from concrete labeled examples that demonstrate the expected input-output format, not a description of the task.
C. New text passage without context: Provides no guidance or learning context for the model.
D. Unrelated task examples: Would confuse the model and lead to inaccurate results.