The correct answer is A because temperature controls the randomness of a language model's output. Mechanically, the temperature divides the logits before the softmax: a higher temperature flattens the resulting probability distribution, making the model more likely to sample less probable tokens and thereby increasing diversity, while a lower temperature sharpens the distribution toward the most likely tokens, producing more deterministic, predictable output (a minimal sketch follows the quote below).
From AWS documentation:
"The temperature parameter in LLMs adjusts the randomness of generated responses. Higher values (e.g., 0.8–1.0) produce more creative and diverse output, while lower values (e.g., 0.1–0.3) make output more focused and repetitive."
Explanation of other options:
B. Batch size determines how many samples are processed together and affects training throughput and memory use, not the diversity of generated output.
C. The learning rate governs how quickly model weights are updated during training; it has no effect on inference-time output variety.
D. The optimizer (e.g., Adam, SGD) is a training-time configuration that shapes how the model learns; it does not influence diversity at inference (see the Bedrock call sketched below).
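In practice, temperature is passed as an inference parameter with each request. A hedged sketch using the Amazon Bedrock Converse API via boto3 follows; the model ID and region are placeholders, so substitute a model enabled in your own account:

```python
import boto3

# Assumes AWS credentials are configured and the model is enabled in this region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Name three colors."}]}],
    # temperature is set per request at inference time, unlike batch size,
    # learning rate, or optimizer, which are fixed during training.
    inferenceConfig={"temperature": 0.9, "maxTokens": 100},
)
print(response["output"]["message"]["content"][0]["text"])
```

Running the same request with temperature 0.1 versus 0.9 illustrates the contrast the question draws: the low setting returns nearly identical answers across calls, while the high setting varies noticeably.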
Referenced AWS AI/ML Documents and Study Guides:
Amazon Bedrock – Parameter Tuning Guide
AWS Machine Learning Specialty Guide – LLM Inference Parameters