The correct answer is C — the most critical aspect of designing a few-shot prompt in UiPath's LLM-driven agent framework is selecting examples that are diverse, representative, and relevant to the actual data the agent will encounter in production.
In a few-shot structured prompt, examples are used to demonstrate a pattern the model should follow. UiPath recommends:

- Using realistic examples from actual user inputs or support tickets
- Covering edge cases or variations in phrasing and tone
- Matching the desired output structure exactly (e.g., Input: ..., Output: ...)
These patterns help the LLM infer the task correctly and maintain consistency, especially when processing unstructured inputs like email subjects.
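The pattern above can be sketched in code. This is a minimal illustration of assembling a few-shot prompt for email-subject classification; the example data and the `build_prompt` helper are hypothetical and not part of UiPath's API.

```python
# Illustrative few-shot examples: realistic, diverse, and matching the
# exact Input/Output structure the model is expected to follow.
FEW_SHOT_EXAMPLES = [
    ("You've won a free cruise - claim now!!!", "spam"),
    ("Re: Q3 invoice attached for review", "not_spam"),
    ("URGENT: verify your account or it will be closed", "spam"),
    ("Team lunch moved to 12:30 on Friday", "not_spam"),
]

def build_prompt(new_subject: str) -> str:
    """Assemble a few-shot prompt: instruction, examples, then the new input."""
    lines = ["Classify each email subject as spam or not_spam.", ""]
    for subject, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Input: {subject}")
        lines.append(f"Output: {label}")
        lines.append("")
    # The new item repeats the same structure, leaving "Output:" open
    # so the model completes it with a label.
    lines.append(f"Input: {new_subject}")
    lines.append("Output:")
    return "\n".join(lines)

print(build_prompt("Limited-time offer: 90% off designer watches"))
```

Keeping every example in the identical `Input:`/`Output:` format is what lets the model infer the schema; mixing formats between examples tends to produce inconsistent completions.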
Option A is incorrect — introducing incorrect labels degrades performance and adds confusion.
Option B is wrong — the number of examples depends on the task complexity and token budget. Sometimes 3–5 is ideal.
Option D undermines task alignment — random examples reduce accuracy and coherence.
UiPath's Prompt Engineering best practices prioritize grounded, contextually rich inputs, particularly when automating classification tasks like spam detection, triage, or intent recognition. High-quality, task-aligned examples lead to more reliable, human-like agents.