Zero-shot
Also known as: Zero-shot Learning / Zero-shot Prompting
A prompting approach that provides only a task instruction with no examples, relying entirely on the model's pre-trained knowledge. Modern frontier LLMs handle many tasks well zero-shot.
Overview
Zero-shot prompting provides only an instruction, relying on the model's pre-trained knowledge to infer the expected output format. GPT-4 and Claude 3+ class models perform well zero-shot on a wide range of tasks. The practical workflow is: try zero-shot first, then add few-shot examples or chain-of-thought (CoT) cues if quality is insufficient.
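As a minimal sketch of this workflow (the helper names and prompt template here are illustrative, not from any particular library), a zero-shot prompt carries only the instruction, and the few-shot fallback reuses the same instruction with worked examples prepended:

```python
def build_zero_shot_prompt(instruction: str, text: str) -> str:
    """Zero-shot: instruction only, no examples."""
    return f"{instruction}\n\nInput: {text}\nOutput:"

def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          text: str) -> str:
    """Few-shot fallback: same instruction plus worked input/output pairs."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {text}\nOutput:"

instruction = "Classify the sentiment as positive or negative."
zero = build_zero_shot_prompt(instruction, "The battery life is fantastic.")
print(zero)
```

Either string would then be sent to a chat-completion API; the point is that escalating from zero-shot to few-shot changes only the prompt, not the rest of the pipeline.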
Advantages and limits
Zero-shot prompts are compact and conserve context-window space. For highly domain-specific formats or narrow technical tasks, however, the absence of examples can lead to inconsistent output structure.
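When output structure drifts, a common mitigation short of adding examples is to pin the format in the zero-shot instruction itself. A hedged sketch (the field names and template are invented for illustration):

```python
# Constrain the output shape explicitly instead of providing examples.
instruction = (
    "Extract the product name and price from the text. "
    "Respond with exactly one line in the form: name=<name>; price=<price>"
)
text = "The UltraWidget 3000 costs $49.99."  # hypothetical input
prompt = f"{instruction}\n\nText: {text}\nAnswer:"
print(prompt)
```

If the model still produces inconsistent structure after this, that is the signal to move to few-shot examples.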