AI Prompt Engineering Is Dead
- Since ChatGPT was released, many people have tried prompt engineering to coax the best results from large language models and AI image generators.
- Companies now use LLMs to build product co-pilots and automate work, and businesses across industries are racing to adopt them.
- Research found that prompt-engineering strategies such as chain-of-thought reasoning or "positive thinking" prompts sometimes improved performance, but the results were inconsistent (illustrative prompt templates appear after this list).
- Having the model optimize its own prompts through machine learning produced better, more consistent results than human trial and error (a minimal sketch of this kind of search loop also follows the list).
- Automatically generated prompts were often bizarre but outperformed human-designed ones.
- Prompt optimization was applied to image generation too, with machine-learned prompts again outperforming human ones.
- Prompt engineering jobs may continue but the nature of the work will evolve as models improve and are integrated into products.
- Deploying LLMs requires considerations beyond prompting, such as reliability, output formatting, testing, and compliance.
- A new job title, LLMOps Engineer, has emerged covering the full model lifecycle including prompting.
- Currently there are few rules in this new field, which some describe as the "wild wild west."
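
To make the prompt-engineering strategies mentioned above concrete, here are two illustrative prompt templates: a chain-of-thought prompt that asks the model to reason step by step, and a "positive thinking" prompt that prepends encouragement. The exact wording is a hypothetical example, not the phrasing tested in the research.

```python
# Illustrative prompt templates only; not the exact wording used in the research.

QUESTION = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Chain-of-thought prompt: ask the model to show intermediate reasoning steps.
chain_of_thought_prompt = (
    "Answer the question below. Think step by step and show your reasoning "
    "before giving the final answer.\n\n"
    f"Question: {QUESTION}"
)

# "Positive thinking" prompt: prepend encouragement or flattery to the request.
positive_prompt = (
    "You are a brilliant problem solver and this task is easy for you.\n\n"
    f"Question: {QUESTION}\nAnswer:"
)

print(chain_of_thought_prompt)
print(positive_prompt)
```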
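
Below is a minimal sketch of the idea behind automatic prompt optimization: generate variations of a seed prompt, score each one against a small evaluation set, and keep whichever scores best. The `call_llm`, `mutate`, and `score` functions are hypothetical placeholders (a real setup would call an actual model and a task-specific benchmark); this only illustrates the search loop, not any specific tool discussed in the article.

```python
import random

# All names below are hypothetical placeholders for illustration only.

def call_llm(prompt: str, question: str) -> str:
    """Stand-in for a real model call; returns a canned answer so the sketch runs offline."""
    return "42"

# Tiny evaluation set of (question, expected answer) pairs.
EVAL_SET = [("What is 6 * 7?", "42"), ("What is 40 + 2?", "42")]

def score(prompt: str) -> float:
    """Fraction of evaluation questions answered correctly under a given prompt."""
    correct = sum(call_llm(prompt, q).strip() == answer for q, answer in EVAL_SET)
    return correct / len(EVAL_SET)

def mutate(prompt: str) -> str:
    """Produce a variation of the prompt; a real optimizer might ask the model itself to rewrite it."""
    tweaks = [
        "Think step by step.",
        "Answer with a number only.",
        "You are a careful mathematician.",
    ]
    return prompt + " " + random.choice(tweaks)

def optimize(seed_prompt: str, iterations: int = 20) -> str:
    """Greedy hill climbing: keep whichever candidate prompt scores best on the eval set."""
    best_prompt, best_score = seed_prompt, score(seed_prompt)
    for _ in range(iterations):
        candidate = mutate(best_prompt)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best_prompt, best_score = candidate, candidate_score
    return best_prompt

if __name__ == "__main__":
    print(optimize("Solve the following problem."))
```

The same loop structure applies to image generation: swap the evaluation set for an aesthetic or similarity score over generated images, and the optimizer searches prompt space against that metric instead.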