Are you ready to become an 'AI psychologist'? Now is the time to get in on the ground floor of this lucrative new career. Here's how to do it.

Generative AI is in its early days, but it’s already threatening to upend career paths and whole industries. While AI art and text generation are getting considerable mainstream attention, software developers tend to be more interested in large language models (LLMs) like ChatGPT and GitHub Copilot. These tools can help developers write code more efficiently using natural language queries.

If you’ve spent even a few minutes playing with public versions of generative AI tools, you’re familiar with the sorts of input you can give them to produce results. But not all queries are created equal. Learning how to craft AI prompts that get the best results as quickly as possible is rapidly becoming a marketable skill, known as prompt engineering.

What is prompt engineering?

Prompt engineering is “the art and science of precisely communicating your requirements to a generative AI tool,” says Mike King, CMO at AIPRM, provider of a prompt management tool and community-driven prompt library. “Think of it as the translator between human intent and machine output. And just like any translation, it requires a deep understanding of both sides of the conversation.”

“Prompt engineering requires a great command of language, good lateral thinking skills, and an understanding of the underlying technology,” adds Richard Batt, an AI consultant in the UK who offers prompt engineering as one of his services. “It can appear to be very simple when you first try it, but getting a response that is of a consistent quality for complex requests can be a lot harder than it seems!”

We spoke to practitioners in this rapidly growing field to find out about the opportunities for those who are interested in prompt engineering, and how you can learn the tricks of the trade and prove yourself to potential clients and employers. While a deep dive into prompt engineering is beyond the scope of this article, we’ll conclude with an example that demonstrates some of what’s involved in writing effective queries.

How to become a prompt engineer

Joseph Reeve leads a team of people working on features that require prompt engineering at Amplitude, a product analytics software provider. He has also built internal tooling to make it easier to work with LLMs. That makes him a seasoned professional in this emerging space. As he notes, “the great thing about LLMs is that there’s basically no hurdle to getting started—as long as you can type!”

If you want to assess someone’s prompt engineering advice, it’s easy to test-drive their queries in your LLM of choice. Likewise, if you’re offering prompt engineering services, you can be sure your employers or clients will be using an LLM to check your results.

So the question of how you can learn about prompt engineering—and market yourself as a prompt engineer—doesn’t have a simple, set answer, at least not yet. “We’re definitely in the ‘wild west’ period,” says AIPRM’s King. “Prompt engineering means a lot of things to different people. To some it’s just writing prompts. To others it’s fine-tuning and configuring LLMs and writing prompts. There are, indeed, no formal rules, but best practices like the mega prompts are emerging.”
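Even in this wild-west phase, the “precisely communicating your requirements” that King describes usually comes down to a few structural moves: give the model a role, a concrete task, explicit constraints, and clearly separated input data. Here is a minimal, dependency-free sketch of that structure; the labels and wording are conventions we chose for illustration, not an established standard.

```python
# A minimal sketch of a structured prompt: role, task, constraints, and clearly
# separated input data. The labels and wording are illustrative conventions only.

def build_prompt(role: str, task: str, constraints: list[str], data: str) -> str:
    """Assemble a structured prompt from labeled parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        "Input (between the dashed lines):\n"
        f"----------\n{data}\n----------"
    )

vague_prompt = "Can you make this function better?"

structured_prompt = build_prompt(
    role="You are a senior Python reviewer.",
    task="Rewrite the function below so it handles an empty list without raising.",
    constraints=[
        "Keep the function name and signature unchanged.",
        "Return only the rewritten code, with no commentary.",
    ],
    data="def average(xs):\n    return sum(xs) / len(xs)",
)

print(vague_prompt)       # the kind of query most people start with
print(structured_prompt)  # the same request, made precise
```

Pasting both versions into the LLM of your choice is a quick way to see the difference that specificity makes.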
While formal prompt engineering courses are beginning to emerge from providers like DeepLearning.ai, most developers will take a self-directed approach to learning and improving prompt engineering skills. Richárd Hruby, CTO of generative AI startup CYQIQ, lays out a tripartite strategy for learning about prompt engineering:

- Learn about model architectures.
- Try, fail, learn, and try again.
- Spend time on Twitter, Reddit, Discord, and other social media.

Let’s take a minute to consider each of these points in detail.

Know your large language models

While some aspects of how specific LLMs work are proprietary, much of the theory and research is publicly available. Familiarizing yourself with what’s happening under the hood will keep you from thrashing around too much.

“While specific implementations might differ, all LLMs are built upon the same foundational concepts and layers, which include tokenizers, embedding layers, and transformer layers,” says Andrew Vasilyev, a senior software developer working on prompts for an AI assistant in ReSharper at JetBrains. “Understanding these concepts is crucial for recognizing the limitations of LLMs and the tasks they can efficiently handle. Viewing an LLM merely as a black box can lead to an overestimation of its capabilities.”

Randall Hunt, VP of cloud strategy and solutions at Caylent, has learned the ins and outs of prompt engineering through formal and informal experimentation as part of his company’s use of AI models. He advises potential prompt engineers to keep up with the current state of research on LLMs. Like Vasilyev, he emphasizes tokenization as key to understanding how LLMs work. “These models are attempting to predict the next token, so it is important to give them context in the form of tokens to work with, and it is a balancing act between prompt length and prompt performance.” He adds that it’s important to understand the models’ limitations, including “context size, language constraints, and persona constraints.”
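Hunt’s “balancing act between prompt length and prompt performance” is easy to make concrete by counting tokens before you send a prompt. The sketch below assumes OpenAI’s open-source tiktoken tokenizer; the encoding name and the 8,000-token budget are illustrative assumptions rather than figures from anyone quoted here.

```python
# Minimal sketch: measure how much of a model's context window a prompt consumes.
# Assumes the tiktoken library (pip install tiktoken); the encoding name and the
# context budget below are illustrative assumptions.
import tiktoken

CONTEXT_BUDGET = 8_000  # hypothetical context window, in tokens


def token_count(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the number of tokens the given encoding produces for text."""
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))


prompt = (
    "You are a release-notes assistant. Summarize the changelog below in "
    "three bullet points, keeping code identifiers verbatim.\n\n"
    "Changelog:\n" + "fix: handle empty payloads in the HTTP client\n" * 50
)

used = token_count(prompt)
print(f"Prompt uses {used} tokens ({used / CONTEXT_BUDGET:.0%} of the assumed budget)")
```

Other model families ship their own tokenizers, so both the counts and the per-token costs will differ from model to model.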
Keep iterating

One of the most exciting parts of working with generative AI is that you get instant feedback. That means it’s worth taking the time to tweak and experiment with your prompts, a process that also helps you improve your skills.

“Crafting a prompt is never a one-shot process,” says CYQIQ’s Hruby. “Testing and refining the prompt multiple times is always the way to go. Often you are the first person to ever try prompting for your use case, so the only way you can learn how to write better prompts is by experimenting.”

Find your online community

Whether you’re honing your prompt engineering craft to boost your productivity on the job or at home on your own time, everyone we spoke to emphasized that you don’t have to do it alone. Enthusiast communities abound across various subreddits and Discords—and, of course, in Twitter chatter.

Showcasing your prompt engineering skills

In the still-emerging world of prompt engineering, online communities can serve a dual purpose. By sharing what you’ve learned, you can build up your reputation in the community, which can lead to career or contracting opportunities. Expanding that to other social media can help you make a name for yourself.

“There’s no secret in marketing the skillset,” says AIPRM’s King. “Engage in thought leadership through blogging and vlogging, especially with short-form video since it has the highest propensity for virality. Get active on the various gig economy marketplaces, because there are a lot of people who don’t have the patience to build out their prompt engineering skillset.”

Many of the folks we talked to also emphasized that you should be walking the walk—making your prompts and AI-based tools available for potential customers or clients to see and for others to learn from. Nagendra Kumar, co-founder and CTO of Gleen, a generative AI startup that builds customer success chatbots for enterprise brands, urges those honing their prompt engineering skills to “build ‘toy’ products with end-to-end experiences. The best way is to build some applications where your prompts are pre-inserted and users can play with them.”

And, of course, you can never go wrong by open sourcing your work or contributing to open source projects. “Create a repo of awesome prompts and regularly commit the prompts there. Show examples with the use cases,” says Kumar.

Open source projects also offer the opportunity to learn about the inner workings of different LLMs. “There are many open-source LLM tools on GitHub that would love contributions,” says Amplitude’s Reeve. “Look for a project you think might be interesting and start finding prompt weaknesses and suggest improvements.”

Prompt engineering is evolving rapidly

One thing that almost everyone we spoke to emphasized about prompt engineering is that the discipline is still embryonic and evolving rapidly.

“I think anyone who claims they’re an expert in this space should caveat their claim with something like, ‘This is a rapidly evolving field and the advice that makes sense today may not be the same as the advice that makes sense in six months,’” says Caylent’s Hunt. “I’d even go so far as to say there are not yet any true experts in this space. As models grow in context, shrink in per-token costs, and improve in throughput, prompt engineering advice will need to adapt.”

One big reason for those changes is that the underlying models themselves keep changing, with big AI companies and open source projects alike constantly training LLMs on more data and refining their capabilities. “As AI models and their architectures evolve—OpenAI releases a new version of GPT-4 every four to six weeks—so should the techniques for prompting,” says CYQIQ’s Hruby, who reiterates that online communities are a great place to share knowledge and observations about the shifts.

“Every time generative AI tools undergo upgrades or changes (and they do, frequently), the way they interpret and respond to prompts can shift,” adds AIPRM’s King. “This phenomenon, which we call ‘prompt drift,’ can be both fascinating and frustrating. It’s like owning a Lamborghini, and then one day you get in it, and the steering wheel has a response delay after you’re used to whipping around turns at 100 miles an hour.”

Prompt engineering might sound a bit like search engine optimization—another area where practitioners sometimes have to scramble to account for abrupt and unexpected changes from the creator of an underlying technology whose interests don’t always line up with their own. However, CYQIQ’s Hruby foresees a future where the relationship between prompt engineers and AI companies is more collaborative than that between SEO shops and Google, not least because nobody in the LLM space has achieved monopolistic dominance—at least not yet.

“Model providers will (maybe partly as a push from their investors) share more and more best practices on how devs can make the best of their models,” he says. “As of now, there is not much official communication, but mostly chatter from the community on what prompts work best for each model and version. But I would expect more transparency from providers in the future.”
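One practical response to prompt drift is to treat prompts like any other code path and wrap them in lightweight regression checks, so a model update that quietly changes behavior shows up in a test run rather than in production. The sketch below is one way to do that, assuming pytest and the OpenAI Python SDK; the prompt, the model name, and the assertions are hypothetical placeholders, not a suite recommended by anyone quoted here.

```python
# Minimal sketch of a prompt regression test using pytest and the OpenAI SDK.
# The prompt, the model name, and the expectations are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUMMARY_PROMPT = (
    "Return a JSON object with keys 'summary' (one sentence) and 'tags' (a list "
    "of at most three lowercase strings) describing the text below.\n\n"
    "Text: The new billing service retries failed charges up to three times."
)


def run_prompt(prompt: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as repeatable as possible
    )
    return json.loads(response.choices[0].message.content)


def test_summary_prompt_shape():
    """Guard the output contract, not the exact wording, so minor drift still passes."""
    result = run_prompt(SUMMARY_PROMPT)
    assert set(result) == {"summary", "tags"}
    assert isinstance(result["tags"], list) and len(result["tags"]) <= 3
```

Asserting on the shape of the output rather than its exact wording keeps a check like this useful across minor model updates.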
What will the work of professional prompt engineering look like in the near-to-medium term? Amplitude’s Reeve outlines how prompt engineering development can be integrated into an overall workflow at companies like his. “It turns out that great prompt engineering is highly collaborative, combining the knowledge of domain-specific experts, data engineers, and software engineers,” he says.

How to write effective AI prompts

Prompt engineering is a process, says Reeve. “As you discover the types of prompts that work well, you need to re-shuffle, split, and merge prompts to get perfect results every time.” He breaks the process down into four phases: prototyping, productionizing, internationalizing, and polishing and optimizing.

The prototyping phase is all about experimenting to discover the kinds of data you’ll want to augment your prompts with, and what the various LLMs are capable of when it comes to the specific task you’re trying to solve. This stage primarily requires knowledge of the problem you’re trying to solve (typically from product managers) and of the data that’s available in your system (typically from data engineers).

In the productionizing phase, you’re mostly attempting to split the task into the smallest number of prompts that can be reliably executed, and to wire the prompts up to real data in your application’s code. This stage requires traditional engineering skills, data engineering skills, and the outputs from the prototyping phase.

For many projects, language support is important. It’s during the internationalizing phase that you should consider tweaking the prompt to output the required languages. Depending on the model, this will probably be trivial, but it’s worth having native speakers around to verify the output.

It’s during the polishing and optimizing phase that the difficult part starts. Polishing your prompts to squeeze every last marginal gain out of the LLM is currently an open-ended task. OpenAI’s models are constantly changing, so you’ll need to come back regularly to make sure the prompts are still performing well—you may want to build some unit tests to make this easier. This phase involves tweaking the text and data passed into the prompt, and measuring the quality of results over time. It requires domain-specific knowledge of your problem area, and sometimes some software engineering.

Both cost and speed are directly related to the number of tokens passed in and out of an LLM, meaning you’ll want to make the input prompt (including data) and the output format as terse as possible. Cost also differs between LLMs, so early decisions can have a large impact here.

A prompt engineering example

While a full course in prompt engineering is beyond the scope of this article, we’ll wrap up with some tips and a prompt engineering example from Gleen’s Nagendra Kumar, showing the work and thought process that goes into putting together a useful and efficient prompt. Kumar offered the following tips for great prompt engineering in the context of software development (a short sketch applying them programmatically follows the list):

- Provide concrete tasks one at a time. Break up your prompt into discrete tasks.
- Add more context. Before starting the actual question, add context around the details and specify the role the AI will play.
- Provide content with a clear separator. Make the parts of the prompt clear. For example, say, “my code is in italics” or “my code is in quotes.”
- Don’t switch the context. Stick to the context that is related to your conversation. If you are asking the AI to debug front-end code, talk about front-end code. Avoid talking about anything unrelated to your context.
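Here is one way those tips can look when the prompt is assembled in code rather than typed into a chat window, assuming the OpenAI Python SDK: the role goes into a system message, the request is a single concrete task, and the code sits behind an explicit separator. The model name, the delimiter convention, and the sample component are placeholders for illustration.

```python
# A sketch of Kumar's tips applied programmatically: the role goes in a system
# message, the request is one concrete task, and the code is set off by an
# explicit separator. The model name, delimiter, and sample component are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

component_source = """\
function LogoRow({ logos }) {
  return <div>{logos.map((logo) => <img key={logo} src={logo} alt="" />)}</div>;
}
"""

messages = [
    {
        # Tip: add context and specify the role the AI will play.
        "role": "system",
        "content": (
            "You are a senior front-end engineer. Stick to React coding standards "
            "and keep the code easy to understand."
        ),
    },
    {
        # Tips: one concrete task, with content behind a clear separator.
        "role": "user",
        "content": (
            "My code appears between the lines marked ---CODE---. Modify the "
            "component so the logos rotate in a continuous circular loop, and put "
            "the CSS in a separate section.\n"
            "---CODE---\n"
            f"{component_source}"
            "---CODE---"
        ),
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```

The same structure carries over to Kumar's chat-style example below.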
Now, let’s see these tips in action. Here’s the example from Kumar.

As front-end developers, we have a number of React components and CSS to make an amazing UI experience. Sometimes we have to change the CSS, but that can be tricky. Here’s an example where we used developer prompts to get amazing results.

Problem: Given a list of customer logos, rotate them in a circular way on the front end.

Prompt:

You are an awesome front-end engineer. You are going to help me write some JavaScript and CSS code for my requirements following the guidelines mentioned.

Guidelines:
- My code is in React. So stick to the React coding standard.
- Create a separate section for CSS.
- Don’t complicate the code. Make it easy to understand.
- Write React code that is responsive to the screen size of mobile and desktop devices.
- Don’t assume anything—ask follow-up questions if needed.

Requirements: I have a React component that renders customer logos in the sheet. I want to change my React component such that the list starts rotating in a circular way. Take a look at my code written in the quote section and modify it to make it rotate in a circular way.

```
const HorizontalList = ({ items }) => {
  const style = { display: 'inline-block', margin: '0 10px' };
  return (
    <div>
      {items.map((item, index) => (
        <span key={index} style={style}>
          {item}
        </span>
      ))}
    </div>
  );
};
```

In the spirit of prompt engineering, we encourage you to input this prompt into your LLM of choice, check the results, and then see if you can refine it. Good luck!

The future of prompt engineering

Where prompt engineering goes from here is anybody’s guess. In fact, you could argue that the true dream of generative AI is that specialized prompt engineering knowledge won’t ultimately be necessary.

“We’re getting indications that OpenAI is attempting to push the technology in a direction where prompts require less engineering,” says AIPRM’s King. “I think in the long term it’ll be a lot easier to get what you want out of generative AI tools, and more tools will integrate and abstract the functionality, so it won’t need to be someone’s full-time job anymore.”

That said, people have been dreaming of easy, frictionless interactions with computers since the early days of COBOL, and it never quite works out. Learning how to talk to generative AI will likely be a lucrative new skill for software developers for years to come.