2023-04-09 15:15
You've Been Using ChatGPT Wrong! Master These 3 Tips to Become a GPT Player!
Original article by Normanrockon, MultiSig


Why is prompt engineering so important?


The goal of prompt engineering is to improve a language model's performance by providing clear, concise, well-structured input tailored to the specific task or application. Think of prompt engineering as communicating with a person in plain language so that they understand your intent and respond more satisfactorily.


Next, we'll look at OpenAI's prompt engineering best practices, FusionAI's automatic prompt generation, and how to get GPT to reflect on its own output. We'll also throw in an extra practical tip, so keep an eye out!


OpenAI's Official Best Practices


1. Use the latest models


For best results, we recommend using the latest, most capable models. As of November 2022, the best choice for text generation is the "text-davinci-003" model, and the best choice for code generation is the "code-davinci-002" model. And if you can use GPT-4, it is certainly better than ChatGPT.


2. Place the instruction at the beginning of the prompt and separate it from the text with ### or """


Not very effective:

Summarize the text below into a list of key points.

{input text}


A better option:

Summarize the text below into a list of key points.

Text:

""" {enter text} """


3. Be as specific, detailed, and descriptive as possible about the desired context, outcome, length, format, style, etc.


Not very effective:

Write a poem about OpenAI.


A better option:

Write a short inspirational poem about OpenAI that focuses on the DALL-E product launch (DALL-E is a text-to-image machine learning model) in the style of {famous poet}.


4. Articulate the desired output format through examples (example 1, example 2)


Not very effective:

Extract the entities from the text below. Extract the following four entity types: company names, people's names, specific topics, and general themes.

Text: {text}


A better option:

Extract the important entities from the text below. First extract all company names, then all people's names, then specific topics related to the content, and finally the general themes.

Desired format:

Company names: <comma-separated list of company names>

People's names: -||-

Specific topics: -||-

General themes: -||-

Text: {text}
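Specifying the output format pays off downstream: because the model answers with one labeled line per entity type, a few lines of string handling recover structured data. A minimal sketch (the parser is our own illustration):

```python
def parse_entities(model_output: str) -> dict[str, list[str]]:
    """Parse 'Label: a, b, c' lines into {label: [items]}."""
    entities = {}
    for line in model_output.strip().splitlines():
        if ":" not in line:
            continue  # skip lines that don't follow the requested format
        label, _, values = line.partition(":")
        entities[label.strip()] = [v.strip() for v in values.split(",") if v.strip()]
    return entities

output = "Company names: OpenAI, Stripe\nPeople's names: Sam Altman"
print(parse_entities(output))
```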


5. Start with zero-shot, then few-shot; if neither works, fine-tune


Zero-shot learning

Extract the keywords from the text below.

Text: {text}

Keywords:


Few-shot learning - provide a few examples

Extract the keywords from the text below.

Text 1: Stripe provides APIs for web developers to integrate payment processing into their websites and mobile applications.

Keywords 1: Stripe, payment processing, API, Web developer, Website, mobile application

Text 2: OpenAI has trained language models that are very good at understanding and generating text. Our API lets you use these models to solve virtually any task that involves processing language.

Keywords 2: OpenAI, language model, text processing, API.

Text 3: {text} 

Keywords 3:
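The few-shot template above can be assembled programmatically: a shared instruction, the worked examples in order, then the new input with an open slot for the model to fill. A minimal sketch (the helper is our own illustration, not an official API):

```python
def few_shot_prompt(instruction, examples, new_text):
    """Build a few-shot prompt; examples is a list of (text, keywords) pairs."""
    parts = [instruction, ""]
    for i, (text, keywords) in enumerate(examples, start=1):
        parts.append(f"Text {i}: {text}")
        parts.append(f"Keywords {i}: {keywords}")
    # Leave the final slot open for the model to complete.
    parts.append(f"Text {len(examples) + 1}: {new_text}")
    parts.append(f"Keywords {len(examples) + 1}:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Extract the keywords from the text below.",
    [("Stripe provides APIs for web developers.", "Stripe, API, web developer")],
    "OpenAI has trained powerful language models.",
)
print(prompt)
```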


Fine-tuning: see the fine-tuning best practices guide in the references.


6. Reduce ambiguity and inaccuracy


Not very effective:

The description of this product should be fairly short, only a few sentences, and not much more.


A better option:

Describe the product in a paragraph of 3 to 5 sentences.


7. Instead of just saying what not to do, say what to do instead


Not very effective:

The following is a conversation between the agent and the customer. Don't ask for a username or password. Don't repeat yourself.

Customer: I can't log into my account. Agent:


A better option:

The following is a conversation between an agent and a customer. The agent will attempt to diagnose the problem and suggest a solution, while refraining from asking any questions related to personally identifiable information (PII). Instead of asking for the username or password, refer the user to the help articles at www.samplewebsite.com/help/faq.

Customer: I can't log into my account. Agent:


8. Code generation - use "leading words" to nudge the model toward a specific pattern


Not very effective:

Write a simple Python function.

1. Ask me for a number in miles.

2. Convert miles to kilometers


In the code example below, adding "import" signals that the model should start writing Python. (Similarly, "SELECT" is a good leading word for the start of an SQL statement.)


A better option:

Write a simple Python function.

1. Ask me for a number in miles.

2. Convert miles to kilometers

import
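For reference, a completion of the kind the "import"-led prompt above tends to elicit might look like the following. This is our own illustration, not actual model output:

```python
MILES_TO_KM = 1.60934  # kilometers per mile

def miles_to_km(miles: float) -> float:
    """Convert a distance in miles to kilometers."""
    return miles * MILES_TO_KM

# The interactive version would read the number with input() and print
# the result; here we just call the conversion directly.
print(miles_to_km(26.2))
```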


FusionAI automatically generates better prompts


[image]


FusionAI is an AI tool that automatically generates more suitable prompts for GPT and produces the corresponding articles. I would recommend it to beginners as a tutorial for learning prompt engineering.


For example, given the prompt "I want to have a blog of prompt engineering", FusionAI rewrites it as:


[image]


As you can see, the generated prompt applies tips 3 and 6 above: it specifies the output length, makes the requirements more precise, directs the AI to focus on the benefits and challenges of prompt engineering, and asks for examples.


Let's challenge FusionAI with Chinese input, giving it the prompt "Give me a blog about prompt engineering" written in Chinese. FusionAI responds as follows:


[image]


As you can see, this prompt is seriously flawed: the wording is incoherent and unusable. It is a reminder that language and instructions lose information when translated, and the more times they are translated, the more the information degrades, until it becomes unrecognizable. We should therefore reach for first-hand, primary information as much as possible, and this applies to AI as well.


Because of this noise, we do not recommend relying on templates or FusionAI-like tools to generate content. Of course, they are fine for getting a feel for prompt engineering when you are new to it.


GPT, you need to be self-reflective


In a recent blog post, "Can LLMs Critique and Iterate on Their Own Outputs?", Eric Jang notes that LLMs can self-correct without any ground-truth feedback, and he tries using such self-reflection as a prompt engineering technique. (Note that currently only GPT-4 shows this capability.)


You can think of it as someone sending you a text message, then quickly unsending it and sending a new one.


Let's take an example. When we ask GPT-4 to write a poem that doesn't rhyme: "can you write a poem that does not rhyme? think carefully about the assignment", GPT-4 answers:


[image]


Obviously, the excerpt rhymes, which is not what we want. We then instruct GPT-4 to reflect on itself: "did the poem meet the assignment?", and GPT-4 answers:


[image]


This time the poem does not rhyme: GPT-4 has completed its own prompt engineering without receiving any additional feedback. I suspect this may have something to do with the LLM's unsupervised training, but why GPT-4 has this capability and GPT-3.5 does not remains unclear.
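The ask-critique-revise cycle described above can be sketched as a simple loop. Everything here is our own illustration: the `ask` callable stands in for a real LLM API, and the stub model only exists to make the loop observable.

```python
def reflect_until_done(ask, task, check_question, max_rounds=3):
    """Ask for an answer, then repeatedly have the model critique and
    revise its own output until it judges the task satisfied."""
    answer = ask(task)
    for _ in range(max_rounds):
        critique = ask(f"Task: {task}\nAnswer: {answer}\n{check_question}")
        if critique.strip().lower().startswith("yes"):
            break  # the model judges its own answer acceptable
        answer = ask(f"Task: {task}\nPrevious answer: {answer}\n"
                     f"Critique: {critique}\nPlease revise the answer.")
    return answer

# Stub model for demonstration: the first draft rhymes, the revision does not.
def stub_model(prompt):
    if prompt.startswith("Task:") and "did the poem" in prompt:
        return "No, the poem rhymes." if "roses are red" in prompt else "Yes."
    if "Please revise" in prompt:
        return "a poem without rhyme"
    return "roses are red, violets are blue"

result = reflect_until_done(stub_model, "write a poem that does not rhyme",
                            "did the poem meet the assignment? Answer Yes or No.")
print(result)
```

With a real model behind `ask`, the critique step is exactly the "did the poem meet the assignment?" follow-up shown above.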


There are limits, of course. Give GPT-4 two random five-digit numbers and ask for their product, and you will find that no matter how many times you ask it to reflect, it cannot give the right answer; it just keeps producing polite nonsense. For those who want to dig deeper, the links at the end of the article are Eric's blog post and a recent preprint paper, Reflexion.


One more thing


Interested readers may have noticed that authors generally choose English as the prompt language when using LLMs. This is because a pretrained model's performance depends on its pretraining dataset, and generally speaking, the more data, the better the training. As the world's most widely used language, English has far more data available than Chinese. So unless you need output that is strongly tied to a Chinese context, I recommend using English as the prompt language.


Summary


In this article we introduced three approaches to prompt engineering: OpenAI's recommended best practices for writing prompts up front, AI-based automatic prompt generation, and post-hoc self-reflection. We also recommend that non-native English speakers try using English as the language for interacting with LLMs.


References:
https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api
https://docs.google.com/document/d/1h-GTjNDDKPKU_Rsd0t1lXCAnHltaXTAzQ8K2HRhQf9U/edit#

https://fusion.tiiny.site/home.html

https://evjang.com/2023/03/26/self-reflection.html

https://arxiv.org/pdf/2303.11366.pdf


