
Sharing is Caring: My Top Prompts for LLMs - SMX Munich 2024

The quality of results that you can achieve with LLMs will depend to a high degree on the quality of the prompts you are giving. And while there is A LOT to learn about prompt engineering, at SMX Munich 2024 I was asked to share some of my favorite prompting tips and some secret sauce. So here goes!

Peak Ace

June 11, 2024

Transcript

  1. A whole load of tips and tiny text are squeezed into jam-packed PDFs, barely readable. My timelines are full of ChatGPT cheat sheets!
  2. The reality: the PDFs are rotting somewhere on my hard drive, collecting virtual dust. Sadly, most of them are garbage.
  3. ChatGPT doesn't ask questions – it fills gaps with assumptions. In contrast, any human being would ask questions if they didn't understand something. You: "I need your help creating regular expressions to build URL redirects. What information do you need from me?"
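Once ChatGPT has gathered the details it asked for, it can produce a redirect rule you can test yourself. A minimal sketch of that kind of rule, with an entirely made-up URL scheme (the paths and pattern below are illustrative, not from the deck):

```python
import re

# Hypothetical example: rewrite legacy /blog/YYYY/MM/slug URLs to /insights/slug.
# This is the kind of redirect regex you might ask ChatGPT to draft for you.
pattern = re.compile(r"^/blog/\d{4}/\d{2}/([\w-]+)/?$")

def redirect(path):
    """Return the new URL for a matching legacy path, else None (no redirect)."""
    m = pattern.match(path)
    return f"/insights/{m.group(1)}" if m else None

print(redirect("/blog/2023/07/prompt-engineering"))  # /insights/prompt-engineering
print(redirect("/about"))                            # None (pattern does not match)
```

Testing the generated pattern against a handful of real URLs before deploying it is exactly the kind of double-checking the next slide recommends.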
  4. Always double-check if that's REALLY everything… Double-checking usually produces a good chunk of additional ideas and thoughts to consider as prompt input. You: "Do you need any further information? Are you sure this is all you need?"
  5. Ditch what you don't need and let ChatGPT create a prompt template for you. You: "Please convert this into a prompt template and mark any placeholders I need to fill in with brackets. Remove items 6 (performance) and 7 (testing)."
  6. I find this extremely helpful for building prompts which are less prone to errors and actually comprehensive.
  7. Depending on your goal, pick your framework wisely. Hat tip to Dr Marcell Vollmer, who shared this visual on LinkedIn outlining different strategies for crafting effective prompts that target specific outcomes by emphasising the roles, tasks, and desired results: a structured approach to formulating prompts using different frameworks, each designed to optimise the interaction for specific outcomes. Source: https://pa.ag/3T2kMQ8
  8. Some of my core ChatGPT/LLM use cases right now, which all come with a somewhat different prompt syntax: doing research, creating outlines, creating summaries, validating [things], ideation & concepts, speeding up learning, simplifying content, and writing code.
  9. Writing code such as complex Regular Expressions (RegEx): RegEx query filters in Google Search Console are extremely handy and powerful, but a pain to create by hand.
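Search Console's custom regex filters use RE2 syntax; most simple patterns behave identically in Python's `re` module, which makes it easy to sanity-check an LLM-generated filter against sample queries before pasting it into GSC. The filter and queries below are illustrative:

```python
import re

# A common GSC filter: isolate question-style queries. Simple patterns like
# this are valid in both RE2 (which GSC uses) and Python's re module.
question_filter = re.compile(r"^(how|what|why|when|where|who)\b")

queries = ["how to create redirects", "peak ace agency", "what is nucleus sampling"]
matches = [q for q in queries if question_filter.search(q)]
print(matches)  # ['how to create redirects', 'what is nucleus sampling']
```

Note that a few Python-only features (e.g. lookbehind) are not supported by RE2, so keep generated patterns simple or verify them in GSC itself.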
  10. For (complex) data, drag'n'drop CSV files and explain the columns. E.g., upload Sistrix data exports and explain the columns to get a quick first overview of untapped potentials. Convenience = on-the-fly fixes: "[…] data from the spreadsheet appears to be incorrectly delimited, resulting in a single column containing all data […] it has been successfully reformatted, revealing several columns […]"
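The "incorrectly delimited" fix ChatGPT performed can be reproduced locally with the standard library: `csv.Sniffer` guesses the delimiter (many European tool exports use semicolons) so columns parse correctly instead of landing in one. The sample rows below are made up for illustration:

```python
import csv
import io

# Made-up export snippet using semicolons, as many European SEO tools do.
raw = "keyword;position;search_volume\nseo audit;4;1900\nredirect regex;12;320\n"

# Sniff the delimiter from the sample, restricted to the likely candidates,
# then parse with the detected dialect so every column comes out separately.
dialect = csv.Sniffer().sniff(raw, delimiters=";,")
rows = list(csv.reader(io.StringIO(raw), dialect))
print(rows[0])  # ['keyword', 'position', 'search_volume']
```

This is the same on-the-fly reformatting step, just done deterministically on your own machine.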
  11. Why use Custom GPTs? Just ask ChatGPT: they offer more "everything" (context awareness, consistency, fine-tuning, …).
  12. A Custom GPT in its simplest form: using Peak Ace's Structured Data GPT to debug and fix errors in JSON-LD mark-up. Source: https://pa.ag/structured-data
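A tiny sketch of the first step such a structured-data debugger performs: parse the JSON-LD block and check for the fields a validator would flag first. The snippet and the checks are minimal and illustrative, not the GPT's actual logic:

```python
import json

# Minimal JSON-LD example; a real article snippet would carry more properties.
snippet = """{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Sharing is Caring: My Top Prompts for LLMs"
}"""

# json.loads raises a ValueError if the mark-up is not even valid JSON,
# which catches the most common copy-paste breakage before any schema checks.
data = json.loads(snippet)
missing = [key for key in ("@context", "@type") if key not in data]
print(missing)  # [] means both required framing keys are present
```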
  13. You could prompt a Custom GPT the same way… Technically, however, you need fewer details (per prompt), such as a specific context, as you have already provided these details when creating/training/setting up the Custom GPT.
  14. Pimp your GPT: integrating third-party data to make it smarter. A Custom GPT, linked to the DataForSEO API to provide real-time access to the latest search volume data.
  15. Understanding OpenAI's Temperature: temperature is used to control the randomness of outputs. The higher you set it, the more random the outputs you will get in return; conversely, lower values produce more deterministic outputs.
  16. It's a number between 0 and 2, with a default value of 0.7 or 1.0 depending on the interface. For code generation or data analysis, go with lower values around 0.2–0.3; for chatbots, around 0.5; and for creative writing, 0.7 or 0.8.
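What temperature actually does can be shown in a few lines: the model's raw scores (logits) are divided by the temperature before being turned into probabilities, so low values sharpen the distribution and high values flatten it. The logits below are made up for illustration; the API applies this server-side:

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax: softmax(logits / T)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # illustrative token scores, highest first
p_low = softmax(logits, 0.2)   # sharply peaked on the top token
p_high = softmax(logits, 1.5)  # much flatter, more random sampling
print([round(p, 3) for p in p_low])
print([round(p, 3) for p in p_high])
```

This is why 0.2–0.3 is a good fit for code and data work: nearly all probability mass sits on the model's top choice, so outputs become close to deterministic.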
  17. top_p as an alternative to temperature sampling: the top_p parameter, also known as nucleus sampling, is another setting to control the diversity of the generated text. Whatever you do, don't use both at the same time…
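Conceptually, nucleus sampling keeps only the smallest set of tokens whose cumulative probability reaches top_p, then renormalises over that set. A toy version for intuition only; the probabilities are made up and the real sampling happens inside the API:

```python
def nucleus(probs, top_p):
    """Keep the smallest top-ranked token set whose cumulative probability
    reaches top_p, then renormalise so the kept probabilities sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Illustrative next-token distribution: top_p=0.8 drops the unlikely tail.
probs = {"the": 0.5, "a": 0.3, "banana": 0.15, "xylophone": 0.05}
print(nucleus(probs, 0.8))  # keeps "the" and "a", renormalised
```

Because temperature and top_p both reshape the same distribution, tuning them simultaneously makes results hard to reason about, hence the advice to change one and leave the other at its default.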