ChatGPT, prompt engineering and blogging: What does the future hold for today's writer?
True story: I spent months developing a prompt that prevented ChatGPT from using the word "conclusion" at the end of every blog post. I came close several times, including results that omitted the word about 60 percent of the time. So I kept tweaking the prompt just slightly, testing each version repeatedly. And I'd come a little closer, only to fall just a tiny bit short as ChatGPT would either use that exact word or some variation of it, like "in conclusion," "in summary," or even "to wrap things up."
Finally, after about four months of tweaking and rewriting the prompt, I developed one that avoided the word 100 percent of the time – or so I thought.
Then, just for grins, I used the prompt again tonight on a random, easy blog topic just before writing this post – and the resulting content produced by ChatGPT 4.0 used the phrase "concluding this guide" in the last paragraph.
Of course, the prompt worked the very next time I used it and avoided using any variations of the word "conclusion."
Still, it's back to the drawing board.
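If you're curious what "testing each version repeatedly" can look like in practice, here's a rough sketch using the official OpenAI Python client. The model name, prompt and banned-phrase list below are placeholders for illustration – not my actual prompt.

```python
# Rough sketch: run the same prompt several times and count how often
# the output sneaks in a "conclusion"-style phrase.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable. The prompt and banned list are
# placeholders, not the prompt discussed in this post.
from openai import OpenAI

client = OpenAI()

PROMPT = "Write a 300-word blog post about spring lawn care. Do not end with a summary."
BANNED = ["conclusion", "in summary", "to wrap things up", "to sum up"]

hits = 0
runs = 10
for _ in range(runs):
    response = client.chat.completions.create(
        model="gpt-4",  # or "gpt-3.5-turbo"
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content.lower()
    if any(phrase in text for phrase in BANNED):
        hits += 1

print(f"Banned phrasing appeared in {hits} of {runs} runs.")
```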
The problem with the DIY approach …
Welcome to the new age of content writing.
Digital marketing companies and individual freelancers are losing clients left and right because ChatGPT, Bard, Jasper and other AI tools are now available to the public.
That's because clients, still recovering from the pandemic, are intrigued by the idea of taking a DIY approach to their digital marketing, especially regarding SEO blogging. On the one hand, a client can pay a freelance writer for a 1,000-word blog post about a topic the client probably suggested. The post might or might not be search engine optimized, the headline might be more "meh" than great, it won't come with art (like photos or graphics), and it definitely won't come with social media text to promote it. And that 1,000-word post, although pretty well written, will cost at least $30 to $50.
That's why ChatGPT is so intriguing to businesses. "I can just get the AI tool to write all my digital marketing content," they're saying.
And they're right – to an extent.
Make sure you test your prompts out on both 3.5 and 4.0
The resulting posts created by ChatGPT just aren't very good. It's obvious to almost anyone that the posts were likely written entirely with AI technology. And you can bet your next paycheck that Google also knows that – which could eventually kill a company's website rankings.
So what's a business, and writer, to do?
… and the solution: prompt engineering
This circles back to the new age of writing.
First, consider the following: newspaper writing is a far different style from SEO blogging, which is a far different style from social media writing, which is a far different style from email marketing writing, which is a far different style from ad writing.
You get the gist.
Sure, it's all just different forms of writing, but each comes with a distinctive, unique style that takes skill, diligence and time to perfect. That doesn't mean, however, that a newspaper reporter can't learn over time to become a great social media marketer, for example.
It just takes a skilled writer to learn a different writing style.
And that's what freelancers and digital marketing companies need to be mindful of because an entirely new type of writer is replacing today's blogger: the prompt engineer.
What it means, though, is that just because you were a good newspaper writer doesn't mean you will be a good SEO blogger. And just because you're an effective SEO blogger doesn't mean you can write equally effective social media posts.
And it's a far more challenging style of writing than it might seem.
Life of the (frustrated) prompt engineer
When it comes to prompt engineering, changing even one word in a ChatGPT prompt can drastically change the result – for better or worse. It's not a big deal if you're just using the AI language model to write your family's next Christmas letter. But if you're trying to develop a complex chatbot that performs a significant time- and money-saving service for an industry?
Then this sort of minutiae makes prompt engineering extremely challenging.
So why not make all prompts longer to cover all possibilities?
If only it were that easy …
Fact is, for now, ChatGPT loses its effectiveness if you feed it too much data at one time – in this case, the longer the prompt, the less likely even ChatGPT 4.0 is to follow every directive to a "T."
For example, you could give ChatGPT a prompt like this: "Please include this exact keyword, 'find plumbers near me,' once in the third paragraph."
Sure enough, ChatGPT will write that post for you and, voilà, you'll find the keyword "find plumbers near me" in the third paragraph.
But you'll also find it in the first paragraph.
And the fourth.
And definitely in the last.
And you'll find about two or three more examples of that keyword phrase worded slightly differently, like "plumbing company near me."
Therefore, you can't leave anything to the imagination in your prompts. Sure, you could add these additional directives: "Avoid using that keyword phrase in the first paragraph; avoid using that keyword phrase in the second paragraph," and so on until you've covered all the paragraphs.
But if your entire prompt is that detailed, then it may be too long – and therefore too complicated – for you to get the desired results. Simply put, at some point, if your prompt is too long, ChatGPT will only follow some, but not all, of the instructions.
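To make that concrete, here's a hypothetical example of how quickly a prompt balloons once you start spelling out every paragraph-by-paragraph restriction (this is for illustration only, not a recommended prompt):

```python
# Hypothetical example of an over-specified keyword-placement prompt.
# Every paragraph gets its own directive, and the prompt balloons quickly.
prompt = (
    "Write a 1,000-word blog post about emergency plumbing services. "
    "Include the exact keyword 'find plumbers near me' once in the third paragraph. "
    "Avoid using that keyword in the first paragraph. "
    "Avoid using that keyword in the second paragraph. "
    "Avoid using that keyword in the fourth paragraph. "
    "Avoid using that keyword in the fifth paragraph. "
    "Avoid close variations such as 'plumbing company near me.' "
    "Do not repeat the keyword anywhere else in the post."
)
```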
The challenge with long prompts
ChatGPT, like other language models, has a maximum token limit, which for GPT-3.5 is approximately 4096 tokens. Tokens can be as short as one character or as long as one word.
So, when a prompt is too long and exceeds this limit, the model can't process it entirely. And if you paste a considerable amount of text or give extremely long directives, part of it may be cut off, resulting in incomplete understanding and potentially inaccurate responses.
Plus, processing large amounts of information can also lead to diluted context, where the AI might lose track of the main topic, affecting the relevance and coherence of the output. Thus, concise and direct prompts yield the most effective responses.
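If you want to see how close a prompt is to that limit before you send it, you can count its tokens yourself. Here's a minimal sketch using OpenAI's tiktoken library; the prompt is just a placeholder:

```python
# Count how many tokens a prompt uses before sending it to the model.
# Assumes OpenAI's tiktoken library (pip install tiktoken).
import tiktoken

prompt = "Write a 1,000-word SEO blog post about emergency plumbing services."

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
token_count = len(encoding.encode(prompt))

print(f"Prompt uses {token_count} tokens of the roughly 4,096-token limit.")
```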
Be very careful what you say!
Language models like ChatGPT function on patterns found in the data they were trained with. And their output is sensitive to changes in the prompt – even changes as small as a single sentence or word. This sensitivity stems from the nature of natural language processing and machine learning.
Then you have these factors to consider, among others (a quick sketch of this sensitivity follows the list):
- Contextual Understanding: A language model doesn't have "knowledge" or "awareness" in the human sense – at least not yet! – but instead relies on patterns learned from a vast amount of text data. Hence, even a slight change in the context (a sentence in the prompt) can lead the AI to a different pattern and, thus, a different response. Think of it like the "butterfly effect," only for AI language models.
- Semantic Sensitivity: Although it might seem like it sometimes, AI does not understand language the way we do. Instead, it processes text and determines responses based on a statistical analysis of the words and phrases in the prompt. Therefore, altering a sentence could modify the semantics, guiding the AI's response differently.
- Prompt Dependence: The AI's response is entirely prompt-dependent. It doesn't have a memory or personal thoughts to draw from, making it heavily reliant on the prompt for generating a response.
- Specificity: If a change makes the prompt more specific or more vague, it can affect the AI's response. The AI responds better to specific prompts because specificity narrows the scope of possible responses.
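An easy way to see this sensitivity for yourself is to send two prompts that differ by a single word and compare the results side by side. Here's a minimal sketch, again assuming the official OpenAI Python client; the prompts are just examples:

```python
# Send two prompts that differ by a single word and compare the responses.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Write a short, friendly intro paragraph for a blog post about hiring a plumber.",
    "Write a short, formal intro paragraph for a blog post about hiring a plumber.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n{'-' * 40}")
```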
So why do prompts lose their effectiveness over time?
Prompt engineers face another problem with AI technology like ChatGPT: the prompts can lose effectiveness over time.
That's because ChatGPT is updated or "retrained" periodically by OpenAI, and each update can draw on newer training data and fine-tuning. As the training data changes, the model is adjusted based on the latest language patterns and topics.
So, if a prompt seems less effective over time, it could be due to language patterns or topics on the internet evolving, causing the model to be updated accordingly. That's why the responses generated by a specific prompt might change or seem less effective if they no longer align with the most recent language patterns the model has been trained on.
That said, not all prompts go "rotten" after a while. Without any updates or changes in the model, a prompt should consistently yield similar results over time (unless it's relatively complex). If there's a noticeable change in the results to a specific prompt, then it might be worth examining the prompt for any contextual or semantic aspects influencing the responses.
In other words, it could be a "you" problem, not a ChatGPT problem.
How to create an effective prompt
So how do you develop concise, effective prompts that stay "ripe" for more extended periods?
Again, that's why prompt engineers make the big bucks.
Here are seven things to be mindful of when crafting a prompt (a sample prompt that applies them follows the list):
- Clarity: Clearly articulate what you want the AI to do. Ambiguity can lead to results that don't meet your needs.
- Concision: While being clear, keep directives as brief as possible. Extended directives might result in less effective responses, as the AI could get overwhelmed with information.
- Focus on Essentials: Include only the most essential information and instructions in your directives. Every detail might seem important, but prioritize those directly influencing the final output.
- Avoid Redundancies: If a certain instruction is obvious or generally understood, there's no need to include it. This helps keep your directives concise.
- Ordering: Arrange the directives in the order of their importance. The most crucial ones should come first.
- Specify Structure: If structure matters (like in an article or blog post), outline it briefly in your directive. Define the key sections you want to see in the final output.
- Include Examples (If Necessary): Providing an example can be helpful if a directive is complex or could be misinterpreted. However, use this sparingly to maintain brevity, as too many examples can make the prompt too long to be effective.
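Putting those seven points together, a prompt built along those lines might look something like this – a hypothetical example, not a guaranteed formula:

```python
# Hypothetical prompt that tries to follow the seven points above:
# clear, concise, essentials only, most important directives first,
# structure specified, and one brief example where it helps.
prompt = (
    "Write a 700-word SEO blog post for a Dallas plumbing company. "               # clarity
    "Include the exact keyword 'find plumbers near me' once, in the third "
    "paragraph only. "                                                              # most important directive first
    "Structure: a headline, an intro, three H2 sections, and a short closing "
    "paragraph. "                                                                   # structure
    "Do not use the word 'conclusion' or any variation of it. "
    "Write in a friendly, plain-spoken tone, like this: 'Nobody plans for a "
    "burst pipe.'"                                                                  # one brief example
)
```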
The future of blogging
Prompt engineering is an emerging market. Just look at all the new prompt marketplaces that have suddenly appeared. Some top ones are already generating millions of page views each month – but have only been "live" for less than six months!
And that's where the market is going for today's blogger and freelance writer. Yes, unfortunately, a lot of great writers will lose their jobs over this new technology. That's what happens every time new tech emerges.
But the writers who are going to survive this wave are the ones who know how to get these AI tools to perform precisely the way they want them to, and who can fine-tune the instructions to get the best possible results.
In other words, bloggers will survive, just not in their current form. Instead, today's "blogger" is tomorrow's "prompt engineer."
You know, someone who can get ChatGPT to stop adding that dang word "conclusion" at the end of every blog post.
#ai #chatbot #GPT4 #promptengineer #promptengineering #aiprompts #generativeai #llm #blogging #blogger #contentmarketing
-------------------
JakeGPT is the owner and operator of JakeGPT1973.com, an AI-powered digital marketing company based out of Carrollton, Texas. Email him at jakegpt@jakegpt1973.com.