Tarasekhar Padhy

ChatGPT Is Best for Content Writing (4 Reasons)


I’ve tested various LLMs for different tasks within a typical content creation workflow. Tasks included generating ideas, rephrasing existing content, generating meta details, and more. 


Of all the LLMs that I had the pleasure (and pain) of trying out, ChatGPT stood out from the rest for these reasons:


  1. Large knowledge base: You can write about almost any niche topic, provided it isn’t the latest trending news. It also helps you learn any concept you are writing about.

  2. Internet access: When you are researching recent news or an industry update, GPT-4o can browse the most relevant pages and pull out the information you need. You can also point it to specific web pages to go through.

  3. Easy customizability: Anyone can create custom GPTs or provide custom instructions, making it simpler to automate regular administrative tasks. The only complexity is adding custom actions to GPTs, which remains developer-dependent.

  4. GPT library: There are plenty of useful GPTs created by AI enthusiasts (like myself). They cater to tasks ranging from informational to creative, and they can make your daily work easier. The challenge, though, is finding the right one.


While ChatGPT is the best all-rounder, it comes with a fair share of limitations. 


In the rest of this chapter, let’s dive into how OpenAI’s staple product can be improved and talk a bit about other LLMs (proprietary and open-source) that come with some nice perks.


Limitations of ChatGPT


As AI is a complex technology, some of the following limitations have a cause-and-effect relationship with each other, depending on how you look at them. After discussing each limitation and its impact, I suggest some alternative LLMs that are free from that handicap.


1. Context window


ChatGPT is the most heavily funded LLM around, yet it has one of the smallest context windows.


You can’t upload large documents or long passages of text for analysis (or anything else) because the 128K-token context window won’t let you. This is also a problem when you are trying to extract insights from multiple docs in a single chat.
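For a quick sanity check before pasting a document into a chat, you can count its tokens locally. Here’s a minimal sketch using the tiktoken package; the file name is a placeholder, and I’m assuming your tiktoken version knows GPT-4o’s encoding (with a generic fallback if it doesn’t):

```python
# Rough token count for a document before pasting it into ChatGPT.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # generic fallback
    return len(enc.encode(text))

with open("research_notes.txt", encoding="utf-8") as f:
    doc = f.read()

tokens = count_tokens(doc)
print(f"{tokens} tokens; fits in the 128K window: {tokens < 128_000}")
```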


The research process becomes challenging and disjointed when you have to upload your files or data in separate chats to gather detailed information about a single topic.


Moreover, if the chat goes on for too long, GPT-4o forgets the earlier context of the conversation entirely and the hallucinations increase. There were many instances when I was generating research material for long-form guides, only to get a load of crap for the last 50% of prompts.


The alternatives include Claude 3.5 (200K tokens) and Google Gemini 1.5 (2 million tokens).


However, neither can access the internet, though both have decent-sized knowledge bases. If you have a bunch of large files that need analysis, you can use Gemini 1.5 inside NotebookLM. NotebookLM can also analyze YouTube videos and extract specific information from them.
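If you would rather script this than use NotebookLM, the same Gemini 1.5 models are reachable through the google-generativeai package. A minimal sketch, assuming you have an API key from Google AI Studio (the file name and prompt are placeholders):

```python
# Minimal sketch: asking Gemini 1.5 Pro to digest a large document.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-pro")

with open("industry_report.txt", encoding="utf-8") as f:
    doc = f.read()

response = model.generate_content(
    "Summarize the key findings and statistics in this report:\n\n" + doc
)
print(response.text)
```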


2. Subpar understanding


Have you ever felt that ChatGPT fails to follow your instructions accurately, especially in creative tasks? I don’t mean the times when the result is not useful, but the times when GPT-4o seems to miss the point of the instructions entirely.


Look, I am aware this is subjective, and the observation may well be biased: most AI users have spent more time with ChatGPT than with other LLMs, given its popularity and versatility.


However, I found that when a task has too many instructions, and those instructions aren’t provided in a clear structure and sequence, the output is simply dumb. The exact same prompts delivered significantly better results with Claude 3.5 Sonnet.


Claude 3.5 Sonnet outperforms GPT-4o on most of the logical and reasoning benchmarks.


3. Too woke


Sometimes I write opinionated pieces on Medium on various topics such as technology and AI.


Typically, these pieces start with an emotionally charged standpoint I am passionate about, followed by the facts and logical reasoning that led me to it. I have shared those facts and that reasoning with ChatGPT, only to get corporate-friendly gobbledygook back.


I am aware that proprietary LLMs have strict guardrails that prevent them from generating harmful responses, but satire on how iPhones lack innovation is anything but harmful. Once, I tried generating an image of an obese woman chomping on fast food, which it refused because the image apparently perpetuated a stereotype.


The best way around this roadblock is open-source LLMs such as the Dolphin Mistral family and other fine-tuned Llama models. My preferred tools are AnythingLLM, for its user-friendly interface, and LM Studio, for the range of options it offers.


While using them, be mindful that these models have limited world knowledge, so you have to provide as much context as possible.
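If you prefer scripting over a chat interface, LM Studio can also serve whatever model you have loaded through a local, OpenAI-compatible endpoint. A minimal sketch, assuming the local server is running on its default address and a Dolphin Mistral variant is loaded (the model identifier below is a placeholder):

```python
# Minimal sketch: querying a local model through LM Studio's
# OpenAI-compatible server (default address assumed; no real API key needed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="dolphin-mistral",  # placeholder for whichever model is loaded
    messages=[
        {"role": "system", "content": "You are a blunt, satirical tech columnist."},
        {"role": "user", "content": "Draft an opening paragraph arguing that iPhones have stopped innovating."},
    ],
)
print(response.choices[0].message.content)
```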


4. Information retrieval


There were many instances when I uploaded documents well within GPT-4o’s context range to extract some information. In most cases, the result was trash. The only documents where the performance was useful were those with fewer than ten pages.


This is a bit frustrating, as you have to switch to other tools like NotebookLM even for small documents and reports to extract specific information. It’s embarrassing that even some open-source models (like Llama 3.1) perform better than OpenAI’s flagship.


However, information retrieval with general-purpose LLMs can be tricky, as the model has to strike a balance between being creative and faithfully fetching the information you need. The main lever you have over this trade-off is the temperature setting.


The temperature parameter determines the degree of randomness in each reply. The higher the temperature, the more creative the model gets. A lower temperature makes the outputs more accurate and predictable, as the answers stick closer to your data or the model’s own knowledge base.
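To make this concrete, here is a minimal sketch of toggling the temperature through the OpenAI API; the prompts are just examples, and every API call is billed per token:

```python
# Minimal sketch: low temperature for factual extraction,
# high temperature for ideation. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Stick to the facts in the provided text.
print(ask("List every deadline mentioned in this brief: ...", temperature=0.1))

# Freer, more creative brainstorming.
print(ask("Brainstorm ten headline angles for a piece on AI burnout.", temperature=1.0))
```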


If you’d rather not write code, OpenAI’s Playground lets you experiment with the same temperature parameter, but you still pay based on the number of tokens you use, which can sting a bit if you are an individual creator like me. Hence, I prefer Google AI Studio with Gemini 1.5.


5. Fatty replies


There are both pros and cons to a detailed response. When you are generating ideas or validating strategies, longer answers are better because they cover the essential points and offer a multi-perspective analysis, which speeds up the overall research process.


The cons here are obvious. 


In many instances, particularly when I just want to check something quickly, the process is slow. I have to scan through GPT-4o’s mammoth replies to find the direct answer to the direct question I asked.


Fortunately, there is a simple workaround for this. You can use the Quick Answers GPT, which is instructed to reply within 50 words. It will also automatically use the internet for queries on time-sensitive topics such as news, elections, sports, and weather.


Wrapping up: ChatGPT is the jack of all trades


It's kind of ironic that the title is about why ChatGPT must be in a writer’s tech stack while the majority of the article is nitpicking on things it can’t do as well as it's competitors. Well, the broader objective with this approach is to provide you with a neutral perspective.


You can make the most money with GPT-4o, or any other LLM for that matter, when you understand where it shines and where it sucks. 


There are plenty of amazing things about ChatGPT, but it’s definitely not the best at anything in particular. Its versatility and its ability to provide “good enough” responses on most of the action items in a content writing workflow make it the best overall choice.


If you can afford only one tool, my suggestion would be to go with ChatGPT. At the same time, it’s a great idea to learn where other free tools can help you better.






