Artificial Intelligence (AI) is a rapidly growing technology field that promises to revolutionize how we live and interact with one another. In particular, the internet’s obsession with ChatGPT, a viral AI that responds well to all sorts of prompts, has sparked questions and concerns surrounding using AI writing tools. This piece will explore these writing systems – their limitations, ethical and legal implications, and potential uses – to understand how to leverage them for your needs.
A quick note before we begin – we’ll use Jasper.AI throughout to provide examples. That doesn’t mean we recommend the service; it just happens to be a popular AI writing tool we can access. With that out of the way, let’s dive in.
Before delving into how we can leverage AI to assist in content development, it’s essential to understand its limitations. While current AI tools can help in writing any copy imaginable, from articles to advertisements, they cannot create publish-ready materials for the following reasons:
These tools are trained to predict the next word in a sequence so that their output sounds natural; they are not trained to be factually accurate. When a gap in their knowledge appears, AI writing tools will fill it with whatever sounds plausible or professional.
For example, in its first public demo, Google’s AI chatbot Bard confidently stated a factual error about the James Webb Space Telescope. These tools do not cite their sources, so it’s imperative that people using them either fact-check the output or feed verified facts to the generator through prompts.
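To make the next-word-prediction mechanic concrete, here’s a toy Python sketch of the underlying idea. Real systems use neural networks trained on vast corpora; this illustrative stand-in just counts word pairs, but it shows the same core behavior: the model picks a statistically likely continuation, with no notion of whether that continuation is true.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count which word follows which in a toy corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word: str) -> str:
    """Return the most frequent follower of `word`, or '?' if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "?"

# A tiny, made-up corpus for illustration only.
model = train_bigrams("the cat sat on the mat and the cat slept")
```

Ask this model what follows “the” and it answers “cat” simply because that pairing was most frequent, not because it knows anything about cats; scaled up enormously, that is why fluent output can still be factually wrong.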
There are countless nuances to human language – from connotations to sarcasm. Some AI writing tools may struggle with these variations due to a lack of emotional understanding; thus, some take a simplistic approach to language.
For instance, here’s how Jasper.AI responded to the prompt, “Write a sarcastic statement about AI writing tools.”
AI writing tools are touted as the latest and greatest way to get quality content quickly and easily, but they’re far from it. In fact, they can actually be a detriment to your content strategy since they don’t provide the insight and personalization that readers are looking for.
Jasper.AI interpreted “sarcasm” as an invitation to attack, and other attempts to evoke a sarcastic response were also unsuccessful. That’s because these tools operate within the parameters of their programming – they can lack imagination. Producing genuinely creative, original content sometimes requires human assistance.
Conversely, ChatGPT responded to this prompt by saying, “Oh sure, let’s just let the robots take over writing too. What could possibly go wrong?” Evidently, some AI writing tools handle creativity and nuance better than others, so keep this in mind as you determine which to use.
Because these tools rely on templates and existing data sources to generate content, they often produce texts similar to existing works. Search Engine Land explores this issue and how Google’s guidelines discourage “stitching or combining content from different web pages without adding sufficient value,” which is how AI writing tools operate. Consequently, AI-generated written content will likely perform worse in Google’s search rankings.
Additionally, people and tech companies are releasing tools that determine whether content was AI-generated or written by a human. The criteria these detectors rely on also tell us how AI-written text differs from human writing. A Princeton University student created GPTZero for this purpose and to combat plagiarism.
GPTZero assesses perplexity and “burstiness,” noting that AI-generated text tends to produce simple sentences of roughly the same length. In contrast, human-written texts contain sentences that vary in complexity and length.
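As a rough illustration of the burstiness idea, here’s a minimal Python sketch that scores a text by the variation in its sentence lengths. This is a crude stand-in, not GPTZero’s actual method – the real tool leans on a language model – but it captures the intuition that uniform sentence lengths look machine-like:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude burstiness proxy: standard deviation of sentence lengths,
    measured in words. Low values suggest uniform, machine-like rhythm;
    higher values suggest the varied cadence typical of human writing."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

# Illustrative inputs: three same-length sentences vs. a mix of lengths.
uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. After a long and tiring afternoon, the old cat finally curled up by the fire. Why?"
```

Running `burstiness` on the uniform sample yields 0.0, while the varied sample scores much higher – the kind of gap detectors use as one signal among many.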
If you're using – or plan to use – AI writing tools, keep these obstacles in mind. A better strategy is to leverage these tools as writing assistants; we'll cover that later. First, let's examine the ethical and legal implications.
As is often the case, legislation at various levels struggles to keep up with problems posed by new technology, and many institutions are taking matters into their own hands.
In January 2023, one of the most distinguished machine learning conferences – the International Conference on Machine Learning (ICML) – banned participants from using AI tools to write scientific papers. They based this decision on the fact that these systems are trained on public data and tend to regurgitate information verbatim without citing their sources.
Authors contributing to ICML can use AI writing tools to edit and polish their work but cannot use AI to create entire texts. The challenge with this decision is determining how many AI-guided edits are allowed before it crosses the line. James Vincent at The Verge captures this problem perfectly:
What if a user asks an AI tool to summarize their paper in a snappy abstract? Does this count as freshly generated text (because the text is new) or mere polishing (because it’s a summary of words the author did write)?
These questions are essential for regulating AI writing tools, and it’ll be interesting to see where the consensus lies in the future as legislative systems catch up to these emerging technologies.
New York City schools have also banned ChatGPT across all their devices and networks, but this ban cannot extend to what students do on their home computers. The district hopes this measure will mitigate the use of what they view as plagiarism. Unfortunately, this ban becomes an equity issue: assuming some students will still use ChatGPT at home, students with limited computer or internet access outside of school will be disadvantaged.
They also intend to use programs like GPTZero to identify AI-written text. OpenAI, the company behind ChatGPT, responded to the decision by indicating that it, too, is developing tools to identify text generated by its system.
Although legislation surrounding AI writing tools is sparse and specific to certain arenas, we can evaluate the practice through the lens of copyright law. Brand Bodyguards analyzes the copyright implications of AI writing: “Copyright doesn’t protect ideas. It protects only how they are expressed. [...] If you entirely rewrite something produced by someone else, you can own the copyright to what you create and would avoid infringing upon any copyright in what you rewrote.”
Authors automatically own the copyright to whatever they write – no federal registration or copyright notice is necessary – and OpenAI grants ChatGPT users ownership rights to all generated text. However, the U.S. Copyright Office has declared that only text written by a human is copyrightable. Thus, no one can copyright AI-generated material.
So long as ChatGPT’s output isn’t too similar to a work protected by copyright, most users face minimal risk of committing copyright infringement – though they should still take steps such as running output through plagiarism checkers to confirm it isn’t too close to existing text. Nevertheless, the U.S. Copyright Office’s decision poses another problem: how much must a human rework AI-generated text before the result becomes copyrightable?
There’s no definitive answer, but it’s safe to assume that the best practice is to rewrite AI-generated text as much as possible. Remember that you lose all copyright protections by producing content written entirely by AI.
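For the plagiarism-check step, a commercial checker that queries a large index of published work is the right tool. Still, as a minimal sketch of what “too similar” means mechanically, Python’s standard library can compare two texts you supply – the strings below are invented for illustration:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-to-1 similarity ratio between two texts,
    ignoring case. 1.0 means the texts are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical AI output vs. a hypothetical existing passage.
generated = "AI writing tools can assist with headlines and summaries."
existing = "AI writing tools can help with headlines and summaries."
score = similarity(generated, existing)
```

A score this close to 1.0 would be a red flag worth rewriting; real checkers do the same comparison against millions of documents rather than one.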
Alongside limitations, ethics must be a significant factor in approaching AI writing tools. Use the following information to carefully research potential platforms and inform how you create prompts for text generation.
Many incorrectly assume that computers are impartial to the biases that pervade our society; however, people build the algorithms that support AI – people whose experiences cause them to internalize assumptions about cultures and the world. Unfortunately, these biases leak into and affect machine learning.
Lexalytics identifies two types of bias in AI:
In this first case, developers may train AI tools on data sets that aren’t necessarily representative or all-encompassing due to human bias. When you use these tools, especially for scientific purposes, review their training sets to judge whether they’re inclusive. Be wary if the developer doesn’t disclose their training sets.
As for the second type, societal AI bias can occur when specific questions, keywords, or information are unintentionally – or intentionally – excluded. While Lexalytics points to Google Maps struggling to pronounce Hawaiian words when navigating the streets of Hawaii, we also see this societal bias occurring on the user side with AI writing tools. If you present these tools with biased prompts, they’re likely to deliver biased results.
To combat these issues, stay attuned to societal bias and challenge the assumptions that underpin datasets and their outcomes. This practice will help you remove bias from your prompts and recognize dataset bias.
We won’t go too in-depth on this ethical implication. There’s an excellent article on Medium if you’re interested in learning more: “AI Writing Tools: The Future of Learning or the End of Creativity?”
Essentially, with the rise of AI writing tools, there’s a valid concern that students and professionals will cease to produce original content. Moreover, those relying on these tools may fail to learn how to craft and organize a compelling argument independently. Educators fear reliance on AI will lead to a lack of fundamental understanding of the grammatical structures necessary for clear communication.
When you implement AI writing tools for your team, ensure over-reliance doesn’t occur to the detriment of learning or experience.
Because training natural language processing (NLP) models is so energy-intensive, research reported by MIT Technology Review concluded that training a single large AI model can emit as much carbon as roughly 630 flights from New York to San Francisco. Hopefully, those developing this technology will find more energy-efficient options in the future or undertake measures to offset these environmental costs.
For now, consider this carbon footprint when utilizing AI. Determine whether developers disclose their environmental impact and make informed decisions based on environmentally conscious solutions.
Due to its limitations and ethical and legal implications, we recommend leveraging AI writing tools as assistants in the writing process. The following prompts can be more beneficial than generating entire texts:
Prompting the AI with your written content and requesting headlines can inform SEO titles and headings.
For this article, Jasper.AI recommended these headlines:
They’re not perfect, but these suggestions did help us arrive at this article’s title, “Understanding AI Writing Tools: Limitations, Implications, and Best Practices.”
After requesting hashtags based on your content, ensure they’re relevant to your piece and will perform well on social platforms. For example, here are the hashtags Jasper.AI generated for this article:
Some hashtags are more relevant than others, while some are hardly used and would perform poorly. In particular, according to RiteTag, barely anyone on Twitter uses #LossOfLearning. After checking relevancy and performance, you can use similar results to inform your hashtag decisions.
Treat AI as your personal think-tank. Input your area of interest, and see what it delivers. For example, we’ll prompt Jasper.AI to “generate three blog topics about AI writing tools.”
Here’s what the system delivered:
Remember, it’s regurgitating already available information, so use these results as a guiding tool. Choose a topic you’re interested in and narrow the focus to a more original result. For instance, you could narrow a guide about optimizing your articles with an AI writing tool to a piece about optimizing headlines.
In this scenario, you’d give your chosen AI a piece of already-written content. The delivered summary can help to inform introduction or conclusion paragraphs and social media blurbs while giving you a quick overview of a long piece of writing.
Let’s test this prompt now for this article’s conclusion:
AI writing tools can be incredibly useful when used as an assistant in the creative process. While they are still limited and come with ethical implications, there is much potential to harness their power for SEO optimization, brainstorming topics, summarizing content, and even creating headlines. The key is to understand AI’s limitations while also being mindful of its environmental impact by considering how developers disclose their carbon footprint. With that said, utilizing these cognitive neuroscience principles may help you optimize your digital marketing strategy and drive more sales with less effort.
As you can see, Jasper.AI went a little overboard – cognitive neuroscience principles? So now, we’re going to take this paragraph, throw most of it out, and include some information about the future of AI to send you off with.
While limited creatively and plagued by ethical implications and legal questions, these tools still have the potential to assist you in the writing process. By understanding these limitations and making informed decisions about your preferred AI and how you create prompts, you can use AI writing tools to increase efficiency without compromising quality.
Valentine Shkulov, Staff Data Scientist at Meson Capital, predicts that AI will get better at answering queries factually, and virtual assistants will infiltrate daily life and businesses in the next three years. Undoubtedly, AI writing tools will become more prevalent in workplaces, so learning how to leverage these systems appropriately is essential.
That said, we encourage you to refrain from generating entire texts and assuming they’re ready for publishing. With or without AI, the editing process should remain sacred.