AI text generation in marketing needs careful management

It’s been two years since OpenAI announced the arrival of GPT-3, its groundbreaking natural language processing (NLP) application. Users have been blown away by the AI language tool’s uncanny ability to create insightful, thoughtful, and colorful prose — including poems, essays, song lyrics, and even detailed manifestos — from the briefest of prompts.

Known as a “foundation model,” OpenAI’s GPT-3 was trained by feeding it virtually the entire internet, from Wikipedia to Reddit to the New York Times and everything in between. It uses this huge dataset to predict which words are most plausible in response to a given prompt. Given the scale of such a research effort, only a handful of these foundation models exist. Others include Meta’s RoBERTa and Google’s BERT, along with models developed by startups such as AI21.

Almost all commercial AI text generation applications, from blog creators to headline generators, are built on top of one of these foundation models via an API. The foundation models are the tracks, and the commercial applications are the trains that run on them. And traffic on these routes is increasing, fueled by recent VC investments, including an $11 million Series A round for AI copywriting tool Copy.ai, a $10 million seed round for AI content generator Copysmith, and a $21 million Series A round for the AI writing assistant Writer.com.
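To make the tracks-and-trains analogy concrete, here is a minimal sketch of how a copywriting application might run on top of a foundation model via its API. It assumes OpenAI’s pre-1.0 Python client and an API key in the environment; the function name, prompt wording, and model choice are illustrative, not any particular vendor’s product.

```python
# Minimal sketch: a headline-generator "train" running on a foundation-model "track".
# Assumes the (pre-1.0) OpenAI Python client and an OPENAI_API_KEY environment variable;
# the prompt wording and model choice are illustrative, not any vendor's actual product.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_ad_headlines(product: str, audience: str, n: int = 5) -> list[str]:
    """Ask a GPT-3-class completion model for short search-ad headlines."""
    prompt = (
        f"Write {n} short, punchy search-ad headlines for {product}, "
        f"aimed at {audience}. One headline per line."
    )
    response = openai.Completion.create(
        model="text-davinci-002",   # a GPT-3-era foundation model
        prompt=prompt,
        max_tokens=150,
        temperature=0.8,            # some creativity, but not free association
    )
    text = response.choices[0].text
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for headline in generate_ad_headlines("a meal-kit delivery service", "busy parents"):
        print(headline)
```

Everything that differentiates one commercial tool from another lives in the prompt, the post-processing, and the workflow wrapped around calls like this; the underlying model is shared infrastructure.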

Given the sheer number of AI text generation use cases now being served in the marketing and communications industry, some of the everyday content that marketing and communications teams produce is now being generated by AI, including ad copy, social media captions, and blog posts.

While much of this is fairly prosaic and gives little cause for concern – if AI can write search ad headlines that drive clicks, then so be it – there are legitimate concerns about this technology’s impact on other applications that need to be acknowledged and addressed by the developers of these platforms.

Size alone does not equal success

At the heart of the matter for many is the tremendous scalability this technology offers and what its implications may be. In thinking this through, it helps to distinguish between short-form and long-form content. The negative consequences of companies scaling up short-form content creation, such as ad copy or landing page text, are negligible: businesses can reduce costs and improve conversions with few downsides. The problems arise with longer-form content.

At the bottom of the food chain for long-form content, such as travel and lifestyle blogs, there may be little harm in using natural language processing (NLP) to parse huge datasets and generate blogs for SEO. After all, is there much difference between NLP using its dataset to create a 500-word blog post about the “5 Best Things to Do in Denver” and a human author researching the first few results on Google (and lightly plagiarizing them)? Most likely not much.

But does the world need more “meh” content written by someone (or something) with no actual expertise on the subject? And who benefits from it, apart from the platform that gets paid for it? Of course, we’ve had this problem before, as content mills churn out articles by authors who have absolutely no idea what they’re writing about. But the cost and time constraints of human authors have at least partly kept this in check. Lifting those constraints could open the floodgates.

Google looks at AI text generation

Google is clearly preparing for this, as evidenced by some of its recent search updates. The company recently announced the rollout of its “helpful content” algorithm update, which will, among other things, devalue content it deems of “low added value” in its search rankings. Meanwhile, Google has also gradually increased the weight it gives to what it calls E-A-T – Expertise, Authoritativeness, and Trustworthiness – when ranking content. In short, Google is focusing more on the demonstrable expertise of the author and the authority and trustworthiness of the site.

In other words, Google places more value on content that is authored by proven subject matter experts, offers unique insights, analysis, or other value, and is published on sites with demonstrably strong editorial standards. Since NLP tools create content based on what has already been written about a topic, they will struggle to provide anything unique. So while it has never been easier to create a blog post on literally any topic, the bar has never been higher for getting a blog onto the first page of Google.

Marketers need to understand this new paradigm and use AI text generation tools where they add value while avoiding them where they don’t – for example, using them to create optimized ad and sales copy or content descriptions, rather than relying on them to create longer-form technical blog content.

Automation is not a substitute for expertise

Another area of communication where we are seeing rapid growth of AI text generation tools is PR, with a number of NLP-powered media pitching platforms now on the market. These range from light personalization based on the targeted recipient’s LinkedIn profile to writing entire pitches on the company’s behalf.

However, I speak from direct experience when I say that it is vitally important for platform developers and users to fully understand both the problem these platforms are designed to solve and the parts of the process where there is no substitute for human expertise. The role of these platforms is basically to eliminate friction between an organization or individual and the media, helping users send pitches to journalists faster and more accurately. In other words, to act as a door opener.

Users still need to be subject matter experts on the topics they propose to journalists, and they need to provide valuable insights in their pitches, rather than just spamming reporters. Platform developers have some control over the latter by setting reasonable limits on how many times a reporter can be contacted, while the former is the responsibility of users.
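As a hypothetical illustration of the “reasonable limits” a platform could enforce on the developer side, here is a minimal sketch of a per-journalist pitch cap. The class name, window length, and limit are invented for this example, not drawn from any real product.

```python
# Hypothetical sketch of a per-journalist contact cap for a pitching platform.
# The 30-day window and two-pitch limit are illustrative defaults, not a real policy.
from collections import defaultdict
from datetime import datetime, timedelta

class PitchRateLimiter:
    """Blocks sending more than `max_pitches` to one journalist within a rolling window."""

    def __init__(self, max_pitches: int = 2, window_days: int = 30):
        self.max_pitches = max_pitches
        self.window = timedelta(days=window_days)
        self._history: dict[str, list[datetime]] = defaultdict(list)

    def can_pitch(self, journalist_email: str, now: datetime | None = None) -> bool:
        """Return True if this journalist has not yet hit the cap in the window."""
        now = now or datetime.utcnow()
        recent = [t for t in self._history[journalist_email] if now - t < self.window]
        self._history[journalist_email] = recent  # drop stale entries
        return len(recent) < self.max_pitches

    def record_pitch(self, journalist_email: str, now: datetime | None = None) -> None:
        """Log a sent pitch so future checks count it."""
        self._history[journalist_email].append(now or datetime.utcnow())
```

A cap like this only addresses the spam half of the problem; nothing in code can make the sender a credible expert on the topic being pitched.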

And platform developers need to educate users about the responsible use of these platforms. There is only negative value in connecting a journalist with a user if that user is unqualified to speak on the topic at hand. Such users are quickly found out and damage their personal and professional reputations in media circles.

A case for the human editor

If language models were ever to achieve true sentience, one of their defining personality traits would be that of a pathological liar. As anyone experimenting with these models quickly realizes, they make things up as they go, producing the most plausible-sounding response to a given prompt.

It is therefore not difficult to imagine some of the problems unchecked AI-generated text could cause across numerous use cases. For example, content containing inaccurate financial or health advice could lead to serious harm.

For this reason, the role of the human editor must become more important as more and more content is generated by AI. Editors will be crucial both at media companies and at other organizations. Additionally, the role of the editor needs to evolve to focus more on fact-checking and verification. A major focus of an editor’s work will also be training AI models on the desired tone, technical level, and storytelling style, much as they train human writers today.
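To give a sense of what “training the model on tone and style” might look like in practice, here is a hedged sketch of an editor curating prompt/completion pairs in the JSONL format that OpenAI’s legacy GPT-3 fine-tuning accepted. The examples, file name, and house-style wording are invented for illustration.

```python
# Hypothetical sketch: an editor curating tone/style examples for fine-tuning a
# GPT-3-era model. OpenAI's legacy fine-tuning format used JSONL prompt/completion
# pairs; the examples and file name below are invented for illustration.
import json

style_examples = [
    {
        "prompt": "Rewrite in our house style (plain, benefit-led, no jargon): "
                  "Our solution leverages synergies to optimize outcomes.\n\n###\n\n",
        "completion": " We help your team get better results with less busywork.\n",
    },
    {
        "prompt": "Rewrite in our house style (plain, benefit-led, no jargon): "
                  "Utilize our platform to facilitate engagement.\n\n###\n\n",
        "completion": " Use our platform to reach more of your customers.\n",
    },
]

with open("house_style_examples.jsonl", "w", encoding="utf-8") as f:
    for example in style_examples:
        f.write(json.dumps(example) + "\n")

# The resulting file could then be passed to a fine-tuning job, e.g. with the
# legacy CLI: openai api fine_tunes.create -t house_style_examples.jsonl -m davinci
```

The editorial judgment is in choosing and writing the examples; the tooling around them is the easy part.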

Ultimately, we are still at the very beginning of the adoption curve for AI text generation technology. Given the technology’s tremendous capacity for scale and efficiency, it is almost inevitable that usage will go mainstream this decade. In response, we can expect even more emphasis on true subject matter experts who deliver original and unique content. And those who can leverage AI text generation tools in the right use cases to eliminate friction and scale will benefit the most.

Steve Marcinuk is cofounder and head of operations at Intelligent Relations, an AI-powered PR platform.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers
