How agencies adapt as bots evolve

Social media bots may represent only a fraction of an app’s total users, but it turns out they may be generating more content than we thought.

While media agencies find bot content worrisome, some say it won’t become a higher priority until both platforms and advertisers sound the alarm. At the same time, companies and media agencies are employing artificial intelligence and developing broader social strategies to ensure brand safety, as bot content becomes more widespread on social media.

“We just have to try to keep an eye on it,” said Drew Himmelreich, senior analyst at digital agency Barbarian. “It remains an open question to what extent brands want to know what percentage of their engagement is genuine… Our clients tend to focus on more standard performance metrics and have not expressed a willingness to allocate additional resources to attempt to quantify or contextualize the role of bots or inauthentic activity.”

Research by analytics platform Similarweb recently determined that bots generate between 20.8% and 29.2% of content posted on Twitter in the US, while accounting for around 5% of the platform’s monetizable daily active users. This means that a small fraction of accounts generates a substantial share of the content on the social site, with other studies estimating that bots produce 1.57 times more content than human users.

“I would say that the thing bot-generated content really puts at risk is the engaging experience that advertisers want to be a part of,” said David F. Carr, senior information officer at Similarweb. “If Twitter users feel that many of the accounts they interact with are robotic rather than genuine, or are turned off by what they are reading in the media about bot activity, they are likely to use Twitter less or engage with much more skepticism.”

Similarweb notes that other platforms, including Meta’s Facebook, also deal with bots on their platforms. “The problem is certainly not unique to Twitter,” Carr said.

Use of AI for prevention

Simply put, bots are programs used to perform repetitive tasks, which can range from posting spam comments to clicking links. On social platforms, this can lead to fake accounts posting frequently or bots manipulating information in conversations, both of which are potentially harmful to any associated branded content.

“Bad bots are responsible for those comments and spam messages that you always see on channels, and they can even scrape content from websites, among other things,” said Matt Mudra, director of digital strategy at the B2B agency Schermer. “The question is: How can brands and agencies prevent their content from being affected?”

At Barbarian, for example, Himmelreich said analysts use alerts and automated tools to flag unusual social media activity. In this case, automation serves as an additional layer for human reviewers, who are still needed when the tools surface large spikes in conversations or other significant anomalies on these platforms. Barbarian also uses different metrics for certain channels, based on different platform and account risks.

“Our analysts know to watch for red flags when they’re reporting on performance, and we have automated alerts for our clients’ brands that notify us of unusual social conversation activity,” Himmelreich said.

Brian David Crane, founder of digital marketing fund Spread Great Ideas, added that focusing on preventative measures is key for agencies. The use of automation and machine learning as part of the bot management solution is becoming more common, and this includes bot monitoring tools such as Bot Sentinel and Botometer. In other words, bots monitoring bots.

“In the wrong hands, automated bots on platforms like Twitter can manipulate information and create errors in the social fabric of trends and conversations,” Crane said. “It can be very difficult for brands or agencies to tackle them head-on, as bots are easy to code, can be implemented from the shadows, and can be difficult to trace back to the source.”

Developing best practices

Increasingly, agencies and creative companies are incorporating best practices to combat bot issues as part of their brand security measures. And there are many safeguards that don’t require AI or additional IT training, some of which continue to evolve as brands invest more in social channels.

Tyler Folkman, chief technology officer at influencer marketing firm BEN Group, said agencies and brands can follow simple guidelines even as bots become more sophisticated. These include looking for shallow engagement such as single emojis, looking for accounts with few followers but following a large number of accounts, and removing accounts with “poor profile pictures.”
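Those guidelines can be expressed as a simple scoring heuristic. The sketch below is illustrative only: the account fields, thresholds, and score weights are all assumptions, not anything BEN Group has published.

```python
# Hypothetical account fields and thresholds; the checks mirror the guidelines
# above: skewed follower/following ratio, missing profile picture, and
# shallow engagement such as single-emoji comments.

def bot_likelihood_score(account: dict) -> int:
    """Return a rough 0-3 score; higher means more bot-like."""
    score = 0
    # Few followers but following a large number of accounts.
    if account["followers"] < 50 and account["following"] > 1000:
        score += 1
    # Default or missing profile picture.
    if not account.get("has_profile_picture", True):
        score += 1
    # Shallow engagement: recent comments are all one or two characters.
    comments = account.get("recent_comments", [])
    if comments and all(len(c.strip()) <= 2 for c in comments):
        score += 1
    return score

suspect = {"followers": 12, "following": 4800,
           "has_profile_picture": False,
           "recent_comments": ["🔥", "👍", "🙌"]}
print(bot_likelihood_score(suspect))  # 3
```

A score like this would only triage accounts for human review, not block them outright, since legitimate new users can trip one or two of these checks.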

“It’s a place to start to help brands get smarter,” Folkman said.

Agencies can also use Internet Protocol filtering and blocking to stop traffic from certain IP addresses associated with spam and bot activity, Mudra added. They can also use what’s called a frequency filter to limit the number of times a visitor can see an ad or website.

“For context, any view count beyond three times is probably a bot. Another easy one is to block sources that may display suspicious behavior patterns. Remember that bots behave differently than humans,” Mudra said.
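The frequency filter described above can be sketched in a few lines. This is a minimal illustration, assuming a simple log of visitor IPs and Mudra’s three-view rule of thumb; real ad servers apply frequency caps with far more context than a raw count.

```python
from collections import Counter

VIEW_CAP = 3  # per the rule of thumb above: beyond three views, likely a bot

def flag_suspect_ips(view_log: list[str], cap: int = VIEW_CAP) -> set[str]:
    """Return the IPs whose view count exceeds the cap."""
    counts = Counter(view_log)
    return {ip for ip, n in counts.items() if n > cap}

# Example log: one address hammers the page, two behave normally.
log = ["10.0.0.1"] * 7 + ["10.0.0.2"] * 2 + ["10.0.0.3"] * 3
print(flag_suspect_ips(log))  # {'10.0.0.1'}
```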

As for search engine optimization, which remains an important focus in social strategies, Baruch Labunski, CEO of SEO marketing firm Rank Secure, said bad bots can steal content from an agency or brand and damage its reputation if left unchecked. Some of the ways to combat this include simply searching for copies of your content using tools like Copyscape and regularly getting rid of spam comments and bad links.

“There are also good bots that can do this automatically, depending on the platform,” Labunski added. “Block both unknown IP addresses and known bots. Test your site’s speed to see if it slows down. A slowdown may indicate you have some bad bots.”
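Blocking known bots and suspect IP addresses often happens at the request-handling layer. The sketch below shows the idea; the blocklists are placeholders (the IPs come from the RFC 5737 documentation ranges, and the user-agent substrings are invented examples, not real crawler names).

```python
# Placeholder blocklists; in practice these would come from server logs
# or a maintained threat feed, not hard-coded values.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}        # example addresses
BLOCKED_AGENT_SUBSTRINGS = ("scrapybot", "badbot")     # hypothetical bot names

def should_block(ip: str, user_agent: str) -> bool:
    """Decide whether to reject a request based on IP and user-agent."""
    ua = user_agent.lower()
    return ip in BLOCKED_IPS or any(s in ua for s in BLOCKED_AGENT_SUBSTRINGS)

print(should_block("203.0.113.7", "Mozilla/5.0"))           # True (blocked IP)
print(should_block("192.0.2.10", "BadBot/1.2 (+crawler)"))  # True (blocked UA)
print(should_block("192.0.2.10", "Mozilla/5.0"))            # False
```

Note that well-behaved crawlers also honor robots.txt, but bad bots generally ignore it, which is why active filtering like this is the fallback.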

But as noted, the bot challenge extends beyond the domain of Twitter. Himmelreich noted that bot problems seem more pronounced on Twitter, but that it is “rarely the most important social channel in the marketing mix.”

“Bots appear to be most prominent on Twitter, but with inauthentic activity more broadly, such as campaigns orchestrated by agitators or abuse of a platform’s algorithms, we also see risks inherent in social media as a marketing vertical,” Himmelreich said.

Experts believe that TikTok, Instagram and Facebook are also tackling their own bot issues, with Mudra adding that this will “intensify” in the social space and beyond. Instagram can be particularly vulnerable.

“If you’ve noticed in your social feeds over the last 12 to 24 months, there’s been a huge increase in bots spamming Instagram posts,” Mudra said. “I also suspect that many blog sites, wikis and forums are seeing an increased frequency of traffic and bot activity.”

What experts do agree on is that bots are here to stay, so now it’s a matter of separating the good from the bad.
