Moving into an era of advertisements by digital avatars


SOURCE: ORFONLINE.ORG
MAR 29, 2022

Synthetic (or AI-generated) media is poised to transform the global advertising industry by removing the traditional reliance on cost-intensive ad-shoot equipment and film crews. Yet, even as this new-age technology holds great promise for advertisers, several fundamental concerns around its potential to manipulate consumer trust and compromise consumer safety remain unaddressed across most jurisdictions, including India. This article explores these concerns in the context of Cadbury’s ‘NotJustACadburyAd’ campaign and proposes some measures to address them.

Banking on SRK’s digital avatar

To help revive local businesses across India during Diwali, Cadbury launched the ‘NotJustACadburyAd’ campaign in November 2021. The campaign banked on Shah Rukh Khan (SRK)’s signature persona, including his facial and voice attributes, to help local businesses promote their stores and urge consumers to buy their products. The campaign gave local businesses the opportunity to ‘make their own ad’, that is, to self-generate customised ads for their products. Shop owners only had to enter a few details, such as store name, pin code, and category of products for sale, on Cadbury’s website. In no time and at no cost, they would receive a customised ad for their store over WhatsApp, ready to be shared widely across social media platforms.
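
To make the flow concrete, the following is a minimal, hypothetical sketch of the kind of input the campaign website collected from a shop owner. The field names, validation rules, and Python shape of the request are illustrative assumptions, not a description of Cadbury’s or Rephrase.ai’s actual systems.

```python
# Illustrative sketch only: the actual campaign API is not public, and every
# endpoint, field, and rule here is a hypothetical assumption.
from dataclasses import dataclass


@dataclass
class AdRequest:
    store_name: str  # e.g. "Lallan Shoes"
    pin_code: str    # Indian postal code, later used for local targeting
    category: str    # one of the pre-shot ad categories


def build_ad_request(store_name: str, pin_code: str, category: str) -> AdRequest:
    """Validate the handful of details a shop owner submits on the campaign site."""
    allowed = {"footwear", "fashion", "grocery", "electronics"}
    if category not in allowed:
        raise ValueError(f"category must be one of {allowed}")
    if not (pin_code.isdigit() and len(pin_code) == 6):
        raise ValueError("Indian PIN codes are six digits")
    return AdRequest(store_name.strip(), pin_code, category)


# The backend would then render SRK's digital avatar into the matching category
# template and deliver the finished ad over WhatsApp.
print(build_ad_request("Lallan Shoes", "226001", "footwear"))
```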

To train its AI model, Rephrase.ai used a few versions of ads shot by SRK under this campaign for each category of product: footwear, fashion, grocery stores, and electronics. Using deep neural networks and generative AI technology, the startup created customisable digital avatars of SRK that appear authentic. The resulting hyper-personalised ads were shared across social media platforms, including YouTube, and targeted at consumers in and around each store’s pin code to promote these stores locally. Some of the self-generated ads from the campaign are still available on YouTube and circulating on WhatsApp, such as those for Lallan Shoes, Banerjee Garments, and Gourav Nigam-Bhola Kirana.
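
The pipeline can be imagined roughly as below: pick the pre-shot SRK master ad for the store’s category, personalise it with the store’s details, and deliver it to viewers around the store’s pin code. This is a hedged sketch; the file names, radius, and placement labels are invented for illustration and do not come from Rephrase.ai.

```python
# Purely hypothetical sketch of the "personalise then target locally" idea
# described above. All names and parameters are illustrative assumptions.

CATEGORY_TEMPLATES = {
    "footwear": "srk_footwear_master.mp4",
    "fashion": "srk_fashion_master.mp4",
    "grocery": "srk_grocery_master.mp4",
    "electronics": "srk_electronics_master.mp4",
}


def plan_personalised_ad(store_name: str, pin_code: str, category: str,
                         radius_km: int = 5) -> dict:
    """Return a render-and-target plan for one shop's hyper-personalised ad."""
    return {
        "template": CATEGORY_TEMPLATES[category],  # few-shot master ad per category
        "spoken_insert": store_name,               # the avatar voices the store's name
        "targeting": {                             # local delivery around the shop
            "country": "IN",
            "postal_code": pin_code,
            "radius_km": radius_km,
            "placements": ["youtube", "whatsapp_share"],
        },
    }


print(plan_personalised_ad("Banerjee Garments", "700019", "fashion"))
```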

Seeing is believing?

On careful observation of the ads generated within this campaign, one would not find SRK’s digital avatar describing the quality of any of the products. However, SRK’s massive fan following associates the star with trust, quality, and credibility, which is plausibly why Cadbury “starred” his digital avatar and leveraged his goodwill for the campaign. Consumers watching these ads are more likely to believe that the advertised products are of a “particular standard” and carry his “approval”, and could be misled into buying low-quality products that SRK never used or experienced, given the manner and pace at which the ads were generated. Moreover, the shorter versions of the ad shared by Cadbury, and the ones shared with shop owners over WhatsApp or YouTube, carried no disclaimer stating that synthetic media had been used to create them.

Additionally, the registration website did not require small businesses to self-certify the accuracy of the information they provided or to confirm that their products complied with any specified quality standards. A blog post on Rephrase.ai’s platform, “Personalised Videos from Celebrities—Peek into the next-gen Digital Marketing”, mentions that the company has a team that filters entries to avoid creating videos the celebrity might not approve of. It is doubtful, however, whether the information provided during registration was used by the ad developers at the backend to verify the quality of the products. Most likely, this means that none of the parties involved in the process evaluated or certified the quality of the products that continue to be advertised across social media platforms.
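
Continuing the earlier hypothetical sketch, a simple self-certification gate of the kind whose absence is described above could look like the following. The field names and the quality-declaration example are illustrative assumptions, not a description of the actual registration flow.

```python
# Hypothetical sketch: refuse to queue an ad for rendering until the shop owner
# has self-certified the submitted details and declared quality compliance.
from dataclasses import dataclass


@dataclass
class Registration:
    store_name: str
    pin_code: str
    category: str
    self_certified: bool = False   # owner attests the details are accurate
    quality_declaration: str = ""  # e.g. "footwear meets applicable BIS norms" (illustrative)


def accept_for_rendering(reg: Registration) -> bool:
    """Gate the avatar-rendering queue on an explicit self-certification."""
    return reg.self_certified and bool(reg.quality_declaration.strip())


# Rejected: no certification or quality declaration was provided.
print(accept_for_rendering(Registration("Lallan Shoes", "226001", "footwear")))
```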

The developers of the campaign might have believed they could moderate its misuse, but once deceptive technologies are let loose, they become difficult to control. For instance, Lallan Shoes conveniently removed the Cadbury logo from its ad. This raises concerns for consumer trust, along with personality, copyright, and other intellectual property rights. It is a reminder of the infamous 2001 Nike controversy, which demonstrated that putting users in control can be a recipe for trouble for any brand: Nike had to cancel Jonah Peretti’s personalisation request because he had asked to print “sweatshop” on his shoes to highlight the harmful working conditions in Nike’s sweatshops.

Way forward

Even if Cadbury and Rephrase.ai had noble intentions with the campaign, and many local businesses may have benefitted from it, there is no denying that consumer trust and safety were taken for granted. The campaign ended up glorifying and advancing the unethical use of synthetic media, exposing serious gaps in the existing legislative frameworks governing consumer protection in India. Even though synthetic media has been in use since 2017, the Consumer Protection Act 2019 (CP Act 2019) does not provide any accountability or grievance redressal mechanisms for consumers who are misled into buying a product sold by a digital avatar and suffer harm as a result. These protections become even more critical because multiple actors are involved in the generation of synthetic ads, and apportioning liability for harm caused by their unethical use cannot easily be derived from the general principles of civil or criminal liability. There is an urgent need for safety valves and precautionary measures to ensure consumer safety, especially as investments in synthetic media continue to increase.

Under the Central Consumer Protection Authority (Prevention of Misleading Advertisements and Necessary Due Diligence for Endorsement of Advertisements) Guidelines, 2020, framed within the CP Act, the Indian government could notify advertisers of their responsibility to state expressly, at the outset, whether synthetic media was used in making an ad. Such a disclaimer is inordinately important in this case, since the accuracy of the content within each ad is in question and the ads can reach countless people through social media platforms. The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act (DEEP FAKES Accountability Act), introduced in the US Congress in 2019, likewise requires creators of synthetic media or deepfake video alteration technology to embed watermarks in the audiovisual file, along with verbal and written statements describing the extent to which the technology was used in preparing the video. The EU’s 2021 proposal, Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, also states under Article 52: “Users of an AI system that generates or manipulates image, audio, or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.”
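
As a rough illustration of the kind of disclosure these instruments contemplate, the sketch below burns a written disclaimer into a generated video and records it in the file’s metadata. It assumes ffmpeg (built with the drawtext filter) is available; the file names and disclaimer wording are placeholders, not any mandated format.

```python
# Hedged sketch: one way a synthetic-ad pipeline could embed a visible,
# persistent disclosure plus a machine-readable copy in the container metadata.
# Assumes ffmpeg with libfreetype (drawtext) is installed; paths are placeholders.
import subprocess

DISCLOSURE = "This ad uses an AI-generated digital avatar (synthetic media)."


def embed_disclosure(src: str, dst: str) -> None:
    """Burn a written disclaimer into the video and record it in file metadata."""
    drawtext = (
        f"drawtext=text='{DISCLOSURE}':fontsize=22:fontcolor=white:"
        "box=1:boxcolor=black@0.5:x=10:y=h-th-10"
    )
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-vf", drawtext,                        # visible on-screen statement
            "-metadata", f"comment={DISCLOSURE}",   # machine-readable copy
            "-codec:a", "copy",
            dst,
        ],
        check=True,
    )


embed_disclosure("srk_avatar_ad.mp4", "srk_avatar_ad_disclosed.mp4")
```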

Social media intermediaries must also refrain from hosting synthetic media that lacks embedded disclaimers on their platforms. Facebook’s parent company, Meta, updated its policy in 2020, stating that it would remove content that could “mislead someone into thinking that a subject of the video said words that they did not actually say”. However, Facebook has allowed ads from this campaign to stay on its platform. These suggestions are vital to retaining consumer trust in authentic content shared on social media platforms and to avoiding generalised cynicism among citizens towards online civic culture. While watermarks and disclaimers will ensure that the consumer is informed and that the intent not to mislead is evident, they should not absolve sellers, manufacturers, and endorsers of the responsibility to ensure that the advertised goods meet the necessary safety and quality requirements.
