Mind Control In The Metaverse


SOURCE: Louis Rosenberg, https://medium.com/predict/mind-control-in-the-metaverse-48dfbd88c2ae
Dec 28, 2022

If we’ve learned anything about technology over the last few decades, it’s that we don’t prepare for the downsides until the problems are so egregious that we can’t ignore them. The obvious example is social media, which was hailed as utopian when it first arrived but is now widely considered a destructive and destabilizing force in society that can amplify misinformation, hate, harassment, and polarization. It took over a decade for the harms to really sink in, but these days more than 65% of Americans believe social media has a mostly negative effect on our world.

Of course, there’s nothing about the technology itself that creates these problems. It’s the business models behind social media that drive platforms to mediate information flow across society, filtering and amplifying content in ways that distort our thinking. This is a form of mind control, and it’s about to get much, MUCH worse. I’m talking about the metaverse.

The Metaverse (L Rosenberg / Midjourney)

The metaverse has the potential to unlock boundless waves of human creativity and productivity. But unless regulated, the metaverse could also be the most dangerous tool of persuasion ever created. I don’t make this warning lightly. I’ve been a technologist in this field for over 30 years, starting as a researcher at Stanford, NASA, and the Air Force, and going on to found a number of early companies in the space. I genuinely believe the metaverse can be a positive force for humanity. But if we wait for the problems to become as egregious as they did with social media, it will be too late to undo the damage.

And the metaverse is far more dangerous than today’s social media.

To make the danger as clear as I can, I’d like to introduce a basic engineering concept called Control Theory, which is the method engineers use to control the behavior of any system. Think of the thermostat in your house. You set a temperature, and if your house falls below that goal, the heat turns on. If your house gets too hot, it turns off. Like magic, it keeps your house close to the goal you set. That is feedback control.

Generic “Control System” Diagram (credit: Wikipedia)

In the heating example, your house is the SYSTEM, a thermometer is the SENSOR, and the thermostat is the CONTROLLER. An input signal called the Reference is the temperature you set as the goal. That goal is compared to the actual temperature in your house (i.e., the Measured Output). The difference between the goal and the measured temperature is fed into the thermostat, which determines what the heater should do. If the house is too cold, the heater turns on; if it’s too hot, the heater turns off. That’s a classic control system.
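
To make the loop concrete, here is a minimal Python sketch of the thermostat example. It is only an illustration of the feedback idea; the heating and heat-loss constants are made-up numbers, not real building physics.

```python
# Minimal sketch of the feedback loop described above: the goal temperature is
# the Reference, the thermometer reading is the Measured Output, and the
# thermostat (CONTROLLER) switches the heater based on the error between them.

def simulate_thermostat(goal=21.0, outside=5.0, steps=30):
    temp = outside            # the house (SYSTEM) starts at the outside temperature
    for _ in range(steps):
        error = goal - temp                   # Measured Error = Reference - Measured Output
        heater_on = error > 0                 # simple on/off (bang-bang) control decision
        heat_in = 2.0 if heater_on else 0.0   # heat the furnace adds this step (made up)
        heat_loss = 0.1 * (temp - outside)    # heat leaking to the outside (made up)
        temp += heat_in - heat_loss           # the SYSTEM responds to the System Input
        print(f"temp={temp:5.1f}  heater={'on' if heater_on else 'off'}")

simulate_thermostat()
```

Run it and the temperature climbs toward the goal and then hovers around it, which is the “like magic” behavior described above.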

Of course, control systems can get very sophisticated, enabling airplanes to fly on autopilot, cars to drive themselves, and even robotic rovers to land on Mars. These systems need sophisticated sensors to detect driving conditions, flying conditions, or whatever else is appropriate for the task. They also need powerful controllers to process the sensor data and influence system behaviors in subtle ways. These days, those controllers increasingly use AI algorithms at their core.

With that background, let’s jump back into the metaverse.

Referring back to the standard diagram above, we see that only a few elements are needed to effectively control a system, whether it’s a simple thermostat or a sophisticated robot. The two most important elements are a SENSOR to detect the system’s real-time behaviors, and a CONTROLLER that can influence those behaviors. The only other elements needed are the feedback loops that continually detect behaviors and impart influences, guiding the system towards desired goals.
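
In code, that generic structure is tiny. Here is a bare-bones sketch (a speculative outline, not any real platform’s API), in which the sensor, controller, and actuator are placeholders you would swap in for whatever system is being controlled:

```python
# Bare-bones closed-loop skeleton: only a SENSOR, a CONTROLLER, and a feedback
# loop are needed, regardless of whether the system is a house or a robot.
from typing import Callable

def control_loop(reference: float,
                 sense: Callable[[], float],          # SENSOR: measure the system's behavior
                 decide: Callable[[float], float],    # CONTROLLER: map error to a control signal
                 actuate: Callable[[float], None],    # apply the System Input to the system
                 steps: int = 100) -> None:
    for _ in range(steps):
        measured = sense()               # Measured Output
        error = reference - measured     # Measured Error
        actuate(decide(error))           # influence that nudges the system toward the goal
```

Everything that follows in this article amounts to asking what happens when the thing being sensed and actuated in a loop like this is a person.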

As you may have guessed, when considering the danger of the metaverse, the system being controlled is you — the human in the loop. After all, when you put on a headset and sink into the metaverse, you’re immersing yourself in an environment that has the potential to act upon you more than you act upon it. Said another way, you become an inhabitant of an artificial world run by a third party that can monitor and influence your behaviors in real time. That’s a very dangerous situation.

Control System Diagram with “Human” in the loop

In the figure above, the System Input to the human user is the immersive sights, sounds, and touch sensations that are fed into your eyes, ears, hands, and body. This is overwhelming input — possibly the most extensive and intimate input we could imagine short of surgical brain implants. This means the ability to influence the system (i.e., you) is equally extensive and intimate. On the other side of the user in the diagram above is the System Output — that’s your actions and reactions.

This brings us to the SENSOR box in the diagram above. In the metaverse, sensors will track everything you do in real time — the physical motions of your head, hands, and body. That includes the direction you’re looking, how long your gaze lingers, the faint motions of your eyes, the dilation of your pupils, and the changes in your posture and gait. Even your vital signs are likely to be tracked in the metaverse, including your heart rate, respiration rate, and blood pressure.

In addition, the metaverse will monitor your facial expressions and vocal inflections to track your emotions in real time. This goes beyond sensing the expressions that other people can notice; it also includes subconscious expressions that are too subtle for humans to recognize. Known as “micro-expressions,” these events can reveal emotions that users do not intend to convey. Users may not even be aware of feeling those emotions, enabling metaverse platforms to know your inner feelings better than you do.

This means when you immerse yourself into the metaverse, sensors will track almost everything you do and know exactly how you feel while doing it. We can represent this in the diagram by replacing the SENSOR box with the METAVERSE (Behavioral and Emotional Tracking in real-time) as shown:

Control System Diagram for Metaverse Environments

Of course, in an unregulated metaverse, the behavioral and emotional data will not just be tracked; it will be stored over time, creating a database that reflects how individuals are likely to react to a wide range of stimuli throughout their daily life. When processed by AI algorithms, this extensive data could be turned into behavioral and emotional models that enable platforms to accurately predict how users will react when presented with target stimuli (i.e., System Input) from a controller. And because the metaverse is not just virtual reality but also augmented reality, the tracking and profiling of users will occur not just in fully simulated worlds but within the real world embellished with virtual content. In other words, metaverse platforms will be able to track and profile our behaviors and emotions throughout our daily life, from the moment we wake up to the moment we go to sleep.
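
To illustrate what such a profile might look like in the simplest possible terms, here is a toy sketch. The stimulus names and engagement scores are invented for illustration, and a real platform would use far richer AI models than a running average:

```python
# Toy sketch of a behavioral/emotional profile built from logged reactions.
# All data here is hypothetical; the "model" is just an average of past
# engagement, standing in for the far more capable AI models described above.
from collections import defaultdict

class ReactionProfile:
    def __init__(self):
        self.history = defaultdict(list)   # stimulus -> list of sensed engagement scores

    def log(self, stimulus: str, engagement: float) -> None:
        """Store one sensed reaction (0.0 = ignored, 1.0 = strongly engaged)."""
        self.history[stimulus].append(engagement)

    def predict(self, stimulus: str) -> float:
        """Predicted engagement for a stimulus, defaulting to neutral (0.5)."""
        scores = self.history[stimulus]
        return sum(scores) / len(scores) if scores else 0.5

profile = ReactionProfile()
profile.log("sports_car_ad", 0.9)
profile.log("minivan_ad", 0.2)
print(profile.predict("sports_car_ad"))   # a controller would favor this stimulus
```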

Of course, the danger is not that platforms can track and profile us; it’s what they can do with that data. This brings us to the CONTROLLER box in the diagram above. The controller receives a Measured Error, which is the difference between a Reference Goal (a desired behavior) and the Measured Output (a sensed behavior). If metaverse platforms are allowed to adopt business models similar to those of social media, the Reference Goal will be the AGENDA of third parties that aim to impart influence over users (see diagram below). The third party could be a paying sponsor that wants to persuade a user to buy a product or service, or to believe a piece of propaganda, ideology, or misinformation.

Control System Diagram with Third Party Agenda

Of course, advertising and propaganda have been around forever and can be quite effective using traditional marketing techniques. What’s unique about the metaverse is the ability to create high-speed feedback loops in which user behaviors and emotions are continuously fed into a controller that can adapt its influence in real-time to optimize persuasion. This process can easily cross the line from marketing to manipulation. To appreciate the risks, let’s dig into the controller.

At its core, the CONTROLLER aims to “reduce the error” between the desired behavior of a system and the measured behavior of that system. It does this by imparting System Input, shown in the diagram above as an innocent-looking arrow. In the metaverse, that innocent arrow represents the ability of platforms to modify the virtual or augmented environment the user is immersed within.

In other words, in an unregulated metaverse, the controller can alter the world around the user, modifying what they see and hear and feel in order to drive that user towards the desired goal. And because the controller can monitor how the user reacts in real-time, it will be able to continually adjust its tactics, optimizing the persuasive impact, moment by moment, just like a thermostat optimizes the temperature of a house.
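
Put in control-theory terms, the loop being warned about here might be sketched as follows. Every signal, score, and stimulus name is hypothetical, and the sensing and prediction functions are crude stand-ins for the AI models a real platform would use:

```python
# Speculative sketch of a persuasion feedback loop.  The third-party AGENDA
# plays the role of the Reference, an inferred interest/agreement score plays
# the role of the Measured Output, and the controller injects whatever change
# to the environment it predicts will shrink the error.  Entirely hypothetical.
import random

def sense_user() -> float:
    """Stand-in for real-time behavioral/emotional sensing (returns 0..1)."""
    return random.random()

def predict_effect(stimulus: str, profile: dict) -> float:
    """Stand-in for an AI model predicting how strongly a stimulus moves this user."""
    return profile.get(stimulus, 0.1)

def persuasion_loop(agenda: float, profile: dict, stimuli: list, steps: int = 5) -> None:
    for _ in range(steps):
        measured = sense_user()        # Measured Output: inferred interest/agreement
        error = agenda - measured      # Measured Error against the third-party agenda
        if error <= 0:
            break                      # goal reached; stop adapting
        # choose the environmental change predicted to close the gap fastest
        best = max(stimuli, key=lambda s: predict_effect(s, profile))
        print(f"error={error:.2f} -> inject stimulus: {best}")

persuasion_loop(agenda=0.8,
                profile={"virtual_couple_praises_car": 0.6, "static_banner_ad": 0.1},
                stimuli=["virtual_couple_praises_car", "static_banner_ad"])
```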

To make this clear, let’s give some examples:

Imagine a user sitting in a coffee house in the metaverse (virtual or augmented). A third party sponsor wants to drive that user to buy a particular product or service, or believe a piece of messaging, propaganda, or misinformation. In the metaverse, advertising will not be the pop-up ads and videos of today but will be immersive experiences that are seamlessly integrated into our surroundings. In this particular example, the controller creates a virtual couple sitting at the next table. That virtual couple will be the System Input that is used to influence the user.

First, the controller will design the virtual couple for maximum impact. This means the age, gender, ethnicity, clothing styles, speaking styles, mannerisms, and other qualities of the couple will be selected by AI algorithms to be optimally persuasive to the target user based on that user’s historical profile. Next, the couple will engage in an AI-controlled conversation with each other within earshot of the target user. That conversation could be about a car the target user is considering purchasing, possibly framed as the couple discussing how happy they are with their own recent purchase.

As the conversation begins, the controller monitors the user in real-time, assessing micro-expressions, body language, eye motions, pupil dilation, and blood pressure to detect when the user begins paying attention. This could be as simple as detecting a subtle physiological change in the user correlated with comments made by the virtual couple. Once engaged, the controller will modify the conversational elements to increase engagement. For example, if the user’s attention increases as the couple talks about the car’s horsepower, the conversation will adapt in real-time to focus on performance.
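
A crude sketch of that kind of real-time adaptation might look like the following, where the topic names, the engagement signal, and the blending rate are all invented for illustration:

```python
# Toy sketch of real-time conversational adaptation.  Sensed engagement
# (e.g. pupil dilation, posture shifts, vital signs) is blended into a running
# score per topic, and the scripted conversation keeps steering toward the
# highest-scoring topic.  All names and numbers are hypothetical.

class TopicSelector:
    def __init__(self, topics):
        self.scores = {t: 0.5 for t in topics}   # neutral prior estimate per topic

    def update(self, topic: str, engagement: float, rate: float = 0.3) -> None:
        """Blend a newly sensed engagement value (0..1) into the running estimate."""
        self.scores[topic] += rate * (engagement - self.scores[topic])

    def next_topic(self) -> str:
        """Steer the conversation toward the topic the user responds to most."""
        return max(self.scores, key=self.scores.get)

selector = TopicSelector(["horsepower", "fuel_economy", "price"])
selector.update("horsepower", 0.9)   # user leans in when performance comes up
selector.update("price", 0.2)        # user disengages when price comes up
print(selector.next_topic())         # "horsepower": the couple keeps talking performance
```

The same structure scales up: replace the three topics with an AI-generated conversation and the crude scores with multimodal emotion models, and you have the adaptive loop described above.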

As the overheard conversation continues, the user may be unaware that he or she has become a silent participant, responding through subconscious micro-expressions, body posture, and changes in vital signs. The AI controller will highlight elements of the product that the target user responds most positively to and will provide conversational counterarguments when the user’s reactions are negative. And because the user does not overtly express objections, the counterarguments could be profoundly influential. After all, the virtual couple could verbally address emerging concerns before those concerns have fully surfaced in the mind of the target user. This is not marketing, it’s manipulation.

And in an unregulated metaverse, the target user may believe the virtual couple are avatars controlled by other patrons. In other words, the target user could easily believe they are overhearing an authentic conversation among users and not realize it’s a promotionally altered experience targeted specifically at them and injected into their surroundings to achieve a particular agenda. (Note — in video games we might refer to AI-controlled avatars as “Nonplayer Characters” or NPCs, but in the metaverse these AI agents are not playing games. They can impart serious influence. Therefore, I prefer to call them NHCs, or “non-human characters.”)

And it’s not just adults who will be targeted in this way, but also children, who already have a hard time distinguishing between authentic content and promotional material. Roblox, provider of a metaverse platform used by 50 million children, has already announced plans to roll out “immersive ads” in the near future. What chance does a child have when approached by a giant lovable teddy bear (an NHC) that follows them around while holding a particular brand of toy or eating a particular brand of cereal?

Augmented Reality — Immersive Marketing depiction (L Rosenberg / Midjourney)

And that’s a relatively benign example. Instead of pushing the features of a new car or toy, the third-party agenda could be to influence the target user about a political ideology, extremist propaganda, or outright misinformation or disinformation. In addition, the example above targets the user as a passive observer of a promotional experience in his or her metaverse surroundings. In more aggressive examples, the controller will actively engage the user in targeted promotional experiences.

For example, consider a situation in which an AI-controlled avatar that looks and sounds like any other user in the environment engages the target user in agenda-driven promotional conversation. In an unregulated metaverse, the user may be entirely unaware that he or she has been approached by a targeted advertisement and might instead believe he or she is in a conversation with another user. The conversation could start out very casual but steadily steer towards a prescribed agenda.

In addition, the controller will likely have access to a wealth of data about the target user, including their interests, values, hobbies, education, political affiliation, and more, and will use this to craft dialog that optimizes engagement. The controller will also have access to real-time information about the user, including facial expressions, vocal inflections, body posture, eye motions, pupil dilation, facial blood patterns, and potentially blood pressure, heart rate, and respiration rate. It will adjust its conversational tactics in real time based on the target user’s overt verbal responses in combination with subtle and potentially subconscious micro-expressions and vital signs.

It is well known that AI systems can outplay the best human competitors at chess, Go, poker, and a wealth of other games of strategy. From that perspective, what chance does an average consumer have when engaged in promotional conversation with an AI agent that has access to that user’s personal background and interests and can adapt its conversational tactics in real time based on subtle changes in pupil dilation and blood pressure? The potential for violating a user’s cognitive liberty through this type of feedback control in the metaverse is so significant that it likely borders on outright mind control.

To complete the diagram for metaverse-based feedback control, we can replace the generic word controller with AI-based software that alters the environment or injects conversational avatars that impart optimized influence on target users. This is expressed using the phrase AI AGENTS below.

Rosenberg Scenario for Metaverse Mind Control

As expressed in the paragraphs above, the public should be aware that large metaverse platforms could be used to create feedback-control systems that monitor their behaviors and emotions in real-time and employ AI agents to modify their immersive experiences to maximize persuasion. This means that large and powerful platforms could track billions of people and impart influence on any one of them by altering the world around them in targeted and adaptive ways.

This scenario is frightening but not farfetched.

In fact, it could be the closest thing to “playing god” that any mainstream technology has ever achieved. To protect against this scenario, industry leaders, politicians and policymakers need to take action, implementing regulatory safeguards, promoting industry standards, and guaranteeing Immersive Rights to consumers before platforms adopt business models that are dangerous to the public. Had such safeguards been put in place early in the evolution of social media, the world might be a safer place.

This article originally appeared in VentureBeat (2022).

