I Do Not Want Mark’s Metaverse

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

AI, and in particular machine learning, has made great progress in the last decade. Yet I am deeply concerned about the hype associated with AI and the risks to society stemming from premature use of the software. We are particularly vulnerable in domains such as medical diagnosis, criminal justice, seniors care, driving, and warfare, where AI applications have already begun or are imminent. Yet many current AIs are unreliable and inconsistent, lacking common sense; deceptive in hiding that they are algorithms and not people; mute and unable to explain their decisions and actions; unfair and unjust; free from accountability and responsibility; and used but not trusted. 

Patient safety and peace of mind are aided in medical contexts because doctors, nurses, and technicians disclose their status (e.g., specialist, resident, intern, or medical student). This helps guide our expectations and actions. Most current chatbots are not open and transparent: they do not disclose that they are algorithms, nor do they indicate their degree of competence and their limitations. This leads to user confusion, frustration, and distrust. This must change before the drive towards increasing use of algorithmic medical diagnosis and advice goes too far. The dangers have been illustrated by the exaggerated claims about the oncology expertise of IBM’s Watson. 

AI algorithms are not yet competent and reliable in many of the domains anticipated by enthusiasts. They are brittle: they often break when confronted with situations only trivially different from those on which they were trained. Good examples are self-driving anomalies such as strange lighting and reflections, unexpected objects such as kangaroos, or bicycles built for two carrying a child on the front handlebars. Ultimately, algorithms will do most of the driving that people now do, but they are not yet ready for this task. AIs are also shallow, possessing little innate knowledge and no model of the world or common sense, which researcher Doug Lenat, creator of the CYC system, has been striving to automate for four decades. 

But we expect more of good agents than mere competence. Consider a medical diagnosis or procedure. We expect a physician to be open to discussing a decision or planned action. We expect the logic of the decision or action to be transparent, so that, within the limits of our medical knowledge, we understand what is being recommended or what will be done to us. We expect a decision or an action by an agent to be explainable. Despite vigorous recent research on explainable AI, most advanced AI algorithms are still inscrutable. 

We should also expect actions and decisions to be fair, not favoring one person over another, and to be just in terms of generally accepted norms of justice. Yet we have seen repeatedly in recent years how poor training data causes machine learning algorithms to exhibit patterns of discrimination in areas as diverse as recommending bonds, bail, and sentencing; approving mortgage applications; setting ride-hailing fares; and recognizing faces. 

If an algorithm rejects a résumé unfairly, makes a medical diagnosis incorrectly, or through a drone miscalculation injures an innocent person or takes a life, who is responsible? Who may be held accountable? We have just begun to think about and develop the technology, the ethics, and the laws to deal with algorithmic accountability and responsibility. A recent example is an investor suing an AI company that peddled super-computer AI hedge fund software after its automated trader cost him $20 million, thereby trying to hold the firm responsible and accountable. 

The good news is that many conscientious and ethical scientists and humanists are working on these issues, but citizen awareness, vigorous research, and government oversight are required before we will be able to trust AI with a wide variety of jobs. These topics are discussed at far greater length in Chapter 11 of Computers and Society: Modern Perspectives, Chapters 12 and 17 of Digital Dreams Have Become Nightmares: What We Must Do, and in The Oxford Handbook of Ethics of AI.


What do you think? Are my expectations unreasonable? What issues concern you beyond those I have discussed? 

[WE WILL PUBLISH YOUR MOST THOUGHTFUL RESPONSES. Send to ronbaecker@gmail.com, 300-1000 words, include hyperlinks.] 

In a blog posted two days ago, I highlighted phrases and sentences from Mark Zuckerberg’s recent keynote speech sketching his vision of Meta’s intended metaverse. Here are thoughts triggered by his words: 

1. “you’re going to be able to do almost anything you can imagine … This isn’t about spending more time on screens … [include] communities whose perspectives have often been overlooked … consider everyone …” 

No, Mark, be honest. This is about getting more people into Meta, and about getting them to spend more time in the metaverse, because that’s the only way you can sustain the growth your shareholders expect, and the only way you can withstand the onslaught of firms like TikTok that now have greater appeal to the next generation of users. 

2. “to feel present like we’re right there … making eye contact, having a shared sense of space, and not just looking at a grid of faces on a screen” 

As researchers have known for years, eye contact is important in intimate conversations and delicate negotiations. But eye contact is not supported in currently available VR technology, nor are there prototypes that achieve it without hardware more complex than today’s unwieldy and commercially unsuccessful headsets. And real presence requires more than photorealistic images of participants, or even eye contact: it requires a bond of trust and a shared sense of purpose, which have nothing to do with technology. Despite Zoom fatigue from hours of looking at a grid of faces on a screen, there is no reason to believe that immersion in a sea of avatars on a head-mounted display or digital glasses will be more satisfying and less stressful. 

3. “in new, joyful, completely immersive ways … Everything we do online today connecting socially, entertainment, games, work is going to be more natural and vivid.” 

Myron Krueger’s pioneering Videoplace in the 1970s showed the potential of VR for artistic experiences. VR is a proven success in high-end, technologically complex gaming environments and will continue to provide even more compelling gaming experiences. AR is useful in certain kinds of surgery; it will soon also be assisting in firefighting. Both VR and AR have good uses in education for rich kids. But the notion that these technologies will be used for “everything we do online today” is ludicrous, as is the claim that this will make all experiences “joyful” and “more natural”. 

4. “Technology … built around people and how we … experience the world and interact with each other” 

We experience the world in many ways, sometimes individually, sometimes in close interaction with one or more people. To assert that VR and AR make our experience of the world more natural is false, even though they do create immersive, engaging, and effective experiences in certain situations. 

5. “teleport to a private bubble to be alone …” 

Why not just turn off the technology to be alone? 

6. “You’ll … have a photo realistic avatar for work, a stylized one for hanging out … a wardrobe of virtual clothes for different occasions … put up your own pictures and videos and store your digital goods.” 

It is hard to imagine many adults getting turned on by this idea, although it could enable new forms of play for kids. The degree of tech addiction among many children, and its documented destructive effects, as studied by Jean Twenge among others, make one question its desirability. Is Mark’s Metaverse going to be better than traditional play worlds, which deliver engagement and stimulate imagination? Is spending time online creating multiple avatars, imaginary clothing, and virtual wall decorations anything other than a new playground for the rich and the technically adept? 

7. “new forms of governance … Privacy and safety need to be built into the metaverse from day one … we need to make sure the human rights and civil rights communities are involved … we believe that neural interfaces are going to be an important part” 

Based on Facebook’s track record, do we trust Meta to accept reasonable governance and to prioritize our privacy and safety? Has Facebook ever prioritized human rights? Do we trust Facebook to gather even more intimate data and to connect us via neural interfaces? 

Facebook intends to spend $10B over the next year to build the metaverse. Such intense development will yield some interesting prototypes and products. But do we want a Metaverse from Mark Zuckerberg, a brilliant technician whose commitment to ethics has been manifested only through platitudes and apologies before congressional committees? 


I do not want Mark’s Metaverse, nor do I condone allowing his social media monopoly to finance its construction. Do you want it? Why or why not? 
