What Society Must Require from AI

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

AI, and in particular machine learning, has made great progress in the last decade. Yet I am deeply concerned about the hype associated with AI and the risks to society stemming from premature use of the software. We are particularly vulnerable in domains such as medical diagnosis, criminal justice, seniors care, driving, and warfare, where AI applications have already begun or are imminent. Yet many current AIs are unreliable and inconsistent, lacking common sense; deceptive in hiding that they are algorithms and not people; mute, unable to explain their decisions and actions; unfair and unjust; free from accountability and responsibility; and used but not trusted.

Patient safety and peace of mind are aided in medical contexts because doctors, nurses, and technicians disclose their status, e.g., specialist, resident, intern, or medical student. This helps guide our expectations and actions. Most current chatbots are not open and transparent: they do not disclose that they are algorithms, nor do they indicate their degree of competence and their limitations. This leads to user confusion, frustration, and distrust. It must change before the drive towards increasing use of algorithmic medical diagnosis and advice goes too far. The dangers have been illustrated by the exaggerated claims about the oncology expertise of IBM’s Watson.

AI algorithms are not yet competent and reliable in many of the domains anticipated by enthusiasts. They are brittle: they often break when confronted with situations only trivially different from those on which they have been trained. Good examples are self-driving anomalies such as strange lighting and reflections, or unexpected objects such as kangaroos or bicycles built for two carrying a child on the front handlebars. Ultimately, algorithms will do most of the driving that people now do, but they are not yet ready for this task. AIs are also shallow: they possess little innate knowledge and no model of the world, and they lack the common sense that researcher Doug Lenat, creator of the CYC system, has been striving to automate for four decades.
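
To make this brittleness concrete, here is a minimal runnable sketch (my illustration, not an example from the essay) using scikit-learn: a classifier that scores well on a held-out test set can stumble when the same test images are merely brightened uniformly, a crude stand-in for the strange lighting mentioned above.

```python
# Minimal sketch of ML brittleness under a trivial distribution shift.
# Assumption: uniform brightening stands in for "strange lighting";
# the dataset and model are toy choices, not anyone's real system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, pixel values 0..16
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("clean test accuracy:      %.2f" % model.score(X_test, y_test))

# "Trivially different" inputs: the same digits, uniformly brightened.
# Accuracy typically drops, though the digits look unchanged to us.
X_bright = np.clip(X_test + 4.0, 0, 16)
print("brightened test accuracy: %.2f" % model.score(X_bright, y_test))
```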

But we expect more of good agents than competence alone. Consider a medical diagnosis or procedure. We expect a physician to be open to discussing a decision or planned action. We expect the logic of the decision or action to be transparent, so that, within the limits of our medical knowledge, we understand what is being recommended or what will be done to us. We expect a decision or an action by an agent to be explainable. Despite vigorous recent research on explainable AI, most advanced AI algorithms are still inscrutable.
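
As a hedged illustration of what current explainability research can and cannot offer, the sketch below uses scikit-learn's permutation importance, one popular post-hoc technique, to estimate which inputs most drive a model's predictions; such tools approximate the black box from outside rather than truly opening it. The dataset and model are illustrative choices, not anything from the essay.

```python
# Minimal sketch of post-hoc explanation via permutation importance:
# shuffle one feature at a time and watch how much accuracy falls.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
# Report the five features whose shuffling hurts accuracy most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25s} importance {result.importances_mean[i]:.3f}")
```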

We should also expect actions and decisions to be fair, not favoring one person over another, and to be just in terms of generally accepted norms of justice. Yet we have repeatedly seen in recent years how poor training data causes machine learning algorithms to exhibit patterns of discrimination in areas as diverse as recommending bonds, bail, and sentencing; approving mortgage applications; setting ride-hailing fares; and recognizing faces.
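
One simple check behind such findings is comparing a model's decision rates across groups (demographic parity). The sketch below fabricates biased historical scores for two groups and shows how a fixed approval threshold then produces disparate approval rates; every name and number here is invented for illustration.

```python
# Minimal, hypothetical fairness audit: measure approval rates per group.
# Assumption: group 1's scores are systematically lower in the (fake)
# historical data, mimicking biased training data.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)  # two groups, labeled 0 and 1
score = rng.normal(loc=np.where(group == 0, 0.60, 0.50), scale=0.10)
approved = score > 0.55                  # the "algorithm's" blanket decision rule

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
# Disparate rates like these are the warning sign; detecting them is the
# first step toward fairness, not a complete remedy.
```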

If an algorithm rejects a résumé unfairly, makes an incorrect medical diagnosis, or through a drone miscalculation injures an innocent person or takes a life, who is responsible? Who may be held accountable? We have just begun to think about and develop the technology, the ethics, and the laws needed to deal with algorithmic accountability and responsibility. In one recent example, an investor sued an AI company peddling supercomputer hedge fund software after its automated trader cost him $20 million, thereby trying to hold the firm responsible and accountable.

The good news is that many conscientious and ethical scientists and humanists are working on these issues, but citizen awareness, vigorous research, and government oversight are required before we will be able to trust AI with a wide variety of jobs. These topics are discussed at far greater length in Chapter 11 of Computers and Society: Modern Perspectives, Chapters 12 and 17 of Digital Dreams Have Become Nightmares: What We Must Do, and in The Oxford Handbook of Ethics of AI.

FOR THINKING AND WRITING AND DISCUSSING 

What do you think? Are my expectations unreasonable? What issues concern you beyond those I have discussed? 

Algorithm and Blues: The Tyranny of the Coming Smart-Tech Utopia. Why optimizing the world for efficiency, productivity and happiness is bad for humanity.

Brett Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University. The book most relevant to his thoughts below is Re-Engineering Humanity (Cambridge University Press, 2018).

Imagine a world governed by smart technologies engineered to achieve three distinct yet interrelated normative ends: optimized transactional efficiency, resource productivity and human happiness. We could have congestion-free roads—no stop and go, no road rage! Instantaneous, personalized entertainment—no need to search or browse! Successful social interactions—no misunderstanding or missed cues! No surprise ailments, no failures, no missed opportunities! Heck, no surprises of any kind! There are so many imperfections in our world that smart technology could fix.

We do not live in such a world, but the technologies required for it to exist are already being rapidly developed and deployed. Take, for example, the Internet of Things (IoT)—big data, sensors, algorithms, artificial intelligence and various other related technologies. Their promoters make seductive promises. Supposedly, smart phones, grids, cars, homes, clothing and so on will make our lives easier, better, happier.

Read More »

Does Tech Hasten an Environmental Apocalypse?

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

Recent increases in hurricanes, flooding, heat waves, fires, and drought are signs that the world is coming closer to irreversible damage. For example, scientists recently predicted that an Antarctic ice shelf holding up the huge Thwaites Glacier could collapse within 3 to 10 years, leading to the glacier sliding into the ocean and raising sea levels worldwide by more than 2 feet. 

What is digital technology’s contribution to the environmental apocalypse? Energy is used in three ways: (1) to manufacture digital technologies; (2) to operate them; and (3) to dispose of and replace them with newer versions. 

Read More »

I Do Not Want Mark’s Metaverse

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

In a blog posted two days ago, I highlighted phrases and sentences from Mark Zuckerberg’s recent keynote speech sketching his vision of Meta’s intended metaverse. Here are thoughts triggered by his words: 

1. “… you’re going to be able to do almost anything you can imagine … This isn’t about spending more time on screens … [include] communities whose perspectives have often been overlooked … consider everyone …”

No, Mark, be honest. This is about getting more people into Meta, and about getting them to spend more time in the metaverse, because that’s the only way you can sustain the growth your shareholders expect, and the only way you can withstand the onslaught of firms like TikTok that now have greater appeal to the next generation of users.

Read More »

What is Zuckerberg’s Metaverse, and Do We Want It?

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

In a recent blog, I suggested that we have finally lost patience with Facebook after new revelations by whistleblower Frances Haugen and the Wall Street Journal. Leaked documents show that FB knows that almost six million VIPs are given special dispensation to violate its content standards; that criminals use FB to recruit women, incite violence against ethnic minorities, and support government action against political dissent; that Instagram is toxic to many young girls, contributing to poor self-image, mental-health problems, and suicidal thoughts; that the firm relaxed its safeguards too soon after the U.S. election, contributing to the January 6 riot; and that FB is incapable of suppressing election and vaccine misinformation.

Read More »

Stretched Too Thin by Social Media: Beware its power to reshape your web of relationships 

Brett Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University. The book most relevant to his thoughts below is Re-Engineering Humanity (Cambridge University Press, 2018).

Recently, I’ve received multiple invitations to leave Facebook and Twitter and join a new social network that promises to not destroy democracy. I’m tempted. I’m also tempted to delete my accounts and abandon social media altogether. The decision got me thinking, not about democracy but instead about how social media affect my behavior and relationships. 

Social media promise and deliver social networks with better or at least bigger scale and scope. Essentially, this means you can connect to many more people from many different places to relate on a wider variety of interests. To socialize is a core human need. The difficult question is whether social media improve our capability to relate to each other. 

Read More »

Facebook Was Soon to Be Held to Account: Will Meta Escape the Consequences?

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

In 2004, Mark Zuckerberg built an app to connect Harvard undergrads to one another. By 2006, it was available to anyone over the age of 13. Soon thereafter, his Facebook (FB) social media firm was animated by the concept that connectivity was a human right for the world’s billions. FB is now visited by almost 3 billion distinct users each month. The firm has become a monopoly, counting Instagram and WhatsApp among its divisions. (Further details appear in Chapters 11 and 17 of Digital Dreams Have Become Nightmares: What We Must Do.) 

FB’s dominance has led to serious problems that are well known. Its news feed widely shares toxic material: misinformation, hate speech, and fake news. People post private information which FB exploits commercially through surveillance capitalism. Fake social media participants constructed by Russia have skewed the results of the 2016 US presidential election and other elections. Children’s addiction to social media harms their sense of self-worth and their physical and mental health and well-being.

Read More »

A Review of: ‘Digital Dreams Have Become Nightmares: What We Must Do’

C. Dianne Martin is Emeritus Professor of Computer Science at George Washington University, and Adjunct Professor in the School of Information, University of North Carolina at Chapel Hill. She has been teaching Computers and Society since 1983.

I was delighted to receive email early this year from Prof. Ron Baecker, whose Computers and Society class at the University of Maryland in 1972 made me see that I could productively combine my previous studies in the social sciences and humanities with my new career in information technology. I was therefore eager to read his latest book, Digital Dreams Have Become Nightmares: What We Must Do.

In documenting his personal journey from dreams and exuberant optimism about computer technology to pessimism, nightmares, and fear caused by the emerging consequences of the tech explosion of the past 75 years, Ron has provided a comprehensive historical sweep of the computer revolution. In Part I he chronicles the high hopes of early developers to create technological solutions to disparities in healthcare and education, to increase creativity, collaboration, and community, and to provide greater power and convenience to all.

Read More »

A Review of: ‘People Count: Contact-Tracing Apps and Public Health’

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

Cybersecurity expert Prof. Susan Landau’s valuable and insightful recent book, People Count: Contact-Tracing Apps and Public Health, stresses that trust in government is essential to making contact tracing work for everyone. 

Contact tracing is a process for identifying, informing, and monitoring people who might have come into contact with a person who has been diagnosed with an infectious disease such as COVID-19. It starts with a positive test. Public health officials then need to know who that person might have inadvertently infected. This requires tracking down anyone that person had contacted (was “close enough” for “long enough”) recently (within 14 days in the case of COVID). Those contacts can then be informed that they might have been infected and take measures to quarantine and monitor for symptoms. For example, restaurants initiate tracing by recording the name and phone number of one person in each party seated at a table.
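
To make the matching step concrete, here is a minimal sketch (my own illustration, not anything from Landau's book) of finding everyone who overlapped a diagnosed person's visits "close enough for long enough" within the look-back window; the people, places, and thresholds are invented.

```python
# Minimal sketch of the contact-matching step in manual contact tracing.
# Assumptions: visit logs exist, "close enough" means same place, and
# "long enough" means at least 15 minutes of overlap within 14 days.
from datetime import datetime, timedelta

visits = [  # (person, place, arrival, departure): invented sample data
    ("alice", "cafe", datetime(2021, 9, 1, 12, 0),  datetime(2021, 9, 1, 13, 0)),
    ("bob",   "cafe", datetime(2021, 9, 1, 12, 30), datetime(2021, 9, 1, 13, 30)),
    ("carol", "cafe", datetime(2021, 9, 5, 9, 0),   datetime(2021, 9, 5, 9, 10)),
]

def contacts(visits, case, min_overlap=timedelta(minutes=15),
             window=timedelta(days=14)):
    """People whose visits overlapped the diagnosed person's by >= min_overlap."""
    now = max(departure for _, _, _, departure in visits)
    case_visits = [v for v in visits
                   if v[0] == case and now - v[3] <= window]
    found = set()
    for _, place, start, end in case_visits:
        for who, where, s, e in visits:
            if who != case and where == place:
                if min(end, e) - max(start, s) >= min_overlap:
                    found.add(who)
    return found

print(contacts(visits, "alice"))  # {'bob'}; carol was there on another day
```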

Read More »

Technology and lifestyle in the COVID 4th wave and beyond

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of thecovidguide.com, and organizer of computers-society.org.

My blog post of May 18 suggested that some of the COVID-forced changes in work will persist post-COVID: “Large companies will shrink their office space footprint. Landlords will suffer economically, spaces will be vacant, and prices will drop. Many employees will work at home far more frequently than they did pre-pandemic. Many employees will no longer have a permanent desk; rather, they will grab a free desk when they are in the office. There will be less business travel, with more business conducted via teleconference. Progressive conferences will allow for both on-site and virtual attendance. Reductions in travel by [land and air will help] the environment.”

Read More »