Does Tech Hasten an Environmental Apocalypse?

Ron Baecker is Emeritus Professor of Computer Science at the University of Toronto, author of Computers and Society: Modern Perspectives and Digital Dreams Have Become Nightmares: What We Must Do, co-author of, and organizer of

AI, and in particular machine learning, has made great progress in the last decade. Yet I am deeply concerned about the hype associated with AI, and about the risks to society stemming from premature use of the software. We are particularly vulnerable in domains such as medical diagnosis, criminal justice, seniors' care, driving, and warfare, where AI applications have begun or are imminent. Yet many current AIs are unreliable and inconsistent, without common sense; deceptive in hiding that they are algorithms and not people; mute and unable to explain decisions and actions; unfair and unjust; free from accountability and responsibility; and used but not trusted.

Patient safety and peace of mind are aided in medical contexts because doctors, nurses, and technicians disclose their status, e.g., specialist, resident, intern, or medical student. This helps guide our expectations and actions. Most current chatbots are not open and transparent: they do not disclose that they are algorithms, nor do they indicate their degree of competence and their limitations. This leads to user confusion, frustration, and distrust. It must change before the drive towards increasing use of algorithmic medical diagnosis and advice goes too far. The dangers have been illustrated by the exaggerated claims about the oncology expertise of IBM's Watson.

AI algorithms are not yet competent and reliable in many of the domains anticipated by enthusiasts. They are brittle: they often break when confronted with situations only trivially different from those on which they have been trained. Good examples are self-driving anomalies such as strange lighting and reflections, or unexpected objects such as kangaroos, or bicycles built for two carrying a child on the front handlebars. Ultimately, algorithms will do most of the driving that people now do, but they are not yet ready for this task. AIs are also shallow, possessing little innate knowledge and no model of the world or common sense, which researcher Doug Lenat, creator of the CYC system, has been striving to automate for four decades.

But we expect even more of good agents beyond competence.  Consider a medical diagnosis or procedure.  We expect a physician to be open to discussing a decision or planned action.  We expect the logic of the decision or action to be transparent, so that, within the limits of our medical knowledge, we understand what is being recommended or what will be done to us.  We expect a decision or an action by an agent to be explainable. Despite vigorous recent research on explainable AI, most advanced AI algorithms are still inscrutable. 

We should also expect actions and decisions to be fair, not favoring one person over another, and to be just in terms of generally accepted norms of justice. Yet we have seen repeatedly in recent years how poor training data causes machine learning algorithms to exhibit patterns of discrimination in areas as diverse as recommending bonds, bail, and sentencing; approving mortgage applications; deciding on ride-hailing fares; and recognizing faces.
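One simple way such discrimination is detected is by comparing a model's positive-decision rate across demographic groups, a measure often called demographic parity. The sketch below is a minimal illustration using invented loan-approval decisions and group labels; it is not drawn from any real system or dataset.

```python
# Minimal sketch: checking demographic parity on hypothetical
# loan-approval decisions. Decisions and group labels are invented
# for illustration only.

def approval_rates(decisions, groups):
    """Return the fraction of positive (approve) decisions per group."""
    totals, approvals = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + d
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical model outputs: 1 = approve, 0 = reject.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(decisions, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)  # group A is approved more often than group B
```

A large gap between the groups' rates does not by itself prove discrimination, but it is exactly the kind of signal auditors look for before digging into the training data.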

If an algorithm rejects a résumé unfairly, makes a medical diagnosis incorrectly, or through a drone miscalculation injures an innocent person or takes a life, who is responsible? Who may be held accountable? We have just begun to think about and develop the technology, the ethics, and the laws to deal with algorithmic accountability and responsibility. A recent example is an investor suing an AI company that peddled super-computer AI hedge fund software after its automated trader cost him $20 million, thereby trying to hold the firm responsible and accountable.

The good news is that many conscientious and ethical scientists and humanists are working on these issues, but citizen awareness, vigorous research, and government oversight are required before we will be able to trust AI for a wide variety of jobs. These topics are discussed at far greater length in Chapter 11 of Computers and Society: Modern Perspectives, Chapters 12 and 17 of Digital Dreams Have Become Nightmares: What We Must Do, and in The Oxford Handbook of Ethics of AI.


What do you think? Are my expectations unreasonable? What issues concern you beyond those I have discussed? 

[WE WILL PUBLISH YOUR MOST THOUGHTFUL RESPONSES. Send to, 300-1000 words, include hyperlinks.] 

Recent increases in hurricanes, flooding, heat waves, fires, and drought are signs that the world is coming closer to irreversible damage. For example, scientists recently predicted that an Antarctic ice shelf holding up the huge Thwaites Glacier could collapse within 3 to 10 years, leading to the glacier sliding into the ocean and raising sea levels worldwide by more than 2 feet. 

What is digital technology’s contribution to the environmental apocalypse? Energy is used in three ways: (1) to manufacture digital technologies; (2) to operate them; and (3) to dispose of and replace them with newer versions. 
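These three categories can be combined into a back-of-envelope lifecycle estimate: a fixed manufacturing cost, an operating cost that scales with years of use, and a disposal cost. All numbers in the sketch below are hypothetical placeholders, not measurements; the point is the structure of the estimate, and in particular how a shorter replacement cycle spreads the fixed manufacturing energy over fewer years of service.

```python
# Back-of-envelope lifecycle energy model. Every figure here is a
# hypothetical placeholder for illustration, not a real measurement.

def lifecycle_energy_kwh(manufacture_kwh, watts, hours_per_day,
                         years, disposal_kwh):
    """Total energy over a device's life: make + operate + dispose."""
    operation = watts / 1000 * hours_per_day * 365 * years
    return manufacture_kwh + operation + disposal_kwh

# Replacing a device every year vs. keeping it for three or five:
# the annualized footprint falls as the manufacturing energy is
# amortized over a longer service life.
for years in (1, 3, 5):
    total = lifecycle_energy_kwh(manufacture_kwh=300, watts=30,
                                 hours_per_day=8, years=years,
                                 disposal_kwh=10)
    print(years, round(total / years, 1))  # kWh per year of service
```

Under these placeholder numbers, the energy cost per year of service drops sharply as the device is kept longer, which is why the obsolescence cycle discussed next matters so much.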

Computer manufacture uses significant energy and employs hazardous materials such as antimony, arsenic, cadmium, and lead. These elements must be disposed of separately from normal garbage, yet they often end up in landfills, incinerators, or recycling facilities, posing dangers to humans and the environment. Alternatively, they are exported from the developed world to developing countries with an 'out of sight, out of mind' philosophy.

Making things worse is the cycle of obsolescence and technology replacement. Vendors encourage an insatiable consumer hunger for the latest devices, with more speed and more features. This guarantees that almost all digital technology will be upgraded or replaced by users every few years.

A major culprit is the operating system. Developments in Microsoft Windows between 1996 and 2008 increased the processor speed required by a factor of 15, the main memory required by a factor of 40, and the hard disk space required by a factor of 30. The average life span of desktop computers is three to five years, laptops about three years, and mobile phones merely one year. Users don't need all the 'improvements', but they are forced to adopt them because vendors stop supporting old models and versions, and because consumers are enticed by the 'sweetness' of new technology.

New technology trends such as cloud computing require large server farms, which burn huge amounts of energy. The mining (creation by computation) of cryptocurrencies such as Bitcoin consumes vast amounts of energy in both manufacture and operation; machine learning computations are also damaging to the environment.

Most technology vendors do not support a purchaser's Right to Repair. They increase their profit margins by withholding technical information and spare parts from third-party repair shops. The good news is that this is changing: President Biden recently signed an executive order mandating Right to Repair rules, and the Federal Trade Commission voted to enforce them.

Electronic waste (e-waste) occurs when repair is impossible or undesirable. A 2016 estimate of the amount of e-waste produced in the world was 54 million metric tons, most of which was not recycled. If the remainder of the world follows American habits, where the average household owns twenty-four discrete consumer electronics products, and phones and tablets are discarded and replaced at increasingly fast rates, the accumulation of e-waste will get even worse. 

All of us have a role to play in saving the planet. Computer scientists can do research on computational sustainability. Thousands of Amazon employees publicly demanded in 2019 that Amazon take more climate action, which led the company to adopt an aggressive carbon neutral plan. We must speak out about the contribution of digital technologies to environmental damage, and work to ensure that devices are not consumed and discarded at such a dangerous rate. 


What do you think? What do you plan to do so that you are part of a solution instead of part of the problem?
