With ChatGPT, We All Act as Well-Behaved Children of Tech Capitalism

The products of big tech companies have brought about a state of unhealthy disruption and unmanageable consequences. Meanwhile, we act as though the technology is living its own unstoppable life and neglect to question its true purpose.

OpenAI has the world at its feet after developing a machine that produces text on subjects already abundant with text. ChatGPT thus appears to be a solution to a problem that does not exist, yet we neglect to question its justification. Something similar happened when social media and smartphones spread like wildfire throughout the world. Today, we know that social media makes some people less social, and smartphones hardly make us any smarter. Excessive screen use is deeply harmful to many and has pulled some children and youth into an unprecedented state of chronic anxiety, sleep problems, and numerous other mental disorders. The dramatic U-turn from enthusiasm and democratic optimism about social media to screen addiction, impaired quality of life, and democratically harmful echo chambers should serve as a blaring alarm for any new disruptive technology.

In this context, ChatGPT’s rapid success should have awakened the world’s growing army of AI ethicists. Instead, most of us, as if in a dogmatic slumber, jumped into the exact same hole as usual. Myself included. We ask whether the machine’s answers are biased. We ask whether its answers are correct. We ask how best to ensure that it is not used for wrongdoing. How do we prevent, for example, ChatGPT from being used for academic misconduct? We act like well-behaved children of tech capitalism, like yes-men constantly looking for constructive solutions. What we should have done instead was direct unsparing criticism at OpenAI’s blatantly clumsy and ill-considered launch of a technology that, in one fell swoop, disrupted the global education sector without offering so much as one minute of preparation time. No one was given time to ask how the technology might affect research and education. The launch was extremely inconsiderate of OpenAI, but that view is, for reasons I will come to in a moment, grossly under-represented in the debate.

Dismantling Technological Determinism
The extent to which many Western citizens have come to function as obedient nodders for the tech industry was brought home to me personally in an almost unbearable way when a school principal commented on one of my posts on the subject online. He wrote:
“ChatGPT is a gamechanger. We see that in schools too. But instead of being afraid of the technology, we’re better off being critically curious.”

A similar approach is endorsed by several high school teachers, who believe we should embrace ChatGPT as we did the calculator. I agree that fear alone doesn’t solve much. But being critically curious and adjusting to living with whatever technology comes along is simply too polite an approach. It reflects the fundamental problem that most of us – consciously or unconsciously – swear by what is called technological determinism. It’s a concept we need to understand if we, as a civilisation, want guiding principles other than those coming from big tech. As the eminent American philosopher David J. Gunkel laconically wrote on Twitter in a comment on ChatGPT and education:

“Those who do not put in the time and effort to understand Technological Determinism are determined to repeat it.”

Technological determinism is a complex concept, but some interpret it to embody the notion that technology governs us, rather than the other way around. Thus, we are unable to control technology, and unable to stop progress. This is deeply problematic since it is obvious that technological progress can easily be a human setback. The atomic bomb, the machine gun, heroin, and Instagram Reels are frightening examples of this.

We Neglect to Ask the Most Important Questions
Some may argue that OpenAI’s new generative products do not belong in this conversation. We’re just using ChatGPT because it’s smart. With ChatGPT, I can code four times as fast! With ChatGPT, I can write texts for my clients’ websites in no time! With ChatGPT, I can make calculations for my architectural project without having to pay an engineer for it! A Twitter user proudly showcased how ChatGPT saves her five hours of work per week. For example, she no longer has to plan when to spend time with her family, she no longer has to write her own emails and sales pitches, and she no longer has to ask her friends for recommendations on which books to read.

And sure enough, ChatGPT can, in some ways – and at first glance – make our lives more comfortable. And maybe it is a good solution in some situations. But unreflective enthusiasm and technological determinism are a toxic cocktail, at once numbing our critical sense and leading us to believe that the obvious side-effects for, say, education, are inevitable. This is precisely why we neglect to ask the most important questions of all:
Does this technology make anyone happier?
Does this technology offer anyone a better life?
Does this technology create a better society for anyone?

“… it is obvious that technological progress can easily be a human setback. The atomic bomb, the machine gun, heroin, and Instagram Reels are frightening examples of this.”

Thomas Telving

A Live TV Demonstration
Without thoroughly investigating questions like these, we will never find out whether ChatGPT, social media, smartphones, or Alexa represent genuine civilisational progress, or whether they are merely the result of meaningless, deterministic technological development. We are so deeply embedded in digital structures that it is close to impossible to ask these questions without bias, but we must insist on trying. How difficult it can be to practise critical thinking on the subject was demonstrated to me on a live TV programme, where I hardly had time to express my frustration over OpenAI’s totally tactless launch of ChatGPT before the host interrupted:

“Well, it’s here now, so what are we going to do?”

Implicit in the otherwise skilled and experienced journalist’s question was the assumption that withdrawing digital technology is not an option. She pushed me into the hole of routine questions that I regretted at the beginning of this article, which underlines my point that we are stuck in an existentially numbing trap of technological determinism.

The Battle is Not Lost
On a positive note, history has shown us that resistance is possible. Examples within digital technologies are few, but we have had some success in regulating the use of narcotics, alcohol, tobacco, firearms, nuclear bombs, medicine, cars, pesticides, and freon. Some are subject to age restrictions, others to pre-use certification requirements, and others are banned outright. Some were originally legal but were later withdrawn from the market or restricted once it was discovered how harmful they were, even if they seemed fun to begin with. The same enlightened, critical approach should be applied to digital technology. Fortunately, the EU is doing a lot, and right now efforts are being made to include ChatGPT in some of the provisions of the AI Act. There also seems to be a wave of universities banning the use of ChatGPT.

Put a Stop to Unhealthy Disruption
While these positive initiatives counteract some specific harmful effects, they also reflect that the EU is entrenched in technological determinism too. They do not step back and question whether the technology is actually beneficial for us, or whether its benefits may only apply in certain industries and in specific situations. My purpose is not to recommend a total ban. That would hardly be wise, and in any case it would be unattainable. But I would argue that with disruptive and potentially harmful technologies – such as ChatGPT – we as a society must demand time to think before launch. This will involve bureaucracy, but it will improve our ability to distinguish between unhealthy technological disruption and genuine human progress.

No Simple Solutions
So, dear tech companies – and yes, that means Google, Apple, Microsoft, Amazon, Meta, OpenAI, and all of you – you can’t just go around launching products into the global market that disrupt something as vital and important to all of us as the education sector. It’s not right, and you must do better in the future. You are a part of our collective society, and that calls for a whole other level of responsibility.

The way forward for humanity and technology is to put humanity first and make the oft-repeated mantra of AI for good more than just empty words. That will not give us a black-and-white answer on whether ChatGPT is good or bad for humanity, for such an answer does not exist. But it will give us the opportunity for more nuance: to consider, before new launches, for whom, in what situations, and under what conditions a new technology can be truly beneficial.


The illustration is generated with another disruptive tool from OpenAI, DALL·E, with the prompt: “a 3D animation of a pretty and smart girl using a computer”.


via Stephen’s Web ~ OLDaily http://www.downes.ca/

January 23, 2023 at 06:05PM