understanding the hype and hope
I have been keeping an eye on the hype & hope around artificial intelligence (AI), especially:
- ML — machine learning
- GPT — generative pre-trained transformers
- GAI — generative artificial intelligence
- LLM — large language models
“I’ve long been a fan and found value in AI / ML and its capabilities. Learning and finding patterns and causal patterns that in time can lead to outcomes that are problematic (a large fleet of vehicles with hundreds of sensors feeding an AI / ML to detect early engine, transmission, or other failure to address before more expensive damage or at a human cost). Generative AI from large language models is missing core pieces still and has knock-on effects that are really problematic with its lack of understanding of facts (or multitudes of facts and truths), but more problematic is that it blunts human learning and cognition.”
—Thomas Vander Wal 2023-03-18
How Technology Influences Social Networks
Stewardship of global collective behavior —2021-06-21
“Human collective dynamics are critical to the well-being of people and ecosystems in the present and will set the stage for how we face global challenges with impacts that will last centuries. There is no reason to suppose natural selection will have endowed us with dynamics that are intrinsically conducive to human well-being or sustainability. The same is true of communication technology, which has largely been developed to solve the needs of individuals or single organizations. Such technology, combined with human population growth, has created a global social network that is larger, denser, and able to transmit higher-fidelity information at greater speed. With the rise of the digital age, this social network is increasingly coupled to algorithms that create unprecedented feedback effects.”
Dark Shadows
Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow —2023-03-22
These companies can put as many disclaimers as they like on their chatbots — telling us they’re “experiments,” “collaborations,” and definitely not search engines — but it’s a flimsy defense. We know how people use these systems, and we’ve already seen how they spread misinformation, whether inventing new stories that were never written or telling people about books that don’t exist. And now, they’re citing one another’s mistakes, too.
“From my perspective, the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially … You can use AI to make fake news faster, cheaper, and on greater scales. That combination is where we might see our extinction.” —Jaron Lanier 2023-03-23
The Market Shifts Quickly
The genie escapes: Stanford copies the ChatGPT AI for less than $600 —2023-03-19
What does this all mean? Well, it means that unlimited numbers of uncontrolled language models can now be set up – by people with machine learning knowledge who don’t care about terms and conditions or software piracy – for peanuts.
It also muddies the water for commercial AI companies working to develop their own language models; if so much of the time and expense involved is incurred in the post-training phase, and this work can be more or less stolen in the time it takes to answer 50,000 or 100,000 questions, does it make sense for companies to keep spending this cash?
And for the rest of us, well, it’s hard to say, but the awesome capabilities of this software could certainly be of use to an authoritarian regime, or a phishing operation, or a spammer, or any number of other dodgy individuals.
The genie is out of the bottle, and it seems it’s already incredibly easy to replicate and re-train. Hold on to your hats.
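The replication approach the article alludes to is, in essence, distillation: ask an existing model tens of thousands of questions, save its answers as instruction-and-response pairs, and fine-tune a small open model on that data. Below is a minimal sketch of the data-collection half only, assuming the openai Python client (v1+) and an API key in the environment; the seed prompts and file name are placeholders, and this is illustrative rather than the actual Stanford pipeline.

```python
# Sketch: harvest instruction/response pairs from an existing "teacher"
# model (the "answer 50,000 or 100,000 questions" step), to be used later
# as supervised fine-tuning data for a small open model.
# Assumes the openai v1+ client and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

seed_instructions = [
    "Explain what a transformer model is in two sentences.",
    "Write a polite email declining a meeting invitation.",
    # ...in practice, tens of thousands of generated instructions
]

with open("instruction_pairs.jsonl", "w") as out:
    for instruction in seed_instructions:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # the teacher model being imitated
            messages=[{"role": "user", "content": instruction}],
            temperature=0.7,
        )
        pair = {
            "instruction": instruction,
            "output": resp.choices[0].message.content,
        }
        out.write(json.dumps(pair) + "\n")
```

The resulting file is then used to fine-tune a small base model, which is why the expensive post-training work can transfer for a few hundred dollars of API calls and compute.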
@LinusEkanstam on Twitter —2023-03-23
“I think Apple will be launching their own secure and private LLM that runs on device (edge compute), and when necessary it offloads heavier workloads to a cloud-based LLM that’s optimized for heavier tasks. So we will initially have a hybrid. Single-use apps will be a huge thing: if you need to solve a unique problem and nobody has ever built software for it because there isn’t enough of a market, with an LLM even a problem with only one user will be doable. Enter your ask, code gets written, the problem gets solved. Runtime ends, the app dies. Done. Single-use apps are born.”
There is a Regulatory System (sort of)
AI-generated images from text can’t be copyrighted, US government rules —2023-03-16
“Based on the [US Copyright] Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity.”
One Risk
OpenAI CEO warns of risks of AI —2023-03-17
“The thing that I try to caution people the most is what we call the ‘hallucinations problem’,” [Sam] Altman said. “The model will confidently state things as if they were facts that are entirely made up.”
“The right way to think of the models that we create is a reasoning engine, not a fact database,” he added. While the technology could act as a database of facts, he said, “that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”
A Possible Solution to that One Risk
ChatGPT Gets Its “Wolfram Superpowers”! —2023-03-23
And now ChatGPT + Wolfram can be thought of as the first truly large-scale statistical + symbolic “AI” system. In Wolfram|Alpha (which became an original core part of things like the Siri intelligent assistant) there was for the first time broad natural language understanding—with “understanding” directly tied to actual computational representation and computation. And now, 13 years later, we’ve seen in ChatGPT that pure “statistical” neural net technology, when trained from almost the entire web, etc. can do remarkably well at “statistically” generating “human-like” “meaningful language”. And in ChatGPT + Wolfram we’re now able to leverage the whole stack: from the pure “statistical neural net” of ChatGPT, through the “computationally anchored” natural language understanding of Wolfram|Alpha, to the whole computational language and computational knowledge of Wolfram Language.
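The pattern Wolfram describes is essentially tool delegation: the statistical model decides when a question needs exact computation, hands that fragment to the symbolic engine, and folds the result back into its reply. A rough sketch of that pattern, assuming Wolfram’s public Short Answers API and the openai Python client; the “CALC:” marker convention and the environment variables are placeholders, not the real plugin protocol.

```python
# Sketch of the statistical + symbolic pattern: a chat model flags
# questions that need exact computation, which are then answered by
# Wolfram|Alpha's Short Answers API instead of the neural net alone.
import os
import requests
from openai import OpenAI

client = OpenAI()
WOLFRAM_APPID = os.environ["WOLFRAM_APPID"]  # Wolfram|Alpha developer app ID

def wolfram_short_answer(query: str) -> str:
    """Return an exact, computed answer from Wolfram|Alpha."""
    r = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    r.raise_for_status()
    return r.text

def answer(question: str) -> str:
    # Ask the statistical model to either answer directly or request a
    # computation by replying with a line starting with "CALC:".
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "If the question needs exact math or current data, "
                        "reply only with 'CALC: <query for Wolfram|Alpha>'. "
                        "Otherwise answer normally."},
            {"role": "user", "content": question},
        ],
    )
    reply = resp.choices[0].message.content.strip()
    if reply.startswith("CALC:"):
        return wolfram_short_answer(reply[len("CALC:"):].strip())
    return reply

print(answer("What is the distance from Earth to the Moon right now?"))
```

The division of labour mirrors Wolfram’s description: the neural net handles language and intent, while anything that must be computed exactly is anchored in the symbolic system.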
OpenAI in the Field
A&O announces exclusive launch partnership with Harvey —2023-02-15
“Allen & Overy (A&O), the leading international law firm, has broken new ground by integrating Harvey, the innovative artificial intelligence platform built on a version of Open AI’s latest models enhanced for legal work, into its global practice. Harvey will empower more than 3,500 of A&O’s lawyers across 43 offices operating in multiple languages with the ability to generate and access legal content with unmatched efficiency, quality and intelligence … Harvey is a platform that uses natural language processing, machine learning and data analytics to automate and enhance various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance. Whilst the output needs careful review by an A&O lawyer, Harvey can help generate insights, recommendations and predictions based on large volumes of data, enabling lawyers to deliver faster, smarter and more cost-effective solutions to their clients.”
Jobs & GPT
Notice that ‘Human Resources’ appears on both lists.
The Human Component
In the ‘age of AI,’ what does it mean to be smart? —2023-03-16
“So what happens when we automate our most impactful and superior cognitive capacity—thinking—and we don’t think for ourselves? I think we end up not acting in very smart ways, and then the algorithms are trained by behaviors that have very little to do with intelligence. Most of the stuff we spend time doing on a habitual basis is quite predictable and monotonous and has very little to do with our imagination, creativity, or learnability—which is how we refer to curiosity.” —Tomas Chamorro-Premuzic, author — I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique
Caveat Emptor
Previous Posts on AI
via Harold Jarche https://jarche.com
March 26, 2023 at 09:33AM