The advancing language of AI
Hands up everyone who is old enough to remember ELIZA.
This “natural language” program emerged from MIT in the mid-1960s and generated widespread interest by creating online text conversations that led some people to believe they were dealing with a human. However, its limitations quickly became apparent and ELIZA retreated into the land of academic research, the first in a long line of debunked claims about software communicating “just like a person”.
Bad news, Homo sapiens: the claim can no longer be dismissed.
ChatGPT, a program that generates prose, poetry, and even computer code, often startlingly good, by statistically recombining material found online, is here.
This isn’t just a better ELIZA. Along with similar programs that generate visual art from simple cues, it’s a whole new application of what’s loosely called “artificial intelligence,” or AI. And I have to say that it feels smart, even though it’s really just statistical pattern-matching on an enormous scale.
Things, to use the cliché, will never be the same. Just look inside the nearest classroom.
“Some of the professors, adjuncts, play with it. One person wrote a sonnet about me,” said Alan Lindsay, head of English at NHTI. “I agree it’s different. You could be fooled by it if you’re not careful.”
Colleges and high schools are trying to figure out how to handle a free online program that can create excellent essays of several hundred words on complicated topics in just a few seconds, from nothing more than simple prompts (“compare Catcher in the Rye with Harry Potter,” “write a limerick about New Hampshire,” or “explain how the electricity grid works”).
The immediate concern is that ChatGPT is the perfect tool for cheating, but more importantly, it raises questions about how education should be done at all. The established model of explaining things in class and having kids write about them for homework could go out the window.
“There are many concerns from teachers who are wondering: Can we use it in school? How can we use it? … We are watching and learning and waiting to see how things develop. It’s such a new technology and it’s evolving so fast; it changes literally every day,” said Pam McLeod, director of technology for the Concord School District.
(ChatGPT can’t currently be used in public schools because it requires a personal phone number for authentication, which violates privacy guidelines. Concord blocks it from their school network, although I suspect this won’t slow many teens down.)
New Hampshire’s education commissioner Frank Edelblut, who gave a presentation on ChatGPT before the State Board of Education on Thursday, said he saw it as the latest iteration of technologies like spell-checking and Grammarly, taking over skills that once had to be tediously learned in class.
“We need to update our models for the 21st century,” he said. “If we send a kid home to write a 500-word essay, what are we actually trying to teach them? You need the concepts and logic models to think these things through. The same learning goals can exist with this new tool.”
As an example, he said students could write an essay and then generate an essay on the same topic from ChatGPT to compare the results.
Educators aren’t the only ones unsure of what ChatGPT will do to them. My profession is definitely getting nervous.
Software has been used for several years to create simple forms of journalism, such as short sports-game recaps and corporate earnings reports, but with ChatGPT news organizations can go far beyond that.
CNET, a well-respected website that covers technology, has started using it to write some articles, and it’s very hard to tell that an ink-stained wretch wasn’t involved. Freelancers are complaining about losing jobs as clients who once hired them to write material now do the work with ChatGPT, perhaps paying a fraction of the previous fee to have someone check the result.
Computer programming jobs are similarly at risk. Give ChatGPT a few prompts and it can recombine existing software code into new code that, I’m told, is often frighteningly good. There are already reports of hackers using it to create new types of malware. If you know what a “script kiddie” is, you know they’ve just gotten their hands on a powerful new tool.
Perhaps the group screaming bloody murder loudest is visual artists, because natural language processing can now create digital images in seconds that are mesmerizing, even artistic. Right now, for example, I’m looking at a beautiful rendering of a castle that was created in moments by feeding Coleridge’s poem “Kubla Khan” into an art AI program; I suppose it’s really more of a stately pleasure dome than a castle.
I assume that music, spoken word performances and films will also be fair game.
This raises big copyright and intellectual property questions, since these new artworks are really high-quality mashups of existing artworks, but if history is any guide, those concerns aren’t going to slow the technology down much. Microsoft seems to agree: it is rumored to be investing $10 billion in OpenAI, the company that developed ChatGPT.
So does this mean that we are finally seeing the beginning of the long-foretold apocalypse in which humans will be replaced by software?
It is certainly true that this technology will have far-reaching, unpredictable implications, but there are still real limits to the approach, because artificial intelligence is not really intelligence.
What we are seeing is a very sophisticated way of recombining human creations, anything that can be found on the internet, by following the statistical patterns in those creations. These programs don’t invent anything new, and that leads to some glaring flaws.
For example, librarians report people coming in asking for books that don’t exist. The titles were recommended by a ChatGPT session that combined information about real books in a statistically valid but meaningless way. Since ChatGPT has no knowledge of the world, it has no knowledge at all, it didn’t see the problem.
I’ve seen ChatGPT described as the ultimate BS artist, to use a family-newspaper-friendly euphemism. A BS artist, as you know, is someone who always sounds plausible even when they’re making it up as they go along. That is exactly how ChatGPT works: it uses existing patterns to make its output sound right, even when it is producing utter nonsense.
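For readers curious what “following statistical patterns” means in practice, here is a deliberately tiny sketch, nothing remotely like ChatGPT’s actual neural-network machinery: a toy “bigram” model that picks each next word purely from counts of which word followed which in its training text. Every output is statistically plausible, but the program has no idea what any of it means.

```python
import random
from collections import defaultdict

# A toy training text; real systems train on vast swaths of the internet.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which word follows which word in the training text.
follows = defaultdict(list)
for a, b in zip(training_text, training_text[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Chain words together by repeatedly sampling a statistically
    valid next word. Plausible-sounding, but meaning-free."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no known follower; stop early
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

Each word pair in the output really does occur in the training text, which is why the result sounds right, even though the model never “decided” to say anything.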
In a way, that makes it even more alarming: yet another quick and easy tool for misinformation. But it also offers hope that we carbon-based life forms will retain a competitive advantage.
In the meantime, here’s your homework. Log into ChatGPT and give it the prompt “Write a newspaper column about ChatGPT’s impact on New Hampshire” to see what it produces.
Let me know about any weirdly bad results. Results that are better than my writing, you can keep to yourself.