

One of the boldest, most breathless claims being made about artificial intelligence tools is that they have “emergent properties” - impressive abilities gained by these programs that they were supposedly never trained to possess. “60 Minutes,” for example, reported credulously that a Google program taught itself to speak Bengali, while the New York Times misleadingly defined “emergent behavior” in AI as language models gaining “unexpected or unintended abilities” such as writing computer code. This misappropriation of the term “emergent” by AI researchers and boosters deploys language from biology and physics to imply that these programs are uncovering new scientific principles adjacent to basic questions about consciousness - that AIs are showing signs of life.

However, as computational linguistics expert Emily Bender has pointed out, we’ve been giving AI too much credit since at least the 1960s. A new study from Stanford researchers suggests that “sparks” of intelligence in supposedly “emergent” systems are in fact mirages.

If anything, these far-fetched claims look like a marketing maneuver - one at odds with the definition of “emergence” used in science for decades. The term captures one of the most thrilling phenomena in nature: complex, unpredictable behaviors emerging from simple natural laws. Far removed from this classic definition, current AIs display behaviors more appropriately characterized as “knowledge sausage”: complex, vaguely acceptable computer outputs that predictably arise from even more complex, industrial-scale inputs.

The language model training process used for AI takes gigantic troves of data scraped indiscriminately from the internet, pushes that data repeatedly through artificial neural networks, some containing 175 billion individual parameters, and adjusts the networks’ settings to more closely fit the data.

The process involves what the chief executive of OpenAI has called an “eye-watering” amount of computations. In the end, this immense enterprise arrives not at an unexplained spark of consciousness but at a compressed kielbasa of information. It is the industrial production of a knowledge sausage, which crams together so much data that its ability to spit out a million possible outputs becomes relatively quotidian. Contrast this with examples of actual emergence, such as fluid flow, which has been described for two centuries by an elegant expression known as the Navier-Stokes equations.
