
Could Symbolic AI Unlock Human-like Intelligence?

by DIGITAL TIMES


Will computers ever match or surpass human-level intelligence — and, if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether neural networks — the current star of artificial-intelligence systems — alone will be enough to hit this goal, the vast majority said no. Instead, most said, a heavy dose of an older kind of AI will be needed to get these systems up to par: symbolic AI.

Sometimes called ‘good old-fashioned AI’, symbolic AI is based on formal rules and an encoding of the logical relationships between concepts. Mathematics is symbolic, for example, as are ‘if–then’ statements and computer coding languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. However, in the early 2010s, they were vastly outpaced by more-flexible neural networks. These machine-learning models excel at learning from vast amounts of data, and underlie large language models (LLMs), as well as chatbots such as ChatGPT.

Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. ‘Neurosymbolic AI’ has become the hottest buzzword in town. Brandon Colelough, a computer scientist at the University of Maryland in College Park, has charted the meteoric rise of the concept in academic papers. These reveal a spike of interest in neurosymbolic AI that started in around 2021 and shows no sign of slowing down.




Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.

A better melding of these two strategies could lead to artificial general intelligence (AGI): AI that can reason and generalize its knowledge from one situation to another as well as humans do. It might also be useful for high-risk applications, such as military or medical decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn’t suffer from the ‘black box’ syndrome that can make neural networks hard to trust.

There are already good examples of neurosymbolic AI, including Google DeepMind’s AlphaGeometry, a system reported last year that can reliably solve maths Olympiad problems — questions aimed at talented secondary-school students. But working out how best to combine neural networks and symbolic AI into an all-purpose system is a formidable challenge.

“You’re really architecting this kind of two-headed beast,” says computer scientist William Regli, also at the University of Maryland.

War of words

In 2019, computer scientist Richard Sutton posted a short essay entitled ‘The bitter lesson’ on his blog (see go.nature.com/4paxykf). In it, he argued that, since the 1950s, people have repeatedly assumed that the best way to make intelligent computers is to feed them with all the insights that humans have arrived at about the rules of the world, in fields from physics to social behaviour. The bitter pill to swallow, wrote Sutton, is that time and time again, symbolic methods have been outdone by systems that use a ton of raw data and scaled-up computational power to leverage ‘search and learning’. Early chess-playing computers, for example, that were trained on human-devised strategies were outperformed by those that were simply fed lots of game data.

This lesson has been widely quoted by proponents of neural networks to support the idea that making these systems ever-bigger is the best path to AGI. But many researchers argue that the essay overstates its case and downplays the crucial part that symbolic systems can and do play in AI. For example, the best chess program today, Stockfish, pairs a neural network with a symbolic tree of allowable moves.

Neural nets and symbolic algorithms both have pros and cons. Neural networks are made up of layers of nodes with weighted connections that are adjusted during training to recognize patterns and learn from data. They are fast and creative, but they are also prone to making things up and can’t reliably answer questions beyond the scope of their training data.
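The training process described above — weights nudged repeatedly until the output matches the data — can be illustrated with a toy, one-weight ‘network’ in a few lines of Python (the numbers and learning rate are invented for illustration; real networks stack millions of such nodes):

```python
from math import exp

def sigmoid(x: float) -> float:
    """Activation function: squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + exp(-x))

def train_step(weight: float, inp: float, target: float, lr: float = 0.5) -> float:
    """One gradient-descent update on a one-weight 'network'."""
    out = sigmoid(weight * inp)
    # Gradient of the squared error (out - target)^2 with respect to the weight
    grad = 2 * (out - target) * out * (1 - out) * inp
    return weight - lr * grad

w = 0.0
for _ in range(1000):                 # repeated small adjustments...
    w = train_step(w, inp=1.0, target=0.9)

print(round(sigmoid(w), 2))           # ...until the node's output nears 0.9
```

The point is that nothing symbolic is stored: the ‘knowledge’ lives entirely in the learnt weight.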

Symbolic systems, meanwhile, struggle to encompass ‘messy’ concepts, such as human language, that involve vast rule databases that are difficult to build and slow to search. But their workings are clear, and they are good at reasoning, using logic to apply their general knowledge to fresh situations.

When put to use in the real world, neural networks that lack symbolic knowledge make classic mistakes: image generators might draw people with six fingers on each hand because they haven’t learnt the general concept that hands typically have five; video generators struggle to make a ball bounce around a scene because they haven’t learnt that gravity pulls things downwards. Some researchers blame such mistakes on a lack of data or computing power, but others say that the mistakes illustrate neural networks’ fundamental inability to generalize knowledge and reason logically.

Many argue that adding symbolism to neural nets might be the best — even the only — way to inject logical reasoning into AI. The global technology firm IBM, for example, is backing neurosymbolic techniques as a path to AGI. But others remain sceptical: Yann LeCun, one of the fathers of modern AI and chief AI scientist at tech giant Meta, has said that neurosymbolic approaches are “incompatible” with neural-network learning.

Sutton, who is at the University of Alberta in Edmonton, Canada, and won the 2024 Turing Award, the equivalent of the Nobel prize for computer science, holds firm to his original argument: “The bitter lesson still applies to today’s AI,” he told Nature. This suggests, he says, that “adding a symbolic, more manually crafted element is probably a mistake.”

Gary Marcus, an AI entrepreneur, writer and cognitive scientist based in Vancouver, Canada, and one of the most vocal advocates of neurosymbolic AI, tends to frame this difference of opinion as a philosophical battle that is now being settled in his favour.

Others, such as roboticist Leslie Kaelbling at the Massachusetts Institute of Technology (MIT) in Cambridge, say that arguments over which view is right are a distraction, and that people should just get on with whatever works. “I’m a magpie. I’ll do anything that makes my robots better.”

Mix and match

Beyond the fact that neurosymbolic AI aims to meld the benefits of neural nets with the benefits of symbolism, its definition is blurry. Neurosymbolic AI encompasses “a very large universe,” says Marcus, “of which we’ve explored only a tiny bit.”

There are many broad approaches, which people have attempted to categorize in various ways. One option highlighted by many is the use of symbolic techniques to improve neural nets. AlphaGeometry is arguably one of the most sophisticated examples of this strategy: it trains a neural net on a synthetic data set of maths problems produced using a symbolic computer language, making the solutions easier to check and ensuring fewer mistakes. It combines the two elegantly, says Colelough. In another example, ‘logic tensor networks’ provide a way to encode symbolic logic for neural networks. Statements can be assigned a fuzzy-truth value: a number somewhere between 1 (true) and 0 (false). This provides a framework of rules to help the system reason.
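The fuzzy-truth idea behind logic tensor networks can be sketched in a few lines of Python. The operator choices here (the product t-norm) are one common convention rather than the specifics of any one system, and the truth values are invented:

```python
# Fuzzy logic: statements get a truth value between 0 (false) and 1 (true),
# and logical connectives become arithmetic — which also makes them
# differentiable, so a neural network can be trained against them.

def fuzzy_and(a: float, b: float) -> float:
    return a * b                     # high only if both are fairly true

def fuzzy_or(a: float, b: float) -> float:
    return a + b - a * b             # high if either is fairly true

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# 'x is a cat' is almost certainly true; 'x is a mammal' is fairly true
cat, mammal = 0.95, 0.8
print(fuzzy_and(cat, mammal))        # truth of 'cat(x) AND mammal(x)': ~0.76
```

A rule such as ‘all cats are mammals’ then becomes a constraint the network is penalized for violating.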

Another broad approach does what some would say is the reverse, using neural nets to finesse symbolic algorithms. One problem with symbolic knowledge databases is that they are often so large that they take a very long time to search: the ‘tree’ of all possible moves in a game of Go, for example, contains about 10^170 positions, which is unfeasibly large to crunch through. Neural networks can be trained to predict the most promising subset of moves, allowing the system to cut down how much of the ‘tree’ it has to search, and thus shortening the time it takes to settle on the best move. That’s what Google DeepMind’s AlphaGo did when it famously outperformed a Go grandmaster.
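The pruning idea can be sketched with a toy search in Python. Here `score_moves` is a stand-in for a trained policy network, and the game domain is invented; the point is only how much of the tree a top-scoring ‘beam’ avoids visiting:

```python
from typing import Callable

def guided_search(position: str,
                  legal_moves: Callable[[str], list],
                  score_moves: Callable[[str, list], dict],
                  depth: int,
                  beam: int = 3) -> int:
    """Count positions visited when only the top-`beam` scored moves are expanded."""
    if depth == 0:
        return 1
    moves = legal_moves(position)
    scored = score_moves(position, moves)          # the 'neural' guidance step
    best = sorted(moves, key=lambda m: scored[m], reverse=True)[:beam]
    return 1 + sum(guided_search(position + m, legal_moves, score_moves,
                                 depth - 1, beam)
                   for m in best)

# Toy game: 10 legal moves per position, uniform stand-in 'policy'
visited = guided_search("", lambda p: list("abcdefghij"),
                        lambda p, ms: {m: 1.0 for m in ms}, depth=4)
full = sum(10 ** d for d in range(5))   # size of the unpruned tree, same depth
print(visited, full)                    # the guided search visits far fewer nodes
```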

An alternative idea is to insert symbolics into the middle of an LLM’s workflow, in the same way as consulting a calculator might help a person to solve a maths puzzle. Using rules-based systems during crucial reasoning steps can help to keep LLMs from going off-track, many argue. Projects including the Program-Aided Language (PAL) model, for example, use an LLM to convert natural-language tasks into Python code, use that symbolic code to solve the problem, and then interpret that solution back into natural language with an LLM.
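That three-step workflow — language in, code in the middle, language out — can be sketched as follows. `fake_llm` is a hypothetical stand-in that returns a canned translation; a real PAL-style system would call an actual language model at that step:

```python
def fake_llm(problem: str) -> str:
    # Stand-in for the model's natural-language-to-code translation
    return "answer = (23 + 19) * 2"

def solve(problem: str) -> str:
    code = fake_llm(problem)            # step 1: natural language -> Python
    namespace: dict = {}
    exec(code, namespace)               # step 2: the symbolic (exact) computation
    return f"The answer is {namespace['answer']}."  # step 3: back to prose

print(solve("Ann and Bob each fill a basket with 23 red and 19 green apples. "
            "How many apples are there in total?"))   # The answer is 84.
```

The arithmetic is done by the Python interpreter, not the model, so that step cannot be ‘hallucinated’.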

Jiayuan Mao, an AI researcher who has just completed her PhD at MIT with Kaelbling and is on her way to the University of Pennsylvania in Philadelphia, has had success in using neurosymbolic AI to make robot training more efficient. Her strategy is to use a neural network to recognize objects (such as a red rubber ball or a green glass cube) in a visual field and then use a symbolic algorithm to reason through relational questions about those objects (such as ‘is the rubber object behind the green object?’). A pure neural network would need 700,000 examples in its training data to achieve 99% accuracy on this task, she says. But by adding symbolic techniques, she needs just 10% of that number. “Even if you use 1%, you can still get 92% accuracy, which is quite impressive,” she says. A similar neurosymbolic system she created trounced a neural-network-based system at guiding a robot that encountered unfamiliar objects while washing dishes or making tea.
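Mao’s two-stage pipeline can be caricatured in Python: a perception stage (here, hand-written scene data standing in for a vision network’s output) emits symbolic facts about objects, and a symbolic stage answers relational questions over those facts. The scene, attributes and `behind` relation are all invented for illustration:

```python
# Stage 1 output: what a vision network might report for one scene
objects = [
    {"id": 1, "colour": "red",   "material": "rubber", "x": 2.0},
    {"id": 2, "colour": "green", "material": "glass",  "x": 5.0},
]

def behind(a: dict, b: dict) -> bool:
    """Symbolic relation: a larger x means further back in this toy scene."""
    return a["x"] > b["x"]

def query(material: str, colour: str) -> bool:
    """Is the <material> object behind the <colour> object?"""
    a = next(o for o in objects if o["material"] == material)
    b = next(o for o in objects if o["colour"] == colour)
    return behind(a, b)

print(query("rubber", "green"))   # False: the rubber ball is in front
```

Because the relational step is ordinary logic rather than learnt behaviour, it needs no training examples at all — which is where the data savings come from.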

Lost in translation

One of the big challenges for symbolic AI is how to encode sometimes slippery human knowledge within a language of logic and rules. One of the earliest attempts was a project called Cyc, started by computer scientist Doug Lenat in 1984 and later overseen by his AI company Cycorp, based in Austin, Texas. The intent was to explicitly articulate common-sense facts and rules of thumb, such as ‘a daughter is a child’, ‘people love their children’ and ‘seeing someone you love makes you smile’. The project’s language, CycL, uses symbols (for logical operators such as IF, AND, OR and NOT) to express logical relationships so that an inference engine can easily draw conclusions, such as ‘seeing your child would make you smile’.
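The kind of inference Cyc’s engine performs can be sketched with a tiny forward-chaining loop in Python. The facts and rules mirror the examples in the text, but the rule format here is invented for illustration and is far simpler than actual CycL, which supports variables and full logical quantification:

```python
# Tiny forward-chaining engine: rules are (premises, conclusion) pairs,
# applied repeatedly until no new facts can be derived.

facts = {"child(alice,bob)", "sees(bob,alice)"}

rules = [
    # 'people love their children'
    ({"child(alice,bob)"}, "loves(bob,alice)"),
    # 'seeing someone you love makes you smile'
    ({"sees(bob,alice)", "loves(bob,alice)"}, "smiles(bob)"),
]

changed = True
while changed:                        # fire rules until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("smiles(bob)" in facts)         # True: the conclusion was inferred
```

Every derived fact can be traced back through the rules that produced it — the transparency that, as Colelough notes, neural networks lack.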

Cyc, which now holds more than 25 million axioms, has been used in a variety of AI efforts and has inspired other projects, such as Google’s Knowledge Graph, which holds more than 500 billion facts. Today, ‘knowledge engineers’ use a similar strategy to gather human-generated facts and relationships, build specialized databases and integrate them with AI.

Symbolic databases can help an AI to generalize knowledge from one situation and apply it in another, says Kaelbling, which is a powerful way to make reasoning more efficient. But there is a trade-off in accuracy when dealing with subjects for which there are many exceptions to the ‘rules’ — not all people love their children, for example, and seeing something you love doesn’t always make you smile. Symbolics should be incorporated only when it is helpful to do so, she says. “Cyc was trying to turn common sense into math. That is almost surely a bad idea,” Kaelbling says.

In 2023, Marcus posted a paper with Lenat laying out what LLMs can learn from Cyc. As part of that work, the duo asked GPT-3, an early LLM of the type that underpins ChatGPT, to write CycL statements that encode the logical relationships in the sentence “Did you touch a blue object located in the capital of France on September 25th, 2022?” The response “at first amazed the Cyc team”, they report, because it generated what looked to be the right sort of statements in the right sort of language. But on closer inspection, GPT-3 made many crucial errors, they write, such as concluding that “the thing that is touching the blue object is the date”.

“It looks like it’s good, it looks like it should work, but it’s absolutely garbage,” says Colelough. This shows that it’s pointless to simply ram together a symbolic engine and a neural net, he says. “Then you might as well just not have the neurosymbolic system.”

What’s needed, Colelough says, is a lot more research on AI ‘metacognition’ — how AI monitors and conducts its own thinking. That would enable AI ‘conductors’ to oversee a more sophisticated integration of the two paradigms, rather than having different engines simply take turns. Colelough says AlphaGeometry does this well, but in a limited context. If a flexible conductor that works for any domain of knowledge could be developed, “that would be AGI for me”, Colelough says.

There’s a lot more work to do. Fresh hardware and chip architectures might be needed to run neurosymbolic AI efficiently. In time, other types of AI — maybe based on neural networks, symbolic AI, both or neither — might become more exciting, such as quantum AI, a fledgling field that seeks to exploit the properties of the quantum world to improve AI.

For Mao, the ultimate goal is to leverage neural networks’ learning abilities to create rules, categories and paths of reasoning that humans aren’t yet aware of. “The hope is that eventually we can have systems that also invent their own symbolic representation and symbolic algorithms, so that they can really go beyond what a human knows,” she says. That might be like a computer discovering an as-yet-unknown mathematical or physical concept — perhaps analogous to π or the property of mass — and then encoding the new concept to help to extend knowledge. “We need to study how computers can teach humans, not how humans can teach machines.”

This article is reproduced with permission and was first published on November 25, 2025.


