When trying to develop intelligent systems, we face the issue of choosing how the system picks up information from the world around it, represents that information, and processes it. Symbolic Artificial Intelligence, also known as Good Old Fashioned AI (GOFAI), makes use of strings that represent real-world entities or concepts. These strings are stored, manually or incrementally, in a Knowledge Base (any appropriate data structure) and made available to the interfacing human or machine on request, as well as used to draw intelligent conclusions and decisions from the memorized facts and rules via propositional logic or first-order predicate calculus. Non-Symbolic Artificial Intelligence, by contrast, involves feeding raw environmental data to the machine and leaving it to recognize patterns and build its own complex, high-dimensional representations of that raw sensory data.
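To make the symbolic side concrete, here is a minimal sketch of a GOFAI-style knowledge base: facts are strings, rules pair a set of premises with a conclusion, and forward chaining derives new facts until nothing changes. The facts and rules are invented purely for illustration.

```python
# Facts and rules are explicit, human-readable strings.
facts = {"bird(tweety)", "small(tweety)"}

# Each rule: if every premise is in the fact set, add the conclusion.
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Note that every derived fact here is a string a human can read directly, which is exactly the property the rest of this article leans on.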
Looking at the definitions, Non-Symbolic AI seems more revolutionary, futuristic and, quite frankly, easier on the developers. The system just learns. It can tell a cat from a dog (CIFAR-10/CIFAR-100 with Convolutional Neural Networks), read Dickens’ catalog and then generate its own best-selling novels (text generation with LSTMs), and help detect and classify Gravitational Waves from raw data produced by the Laser Interferometers at LIGO (https://arxiv.org/abs/1711.03121).
On the other hand, Symbolic AI seems more bulky and difficult to set up. It requires facts and rules to be explicitly translated into strings and then provided to the system. Patterns are not naturally inferred or picked up but have to be explicitly put together and spoon-fed to the system. Dynamically changing facts and rules are also very hard to handle in Symbolic AI systems, and learning procedures are monotonically incremental, whereas Non-Symbolic AI systems can perform quick corrections and reconfigure themselves easily to handle new, conflicting data (via convex optimization techniques).
One of my favorite examples of the difference between Symbolic and Non-Symbolic AI was mentioned by Andrew Brown, Founder at Intent Labs, in a Quora answer (https://www.quora.com/What-is-the-difference-between-the-symbolic-and-non-symbolic-approach-to-AI):
Say you had a man in a room, and his job was to translate whatever note you slipped underneath the door to him from English to Mandarin. Seems like a simple enough workflow. Slip note, translate, get note.
If he were a Symbolic AI, he knows no Mandarin but has a huge library of English-to-Mandarin translations to use in putting together a finished product for you. He receives your note and then makes the arduous journey of skimming the giant corpus and generating his reply.
If he were a Non-Symbolic AI, he knows Mandarin. He receives the note, translates it for you, and sends it back.
It may seem like Non-Symbolic AI is this amazing, all-encompassing, magical solution which all of humanity has been waiting for. However, there’s an issue. Like many things, it’s complicated.
Non-Symbolic AI systems (like Deep Learning algorithms) are intensely data-hungry: they require huge amounts of data to learn any representation effectively. They also create representations that are too mathematically abstract and complex to be inspected and understood.
Taking the example of the Mandarin translator, he would translate the note for you, but it would be very hard for him to explain exactly how he did it so instantaneously. Additionally, becoming an expert in English-to-Mandarin translation is no easy process.
Symbolic AI, on the other hand, has already been provided with the representations and hence can spit out its inferences without having to understand exactly what they mean. The representations are also written in a human-understandable language.
In the example of the Mandarin translator with a library of books explaining English-to-Mandarin translation, the translator can walk you through the process he followed to reach his final translated string. It would take him much longer to generate his response, as well as to walk you through it, but he CAN do it.
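This "he can walk you through it" property is easy to sketch in code: a rule-based lookup that records every step it takes, so the final answer comes with a trace. The tiny phrase table is invented for illustration; a real system would have a vastly larger library.

```python
# A symbolic "library": explicit English-to-Mandarin mappings.
library = {"hello": "你好", "world": "世界"}

def translate_with_trace(sentence):
    """Translate word by word, recording each lookup as a trace step."""
    trace, output = [], []
    for word in sentence.lower().split():
        target = library.get(word, "?")       # "?" marks an unknown word
        trace.append(f"looked up '{word}' -> '{target}'")
        output.append(target)
    return " ".join(output), trace

result, steps = translate_with_trace("Hello world")
print(result)          # the translated string
for step in steps:     # the explanation of how it was produced
    print(step)
```

A neural translator could produce the same output, but nothing analogous to `steps` falls out of it for free.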
So, as humans creating intelligent systems, it makes sense to have applications with understandable and interpretable blocks/processes in them. Throwing the symbols away may put AI beyond the reach of human understanding, and after a point, intelligent systems will make decisions simply because “they mathematically can”. Also, Non-Symbolic AI systems generally depend on formally defined mathematical optimization tools and concepts, which means modeling the whole problem statement as an optimization problem. However, many real-world AI problems cannot, or should not, be modeled as optimization problems.
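To illustrate what "modeling the problem as optimization" means, here is a toy example: fitting a single parameter w to data y ≈ w·x by gradient descent on a mean squared error loss. The data and learning rate are invented for illustration.

```python
# Toy data generated by the "true" relationship y = 2 * x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0      # initial guess
lr = 0.05    # learning rate

for _ in range(200):
    # Gradient of mean squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad   # step downhill

print(round(w, 3))   # converges toward 2.0
```

Everything the system "learned" lives in the single number w; there is no symbolic fact anywhere to inspect, which is the article's point in miniature.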
So, it is pretty clear that symbolic representation is still required in the field. However, where and when symbolic representation is used depends on the problem.
For example, Direct Memory Access Parsing (https://www.cs.northwestern.edu/academics/courses/325/readings/dmap.php), studied by Prof. Chris Riesbeck (https://www.cs.northwestern.edu/~riesbeck/index.html) in the field of Natural Language Understanding, builds basic episodic memory for understanding natural language; it makes use of real-world symbolic representations stored in hierarchical structures to capture information and the semantic connections between objects in the context. This episodically stored information is consulted when a bottom-up parsed statement queries the knowledge base for a particular context, fact, or rule. Another example is a game like Chess, which requires a syntactic representation of the current board state, of what each piece is, and of what it can do, in order to make appropriate decisions for a follow-up move.
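The chess case can be sketched directly: the board state is a symbolic mapping from squares to piece names, and a rule encodes what one piece (here, a rook) is allowed to do. The position and the movement rule are simplified for illustration (no distinction between capturing and blocking, no other pieces' rules).

```python
# Symbolic board state: square -> piece.
board = {"a1": "white_rook", "a5": "black_pawn"}

def rook_moves(square, board):
    """A rook slides along its file and rank until it leaves the
    board or reaches an occupied square (where it stops)."""
    file, rank = square[0], int(square[1])
    moves = []
    for df, dr in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        f, r = ord(file) - 97, rank            # file as 0..7, rank as 1..8
        while True:
            f, r = f + df, r + dr
            if not (0 <= f < 8 and 1 <= r <= 8):
                break                          # ran off the board
            target = chr(97 + f) + str(r)
            moves.append(target)
            if target in board:
                break                          # occupied square: stop sliding
    return moves

print(rook_moves("a1", board))
```

Every move the rule produces is a readable square name, so the follow-up decision logic can reason over it symbolically.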
Therefore, it seems pretty important to understand that when we have sufficient information about the players and actors in the environment of a specialized, high-level skilled intelligent system, it becomes more appropriate to use a symbolic representation than a non-symbolic one.
However, what might be even more exciting is the integration of symbolic and non-symbolic representations. They can help each other reach an overarching representation of the raw data, as well as of the abstract concepts this raw data contains. For example, we may use a non-symbolic AI system (Computer Vision) on an image of a chess piece to generate a symbolic representation telling us what the chess piece is and where it is on the board, or to understand the current attributes of the board state. This information can then be stored symbolically in the knowledge base and used to make decisions for the AI chess player, similar to DeepMind’s AlphaZero (https://arxiv.org/pdf/1712.01815.pdf) (it uses sub-symbolic AI but, for the most part, generates non-symbolic representations). In short, analogous to humans, the non-symbolic system can act as the eyes (with the visual cortex) and the symbolic system can act as the logical, problem-solving part of the brain.