Skipping the step
My vision is that we’ll get to AGI faster if we skip the most difficult step. We will skip the conversational engine.
We will get the input from the user through something that already works. We start with tables and spreadsheets, then move to 2-D geometry and interactive, editable 3-D representations like what you can see in Minecraft. Meanwhile, we ask the user questions through a GUI, and present the AI's results and inner reasoning through a GUI as well.
We also don’t hesitate to ask users to write a bit of code in a programming language, if they don’t mind.
All this is to skip natural language. Others will probably solve natural language soon, and we will use their work. But we’re focused directly on the AI, not the language.
It’s a well-known fact that human brains have dedicated language-processing machinery, chiefly Broca’s and Wernicke’s areas. These are relatively small regions of the brain that are massively connected to the neocortex and frontal lobe, the “thinking” parts of the brain, and serve to translate the language we read or hear into the concepts and logical networks of knowledge that we hold in the mind. Without these regions, humans can still solve complex problems but cannot talk or write.
Companies like OpenAI and Google are focused on building this “language module” of the brain. Google specializes in understanding the meaning of text, and OpenAI took the path of direct confrontation with Google because that’s where most of the funding currently is.
At HyperC, we’re building the “neocortex of AI”, the part that makes us human – the logical-thinking, math-crunching part of the brain.
So learning and using the abstract model of the process is what we do at HyperC.
Basically, if you have a model of how, say, the market operates, you can use that model to perform complex trading operations that finally net a positive balance. Today’s heavily-developed AI can make predictions, but it will not invent new, smart ways to route the money, because it cannot remember the context and cannot navigate all possible outcomes to find the right sequence of actions – something only humans can figure out today, using the neocortex.
So we’re that.
Now, why do I believe that this language module is the most difficult step?
Because machines are already extremely good at logic and math.
What they’re not good at is understanding the peculiarities of human thought expressed in writing in a human language, and deriving a useful model of the world from it. This also tends to depend heavily on the local knowledge of a specific group of people. Just think about the sentence “Let’s do it like last summer?” Is it possible for a machine to figure out what you’re talking about? Between friends, though, it totally makes sense.
So if we ask the human to explain the problem to the machine using something both can understand, suddenly we realize that the machines are already far smarter than humans, with today’s technology.
It’s just a different type of AI.
It’s called “AI planning”.
AI planning is a type of AI that represents every piece of knowledge derived from facts, rules, and experience as a densely interconnected network of outcomes and conclusions. This network is also called “the grounded actions and states”. The procedure of finding a “path” through all the outcomes to the desired state is, rather boringly, called “the search”, and the resulting sequence of actions and outcomes is “the plan”.
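To make the vocabulary concrete, here is a minimal sketch of grounded states, actions, a search, and a plan. The domain and names are illustrative, not HyperC’s actual API:

```python
from collections import deque

def search(start, goal, actions):
    """Breadth-first search over the state space; returns the plan
    (a sequence of action names) that reaches the goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for name, apply_action in actions:
            nxt = apply_action(state)  # grounded action: state -> new state
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

# Toy domain: a block sits in one of three spots (state = its position).
actions = [
    ("left",  lambda s: s - 1 if s > 0 else None),
    ("right", lambda s: s + 1 if s < 2 else None),
]
print(search(0, 2, actions))  # ['right', 'right']
```

Every real planner elaborates on this loop; the hard part, as the next paragraphs explain, is that the `visited` set does not fit in memory for realistic problems.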
The main issue is that the number of different world states and outcomes can be so huge that it will never fit into any machine’s memory. Sometimes the “state space” of a problem is comparable to the number of atoms in the universe. For example, the number of possible games of chess is more than a decillion (10^33) times larger than any imaginable amount of computer memory today.
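The arithmetic behind that explosion is simple: with a branching factor b (actions available per state) and a planning horizon d, there are on the order of b^d action sequences to consider. The numbers below are illustrative, not measurements:

```python
# With b actions per state and a horizon of d steps, the search tree
# has roughly b**d leaves -- exponential in the plan length.
b, d = 10, 40
sequences = b ** d            # 10**40 candidate action sequences
petabyte_of_bytes = 10 ** 15  # rough size of a very large machine's memory

# Even this modest problem dwarfs any realistic memory by many orders
# of magnitude; chess is vastly worse still.
print(sequences > petabyte_of_bytes)  # True
```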
The challenge of AI planning is to rewrite the problem that was shown to the machine so that the number of states is feasible to search, and so that the search itself can be accomplished efficiently using the computational mechanics of a modern computer.
This works by learning which states and outcomes are actually worth exploring, using the experience the AI gains while solving various problems. Think of it as the AI playing chess against you and figuring out which moves you know, thus limiting the state space to only the relevant states. After all, you’re not infinitely smart. You could also learn more “moves” in chess and become smarter, but that’s a whole different story.
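The idea of limiting the search to relevant states can be sketched as a filter applied at expansion time. The relevance function here is a hypothetical stand-in for a learned model, not HyperC’s actual learner:

```python
def pruned_search(start, goal, actions, is_relevant):
    """Depth-first search that only expands successors the relevance
    filter approves, shrinking the effective state space."""
    frontier = [(start, [])]
    visited = {start}
    while frontier:
        state, plan = frontier.pop()
        if state == goal:
            return plan
        for name, apply_action in actions:
            nxt = apply_action(state)
            if nxt is None or nxt in visited:
                continue
            if not is_relevant(nxt, goal):  # learned filter prunes here
                continue
            visited.add(nxt)
            frontier.append((nxt, plan + [name]))
    return None

# Toy domain on the integers; a crude "learned" relevance rule keeps
# only states within distance 3 of the goal, pruning everything else.
actions = [("dec", lambda s: s - 1), ("inc", lambda s: s + 1)]
near_goal = lambda s, g: abs(g - s) <= 3
print(pruned_search(0, 3, actions, near_goal))  # ['inc', 'inc', 'inc']
```

A bad relevance model can prune away the only path to the goal, which is exactly why the filter has to be learned from experience rather than hard-coded.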
The AI we’re building looks at the problems thrown at it and learns to produce a more efficient representation of each problem, one that is faster to solve. For the majority of cases not specifically designed to create infinite state spaces, it will converge to a specialized “program” that is as fast as a computer program can be, combining a great degree of adaptability with superhuman decision-making speed.