Paul's Blog

Bookmark this to keep an eye on my project updates!


Epiphany

In November 2020, I was hospitalized for brain surgery. I saw limitations first hand, from speech recognition to how the old brain impacts everyday life, and how the new brain, if still functioning, can still create this presentation. For months I had been ruminating on ML, but being in the hospital crystallized those issues and what had been nagging me.

What was your epiphany? Current ML uses the old brain's way of doing "intelligent" things: stimulus and response. The new brain, the neocortex, uses imagination (world simulation) to be "intelligent."

Problem

The way we do ML today is expensive and, ultimately, a black box. What if we could train it with hundreds of times less training data and know what every layer of neurons is doing, down to each individual neuron? See Will scaling work?

What if we could get rid of the hidden layers, require hundreds of times less training data, ensure AI safety, and provide dirt-cheap AI for the entire world?

The status quo is vacuum tubes; I know the path to create the transistor!

The fundamental limitation is hidden layers. You can't keep engineering on top of black boxes indefinitely; eventually you'll hit a snag and won't be able to debug it.

Solution

Instead of calculus (the backpropagation approach), use a hierarchical form of probability in the same branch of mathematics as Google PageRank. The idea comes mainly from Jeff Hawkins, Dileep George, and Scott Phoenix. The ideas have already been productized and validated by Vicarious (acquired by Alphabet). They published a paper 7 years ago cracking CAPTCHA using hundreds of times less training data. The key insight is a hierarchical PGM (probabilistic graphical model) that uses both constant feedback and feedforward messages in order to handle assumptions AND counterfactuals about observations (training data). Furthermore, in Jeff Hawkins's book A Thousand Brains he states that neurons vote, which strongly suggests intelligence uses the mathematics of probabilities.
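To make the "feedforward plus feedback" idea concrete, here is a minimal sketch of that kind of probabilistic message passing. This is not Vicarious's RCN, just Bayes' rule framed as two messages meeting at a node: a top-down (feedback) prior from a hidden cause and a bottom-up (feedforward) likelihood from the observed evidence. All numbers and names are illustrative assumptions.

```python
import numpy as np

# Prior over a hidden cause with 3 possible states (top of the hierarchy).
prior = np.array([0.5, 0.3, 0.2])

# Likelihood P(observation | cause): rows = causes, columns = 2 possible observations.
likelihood = np.array([
    [0.9, 0.1],
    [0.4, 0.6],
    [0.2, 0.8],
])

def feedforward(observation: int) -> np.ndarray:
    """Bottom-up message: how well each cause explains the observed evidence."""
    return likelihood[:, observation]

def feedback() -> np.ndarray:
    """Top-down message: the prediction the higher level sends downward."""
    return prior

def posterior(observation: int) -> np.ndarray:
    """Combine the two messages and normalize (Bayes' rule)."""
    unnormalized = feedforward(observation) * feedback()
    return unnormalized / unnormalized.sum()

print(posterior(observation=0))  # belief over causes after seeing observation 0
print(posterior(observation=1))  # belief over causes after seeing observation 1
```

Every quantity here is an explicit, inspectable probability, which is the point: nothing is hidden inside an opaque layer.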

Understanding the ingredients to make a synthetic intelligence means knowing it's made of the following essential components: imagination (aka world simulation, aka foresight), memory, senses, and optionally qualia. All of these can be throttled or scaled, in serial or parallel (see the sketch below). The game Ultimate Battle Simulator gave me the inspiration for that notion of throttling and scaling, since all the things I just mentioned can be variables in a computer program. Humans now have a solid framework for creating synthetic intelligence that is understandable, explainable, and affordable to all.
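A toy illustration of "components as variables in a computer program." The field names and numbers are purely hypothetical; the point is only that each ingredient becomes a knob you can throttle or scale, serially or in parallel.

```python
from dataclasses import dataclass

@dataclass
class SyntheticIntelligenceConfig:
    imagination_rollouts: int = 10   # how many world-simulation rollouts (foresight)
    memory_capacity_mb: int = 512    # how much memory to allocate
    sensor_channels: int = 4         # how many senses / input streams
    parallel_workers: int = 1        # serial (1) vs. parallel (>1) execution

# Throttle the whole system down or scale it up by changing the variables.
small = SyntheticIntelligenceConfig()
large = SyntheticIntelligenceConfig(imagination_rollouts=1000,
                                    memory_capacity_mb=65536,
                                    sensor_channels=32,
                                    parallel_workers=64)
print(small, large, sep="\n")
```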

What about alignment?

Adding guardrails to synthetic intelligence is brittle because there are too many edge cases. An example is the many ways people can, and do, jailbreak current generative ML systems like ChatGPT. Building in maximum curiosity, truth seeking, and foresight (aka imagination) is the best way to align synthetic intelligence with human well-being. Credit to Elon Musk for popularizing the notion of maximizing truth seeking and curiosity in order to align synthetic intelligence with human well-being.

video

TLDR

Problem: LLMs are expensive and a black box.

Solution: RCNs (Recursive Cortical Networks) can be human-aligned, completely understandable and explainable, and affordable to all.

Next chapter

Part 5: Implications of solved intelligence & consciousness

TOC