[AI] Well I can see it, why can't Sense?

I’m still trying to wrap my head around this research, but what stood out immediately was the relevance to Sense geometry.

“Geometric confections”!


https://www.pnas.org/doi/pdf/10.1073/pnas.2023123118

I also love the correlation of correlations chart:


Very interesting. I agree with the folks at the end of the NYT article - geometry and language both first arose in the human brain for representational purposes. It kind of feels to me like humans are not innately wired for language or geometry, but are wired to learn complex decoding / discretization from their earliest experiences (babies hearing words and seeing shapes).

And there is a relationship between all these things - human brains and Sense both try to discretize streams of real-time data. This table from the second paper highlights the challenges and benefits.
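To make that parallel concrete, here's a toy Python sketch of discretizing a stream of power readings into on/off events. The threshold and the readings are made up for illustration; Sense's actual disaggregation is obviously far more sophisticated than a simple delta threshold.

```python
def discretize(stream, threshold=50.0):
    """Toy discretization: emit a discrete event whenever consecutive
    readings jump by more than `threshold` watts."""
    events = []
    prev = None
    for t, watts in enumerate(stream):
        if prev is not None and abs(watts - prev) > threshold:
            events.append((t, "on" if watts > prev else "off", watts - prev))
        prev = watts
    return events

# A fridge compressor cycling against a constant 100 W baseline:
readings = [100, 100, 250, 252, 251, 100, 100, 400, 401, 100]
print(discretize(readings))
# [(2, 'on', 150), (5, 'off', -151), (7, 'on', 300), (9, 'off', -301)]
```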


I had an epiphany decades ago while doing some Markov chain coding experiments with letter frequencies in text (all of Shakespeare, because it was one of the few things available to me in digital form in 1986).

If you want to generate text that reads a little like Shakespeare, then you need to look at groups of more than 3 or 4 letters. Too few and you get gobbledegook; too many and you just get verbatim Shakespeare.

The issue at the time was that building a 3-, 4-, or 5-dimensional array was beyond my computational capacity (an 80286, I think it was, at 8MHz). Or maybe I just wasn’t brave enough.

Then it dawned on me that of course you can just re-parse the text for each new output letter: find every occurrence of the current context and sample from the letters that follow. That scan is your transition matrix, without ever building it. It was still only barely possible with a 10MB hard drive.
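For anyone curious, here's a minimal Python sketch of that parse-on-the-fly idea (my original was in Smalltalk-80; the filename and parameters here are just placeholders):

```python
import random

def next_char(corpus, context):
    """Scan the corpus for every occurrence of the context (the last k
    letters of output so far) and collect the character that follows
    each match. Sampling from those followers is equivalent to using
    the full k-dimensional transition table without ever building it."""
    k = len(context)
    followers = []
    start = 0
    while True:
        i = corpus.find(context, start)
        if i == -1 or i + k >= len(corpus):
            break
        followers.append(corpus[i + k])
        start = i + 1
    # Fall back to a random letter if the context never recurs.
    return random.choice(followers) if followers else random.choice(corpus)

def shakespeare_machine(corpus, k=4, length=500):
    """Order-k letter-level Markov generator: seed with a random
    k-letter slice of the corpus, then extend one letter at a time."""
    i = random.randrange(len(corpus) - k)
    out = corpus[i:i + k]
    while len(out) < length:
        out += next_char(corpus, out[-k:])
    return out

with open("shakespeare.txt") as f:  # hypothetical corpus file
    print(shakespeare_machine(f.read(), k=4))
```

Crank k down to 2 and you get gobbledegook; crank it up past 7 or so and it starts quoting the Bard verbatim.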

I built a “Shakespeare Machine” (appropriately written in Smalltalk-80) that output endless plays to a Teletype at about one letter every 5 seconds.

Anyway, not long after that, with the leaps and bounds in computational power and natural language AI, it became relatively trivial to look at words, word groups, and meaning, and to generate more and more believable texts.

I feel something is lost, though, because the crude letter-level parsing generates much more humorous texts.

I’m not sure where that goes in Sense terms. Disaggregation doesn’t call for humour, but it does call into question what discrete means when there isn’t necessarily a corollary to “meaning” in an electrical signal, or a “word” for that matter, or even “letters”. Plenty of patterns and geometry, though.

And lots of meaning through outside correlations.


From Markov chains to LSTM RNNs to Transformers / BERT to N-HITS, we’re seeing accelerating change in decoding “signal” streams into discrete data.

As a throwback to your experience, we can take a look at a more recent attempt to teach a machine how to write like Shakespeare, or to write code, or to author math proofs.
