OK, we are not experts or PhDs, so some of this is probably not technically precise, but the math and the concepts are so complicated that we thought it would be good to just build some intuition about what is happening. This is a quick talk that summarizes readings from many different sources about the history of AI, from the 1950s up to roughly January 2024.
It is really hard to keep up, and much harder without some intuition about what is happening. We cover expert systems, the emergence of neural networks, Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). We then look at how the "Attention Is All You Need" paper led to Transformers, build some intuition for how such a simple training idea can lead to emergent and unexpected behaviors, and finish with some intuition on how generative image models work.
Chapters:
00:00 AI History
09:07 Neural Networks Weights
11:18 Convolutional Neural Networks (CNNs)
16:07 Recurrent Neural Networks (RNNs)
17:57 Embedding Vectors are the key!
23:25 Attention Is All You Need (Transformers)
27:36 But it’s too Simple! Emergence is surprising
33:04 What emerges inside a Transformer?
43:01 One Model to Rule Them All
47:54 Works for Image generation too