#387 – George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI

We discussed round #2 with George in the past. Round #3 includes some interesting insights on programming, AI, tinygrad, and more.

Some quotes (transcribed with Whisper, so likely not 100% accurate):


You know, what’s ironic about all these AI safety people is they are going to build the exact thing they fear. They say we need to have one model that we control and align. … You think you’re gonna control it? You’re not gonna control it. So the criticism you have for the AI safety folks is that there is a belief and a desire for control. And that belief and desire for centralized control of dangerous AI systems is not good.

I hope they lose control. I’d want them to lose control more than anything else. When you lose control, you can do a lot of damage, but you can do more damage when you centralize and hold on to control, is the point. Centralized and held control is tyranny, right? I don’t like anarchy either, but I’ll always take anarchy over tyranny. With anarchy, you have a chance.


tinygrad solves the problem of porting to new ML accelerators quickly. There are tons of these companies now. I think Sequoia marked Graphcore to zero, right? Cerebras, Tenstorrent, Groq: all of these ML accelerator companies built chips. The chips were good. The software was terrible. And part of the reason, I think, is the same problem that is happening with Dojo: it’s really, really hard to write a PyTorch port, because you have to write 250 kernels, and you have to tune them all for performance.
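The "250 kernels" problem can be sketched abstractly: instead of hand-writing one kernel per high-level operation, a tinygrad-style approach composes everything from a handful of primitives, so porting to a new accelerator only means implementing those primitives. A toy illustration in plain Python (all names here are hypothetical, not tinygrad's actual API):

```python
# Toy sketch: build many "kernels" from a few primitives.
# Only the primitives would need a per-backend port.

def ewise(f, xs, ys=None):
    """Elementwise primitive (unary or binary)."""
    if ys is None:
        return [f(x, 0.0) for x in xs]
    return [f(x, y) for x, y in zip(xs, ys)]

def reduce_sum(xs):
    """Reduction primitive."""
    total = 0.0
    for x in xs:
        total += x
    return total

# Higher-level "kernels" are compositions: no new backend code required.
def relu(xs):
    return ewise(lambda x, _: max(x, 0.0), xs)

def dot(xs, ys):
    return reduce_sum(ewise(lambda x, y: x * y, xs, ys))

print(relu([-1.0, 2.0]))            # [0.0, 2.0]
print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

The design point: every new composed operation is "free" on every backend, whereas the hand-written-kernel approach pays the porting and tuning cost again for each one.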

This is the main philosophy of tinygrad:
You have never refactored enough.
Your code can get smaller.
Your code can get simpler.
Your ideas can be more elegant.
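The philosophy above can be shown with a toy refactor: two equivalent functions, where the second is smaller, simpler, and clearer (a made-up example, not tinygrad code):

```python
# Toy example of "your code can get smaller, your code can get simpler."

# Before: a verbose, branchy implementation of left-padding a list with zeros.
def pad_before(xs, n):
    out = []
    i = 0
    while i < n - len(xs):
        out.append(0)
        i += 1
    for x in xs:
        out.append(x)
    return out

# After refactoring: same behavior, one expression, clearer intent.
def pad_after(xs, n):
    return [0] * (n - len(xs)) + xs

assert pad_before([1, 2], 4) == pad_after([1, 2], 4) == [0, 0, 1, 2]
```

The behavior is pinned by the assertion, which is what makes this kind of aggressive refactoring safe to keep doing.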

Centralization and Complexity

Some people in the world like to create complexity. Some people in the world thrive under complexity, like lawyers, right? Lawyers want the world to be more complex, because then you need more lawyers, you need more legal hours. If there are two great evils in the world, they are centralization and complexity. Yeah, and one of the hidden side effects of software engineering is finding pleasure in complexity.


So step one is testing.

One of my favorite things to look at today is: how much do you trust your tests? We’ve put a ton of effort at comma, and I’ve put a ton of effort into tinygrad, to make sure that if you change the code and the tests pass, you didn’t break the code. Now, this obviously is not always true. But the closer that is to true, the more you trust your tests, the more you’re like, “oh, I got a pull request and the tests pass, I feel okay to merge that,” and the faster you can make progress. So you’re always developing tests with that in mind: if the tests pass, the change should be good.
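The idea that "tests pass, therefore the change is safe" can be sketched with plain asserts: pin the behavior tightly enough, including boundaries, that a refactor which breaks the contract cannot pass (a minimal, hypothetical example):

```python
# Minimal sketch of trustworthy tests: pin the contract, then refactor freely.

def clamp(x, lo, hi):
    """Restrict x to the inclusive range [lo, hi]."""
    return max(lo, min(x, hi))

def test_clamp():
    # Cover the interior and both boundaries, so a wrong rewrite fails.
    assert clamp(5, 0, 10) == 5
    assert clamp(-3, 0, 10) == 0
    assert clamp(42, 0, 10) == 10
    # Edge case: lo == hi collapses the range to a single value.
    assert clamp(7, 3, 3) == 3

test_clamp()
print("ok")
```

If `test_clamp` covers the cases that matter, any reimplementation of `clamp` that passes it can be merged with confidence, which is exactly the property that makes review and iteration fast.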


Haven’t you heard about, like, the Big Bang and stuff? Yeah, I mean, what’s the origin-myth story in Skyrim? I’m sure there’s some version of it in Skyrim, but it’s not like, if you ask the creators, the Big Bang is in-universe, right? I’m sure they have some Big Bang notion in Skyrim. But that obviously is not at all how Skyrim was actually created. It was created by a bunch of programmers in a room, right? So, you know, it struck me one day how just silly atheism is. Like, of course we were created by God. It’s the most obvious thing.