Thoughts on the Eliezer vs. Hotz AI Safety Debate
I just got done watching the debate on AI safety between George Hotz and Eliezer Yudkowsky on Dwarkesh Patel’s podcast. …

What Happens to Content When Top-Tier Production Quality is Commoditized?
I think AI is about to massively improve the quality of our best content. But not for the reason you …

UL NO. 397: Propaganda in a Box, Glacier-like Security, AGI by 2028?, Ancient Wisdom via AI, and Newsletter Differentiation
Unsupervised Learning is a Security, AI, and Meaning-focused podcast that looks at how best to thrive as humans in a …

My Current Definition of AGI
People throw the term “AGI” around like it’s nothing, but they rarely define what they mean by it. So most …

Why and How I Believe We’ll Attain AGI by 2025-2028
I have a strong intuition about how we’ll achieve both AGI and consciousness in machines. Keep in mind: it’s just …

Getting Into Short-form Video
Naabu for the win This is my first short-form video. Kind of random what I did for the first topic, …

A List of Timeless Concepts from the Ancient Myths
At least 10 times a month I find myself in a book and they make a …

Defensive Security is a Glacier, and That’s Ok
I think I just figured out why so many people burn out in defensive cybersecurity after a decade or two. …

How I Differentiate the Unsupervised Newsletter & Podcast
THOC — The Hierarchy of Content There are thousands of newsletters out there that hit you with the latest news, …

Topics, Insights, and Resources from the Neri Oxman and Lex Fridman Conversation
This conversation between Neri Oxman and Lex Fridman is one of the most beautiful discussions I’ve ever listened to. Rating …