
My First Tech


By: Dayan Ruben

About this listen

Reflecting on our first experience with technology is like stepping back into a moment of pure discovery. This podcast is from a software creator, for those shaping the tech world and for curious minds. Each episode dives into a new language, tool, or trend, offering practical insights and real-world examples to help developers navigate and innovate in today’s evolving landscape. Made with AI and curiosity using NotebookLM (notebooklm.google) by Dayan Ruben (dayanruben.com).
Episodes
  • The Illusion of Thinking: Do AI Models Really Reason?
    Jun 28 2025

    It looks incredibly impressive when a large language model explains its step-by-step thought process, giving us a window into its "mind." But what if that visible reasoning is a sophisticated illusion? This episode dives deep into a groundbreaking study on the new generation of "Large Reasoning Models" (LRMs)—AIs specifically designed to show their work.

    We explore the surprising and counterintuitive findings that challenge our assumptions about machine intelligence. Discover the three distinct performance regimes where these models can "overthink" simple problems, shine on moderately complex tasks, and then experience a complete "performance collapse" when things get too hard. We'll discuss the most shocking discoveries: why models paradoxically reduce their effort when problems get harder, and why their performance doesn't improve even when they're given the exact algorithm to solve a puzzle. Is AI's reasoning ability just advanced pattern matching, or are we on the path to true artificial thought?

    Reference:
    This discussion is based on the findings from the Apple Machine Learning Research paper, "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity."
    https://machinelearning.apple.com/research/illusion-of-thinking

    14 mins
  • Charting the Course for Safe Superintelligence
    May 10 2025

    What happens when AI becomes vastly smarter than humans? It sounds like science fiction, but researchers are grappling with the very real challenge of ensuring Artificial General Intelligence (AGI) is safe for humanity. Join us for a deep dive into the cutting edge of AI safety research, unpacking the technical hurdles and potential solutions. We explore the core risks – from intentional misalignment and misuse to unintentional mistakes – and the crucial assumptions guiding current research, like the pace of AI progress and the "approximate continuity" of its development. Learn about the key strategies being developed, including safer design patterns, robust control measures, and the concept of "informed oversight," as we navigate the complex balance between harnessing AGI's immense potential benefits and mitigating its profound risks.


    An Approach to Technical AGI Safety and Security: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf


    Google DeepMind AGI Safety Course: https://youtube.com/playlist?list=PLw9kjlF6lD5UqaZvMTbhJB8sV-yuXu5eW

    29 mins
  • Algorithms for Artificial Intelligence: Understanding the Building Blocks
    Apr 26 2025

    Ever tried to understand how AI actually learns, only to get lost in a sea of equations and jargon? This episode is your fast track through the fundamentals of machine learning, breaking down complex concepts into understandable nuggets.

    Drawing inspiration from Stanford course materials, we ditch the dense textbook approach and offer a clear, conversational deep dive into the core mechanics of AI learning. Join us as we explore:

      • Linear Predictors: The versatile workhorses of early ML, from classifying spam to predicting prices.

      • Feature Extraction: The art of turning raw data (like an email) into numbers the algorithm can understand.

      • Weights & Scores: How AI weighs different information (like ingredients in a recipe) to make a prediction using the dot product.

      • Loss Minimization & Margin: How do we measure when AI gets it wrong, and how does it use that feedback (like the concept of 'margin') to improve?

      • Optimization Powerhouses: Unpacking Gradient Descent and its faster cousin, Stochastic Gradient Descent (SGD) – the engines that drive the learning process.
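
    The dot-product scoring, margin, and SGD ideas above can be sketched in a few lines of Python. This is a toy illustration with a made-up two-feature "spam" dataset (plus a bias feature), using the hinge loss for the margin idea — it is not code from the episode or the Stanford notes:

    ```python
    def dot(w, x):
        # Score: the weighted sum of features -- the dot product from the episode
        return sum(wi * xi for wi, xi in zip(w, x))

    def train_sgd(data, lr=0.1, epochs=500):
        """Stochastic gradient descent on the hinge loss for a linear classifier.

        Each example is (features x, label y in {+1, -1}); the margin is
        y * (w . x), and we only update when the margin falls below 1.
        """
        w = [0.0] * len(data[0][0])
        for _ in range(epochs):
            for x, y in data:             # one example at a time: the "S" in SGD
                if y * dot(w, x) < 1:     # margin too small -> hinge loss is nonzero
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        return w

    # Toy spam data: features = [bias, has "free", has "meeting"], label +1 = spam
    data = [([1, 1, 0], +1), ([1, 1, 1], +1), ([1, 0, 1], -1), ([1, 0, 0], -1)]
    w = train_sgd(data)
    print(all(y * dot(w, x) > 0 for x, y in data))  # every example classified correctly
    ```

    Full (batch) gradient descent would average the gradient over the whole dataset before each update; SGD's per-example updates are noisier but far cheaper per step, which is why the episode calls it the faster cousin.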

    Whether you're curious about AI or need a refresher on the basics, this episode provides a solid foundation, explaining how machines learn without needing an advanced degree. Get ready to understand the building blocks of artificial intelligence!

    Stanford's Algorithms for Artificial Intelligence: https://web.stanford.edu/~mossr/pdf/alg4ai.pdf

    25 mins