Episodes

  • From Sensors to Solutions: The Future of Edge AI with Chad Lucien of Ceva
    Jul 3 2025

    At the crossroads of cutting-edge technology and practical innovation stands Ceva, a semiconductor IP powerhouse with a remarkable two-decade legacy. Powering nearly 20 billion devices worldwide and shipping over 2 billion annually, Ceva has emerged as a crucial enabler in the burgeoning edge AI ecosystem.

    What distinguishes Ceva in this competitive landscape is their holistic approach to edge computing. Rather than focusing solely on neural processing, they've strategically built solutions around what Chad Lucien describes as the three pillars of edge AI: connectivity, sensing, and inference. This comprehensive vision has positioned them as the industry's leading Bluetooth IP licensor while developing sophisticated DSP solutions and a scalable NPU portfolio that ranges from modest GOPS to an impressive 400 TOPS.

    The secret to Ceva's effectiveness lies in their deep integration of hardware and software expertise. "The software is becoming the definition of the product," notes Lucien, explaining how their deep learning applications team directly influences hardware specifications. This software-first perspective has created solutions tailored for low-power, small form factor devices across diverse applications. From earbuds and health trackers to consumer robots and smart appliances, Ceva's fully programmable solutions handle everything from neural network computation to DSP workloads and control code.

    Most exciting is Ceva's leadership in the Audio ML renaissance through their work with the EDGE AI FOUNDATION's Audio Working Group. As audio applications shift from traditional DSP implementations to neural strategies, we're witnessing transformative capabilities in speech enhancement, anomaly detection, sound identification, and edge-based natural language processing.

    Discover how Ceva is providing the essential "picks and shovels" for the AI gold rush and why collaboration remains the key to unlocking the full potential of intelligence at the edge. Subscribe to hear more partner stories shaping the future of edge AI!

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    17 mins
  • EDGE AI Partner: David Aronchick of Expanso
    Jun 26 2025

    The digital landscape is rapidly evolving beyond centralized cloud computing. In this illuminating conversation with David Aronchick, co-founder of Expanso, we explore the growing necessity of processing data right where it's generated—at the edge.

    Drawing from his impressive background as the first non-founding PM for Kubernetes at Google and his leadership in open-source AI strategy at Microsoft, David reveals how these experiences led him to tackle a persistent challenge: how do you leverage container technologies and ML models outside traditional data centers? While cloud platforms excel at centralized workloads, businesses increasingly need computing power in retail locations, manufacturing facilities, and smart city infrastructure.

    Expanso's elegantly named Bacalhau project (Portuguese for cod, a clever nod to "Compute Over Data") offers a solution by providing reliable orchestration of workloads across distributed locations. Their lightweight Go binary runs on virtually anything from Raspberry Pis to sophisticated edge servers, managing the delivery and execution of jobs while gracefully handling connectivity disruptions that would cause traditional systems to fail.

    David makes a compelling case for edge computing with a simple physical reality: even 100,000 years from now, the speed of light will still impose a 45-millisecond latency between LA and Boston. This unchangeable constraint, combined with data transfer costs and regulatory requirements, makes local processing increasingly essential. For organizations struggling with high telemetry bills, Expanso confidently promises at least 25% cost reduction—or they work for free.
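
    As a rough back-of-the-envelope check of that figure, here is a minimal sketch under two stated assumptions: a great-circle distance of roughly 4,200 km between Los Angeles and Boston, and light travelling through optical fiber at about two-thirds of its vacuum speed.

        # Sanity check of the LA-Boston latency floor quoted above.
        # Assumptions (not from the episode): ~4,200 km great-circle distance;
        # light in optical fiber travels at roughly 2/3 of its vacuum speed.
        DISTANCE_KM = 4_200
        C_VACUUM_KM_S = 299_792
        C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3

        one_way_ms = DISTANCE_KM / C_FIBER_KM_S * 1_000
        round_trip_ms = 2 * one_way_ms

        print(f"one-way:    {one_way_ms:.0f} ms")     # ~21 ms
        print(f"round trip: {round_trip_ms:.0f} ms")  # ~42 ms, close to the ~45 ms quoted

    Real routes add switching and routing overhead on top of this physical floor, which is why no amount of future engineering removes the case for processing data locally.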

    Whether you're managing satellite networks, underwater cameras for aquaculture, or thousands of retail locations, this conversation illuminates how the future of computing involves bringing intelligence to where data lives rather than constantly shipping bytes across networks. Join us to discover how this paradigm shift is making AI more effective in the physical world.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    22 mins
  • Bringing Generative AI to Your Pocket: The Future of Edge Computing
    Jun 19 2025

    A technological revolution is quietly unfolding in your pocket. Imagine your phone creating stunning images, understanding what its camera sees, and responding to complex questions—all without sending a single byte of data to the cloud. This isn't science fiction; it's Generative EDGE AI, and it's already here.

    We dive deep into this transformative trend that's bringing AI's creative powers directly to our devices. Building on the foundation laid by the TinyML movement, Generative EDGE AI represents a fundamental shift in how we'll interact with technology. The benefits are compelling: complete privacy as your data never leaves your device, lightning-fast responses without internet latency, independence from network connections, and significant cost savings from reduced cloud computing needs.

    The applications span far beyond convenience. For people with disabilities, it means having image captioning that works anywhere, even without internet. For photographers, it's like having a professional editor built right into your camera. In healthcare, it enables diagnostics while keeping sensitive patient data secure and accessible even in areas with poor connectivity.

    The technical achievements making this possible are equally impressive. Researchers have shrunk massive AI models to run efficiently on everyday devices, from visual question answering systems that respond in milliseconds to text-to-speech engines that sound remarkably natural. They're even making progress bringing text-to-image generation and small language models directly to smartphones.

    As we explore these breakthroughs, we consider the profound implications of truly intelligent devices that can learn, adapt, and make decisions autonomously. What happens when our technology not only understands but creates and acts independently? The silent AI revolution happening in our hands is set to transform our relationship with technology in ways we're just beginning to comprehend.

    Ready to understand the future that's already arriving? Listen now and glimpse the world where intelligence lives at your fingertips, not in distant server farms.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    27 mins
  • Audio AI on the Edge with Ceva
    Jun 12 2025

    Audio processing at the edge is undergoing a revolution as deep learning transforms what's possible on tiny, power-constrained devices. Daniel from Ceva takes us on a fascinating journey through the complete lifecycle of audio AI models—from initial development to real-world deployment on microcontrollers.

    We explore two groundbreaking applications that demonstrate the power of audio machine learning on resource-limited hardware. First, Environmental Noise Cancellation (ENC) addresses the critical need for clear communication in noisy environments. Rather than accepting the limitations of traditional approaches that require multiple microphones, Ceva's single-microphone solution leverages deep neural networks to achieve superior noise reduction while preserving speech quality—all with a model eight times smaller than conventional alternatives.

    The conversation then shifts to voice interfaces, where Text-to-Model technology is eliminating months of development time by generating keyword spotting models directly from text input. This innovation allows manufacturers to create, modify, or rebrand voice commands instantly without costly data collection and retraining cycles. Each additional keyword requires merely one kilobyte of memory, making sophisticated voice interfaces accessible even on the smallest devices.

    Throughout the discussion, Daniel reveals the technical challenges and breakthroughs involved in optimizing these models for production environments. From quantization-aware training and SVD compression to knowledge distillation and framework conversion strategies, we gain practical insights into making AI work effectively within severe computational constraints.
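
    To make one of those techniques concrete, here is a minimal, generic sketch of SVD-based compression of a single dense layer's weight matrix using NumPy. The layer size and retained rank are illustrative assumptions, not details of Ceva's actual pipeline.

        import numpy as np

        # Generic SVD compression: factor a dense weight matrix W (out x in)
        # into two thin matrices so the layer y = W @ x can be replaced by
        # y ~ A @ (B @ x) with far fewer parameters.
        rng = np.random.default_rng(0)

        # Illustrative random weights: a real trained layer is closer to
        # low-rank, so truncating its SVD costs much less accuracy than here.
        W = rng.standard_normal((256, 512))

        rank = 32                             # retained rank: the compression knob
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        A = U[:, :rank] * S[:rank]            # shape (256, 32)
        B = Vt[:rank, :]                      # shape (32, 512)

        original_params = W.size              # 131,072
        compressed_params = A.size + B.size   # 24,576 -- roughly 5x smaller

        x = rng.standard_normal(512)
        rel_err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
        print(original_params, compressed_params, round(float(rel_err), 3))

    Quantization-aware training and knowledge distillation attack the same constraint from different angles—lower numeric precision and smaller student models respectively—and in practice these methods are layered together before framework conversion for the target runtime.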

    Whether you're developing embedded systems, designing voice-enabled products, or simply curious about the future of human-machine interaction, this episode offers valuable perspective on how audio AI is becoming both more powerful and more accessible. The era of intelligent listening devices is here—and they're smaller, more efficient, and more capable than ever before.

    Ready to explore audio AI for your next project? Check out Ceva's YouTube channel for demos of these technologies in action, or join the EDGE AI FOUNDATION's Audio Working Group to collaborate with industry experts on advancing this rapidly evolving field.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    1 hr
  • Garbage In, Garbage Out - High-Quality Datasets for Edge ML Research
    Jun 5 2025

    The EDGE AI FOUNDATION's Datasets & Benchmarks Working Group highlights the rapid progress in neural networks, particularly in cloud-based applications like image recognition and NLP, which benefited greatly from large, high-quality datasets. However, the constrained nature of edge AI devices necessitates smaller, more efficient models, yet a lack of suitable datasets hinders progress and realistic evaluation in this area. To address this, the Foundation aims to create and maintain a repository of production-grade, diverse, and well-annotated datasets for tiny and edge ML use cases, enabling fair comparisons and the advancement of the field. They emphasize community involvement in contributing datasets, providing feedback, and establishing best practices for optimization. Ultimately, this initiative seeks to level the playing field for edge AI research by providing the necessary resources for accurate benchmarking and innovation.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    21 mins
  • Edge AI Investing Essentials
    May 29 2025

    The path to successful investment in edge AI requires far more than technical brilliance. In this revealing panel discussion, venture capitalists and corporate investors share what truly matters when deciding where to place their bets in the evolving edge computing landscape.

    At the heart of every investment decision lies a deceptively simple question: who is your customer? As Hans from Momentum Ventures bluntly states, "The first thing we're looking for is customers, second is customers, and third is customers." While many founders obsess over technology, successful investments begin with understanding whose job is changed by your solution and who will pay for that change.

    The conversation shifts to efficiency as "the new currency" in edge computing. David Wyatt, formerly of NVIDIA, highlights technologies achieving 100x greater efficiency than traditional approaches, pointing to innovations that challenge conventional silicon-based computing. Meanwhile, Murata's corporate venture team emphasizes material science innovations that enable more efficient processing, sensing, and power management at the edge.

    What makes an ideal founding team? The panel describes the powerful combination of a "hacker" (technical expert) and a "hustler" (business-focused leader) who together can bridge the gap between technological innovation and market demands. This complementary expertise proves especially critical in edge AI, where technical constraints meet real-world implementation challenges.

    The most sobering insights emerge when discussing startup failures. Running out of cash tops the list, often resulting from scaling too quickly or misallocating resources. One panelist cuts through the hype with brutal clarity: "You're not in business when you're spending money. You're in business when you're making money."

    Whether you're building, investing in, or partnering with edge AI companies, this discussion offers a roadmap for navigating an increasingly complex landscape where efficiency, customer focus, and strategic vision determine which innovations will ultimately survive and thrive.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    56 mins
  • Beyond the Edge: Cloud and AI Convergence
    May 22 2025

    Beyond the Edge, from the EDGE AI FOUNDATION, explores the future of edge computing by advocating for a shift in perspective. It suggests moving beyond the limitations of traditional IoT deployments by integrating advancements in edge AI, semiconductors, and connectivity. The author argues that the cloud will serve as a crucial "binding agent," enabling unified management and orchestration from the cloud down to edge devices. Instead of focusing on restrictive standards, the piece emphasizes the importance of developing best practices and fostering collaboration to accelerate the deployment and value of edge AI solutions. The ultimate vision is a future where AI-powered, connected silicon at the edge becomes the default, supported by cloud-based DevOps principles.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    14 mins
  • Investing In The Edge: A VC Panel from AUSTIN 2025
    May 15 2025

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    43 mins