Speaker
Description
Large Language Models (LLMs), built on transformer-based deep learning architectures, are increasingly being explored as high-level controllers. With standards like the Model Context Protocol (MCP), LLMs can now orchestrate and interface with external software, including radio stacks such as GNU Radio. Our initial investigation focused on using LLMs to dynamically manage signal processing chains in GNU Radio, with the goal of creating intelligent and adaptable communication systems.
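To make this orchestration pattern concrete, here is a minimal sketch of how a controller might retune a running GNU Radio flowgraph through a thread-safe block setter. The flowgraph and the `set_frequency_tool` wrapper are illustrative assumptions rather than the gnuradio_mcp interface; only the GNU Radio calls themselves (`gr.top_block`, `analog.sig_source_c`, `blocks.null_sink`) are standard API.

```python
from gnuradio import gr, analog, blocks


class ToneFlowgraph(gr.top_block):
    """Minimal flowgraph: a complex sine source feeding a null sink."""

    def __init__(self, freq_hz=1e3, samp_rate=32e3):
        gr.top_block.__init__(self, "tone")
        self.samp_rate = samp_rate
        self.src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, freq_hz, 1.0)
        self.sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(self.src, self.sink)


def set_frequency_tool(tb: ToneFlowgraph, freq_hz: float) -> str:
    """Hypothetical MCP tool body: validate and apply an LLM's request."""
    if not 0.0 < freq_hz < tb.samp_rate / 2:  # reject requests beyond Nyquist
        return f"rejected: {freq_hz} Hz out of range"
    tb.src.set_frequency(freq_hz)  # sig_source setters are safe while running
    return f"ok: source retuned to {freq_hz} Hz"


if __name__ == "__main__":
    tb = ToneFlowgraph()
    tb.start()
    print(set_frequency_tool(tb, 2.5e3))  # stands in for an LLM tool call
    tb.stop()
    tb.wait()
```

In an MCP setup, a function like this would be registered as a tool so the LLM issues structured, validated calls while the flowgraph keeps streaming; the LLM never touches samples directly.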
Through this process, we observed that while LLMs are powerful for orchestration, they are sample-inefficient and ill-suited to low-level interactions or deployment on edge devices. This motivated a shift toward alternative learning approaches and led to the new learning framework introduced below.
In this paper, we introduce Hebbian Cellular Automata (HCA), a novel learning framework that leverages modulation signals themselves as learning signals. Inspired by principles from neuroscience and communication theory, HCA adapts locally using signal-driven rules rather than backpropagation. This enables decentralized adaptation, emergent behavior, and the construction of fully recurrent neural networks that remain stable and efficient. The approach is well-suited to parallel architectures and integrates naturally into real-time signal processing environments such as GNU Radio.
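Because the update rule is local, a few lines of NumPy are enough to illustrate the flavor of such signal-driven adaptation. The sketch below is an assumed instantiation, not the HCA rule itself: a ring of locally coupled cells is driven by random ±1 symbols (a stand-in for a modulation signal), and each lateral weight adapts with Oja's rule, a Hebbian rule whose built-in decay term keeps the weights bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32                       # cells on a ring (each couples to its left neighbor)
eta = 0.01                   # local learning rate
w = rng.normal(0, 0.1, N)    # one lateral weight per cell
x = np.zeros(N)              # cell activations

# Toy drive: random +/-1 symbols standing in for a BPSK modulation signal.
symbols = rng.choice([-1.0, 1.0], size=500)

for s in symbols:
    pre = np.roll(x, 1)          # neighbor activations (purely local coupling)
    x = np.tanh(s + w * pre)     # bounded nonlinearity keeps the recurrence stable
    # Oja's rule: Hebbian correlation term minus a decay that bounds weight growth.
    w += eta * x * (pre - x * w)

print("weight range after adaptation:", w.min(), w.max())
```

Every update here reads only a cell and its immediate neighbor, which is what makes this family of rules amenable to parallel hardware and to block-by-block integration in a GNU Radio flowgraph.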
We present experimental results demonstrating the learning behavior of HCA in signal processing contexts, and discuss its role as a complementary mechanism alongside deep learning-based orchestration in future wireless architectures.
| Talk Length | 15 Minutes |
|---|---|
| Link to Open Source Code | https://github.com/SimpliRF/gnuradio_mcp |