NLPIR 2025 Tutorial
Speakers: Prof. Herwig Unger, FernUniversität in Hagen, Germany
Dr. Mario Kubek, FernUniversität in Hagen, Germany
Time: 14:00, December 12, 2025
Registration is required (fee: 30 USD).
Registration link: https://www.zmeeting.org/register/NLPIR2025
Brain-inspired Computing (for Natural Language Processing)
The human brain remains the most powerful, efficient, and adaptive computing system known, operating on a fraction of the power of modern supercomputers. Brain-inspired computing is an emerging paradigm that seeks to leverage the brain's computational principles—such as event-driven processing, synaptic plasticity, and sparse coding—to overcome the limitations of traditional von Neumann architectures. This tutorial provides a comprehensive introduction to this rapidly evolving field. We will begin by exploring key neurobiological inspirations, including the dynamics of neurons and foundational learning rules like Hebb's rule ("cells that fire together, wire together") and long-term potentiation.
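Hebb's rule, mentioned above, can be illustrated with a few lines of code. The following is a minimal sketch, not part of the tutorial material: the learning rate, vector sizes, and the small baseline drive are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Hebbian update: delta_w = eta * pre * post
# ("cells that fire together, wire together")
eta = 0.1                          # learning rate (arbitrary)
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity pattern
w = np.zeros(3)                    # synaptic weights, initially silent

for _ in range(10):
    post = float(w @ pre)          # postsynaptic response
    post = max(post, 0.1)          # small baseline drive so learning can start
    w += eta * pre * post          # weights grow only for co-active pairs

print(w)  # the weight for the inactive input stays at zero
```

Note how the update strengthens exactly those connections whose inputs are active together with the output, which is the core intuition behind long-term potentiation.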
While current deep learning models like Transformers have achieved remarkable success, we will clarify why their massive data and energy requirements represent a fundamental limit to achieving sustainable and robust intelligence. This critique sets the stage for a deeper exploration of intelligence itself. We will then introduce a leading theoretical framework for cortical computation: the memory-prediction model and the Thousand Brains Theory, as championed by Jeff Hawkins and others. This framework posits that the brain's core operation is not mere pattern classification, but a continuous process of maintaining temporal sequences and making predictions based on a model of the world—a model built through movement and sensory input, and fundamentally reliant on sparse, energy-efficient representations.
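The sparse representations mentioned above can be made concrete with a toy sparse distributed representation (SDR), as used in Hawkins-style models. The sizes below (2048 bits, ~2% active) are common illustrative values, not figures from the tutorial.

```python
import numpy as np

# Toy sparse distributed representations: long binary vectors with few
# active bits. Two random SDRs almost never share many active bits, which
# makes accidental collisions between concepts rare.
rng = np.random.default_rng(42)

n, k = 2048, 40                    # 2048 bits, ~2% active (sparse)

def random_sdr():
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=k, replace=False)] = True
    return v

a, b = random_sdr(), random_sdr()
overlap = int(np.count_nonzero(a & b))
print(overlap)                     # expected overlap is only about k*k/n < 1 bit
```

Because each vector is mostly zeros, storing and comparing such codes is cheap, which is one reason sparse coding is tied to the brain's energy efficiency.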
Finally, we will bridge theory and practice by demonstrating how this brain-inspired perspective can be applied to Natural Language Processing (NLP). We will explore novel architectural concepts that move beyond static embeddings and dense attention mechanisms, incorporating principles like predictive coding and reference frames to create models that learn more continuously, robustly, and with greater contextual understanding. This tutorial will equip attendees with a new lens through which to view the future of machine intelligence.
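The predictive-coding principle mentioned above can be sketched in miniature: a model makes a prediction, and the prediction *error* drives the weight update. This is a deliberately simplified linear sketch under assumed dimensions and learning rate; it is not the architecture presented in the tutorial.

```python
import numpy as np

# Minimal predictive-coding-style loop: learn to predict the next
# observation from the current one by repeatedly reducing surprise.
rng = np.random.default_rng(1)

d = 8
W = rng.normal(scale=0.1, size=(d, d))  # simple linear predictor (assumption)
lr = 0.05                                # learning rate (assumption)

x = rng.normal(size=d)       # current input representation
target = rng.normal(size=d)  # the observation the model should predict

for _ in range(200):
    pred = W @ x
    error = target - pred               # prediction error signal
    W += lr * np.outer(error, x)        # update weights to reduce the error

final_err = float(np.linalg.norm(target - W @ x))
print(final_err)                        # error shrinks toward zero
```

The point of the sketch is the control flow, predict, compare, correct, which contrasts with training regimes that only classify static inputs.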