This research proposes an Event-Based In-Memory Computing (EB-IMC) architecture specifically designed for processing Dynamic Vision Sensor (DVS) data using ReRAM crossbar arrays. The architecture features event-driven memory access, reconfigurable compute engines with mixed-signal processing, hierarchical temporal attention mechanisms, and a hybrid training methodology combining surrogate gradient descent with hardware-aware quantization. Preliminary analysis indicates a potential 10× improvement in energy efficiency and a 5× reduction in latency compared with conventional GPU processing, while maintaining competitive accuracy on object-recognition tasks across multiple event-based datasets.
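The event-driven principle underlying the architecture can be illustrated with a minimal sketch: DVS output is a sparse stream of per-pixel events, so computation is triggered only where activity occurs, while static regions cost nothing. The event layout, resolution, and event count below are illustrative assumptions, not values from the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 128          # DVS128-style resolution (assumed for illustration)
num_events = 500         # sparse stream: only ~3% of the 16,384 pixels fire

# Hypothetical event record: coordinates plus polarity (brightness up/down).
events = {
    "x": rng.integers(0, W, num_events),
    "y": rng.integers(0, H, num_events),
    "p": rng.choice([-1, 1], num_events),
}

# Frame-based processing touches every pixel each frame; event-driven
# processing touches only the pixels that fired.
frame_ops = H * W
event_ops = num_events

# Accumulate polarities into a surface only at active coordinates
# (np.add.at handles repeated indices correctly, unlike fancy-index +=).
surface = np.zeros((H, W), dtype=np.int32)
np.add.at(surface, (events["y"], events["x"]), events["p"])
```

For this assumed activity level the operation count drops by roughly 30×, which is the source of the sparsity-driven savings claimed above; real gains depend on scene dynamics.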
Key findings
The proposed EB-IMC architecture demonstrates potential for 10× energy efficiency improvement and 5× latency reduction compared to conventional frame-based GPU processing.
Event-driven memory access patterns eliminate redundant computations on static scene regions, matching the sparse nature of DVS output.
Reconfigurable ReRAM crossbar-based compute engines support both analog in-memory multiplication and digital error correction for robust inference.
The hybrid training methodology combining surrogate gradient descent with hardware-aware quantization ensures robust deployment on analog IMC hardware despite device variability.
The architecture addresses critical challenges including device variability, thermal management, and algorithm-hardware co-design for event-based vision systems.
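The analog in-memory multiplication in the findings above can be sketched as follows: weights are stored as crossbar conductances, input voltages drive the rows, and each column current is the dot product of conductances and voltages (Ohm's and Kirchhoff's laws), which an ADC then digitizes. Signed weights use a differential conductance pair here; the conductance range and ADC width are illustrative assumptions, not design parameters from the proposal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Signed weights mapped to a differential conductance pair (G+, G-).
W_mat = rng.uniform(-1, 1, size=(8, 16))   # 8 output columns, 16 input rows
g_max = 1e-4                               # max conductance in siemens (assumed)
G_pos = np.clip(W_mat, 0, None) * g_max
G_neg = np.clip(-W_mat, 0, None) * g_max

v_in = rng.uniform(0, 0.2, size=16)        # input row voltages (V)

# Analog MVM: each column current is sum_j G[i, j] * V[j];
# the differential readout recovers the signed product.
i_out = G_pos @ v_in - G_neg @ v_in

# Ideal (digital) result for comparison.
y_ref = (W_mat @ v_in) * g_max

# Uniform 8-bit ADC quantization of the column currents (assumed).
i_max = np.abs(i_out).max()
lsb = 2 * i_max / (2**8 - 1)
i_digital = np.round(i_out / lsb) * lsb

rel_err = np.abs(i_digital - y_ref).max() / i_max
```

The quantization step bounds the readout error at half an LSB per column, which is the kind of bounded analog error the digital correction path mentioned above would then handle.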
Limitations & open questions
ReRAM devices exhibit significant cycle-to-cycle and device-to-device conductance variations that can degrade inference accuracy unless explicitly mitigated.
Dense crossbar arrays concentrate power dissipation, and ReRAM conductance states are temperature-sensitive, so self-heating can shift programmed weights during operation; concrete thermal-management strategies have yet to be specified.
The integration of IMC architectures with event-based vision processing remains largely unexplored, creating uncertainties in optimal network topology and training procedures for this specific hardware paradigm.
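The variability limitation can be made concrete with a small simulation: perturb each programmed conductance with a multiplicative device-to-device spread plus additive read noise, and measure how far the analog matrix-vector product drifts from the ideal result. The noise model and parameter values below are illustrative assumptions; noise-aware training of the kind described in the hybrid methodology typically injects this same perturbation during training so the network learns to tolerate it.

```python
import numpy as np

rng = np.random.default_rng(2)

W_ideal = rng.uniform(-1, 1, size=(64, 128))  # illustrative layer weights
x = rng.uniform(0, 1, size=128)               # illustrative input activations

sigma_d2d = 0.10   # ~10% device-to-device conductance spread (assumed)
sigma_c2c = 0.02   # ~2% cycle-to-cycle read fluctuation (assumed)

def noisy_mvm(W, x, rng):
    """MVM through a crossbar with a simple assumed variability model:
    log-normal multiplicative error per device, plus additive read noise
    scaled to the typical output magnitude."""
    W_prog = W * rng.lognormal(mean=0.0, sigma=sigma_d2d, size=W.shape)
    read_noise = rng.normal(0.0, sigma_c2c, size=W.shape[0]) * np.abs(W @ x).mean()
    return W_prog @ x + read_noise

y_ideal = W_ideal @ x
y_noisy = noisy_mvm(W_ideal, x, rng)
rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
```

Under these assumed parameters the output error is on the order of the device spread, which is large enough to matter for classification margins and motivates the hardware-aware training and error-correction mechanisms described above.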