
Learning with CIM STT-MRAM

2021 AILC: Accelerate On-chip Incremental Learning with Compute-in-Memory Technology Yandong Luo; Shimeng Yu


In this paper, we propose AILC, a compute-in-memory (CIM) accelerator for on-chip incremental learning built on STT-MRAM technology.

System-level benchmarking shows that AILC achieves 147×, 3.7×~28.7×, and 2.05×~2.9× higher energy efficiency than an Nvidia Titan-V GPU, RRAM-based CIM accelerators, and edge TPU/GPU, respectively. Compared to the baseline, the proposed hardware resource assignment protocol improves AILC's throughput by 2.0×~2.2× on average, resulting in 4.1×~21.4× higher throughput than edge TPU/GPU.
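For context on the workload: incremental learning updates an already-deployed model as new classes or samples arrive, rather than retraining from scratch, which is what makes an on-chip training accelerator attractive. Below is a minimal replay-based sketch in Python/PyTorch; the toy model, incremental_update, and BUFFER_CAP are illustrative assumptions, not the training scheme used in AILC.

# Minimal sketch of replay-based incremental learning (hypothetical setup,
# NOT the AILC method): each update trains on the new mini-batch mixed with
# a small replay buffer of past examples to limit forgetting.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []   # a few (x, y) pairs retained from earlier data
BUFFER_CAP = 256

def incremental_update(new_x, new_y):
    """One incremental step: train on new data plus replayed old data."""
    batch = list(zip(new_x, new_y)) + random.sample(
        replay_buffer, min(len(replay_buffer), len(new_x)))
    xs = torch.stack([b[0] for b in batch])
    ys = torch.stack([b[1] for b in batch])
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    opt.step()
    # Reservoir-style insertion keeps the buffer bounded.
    for pair in zip(new_x, new_y):
        if len(replay_buffer) < BUFFER_CAP:
            replay_buffer.append(pair)
        else:
            replay_buffer[random.randrange(BUFFER_CAP)] = pair
    return loss.item()

# e.g. a stream of mini-batches arriving on-device
for _ in range(5):
    x = torch.randn(16, 32)
    y = torch.randint(0, 10, (16,))
    incremental_update(x, y)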

Related papers by Yandong Luo:

2020 Benchmark Non-volatile and Volatile Memory Based Hybrid Precision Synapses for In-situ Deep Neural Network Training

2020 Accelerating Deep Neural Network In-Situ Training With Non-Volatile and Volatile Memory Based Hybrid Precision Synapses

2020 A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out Yandong Luo

2019 DNN+NeuroSim: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators with Versatile Device Technologies Xiaochen Peng

https://github.com/xialeiliu/Awesome-Incremental-Learning

2016 Improving Read Performance of STT-MRAM Based Main Memories through Smash Read and Flexible Read Lide Duan

2021 Designing Efficient and High-performance AI Accelerators with Customized STT-MRAM Kaniz Mishty

Compared to an SRAM-based implementation, the STT-AI accelerator achieves 75% area and 3% power savings at iso-accuracy. Furthermore, with a relaxed bit error rate and a negligible AI accuracy trade-off, the designed STT-AI Ultra accelerator achieves 75.4% and 3.5% savings in area and power, respectively, over regular SRAM-based accelerators.
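The "relaxed bit error rate" trade-off can be explored in software by injecting random bit flips into quantized weights and checking how many weights are corrupted (and, with a real model, how accuracy degrades). A small NumPy sketch under that assumption follows; inject_bit_errors is a hypothetical helper for illustration, not code from the paper.

# Hedged sketch: flip each stored bit of int8 weights independently with
# probability `ber`, mimicking misreads from an STT-MRAM array operated at
# a relaxed bit error rate.
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_errors(weights_int8: np.ndarray, ber: float) -> np.ndarray:
    """Return a copy of the weights with random bit flips at rate `ber`."""
    w = weights_int8.view(np.uint8).copy()   # reinterpret so sign bits flip too
    for bit in range(8):
        flips = rng.random(w.shape) < ber    # which cells misread this bit
        w[flips] ^= np.uint8(1 << bit)
    return w.view(np.int8)

w = rng.integers(-128, 128, size=10000, dtype=np.int8)
for ber in (1e-6, 1e-4, 1e-2):
    w_err = inject_bit_errors(w, ber)
    frac = np.mean(w_err != w)
    print(f"BER={ber:.0e}: {frac:.4%} of weights corrupted")

In a full study one would re-run inference with the corrupted weights and sweep `ber` until accuracy drops below the acceptable threshold, which is the kind of analysis behind the STT-AI Ultra area/power numbers above.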