
Are Binary NNs good enough?

Ternary Neural Networks

2016 Ternary Weight Networks - Zhang, Liu

2017 Trained Ternary Quantization - Zhu, Han, Mao, Dally

2017 - a video lecture by Han: Hardware and Software Co-design
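
To make the ternary idea concrete, here is a minimal numpy sketch of the TWN-style ternarization rule from the Ternary Weight Networks paper: weights are mapped to alpha * {-1, 0, +1} using the approximate threshold Delta ~ 0.7 * mean(|W|) and a per-layer scale alpha. The function name and the toy usage are made up for illustration.

```python
import numpy as np

def ternarize_twn(W):
    """TWN-style ternarization: map full-precision weights W to alpha * {-1, 0, +1}.

    Uses the approximate threshold Delta ~ 0.7 * mean(|W|) and the per-layer
    scale alpha = mean of |W_i| over the weights that exceed the threshold.
    """
    delta = 0.7 * np.mean(np.abs(W))          # approximate optimal threshold
    mask = np.abs(W) > delta                  # entries mapped to +/-1
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    W_t = np.zeros_like(W)
    W_t[W > delta] = 1.0
    W_t[W < -delta] = -1.0
    return alpha * W_t

# toy usage on random weights
W = np.random.randn(64, 64).astype(np.float32)
W_ternary = ternarize_twn(W)
```

Trained Ternary Quantization (Zhu, Han, Mao, Dally) replaces the statistics-based scale with two scaling factors, one for positive and one for negative weights, learned during training.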

Motivation

Take a look at Fig. 1 in (Zhang, Liu): the accuracy of the BNN saturates at a level that is noticeably lower than that of full-precision and Ternary NNs.

A conclusion: Binary NNs are probably not good enough for some essential reasons (structural reasons that are not yet clear?),

but 2-bit (weights and activations) cells are good enough.

Collapsing the weights and the unlimited range of ReLU-generated activations into four discrete bins, as required for 2-bit inference computations, causes large accuracy losses for deep learning inference.
...
Combining the PACT and SAWB advances allows us to perform deep learning inference computations with high accuracy down to 2-bit precision.

a short summary from IBM (2018): Highly Accurate Deep Learning Inference with 2-bit Precision

The proposed scheme achieves zero accuracy degradation for AlexNet quantized down to 2-bits for weights and activations without requiring any increase in the network size
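
The PACT part of the scheme is easy to sketch: the unbounded ReLU is replaced by clipping at a learnable level alpha, and the clipped range [0, alpha] is quantized uniformly to k bits. The snippet below is a rough numpy illustration of that forward pass only; the training of alpha (straight-through gradient) and the SAWB weight-binning side are omitted, and the names are made up.

```python
import numpy as np

def pact_activation(x, alpha, k=2):
    """PACT-style activation quantization (forward pass only).

    Clip activations to [0, alpha] (alpha is a learnable clipping level in the
    paper) and quantize the clipped value uniformly to k bits.
    """
    levels = 2 ** k - 1
    y = np.clip(x, 0.0, alpha)                      # bounded ReLU
    return np.round(y * levels / alpha) * alpha / levels

# with k=2 the activations can only take the four values {0, alpha/3, 2*alpha/3, alpha}
a = pact_activation(np.random.randn(8) * 2.0, alpha=1.5, k=2)
```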

hardware: 9.1 A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training, 102.4TOPS INT4 Inference and Workload-Aware Throttling

  • Particular techniques

2017 Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations - Hubara, Courbariaux, et al.

1-bit weights and 2-bit activations achieve decent accuracy
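
A generic sketch of what such a layer computes (scaled binary weights times uniformly quantized 2-bit activations); this illustrates the general recipe, not the exact quantizers used in the paper.

```python
import numpy as np

def binarize_weights(W):
    """1-bit weights: alpha * sign(W), with a per-layer scale alpha = mean(|W|)."""
    alpha = np.abs(W).mean()
    return alpha * np.sign(W)

def quantize_activations_2bit(x):
    """2-bit activations: clip to [0, 1] and round to 4 uniform levels."""
    return np.round(np.clip(x, 0.0, 1.0) * 3) / 3

# toy quantized layer: matrix-vector product with 1-bit weights and 2-bit activations
W = np.random.randn(16, 32)
x = np.random.rand(32)
y = binarize_weights(W) @ quantize_activations_2bit(x)
```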

2018 Two-Step Quantization for Low-bit Neural Networks - Wang, Hu, et al.

Questions

  • Can VMM (vector-matrix multiplication) via analog crossbar circuitry be extended easily to 2-bit cells? (See the bit-slicing sketch after this list.)

  • Is building a microcontroller on STT-MRAM bits a good idea? Stochastic computing would be required here to improve the signal-to-noise ratio.

  • Is it possible to build VMM via crossbar circuitry on standard DRAM?
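
On the first question, the usual answer in the literature is bit slicing: a multi-bit weight is split into 2-bit slices, each slice lives in its own crossbar (or column group), and the digital periphery shifts and adds the partial dot products. The sketch below is only a numerical illustration of that decomposition with made-up names; signed weights, ADC resolution, and device noise are ignored.

```python
import numpy as np

def crossbar_vmm_bit_sliced(W_int, x, cell_bits=2, total_bits=8):
    """Bit-sliced VMM: emulate multi-bit weights on 2-bit crossbar cells.

    W_int holds unsigned total_bits-wide integer weights. Each cell_bits-wide
    slice is treated as one crossbar whose analog dot product is read out,
    then the partial results are combined by shift-and-add.
    """
    n_slices = total_bits // cell_bits
    acc = np.zeros(W_int.shape[0], dtype=np.int64)
    for s in range(n_slices):
        w_slice = (W_int >> (cell_bits * s)) & ((1 << cell_bits) - 1)  # one 2-bit slice
        partial = w_slice @ x                                          # one crossbar's VMM
        acc += partial << (cell_bits * s)                              # digital shift-and-add
    return acc

# the shift-and-add reconstruction matches the full-precision integer VMM exactly
W_int = np.random.randint(0, 256, size=(4, 8), dtype=np.int64)
x = np.random.randint(0, 16, size=8, dtype=np.int64)
assert np.array_equal(crossbar_vmm_bit_sliced(W_int, x), W_int @ x)
```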