Convolutional neural networks need accuracy

Nokia
Published 26 January 2024, 12:17
Eliminating numerical instability from convolutional neural networks’ equations

Convolutional neural networks can unlock extraordinary tools for image and video coding, but the limited precision of floating-point arithmetic leaves their computations prone to numerical instability. Our post-training quantization technique stops that error accumulation in its tracks, dividing operations between the integer and floating-point domains for maximum numerical stability. See how this technique can deliver uncompromised deep learning performance across a variety of platforms.
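
To give a feel for what splitting work between the two domains can look like, here is a minimal sketch of post-training quantization, assuming a simple symmetric int8 scheme. The function names and the scheme itself are illustrative assumptions, not the method described in the whitepaper: sums are accumulated exactly in the integer domain, and a single floating-point rescale is applied at the end.

```python
# Illustrative sketch only -- a generic symmetric int8 post-training
# quantization, not the technique from the Nokia whitepaper.
import numpy as np

def quantize_tensor(t, num_bits=8):
    """Map a float tensor to integers with a per-tensor scale factor."""
    qmax = 2 ** (num_bits - 1) - 1                      # 127 for int8
    scale = np.abs(t).max() / qmax                      # float scale
    q = np.clip(np.round(t / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def quantized_dot(x_q, x_scale, w_q, w_scale):
    """Accumulate in the integer domain, rescale once in floating point."""
    acc = x_q.astype(np.int64) @ w_q.astype(np.int64)   # exact integer sum
    return acc * (x_scale * w_scale)                    # single float multiply

# Usage: quantize activations and weights, compare against the float result.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64)).astype(np.float32)
w = rng.standard_normal((64, 32)).astype(np.float32)
x_q, x_s = quantize_tensor(x)
w_q, w_s = quantize_tensor(w)
y_q = quantized_dot(x_q, x_s, w_q, w_s)
print(np.abs(y_q - x @ w).max())  # small quantization error, no accumulation drift
```

Because the inner products are computed over integers, the result is bit-exact on any platform; only the final rescale touches floating point, which keeps rounding behavior contained to one well-understood step.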

Ready for better machine learning performance? Take a look at the whitepaper by Honglei Zhang, Nam Le, Francesco Cricri, Jukka Ahonen, and Hamed Rezazadegan Tavakoli: shorturl.at/enwzZ