Author: Mihail Turlakov
Description:
# initial theories of DL

#### motivation

2019: [The unreasonable effectiveness of deep learning in artificial intelligence](https://www.pnas.org/content/117/48/30033), Terrence J. Sejnowski

## [RG and Deep Learning](https://www.quantamagazine.org/deep-learning-relies-on-renormalization-physicists-find-20141204/)

> The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts

- [Learning hierarchical category structure in deep neural networks](https://stanford.edu/~jlmcc/papers/SaxeMcCGanguli13CogSciProc.pdf)
- [Stéphane Mallat](https://en.wikipedia.org/wiki/St%C3%A9phane_Mallat) was awarded the Milner RS Prize 2023 for the [wavelet+RG view](https://arxiv.org/abs/1601.04920)

## [Information Bottleneck method](https://knowen-production.s3.amazonaws.com/uploads/attachment/file/1999/The%2BInformation%2BBottleneck%2BMethod%2B-Tishby.pdf)

- 2015: [DL and Information Bottleneck](https://arxiv.org/pdf/1503.02406.pdf)
- [Information Bottleneck sheds insight into AI as well as the human brain](https://knowen-production.s3.amazonaws.com/uploads/attachment/file/2000/New%2BTheory%2BCracks%2BOpen%2Bthe%2BBlack%2BBox%2Bof%2BDeep%2BLearning%2B-%2BAI%2B-information%2Bbottleneck.pdf)
- [Learnability can be undecidable](https://www.nature.com/articles/s42256-018-0002-3)

> We describe simple scenarios where learnability cannot be proved nor refuted using the standard axioms of mathematics. The main idea is to prove an equivalence between learnability and compression.
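The trade-off that the Information Bottleneck papers above formalize can be stated compactly: compress the input X into a representation T while keeping T informative about the target Y, with a Lagrange multiplier β setting the balance. A sketch of the standard objective, in the notation of the Tishby–Pereira–Bialek paper:

```latex
% Information Bottleneck: choose a stochastic encoder p(t|x) minimizing
%   L[p(t|x)] = I(X;T) - beta * I(T;Y)
% I(X;T) penalizes the complexity of the representation T,
% while I(T;Y) rewards the predictive information T retains about Y.
\[
  \min_{p(t \mid x)} \; \mathcal{L}\bigl[p(t \mid x)\bigr]
  = I(X;T) \;-\; \beta \, I(T;Y),
  \qquad \beta > 0 .
\]
```

Small β favors maximal compression (T nearly independent of X); large β favors predictive accuracy at the cost of a more complex representation — this is the knob the deep-learning analyses linked above vary when they trace networks through the "information plane".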