8 results · ● Live web index
flagshippioneering.com article

A Timeline of Deep Learning | Flagship Pioneering

https://www.flagshippioneering.com/timelines/a-timeline-of-deep-learning

Research on neural networks stalls after MIT’s Marvin Minsky and Seymour Papert argue, in a book called “Perceptrons,” that the method would be too limited to be useful even if neural networks had many more layers of artificial neurons than Rosenblatt’s machine did. The backpropagation algorithm had been applied in computers in the 1970s, but now researchers put it to wider use in neural networks. Google researcher Ian Goodfellow plays two neural networks off each other to create what he calls a “generative adversarial network.” One network is programmed to generate data—such as an image of a face—while the other, known as the discriminator, evaluates whether it’s plausibly real. A deep learning system called AlphaGo beats human Go champion Lee Sedol after absorbing thousands of examples of past games played by people. The same team develops AlphaFold, a set of deep learning and generative neural networks to predict the structure of proteins from their amino acid sequences.
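The generator/discriminator setup described above can be sketched in a few lines of numpy. This is a minimal illustrative toy, not the method from any of the articles: the "real" data is a 1-D Gaussian standing in for images, and the affine generator, logistic-regression discriminator, learning rate, and step count are all assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 0.5) -- a toy stand-in for real images.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: an affine map from noise z ~ N(0, 1) to a sample (2 parameters).
g = {"w": 1.0, "b": 0.0}
# Discriminator: logistic regression on a scalar input (2 parameters).
d = {"w": 0.0, "b": 0.0}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_out(x):
    return sigmoid(d["w"] * x + d["b"])

lr = 0.05
for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(32)
    z = rng.normal(size=32)
    x_fake = g["w"] * z + g["b"]
    err_real = d_out(x_real) - 1.0   # binary cross-entropy gradient, label 1
    err_fake = d_out(x_fake) - 0.0   # binary cross-entropy gradient, label 0
    d["w"] -= lr * (err_real * x_real + err_fake * x_fake).mean()
    d["b"] -= lr * (err_real + err_fake).mean()

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(size=32)
    x_fake = g["w"] * z + g["b"]
    p = d_out(x_fake)
    err = (p - 1.0) * d["w"]         # chain rule through the discriminator
    g["w"] -= lr * (err * z).mean()
    g["b"] -= lr * err.mean()

# Generated samples after the two networks have played against each other.
samples = g["w"] * rng.normal(size=1000) + g["b"]
print("mean of generated samples:", samples.mean())
```

The two alternating updates are the adversarial game: the discriminator's loss improves at the generator's expense and vice versa, which is why GAN training is famously delicate even in this 1-D toy.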

vrungta.substack.com article

Timeline of Deep Learning's Evolution - by Vikash Rungta

https://vrungta.substack.com/p/timeline-of-deep-learnings-evolution

Their breakthroughs and the work of others like Fei-Fei Li, Yann LeCun, and the team at Google Brain brought deep learning into the limelight and made AI a transformative force of our time. 2024: Geoffrey Hinton and John Hopfield receive the Nobel Prize in Physics for their foundational work in machine learning with artificial neural networks. The story of deep learning begins in the 1980s, when researchers like John Hopfield and Geoffrey Hinton started exploring the potential of neural networks. The Birth of a New Era: The Rise of ImageNet and AlexNet. The true turning point for AI came in the mid-2000s, when Fei-Fei Li, a computer science professor, recognized the importance of large datasets for effective machine learning. Open-source frameworks like TensorFlow and PyTorch further democratized AI, allowing anyone, from academic researchers to hobbyists, to develop deep learning models. Hinton’s work on backpropagation provided the framework that made deep learning practical, while Hopfield’s contributions to energy-based models reshaped the understanding of how learning processes could be modeled computationally.

sebastianraschka.com article

A Brief Summary of the History of Neural Networks and ...

https://sebastianraschka.com/pdf/lecture-notes/stat453ss21/L02_dl-history_sli…

Neural Networks and Deep Learning -- A Timeline. Sebastian Raschka, STAT 453: Intro to Deep Learning. Widrow and Hoff's ADALINE (1960): a nicely differentiable neuron model. Graph neural networks. Large-scale language models: model sizes of language models from 2018–2020 (credit: State of AI Report 2020). "Transformer is data-hungry in nature e.g., a large-scale dataset like ImageNet [14 million images] is not enough to train vision transformer from scratch so [10] proposes to ..." (https://arxiv.org/abs/2101.01169). Next lecture: The Perceptron.

en.wikipedia.org article

History of artificial neural networks - Wikipedia

https://en.wikipedia.org/wiki/History_of_artificial_neural_networks

Contents: LSTM; Deep learning; Transformer; Notes. ... popularized backpropagation.[31] ... They reported up to 70 times faster training.[85]

dataversity.net article

A Brief History of Deep Learning - Dataversity

https://www.dataversity.net/articles/brief-history-deep-learning/

Deep learning is a more evolved branch of machine learning that uses layers of algorithms to process data and imitate the thinking process, or to develop abstractions. The history of deep learning can be traced back to 1943, when Walter Pitts and Warren McCulloch created a computer model based on the neural networks of the human brain. The use of top-down connections and new learning methods have allowed a variety of neural networks to be realized. Backpropagation, the use of errors in training deep learning models, evolved significantly in 1970. This is also when the second AI winter (1985–90s) kicked in, which also affected research on neural networks and deep learning. The next significant evolutionary step for deep learning took place in 1999, when computers started becoming faster at processing data and GPUs (graphics processing units) were developed. The free-spirited project explored the difficulties of “unsupervised learning.” Deep learning uses “supervised learning,” meaning the convolutional neural net is trained using labeled data (think images from ImageNet).
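The snippet's one-line definition of backpropagation, the use of errors in training, can be made concrete with a minimal numpy sketch: a two-layer network trained on XOR, with the output error propagated backward through each layer by the chain rule. The task, network size, seed, and learning rate are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic task a single-layer perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    p = sigmoid(h @ W2 + b2)      # predictions
    losses.append(float(((p - y) ** 2).mean()))

    # Backward pass: the "use of errors" -- propagate dLoss layer by layer.
    dp = 2 * (p - y) / len(X)     # gradient of mean squared error
    dz2 = dp * p * (1 - p)        # through the output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T               # error passed back to the hidden layer
    dz1 = dh * h * (1 - h)        # through the hidden sigmoid
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Gradient descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("loss went from", losses[0], "to", losses[-1])
```

Each backward line is just the chain rule applied to the corresponding forward line, which is why the technique scales to arbitrarily deep stacks of layers.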

import.io news

History of Deep Learning

https://www.import.io/post/history-of-deep-learning

But instead of trying to grasp the intricacies of the field – which could be an ongoing and extensive series of articles unto itself – let’s just take a look at some of the major developments in the history of machine learning (and by extension, deep learning and AI). 1965 – The first working deep learning networks. Using Microsoft’s neural-network software on its XC50 supercomputers with 1,000 Nvidia Tesla P100 graphics processing units, they can perform deep learning tasks on data in a fraction of the time they used to take – hours instead of days.

techrxiv.org article

Deep Learning: Historical Overview from Inception to ...

https://www.techrxiv.org/users/778163/articles/912792-deep-learning-historica…

This study aims to provide a historical narrative of deep learning, tracing its origins from the cybernetic era to its current state-of-the-art status.
