sebastianraschka.com article

[PDF] A Brief Summary of the History of Neural Networks and Deep Learning

https://sebastianraschka.com/pdf/lecture-notes/stat479ss19/L02_dl-history_sli…

Mathematical formulation of a biological neuron; it could solve AND, OR, and NOT problems. Sebastian Raschka, STAT 479: Deep Learning, SS 2019. Neural Networks and Deep Learning -- a timeline: Frank Rosenblatt's Perceptron (1957); Hebb, D. [Figure: ADALINE, an adaptive linear neuron with manually adapted synapses.] [Figure: "+" and "-" data points.] Minsky and Papert (1969), in the book "Perceptrons": perceptrons (and ADALINEs) could not solve XOR problems! A second "AI winter" in the late 1990s and 2000s, probably due to the popularity of Support Vector Machines and Random Forests; neural networks also remained expensive to train until GPUs came into play (data from ~2017). Krizhevsky, A., Sutskever, I., & Hinton, G.
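The XOR limitation mentioned in the snippet is easy to reproduce. Below is a minimal sketch (my own illustration, not from the lecture notes) of the classic perceptron learning rule: trained on AND, which is linearly separable, it reaches perfect accuracy; trained on XOR, no setting of the weights can.

```python
# Single-layer perceptron: learns AND (linearly separable) but not XOR.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning rule; the last weight is the bias."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi          # update only on mistakes
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    preds = (Xb @ w > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

w_and = train_perceptron(X, y_and)
w_xor = train_perceptron(X, y_xor)
print(accuracy(w_and, X, y_and))  # 1.0: AND is learned exactly
print(accuracy(w_xor, X, y_xor))  # stays below 1.0: no linear separator exists
```

The perceptron convergence theorem guarantees the AND case terminates with a perfect separator; for XOR the updates cycle forever, which is exactly the objection Minsky and Papert raised.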

flagshippioneering.com article

A Timeline of Deep Learning | Flagship Pioneering

https://www.flagshippioneering.com/timelines/a-timeline-of-deep-learning

Research on neural networks stalls after MIT’s Marvin Minsky and Seymour Papert argue, in a book called “Perceptrons,” that the method would be too limited to be useful even if neural networks had many more layers of artificial neurons than Rosenblatt’s machine did. The backpropagation algorithm had been applied in computers in the 1970s, but now researchers put it to wider use in neural networks. Google researcher Ian Goodfellow plays two neural networks off each other to create what he calls a “generative adversarial network.” One network is programmed to generate data—such as an image of a face—while the other, known as the discriminator, evaluates whether it’s plausibly real. A deep learning system called AlphaGo beats human Go champion Lee Sedol after absorbing thousands of examples of past games played by people. The same team develops AlphaFold, a set of deep learning and generative neural networks to predict the structure of proteins from their amino acid sequences.
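The generator-versus-discriminator setup Goodfellow describes can be sketched in a few lines. This is a minimal toy (assumptions mine, not from the article): both networks are tiny 1-D linear models in NumPy, the generator maps noise toward "real" data drawn from a Gaussian, and the discriminator scores how plausibly real each sample is.

```python
# Toy adversarial training: generator g(z) = a*z + b, discriminator
# D(x) = sigmoid(w*x + c), trained against each other on 1-D data.
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(3.0, 0.5, 256)   # "real" data the generator must imitate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for _ in range(500):
    z = rng.normal(0.0, 1.0, 256)
    fake = a * z + b
    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    x = np.concatenate([real, fake])
    t = np.concatenate([np.ones(256), np.zeros(256)])
    p = sigmoid(w * x + c)
    w -= lr * np.mean((p - t) * x)   # binary cross-entropy gradients
    c -= lr * np.mean(p - t)
    # Generator step: push D(fake) -> 1 via -log D(fake).
    p_fake = sigmoid(w * fake + c)
    d_fake = (p_fake - 1.0) * w      # dLoss/dfake for the generator
    a -= lr * np.mean(d_fake * z)
    b -= lr * np.mean(d_fake)

print(b)  # the generator's offset b has been pushed up toward the real mean (~3)
```

Real GANs replace these scalars with deep networks, but the alternating two-player update is the same.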

en.wikipedia.org article

History of artificial neural networks - Wikipedia

https://en.wikipedia.org/wiki/History_of_artificial_neural_networks

popularized backpropagation.[31] They reported up to 70 times faster training.[85]

dev.to article

Deep Learning: A Timeline of Key Milestones - DEV Community

https://dev.to/ananya2306/-3024

## Deep Learning: A Timeline of Key Milestones. • First mathematical model of a biological neuron. • Foundation of artificial neural networks. • First learning algorithm for neural networks. **1985 -- Boltzmann Machine (Hinton & Sejnowski)**. **1986 -- Backpropagation (Rumelhart, Hinton, Williams)**. • Enabled training of multilayer networks. **Late 1980s - 1990s -- AI Winter**. • Shift toward simpler ML models. **Late 1990s - 2000s -- GPU Computing**. • Enabled large-scale deep learning. • Enabled data and image generation. • Reduced need for labeled data. • Scaled transformers for multimodal AI.
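The 1986 backpropagation milestone is what overcame the single-layer limitation: with a hidden layer and gradient descent, a network can reduce its error on XOR. Below is a minimal hand-rolled sketch (my own illustration, not from the timeline) training a 2-4-1 sigmoid network with manually derived gradients.

```python
# Two-layer network trained on XOR by backpropagation (hand-derived gradients).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 -> 4 -> 1 network with small random weights
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def forward(inputs):
    h = sigmoid(inputs @ W1 + b1)        # hidden activations
    return h, sigmoid(h @ W2 + b2)       # network output

_, out = forward(X)
loss_before = ((out - y) ** 2).mean()

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: chain rule through squared error and both sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = ((out - y) ** 2).mean()
print(loss_before, "->", loss_after)  # the error shrinks as training proceeds
```

The same chain-rule bookkeeping, automated and scaled up, is what modern frameworks perform on networks with billions of parameters.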

cs.stanford.edu research

Neural Networks - History - CS Stanford

https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks…

**History: The 1940s to the 1970s** In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons might work; to describe how neurons in the brain might operate, they modeled a simple neural network using electrical circuits. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter that eliminates echoes on phone lines. It is based on the idea that while one active perceptron may have a big error, the weight values can be adjusted to distribute that error across the network, or at least to adjacent perceptrons. Despite the neural network's later success, traditional von Neumann architecture took over the computing scene, and neural research was left behind. In the same period, a paper was published suggesting that the single-layer neural network could not be extended to a multilayer neural network.

galileo-unbound.blog article

A Short History of Neural Networks - Galileo Unbound

https://galileo-unbound.blog/2025/02/05/a-short-history-of-neural-networks/

Drawing from the work of McCulloch and Pitts, Rosenblatt's team constructed a software system and then a hardware model that adaptively updated the strength of the inputs, which they called neural weights, as it was trained on test images. PDP was an exciting framework for artificial intelligence, and it captured the general behavior of natural neural networks, but it had a serious problem: how could all of the neural weights be trained? The breakthrough that propelled Geoff Hinton to worldwide acclaim was the success of AlexNet, a neural network constructed by his graduate student Alex Krizhevsky at Toronto in 2012, consisting of 650,000 neurons with 60 million parameters trained using two early Nvidia GPUs. It won the ImageNet challenge that year, enabled by its deep architecture, an advance that has continued unabated to today.

vrungta.substack.com article

Timeline of Deep Learning's Evolution - by Vikash Rungta

https://vrungta.substack.com/p/timeline-of-deep-learnings-evolution

Their breakthroughs and the work of others like Fei-Fei Li, Yann LeCun, and the team at Google Brain brought deep learning into the limelight, and made AI a transformative force of our time. * **2024**: Geoffrey Hinton and John Hopfield receive the Nobel Prize in Physics for their foundational work in machine learning with artificial neural networks. The story of deep learning begins in the 1980s, when researchers like John Hopfield and Geoffrey Hinton started exploring the potential of neural networks. ### The Birth of a New Era: The Rise of ImageNet and AlexNet. The true turning point for AI came in the mid-2000s when Fei-Fei Li, a computer science professor, recognized the importance of large datasets for effective machine learning. Open-source frameworks like TensorFlow and PyTorch further democratized AI, allowing anyone—from academic researchers to hobbyists—to develop deep learning models. Hinton’s work on backpropagation provided the framework that made deep learning practical, while Hopfield’s contributions to energy-based models reshaped the understanding of how learning processes could be modeled computationally.
