8 results · Live web index
en.wikipedia.org
article
https://en.wikipedia.org/wiki/History_of_artificial_neural_networks
Wikipedia's overview of the history of artificial neural networks. The captured table of contents lists sections on LSTM, deep learning, and the Transformer, and only fragments of the body text survive: "... popularized backpropagation" and "They reported up to 70 times faster training." The remainder of the extract is the article's reference list (Fukushima, Zhang, LeCun, Weng, Behnke, Ciresan, Szegedy, Hinton, Schmidhuber, Kohonen, and others).
cs.stanford.edu
research
https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks…
**History: The 1940s to the 1970s** In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons might work, modeling a simple neural network with electrical circuits to describe how neurons in the brain might operate. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter that eliminates echoes on phone lines. It is based on the idea that while one active perceptron may have a big error, the weight values can be adjusted to distribute that error across the network, or at least to adjacent perceptrons. Despite the later success of the neural network, traditional von Neumann architecture took over the computing scene and neural research was left behind. In the same period, a paper suggested that single-layer neural networks could not be extended to multi-layer neural networks.
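The weight-adjustment idea in the snippet above is essentially the least-mean-squares (delta) rule that Widrow and Hoff used for ADALINE/MADALINE. A minimal sketch, assuming a single adaptive linear unit; the data, learning rate, and variable names are illustrative, not drawn from any of these sources:

```python
import numpy as np

# Minimal LMS ("delta rule") sketch for a single adaptive linear unit,
# illustrating how an error can be spread across the weights.
# Data and hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 samples, 3 input features
true_w = np.array([0.5, -1.0, 2.0])  # hypothetical target weights
y = X @ true_w                       # linear targets for the demo

w = np.zeros(3)                      # weights start at zero
lr = 0.01                            # learning rate

for epoch in range(50):
    for x_i, y_i in zip(X, y):
        error = y_i - x_i @ w        # error on the linear output
        w += lr * error * x_i        # LMS update spreads the correction

print("learned weights:", np.round(w, 3))
```

Because each update is proportional to the error on the linear output, a large error on one example nudges every contributing weight a little, which matches the error-distributing intuition in the snippet above.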
lawrencecummins.com
news
https://www.lawrencecummins.com/post/the-evolution-of-artificial-neural-netwo…
The history of artificial intelligence (AI), particularly artificial neural networks (ANNs), is a narrative characterized by successes, setbacks, and continuous innovation. Their work laid the theoretical foundation for ANNs, positing that neural networks could perform logical functions like the human brain. The Perceptron was an early neural network capable of binary classification tasks. Interest in ANNs diminished during the 1970s, leading to what is commonly referred to as the "AI winter," a period in which interest in AI research and development declined and funding was reduced. The limitations of early neural networks, exposed most pointedly by Marvin Minsky and Seymour Papert in their 1969 book "Perceptrons," along with the perceived over-promising of neural networks, caused advances in the broader AI field to stagnate and research funding to decline. Recurrent Neural Networks (RNNs), capable of processing sequential data, found applications in natural language processing, with much of the foundational work attributed to researchers like Jürgen Schmidhuber. **The Future Landscape of Artificial Neural Networks**.
researchgate.net
research
https://www.researchgate.net/figure/Timeline-of-the-history-of-artificial-neu…
Timeline of the history of artificial neural networks and deep learning. Deep learning's peak corresponds with Hinton et al.'s breakthrough paper.
pub.towardsai.net
article
https://pub.towardsai.net/a-brief-history-of-neural-nets-472107bc2c9c
They developed a simple neural network using electrical circuits to show how neurons in the brain might work. * **1958:** _Frank Rosenblatt_ develops the _perceptron_ (single-layer neural network), inspired by the way neurons work in the brain. * **1982:** _John Hopfield_ develops the Hopfield Network, a recurrent neural net which describes relationships between binary (firing or not-firing) neurons. * **1998:** _LeNet-5_, a Convolutional Neural Network, was developed by _Yann LeCun et al._ Convolutional neural nets are especially suited for image data. * **2006:** _Geoffrey Hinton_ creates the _Deep Belief Network_, a generative model. * **2009:** _Ruslan Salakhutdinov_ and _Geoffrey Hinton_ present the _Deep Boltzmann Machine_, a generative model similar to a Deep Belief Network but allowing bidirectional connections in the bottom layer. The U-Net consists of an encoder convolutional network connected with a decoder network to upsample the image. * **2020:** _OpenAI_ publishes Generative Pre-trained Transformer 3 (GPT-3), a deep learning model to produce human-like text.
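The 1982 Hopfield entry above describes a recurrent network of binary, firing-or-not-firing neurons. A minimal sketch of the basic mechanics, assuming Hebbian storage of a single pattern followed by asynchronous sign updates; the network size and corruption step are illustrative assumptions:

```python
import numpy as np

# Minimal Hopfield network sketch: store one binary pattern with a
# Hebbian outer-product rule, then recover it from a corrupted state
# via asynchronous sign updates. Sizes are illustrative assumptions.

rng = np.random.default_rng(1)
pattern = rng.choice([-1, 1], size=8)         # stored pattern in {-1, +1}^8

W = np.outer(pattern, pattern).astype(float)  # Hebbian weights
np.fill_diagonal(W, 0.0)                      # no self-connections

state = pattern.copy()
state[:2] *= -1                               # flip two neurons as noise

for _ in range(3):                            # a few asynchronous sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("pattern recovered:", bool(np.array_equal(state, pattern)))
```

With symmetric weights and a zero diagonal, each update can only lower the network's energy, so the corrupted state settles back into the stored pattern, which acts as an attractor.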
youtube.com
video
https://www.youtube.com/watch?v=AA2ettRM6_Q
Neural Networks Explained: From 1943 Origins to Deep Learning Revolution 🚀 | AI History & Evolution
The AI Guy
1,400 subscribers
258 likes
10,587 views
10 Jun 2024
Discover the fascinating history of neural networks, from their origins in 1943 to the groundbreaking deep learning advancements of today. Learn how pioneering scientists like Warren McCulloch, Walter Pitts, Frank Rosenblatt, John Hopfield, Geoffrey Hinton, and others contributed to this revolutionary field. Understand key developments like the perceptron, backpropagation, and the role of GPUs in transforming AI. Join us on this journey through time to see how neural networks have evolved to shape modern machine learning and artificial intelligence. 🚀 #NeuralNetworks #DeepLearning #AIHistory #MachineLearning #ArtificialIntelligence
9 comments
medium.com
article
https://medium.com/@wsmaisys/from-biology-to-brilliance-a-brief-history-of-ar…
# 🧠 From Biology to Brilliance: A Brief History of Artificial Neural Networks | by Waseem M Ansari | Medium. > **_Reference_**: _"The Neuron Doctrine, 1891–1951" — Shepherd, G.M._ This model marked the beginning of **connectionism** in AI — the idea that simple units (neurons) could be connected to simulate brain-like learning. This led to what is known as the **AI Winter**, a period of declining interest and funding in neural network research.
sebastianraschka.com
article
https://sebastianraschka.com/pdf/lecture-notes/stat453ss21/L02_dl-history_sli…
Lecture slides: Neural Networks and Deep Learning -- A Timeline (Sebastian Raschka, STAT 453: Intro to Deep Learning). Widrow and Hoff's ADALINE (1960): a nicely differentiable neuron model (Widrow, B., & Hoff, M.). Graph neural networks (a gentle introduction: https://heartbeat.fritz.ai/introduction-to-graph-neural-networks-c5a9f4aa9e99). Large-scale language models: model sizes of language models from 2018–2020 (credit: State of AI Report 2020; https://ruder.io/research-highlights-2020/). From https://arxiv.org/abs/2101.01169: "Transformer is data-hungry in nature, e.g., a large-scale dataset like ImageNet [14 million images] is not enough to train vision transformer from scratch so [10] proposes to ..."