8 results · Live web index
pub.towardsai.net article

A Brief History of Neural Nets

https://pub.towardsai.net/a-brief-history-of-neural-nets-472107bc2c9c

They developed a simple neural network using electrical circuits to show how neurons in the brain might work.

* **1958:** _Frank Rosenblatt_ develops the _perceptron_ (a single-layer neural network), inspired by the way neurons work in the brain.
* **1982:** _John Hopfield_ develops the Hopfield network, a recurrent neural net that describes relationships between binary (firing or not-firing) neurons (see the sketch after this list).
* **1998:** _LeNet-5_, a convolutional neural network, is developed by _Yann LeCun_ et al. Convolutional neural nets are especially suited to image data.
* **2006:** _Geoffrey Hinton_ creates the _deep belief network_, a generative model.
* **2009:** _Ruslan Salakhutdinov_ and _Geoffrey Hinton_ present the _deep Boltzmann machine_, a generative model similar to a deep belief network but allowing bidirectional connections in the bottom layer.
* **2015:** The _U-Net_ consists of an encoder convolutional network connected to a decoder network that upsamples the image.
* **2020:** _OpenAI_ publishes Generative Pre-trained Transformer 3 (GPT-3), a deep learning model that produces human-like text.
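A minimal sketch of the Hopfield idea referenced in the list above (my illustration, not from the article): binary firing/not-firing neurons stored as ±1, Hebbian outer-product weights, and the recurrent update s_i ← sign(Σ_j w_ij s_j), which pulls a corrupted pattern back to a stored one.

```python
import numpy as np

# Hopfield network sketch: store two patterns of 6 binary (+1/-1) neurons
# via the Hebbian outer-product rule, then recall one from a noisy probe.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                  # no self-connections

s = np.array([1, -1, 1, -1, 1, 1])      # first pattern with one bit flipped
for _ in range(5):                      # iterate the update until the state settles
    s = np.sign(W @ s).astype(int)
print(s)                                # recovers [ 1 -1  1 -1  1 -1]
```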

topbots.com article

A Brief History Of Neural Network Architectures

https://www.topbots.com/a-brief-history-of-neural-network-architectures/

The LeNet5 architecture was a breakthrough, in particular for the insight that image features are distributed across the entire image, and that convolutions with learnable parameters are an effective way to extract similar features at multiple locations with few parameters. These ideas would later be used in more recent network architectures such as Inception and ResNet. The VGG networks use multiple 3×3 convolutional layers to represent complex features. Network-in-network (NiN) architectures had the great and simple insight of using 1×1 convolutions to provide more combinational power to the features of a convolutional layer. Bottleneck layers built on this idea have been proven to perform at the state of the art on the ImageNet dataset, for example, and were also used in later architectures such as ResNet. The reason for their success is that the input features are correlated, and redundancy can therefore be removed by combining them appropriately with 1×1 convolutions.
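The 1×1 bottleneck idea is easy to see in code. A minimal sketch, assuming PyTorch and illustrative channel sizes (256 squeezed to 64; neither figure is from the article): 1×1 convolutions compress the correlated input channels before a 3×3 spatial convolution and expand them after, the pattern later used in ResNet bottleneck blocks.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """NiN-style bottleneck: 1x1 conv -> 3x3 conv -> 1x1 conv."""
    def __init__(self, channels=256, squeeze=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, squeeze, kernel_size=1)              # 1x1: compress correlated channels
        self.spatial = nn.Conv2d(squeeze, squeeze, kernel_size=3, padding=1)   # 3x3: extract spatial features cheaply
        self.expand = nn.Conv2d(squeeze, channels, kernel_size=1)              # 1x1: restore channel count
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.reduce(x))
        x = self.relu(self.spatial(x))
        return self.relu(self.expand(x))

x = torch.randn(1, 256, 32, 32)
print(Bottleneck()(x).shape)  # torch.Size([1, 256, 32, 32])
```

The parameter saving is the point: a direct 3×3 convolution over 256 channels would cost 256·256·9 weights, while the squeezed version costs 256·64 + 64·64·9 + 64·256, roughly a sixth as many.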

cs.stanford.edu research

Neural Networks - History - CS Stanford

https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks…

**History: The 1940s to the 1970s** In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons might work. To describe how neurons in the brain might work, they modeled a simple neural network using electrical circuits. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter that eliminates echoes on phone lines. It is based on the idea that while one active perceptron may have a big error, the weight values can be adjusted to distribute it across the network, or at least to adjacent perceptrons. Despite the later success of the neural network, traditional von Neumann architecture took over the computing scene, and neural research was left behind. In the same period, a paper was written suggesting that the single-layer neural network could not be extended to a multi-layer neural network.
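The McCulloch-Pitts model behind that 1943 paper fits in a few lines. A minimal sketch (my illustration, not from the Stanford page): the neuron fires only if the weighted sum of its binary inputs reaches a threshold; the weights and threshold below are chosen to implement a 2-input AND gate.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: 1 if the weighted input sum reaches the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Truth table for AND: fires only when both inputs are active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron((a, b), weights=(1, 1), threshold=2))
```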

youtube.com video

Neural Networks Explained: From 1943 Origins to Deep Learning Revolution 🚀 | AI History & Evolution

https://www.youtube.com/watch?v=AA2ettRM6_Q

The AI Guy · 10 Jun 2024. Discover the fascinating history of neural networks, from their origins in 1943 to the groundbreaking deep learning advancements of today. Learn how pioneering scientists like Warren McCulloch, Walter Pitts, Frank Rosenblatt, John Hopfield, Geoffrey Hinton, and others contributed to this revolutionary field. Understand key developments like the perceptron, backpropagation, and the role of GPUs in transforming AI. Join us on this journey through time to see how neural networks have evolved to shape modern machine learning and artificial intelligence.

galileo-unbound.blog article

A Short History of Neural Networks | Galileo Unbound

https://galileo-unbound.blog/2025/02/05/a-short-history-of-neural-networks/

Drawing on the work of McCulloch and Pitts, his team constructed a software system, and then a hardware model, that adaptively updated the strength of the inputs, which they called neural weights, as it was trained on test images. PDP was an exciting framework for artificial intelligence, and it captured the general behavior of natural neural networks, but it had a serious problem: how could all of the neural weights be trained? The breakthrough that propelled Geoff Hinton to worldwide acclaim was the success of AlexNet, a neural network constructed by his graduate student Alex Krizhevsky at Toronto in 2012, consisting of 650,000 neurons with 60 million parameters trained on two early Nvidia GPUs. It won the ImageNet challenge that year, enabled by its deep architecture, marking an advance that has continued unabated to this day.
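History's answer to that training question was backpropagation, which the PDP group popularized. A minimal sketch, with illustrative layer sizes and learning rate (none of this is from the article): a tiny two-layer network learns XOR by pushing the output error backward through the chain rule and nudging every weight downhill.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer of 8 sigmoid units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backward pass: chain rule through sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)        # gradient-descent step (lr = 1)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

print(out.round(2).ravel())                      # should approach [0, 1, 1, 0]
```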

en.wikipedia.org article

History of artificial neural networks - Wikipedia

https://en.wikipedia.org/wiki/History_of_artificial_neural_networks

Sections: [LSTM](https://en.wikipedia.org/wiki/History_of_artificial_neural_networks#LSTM) · [Deep learning](https://en.wikipedia.org/wiki/History_of_artificial_neural_networks#Deep_learning) · [Transformer](https://en.wikipedia.org/wiki/History_of_artificial_neural_networks#Transformer). … popularized backpropagation.[31] … They reported up to 70 times faster training.[85]
