8 results
dataversity.net
article
https://www.dataversity.net/articles/brief-history-deep-learning/
Deep learning is a more evolved branch of machine learning that uses layers of algorithms to process data, imitate the thinking process, and develop *abstractions*. The history of deep learning can be traced back to 1943, when Walter Pitts and Warren McCulloch created a computer model based on the neural networks of the human brain. The use of top-down connections and new learning methods has allowed a variety of neural networks to be realized. Backpropagation, the use of errors in training deep learning models, evolved significantly in 1970. This period also saw the second AI winter (1985–90s), which affected research on neural networks and deep learning. The next significant evolutionary step for deep learning took place in 1999, when computers started becoming faster at processing data and GPUs (graphics processing units) were developed. The free-spirited project explored the difficulties of “unsupervised learning.” Deep learning uses “supervised learning,” meaning the convolutional neural net is trained using labeled data (think images from ImageNet).
people.idsia.ch
article
https://people.idsia.ch/~juergen/deep-learning-history.html
This "alternative history" essentially goes like this: *"In 1969, Minsky & Papert showed that shallow NNs without hidden layers are very limited and the field was abandoned until a new generation of neural network researchers took a fresh look at the problem in the 1980s."* However, the 1969 book addressed a "problem" of Gauss & Legendre's shallow learning (circa 1800) that had already been solved 4 years prior by Ivakhnenko & Lapa's popular deep learning method, and then also by Amari's SGD for MLPs. Minsky neither cited this work nor corrected his book later. In particular, in 1990-91, we laid foundations of Generative AI, publishing principles of (1) Generative Adversarial Networks for Artificial Curiosity and Creativity (now used for deepfakes), (2) Transformers (the T in ChatGPT—see the 1991 Unnormalized Linear Transformer), (3) Pre-training for deep NNs (see the P in ChatGPT), (4) NN distillation (key for DeepSeek), and (5) recurrent World Models for Reinforcement Learning and Planning in partially observable environments.
builtin.com
article
https://builtin.com/artificial-intelligence/deep-learning-history
Built In Staff | Jun 29, 2022. The first serious deep learning breakthrough came in the mid-1960s, when Soviet mathematician Alexey Ivakhnenko (helped by his associate V.G. Lapa) created small but functional neural networks. In 1986, Carnegie Mellon professor and computer scientist Geoffrey Hinton — now a Google researcher and long known as the “Godfather of Deep Learning” — was among several researchers who helped make neural networks cool again, scientifically speaking, by demonstrating that more than just a few of them could be trained using backpropagation for improved shape recognition and word prediction. Hinton and LeCun recently were among three AI pioneers to win the 2019 Turing Award.
import.io
news
https://www.import.io/post/history-of-deep-learning
But instead of trying to grasp the intricacies of the field – which could be an ongoing and extensive series of articles unto itself – let’s just take a look at some of the major developments in the history of machine learning (and by extension, deep learning and AI). ## 1965 – The first working deep learning networks. Using Microsoft’s neural-network software on its XC50 supercomputers with 1,000 Nvidia Tesla P100 graphic processing units, they can perform deep learning tasks on data in a fraction of the time they used to take – hours instead of days.
medium.com
article
https://medium.com/nextgenllm/the-evolution-of-deep-learning-key-milestones-a…
The foundations of deep learning were laid in 1943 when Walter Pitts and Warren McCulloch introduced the first artificial neuron. This concept
vrungta.substack.com
article
https://vrungta.substack.com/p/timeline-of-deep-learnings-evolution
Their breakthroughs and the work of others like Fei-Fei Li, Yann LeCun, and the team at Google Brain brought deep learning into the limelight, and made AI a transformative force of our time. * **2024**: Geoffrey Hinton and John Hopfield receive the Nobel Prize in Physics for their foundational work in machine learning with artificial neural networks. The story of deep learning begins in the 1980s, when researchers like John Hopfield and Geoffrey Hinton started exploring the potential of neural networks. ### The Birth of a New Era: The Rise of ImageNet and AlexNet. The true turning point for AI came in the mid-2000s when Fei-Fei Li, a computer science professor, recognized the importance of large datasets for effective machine learning. Open-source frameworks like TensorFlow and PyTorch further democratized AI, allowing anyone—from academic researchers to hobbyists—to develop deep learning models. Hinton’s work on backpropagation provided the framework that made deep learning practical, while Hopfield’s contributions to energy-based models reshaped the understanding of how learning processes could be modeled computationally.
developer.nvidia.com
article
https://developer.nvidia.com/blog/deep-learning-nutshell-history-training/
The main hurdle at this point was to train big, deep networks, which suffered from the vanishing gradient problem, where features in early layers could not be learned because no learning signal reached these layers. Additional material: Deep Learning in Neural Networks: An Overview. Backpropagation of errors, or often simply backpropagation, is a method for finding the gradient of the error with respect to the weights of a neural network. Figure 1: Backpropagation for an arbitrary layer in a deep neural network. We can imagine a forward pass in which a matrix (dimensions: number of examples x number of input nodes) is input to the network and propagated through it, where we always have the order (1) input nodes, (2) weight matrix (dimensions: input nodes x output nodes), and (3) output nodes, which usually also have a non-linear activation function (dimensions: examples x output nodes).
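The forward pass and weight gradient described in the excerpt can be sketched in a few lines of NumPy. This is a minimal illustration for a single dense layer, assuming a sigmoid activation and squared-error loss; all names and values are illustrative, not taken from the NVIDIA post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W):
    """Forward pass: input matrix (examples x input nodes) times
    weight matrix (input nodes x output nodes), then a non-linearity."""
    Z = X @ W          # pre-activation, shape: (examples x output nodes)
    A = sigmoid(Z)     # non-linear activation, same shape
    return Z, A

def weight_gradient(X, A, error):
    """Backpropagation for this layer: chain the error signal back
    through the sigmoid to get dLoss/dW, shaped like W."""
    dZ = error * A * (1.0 - A)   # sigmoid derivative is A * (1 - A)
    return X.T @ dZ              # (input nodes x output nodes)

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # 2 examples, 2 input nodes
W = np.zeros((2, 1))                     # 2 input nodes, 1 output node
targets = np.array([[1.0], [0.0]])

Z, A = forward(X, W)                     # A is sigmoid(0) = 0.5 everywhere
grad = weight_gradient(X, A, A - targets)
W -= 0.1 * grad                          # one gradient-descent step
```

Deeper networks repeat this pattern layer by layer, which is also where the vanishing gradient problem mentioned above arises: the `A * (1 - A)` factor is at most 0.25, so repeated multiplication shrinks the signal reaching early layers.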
en.wikipedia.org
article
https://en.wikipedia.org/wiki/Deep_learning