8 results ·
● Live web index
Y
youtube.com
video
https://www.youtube.com/watch?v=AA2ettRM6_Q
Neural Networks Explained: From 1943 Origins to Deep Learning Revolution 🚀 | AI History & Evolution
The AI Guy
1400 subscribers
258 likes
10587 views
10 Jun 2024
Discover the fascinating history of neural networks, from their origins in 1943 to the groundbreaking deep learning advancements of today. Learn how pioneering scientists like Warren McCulloch, Walter Pitts, Frank Rosenblatt, John Hopfield, Geoffrey Hinton, and others contributed to this revolutionary field. Understand key developments like the perceptron, backpropagation, and the role of GPUs in transforming AI. Join us on this journey through time to see how neural networks have evolved to shape modern machine learning and artificial intelligence. 🚀 #NeuralNetworks #DeepLearning #AIHistory #MachineLearning #ArtificialIntelligence
9 comments
G
galileo-unbound.blog
article
https://galileo-unbound.blog/2025/02/05/a-short-history-of-neural-networks/
Drawing from the work of McCulloch and Pitts, his team built a software simulation and then a hardware model that adaptively updated the strength of the inputs, which they called neural weights, as it was trained on test images. PDP was an exciting framework for artificial intelligence, and it captured the general behavior of natural neural networks, but it had a serious problem: how could all of the neural weights be trained? The breakthrough that propelled Geoff Hinton to worldwide acclaim was the success of AlexNet, a neural network built by his graduate student Alex Krizhevsky at Toronto in 2012, consisting of 650,000 neurons with 60 million parameters trained on two early Nvidia GPUs. It won the ImageNet challenge that year, enabled by its deep architecture, an advance that has continued unabated to this day.
P
pub.towardsai.net
article
https://pub.towardsai.net/a-brief-history-of-neural-nets-472107bc2c9c
They developed a simple neural network using electrical circuits to show how neurons in the brain might work.
* **1958:** _Frank Rosenblatt_ develops the _perceptron_ (single-layer neural network), inspired by the way neurons work in the brain.
* **1982:** _John Hopfield_ develops the Hopfield Network, a recurrent neural net that describes relationships between binary (firing or not-firing) neurons.
* **1998:** _LeNet-5_, a Convolutional Neural Network, is developed by _Yann LeCun_ et al. Convolutional Neural Nets are especially suited for image data.
* **2006:** _Geoffrey Hinton_ creates the _Deep Belief Network_, a generative model.
* **2009:** _Ruslan Salakhutdinov_ and _Geoffrey Hinton_ present the _Deep Boltzmann Machine_, a generative model similar to a Deep Belief Network but allowing bidirectional connections in the bottom layer.
The U-Net consists of an encoder convolutional network connected with a decoder network to upsample the image.
* **2020:** _OpenAI_ publishes Generative Pre-trained Transformer 3 (GPT-3), a deep learning model that produces human-like text.
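The perceptron named in the 1958 entry above is simple enough to sketch in a few lines. The following is a minimal illustration, not taken from any of the sources in these results, assuming NumPy, a hard step activation, and the classic error-driven update rule; the function name and the toy AND data are purely illustrative:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt-style single-layer perceptron trained with the classic update rule."""
    w = np.zeros(X.shape[1])   # one weight per input feature
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0   # step ("fire / don't fire") activation
            err = target - pred                         # -1, 0, or +1
            w += lr * err * xi                          # nudge weights toward the correct label
            b += lr * err
    return w, b

# Toy usage: logical AND is linearly separable, so the perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # expected: [0, 0, 0, 1]
```

This update rule converges only when the classes are linearly separable, which is exactly the single-layer limitation noted in the cs.stanford.edu result below.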
C
cs.stanford.edu
research
https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks…
**History: The 1940s to the 1970s** In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons might work. To describe how neurons in the brain might work, they modeled a simple neural network using electrical circuits. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter to eliminate echoes on phone lines. It is based on the idea that while one active perceptron may have a big error, the weight values can be adjusted to distribute that error across the network, or at least to adjacent perceptrons. Despite the later success of neural networks, traditional von Neumann architecture took over the computing scene, and neural research was left behind. In the same period, a paper was published suggesting that the single-layer neural network could not be extended to a multi-layer one.
M
medium.com
article
https://medium.com/@wsmaisys/from-biology-to-brilliance-a-brief-history-of-ar…
# 🧠 From Biology to Brilliance: A Brief History of Artificial Neural Networks | by Waseem M Ansari | Medium. > **Reference:** “The Neuron Doctrine, 1891–1951,” Shepherd, G.M. This model marked the beginning of **connectionism** in AI: the idea that simple units (neurons) could be connected to simulate brain-like learning. This led to what is known as the **AI Winter**, a period of declining interest and funding in neural network research.
M
magazine.caltech.edu
research
https://magazine.caltech.edu/post/ai-machine-learning-history
In 1980, Hopfield left Princeton for Caltech in part due to the Institute’s “splendid computing facilities,” which he would use to test and develop his ideas for neural networks. “Hopfield extracted the essence of neurons.” Abu-Mostafa notes that the theoretical paper Hopfield published in 1982, “Neural networks and physical systems with emergent collective computational abilities,” is the fifth-most-cited Caltech paper of all time. “His network was trained to dig a hole in the landscape corresponding to the image pattern being trained,” adds Erik Winfree (PhD ’98), professor of computer science, computation and neural systems, and bioengineering at Caltech, and a former CNS student of Hopfield’s. Even before Anandkumar joined Caltech in 2017, she says she “was fascinated by physics.” In 2011, she analyzed how the success of learning algorithms is tied to the phase transition in the Ising model, the same model upon which Hopfield built his network.
R
researchgate.net
research
https://www.researchgate.net/publication/374723059_Artificial_Neural_Networks…
This chapter contains a description of the historical evolution of artificial neural networks since their inception.
B
blog.planview.com
article
https://blog.planview.com/a-brief-history-of-ai-from-neural-networks-to-chatgpt/
## AI’s 80-Year Journey. The 2020s saw large language models like ChatGPT passing the Turing Test, a milestone that redefined AI’s conversational abilities. From the early days of neural networks and the creation of the (daunting) Turing Test to the era of ChatGPT, AI’s journey is a tapestry of incremental advances, bold leaps, and well-intentioned missteps.