8 results · ● Live web index
ibm.com article

What Is a Neural Network? | IBM

https://www.ibm.com/think/topics/neural-networks

A neural network is a machine learning model that stacks simple "neurons" in layers and learns pattern-recognizing weights and biases from data to map inputs to outputs. Neural networks are among the most influential algorithms in modern machine learning and artificial intelligence (AI). Mathematically, a neural network learns a function f by mapping an input vector x to a predicted response y = f(x). What distinguishes neural networks from traditional machine learning algorithms is their layered structure and their ability to perform nonlinear transformations. Modern neural network architectures, such as transformers and encoder-decoder models, follow the same core principles (learned weights and biases, stacked layers, nonlinear activations, end-to-end training by backpropagation). Neural networks learn useful internal representations directly from data, capturing nonlinear structure that classical models miss. Understanding activation functions, training requirements and the main types of networks provides a practical bridge from classical neural nets to today's generative systems and clarifies why these models have become central to modern AI.
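The "function f mapping an input vector to a predicted response" idea can be sketched in a few lines of NumPy. This is a minimal illustration, not IBM's code: the layer sizes (3 inputs, 4 hidden units, 1 output) and the random, untrained weights are made up for the example.

```python
import numpy as np

# Made-up sizes and random, untrained weights for illustration only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)  # nonlinear activation

def f(x):
    """Map an input vector x to a predicted response y = f(x)."""
    h = relu(W1 @ x + b1)  # layered, nonlinear transformation
    return W2 @ h + b2     # linear readout

y = f(np.array([1.0, -2.0, 0.5]))
```

Training would then adjust `W1, b1, W2, b2` by backpropagation so that `f` matches the data; here the weights stay random.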

en.wikipedia.org article

Neural network (machine learning) - Wikipedia

https://en.wikipedia.org/wiki/Neural_network_(machine_learning)

Today, artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence.

realpython.com article

Python AI: How to Build a Neural Network & Make Predictions – Real Python

https://realpython.com/python-ai-neural-network/

With neural networks, the process is very similar: you start with some random **weights** and **bias** vectors, make a prediction, compare it to the desired output, and adjust the vectors to predict more accurately the next time. To accomplish that, you’ll need to compute the prediction error and update the weights accordingly. If your neural network makes a correct prediction for every instance in your training set, then you probably have an overfitted model, where the model simply remembers how to classify the examples instead of learning to notice features in the data. Now that you know how to compute the error and how to adjust the weights accordingly, it’s time to get back to building your neural network. In your neural network, you need to update both the weights *and* the bias vectors. `neural_network.predict(input_vector)` makes a prediction, but then you need to train the network, e.g. `training_error = neural_network.train(input_vectors, targets, 10000)`.
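The loop described above (start with random weights, predict, compare to the target, adjust the weights and the bias) can be sketched for a single sigmoid neuron. This is an illustrative stand-in, not Real Python's actual `NeuralNetwork` class; the input, target, and learning rate are invented for the example.

```python
import numpy as np

# Illustrative single-neuron training loop: predict, measure the error,
# then nudge the weight vector and the bias in the direction that
# reduces the squared error.
rng = np.random.default_rng(42)
weights = rng.normal(size=2)   # random starting weights
bias = 0.0                     # starting bias
lr = 0.1                       # learning rate (invented for the example)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(np.dot(x, weights) + bias)

x, target = np.array([1.5, 0.7]), 1.0
for _ in range(10_000):
    pred = predict(x)
    error = pred - target               # prediction error
    grad = error * pred * (1 - pred)    # slope of the squared error
    weights = weights - lr * grad * x   # update the weight vector...
    bias = bias - lr * grad             # ...and the bias
```

After many updates the prediction moves close to the target; fitting a single example this tightly is exactly the memorization the snippet warns about when it happens across a whole training set.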

humainelabs.com article

Build Your First AI Neural Network: A Step Guide

https://humainelabs.com/blogs/agentic-ai-engineering/build-your-first-ai-neur…

Recurrent Neural Networks (RNNs) process sequential data like text or time series. Emerging architectures like transformers and attention mechanisms are revolutionizing **artificial intelligence neural network** design. **Neural networks** learn patterns automatically from data, while traditional algorithms follow pre-programmed rules. This makes neural networks more adaptable but requires training data and computational resources. Building your first **artificial intelligence neural network** might seem complex, but it's more accessible than you think. ## What is an Artificial Intelligence Neural Network? An **artificial intelligence neural network** is a computational model inspired by how the human brain processes information. Just as our brains use interconnected neurons to think and learn, **artificial neural networks** use mathematical nodes to recognize patterns and make decisions. **Neural networks** matter because they enable machines to learn from experience. ## How Neural Networks Learn: The Training Process.
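How an RNN "processes sequential data" comes down to one recurrence: the hidden state is updated from the current input and the previous state. A minimal sketch, assuming invented sizes (3-dim inputs, 5-dim state) and random weights, not any real library's API:

```python
import numpy as np

# One recurrent step: h_t = tanh(W_x @ x_t + W_h @ h_prev + b).
# Sizes and weights are illustrative, not from the article.
rng = np.random.default_rng(1)
W_x = rng.normal(size=(5, 3))  # input-to-hidden weights
W_h = rng.normal(size=(5, 5))  # hidden-to-hidden (recurrent) weights
b = np.zeros(5)

def rnn_step(x_t, h_prev):
    """Update the hidden state from the current input and the previous state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# The state h carries information forward across the sequence.
h = np.zeros(5)
for x_t in [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]:
    h = rnn_step(x_t, h)
```

The shared weights `W_x` and `W_h` are reused at every time step, which is what lets the same small network handle sequences of any length.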

iso.org article

ISO - The basis of neural networks: Cracking the code

https://www.iso.org/artificial-intelligence/neural-networks

A subcategory of artificial intelligence, neural networks are AI models with vast and groundbreaking potential. ## What are neural networks used for? Health professionals can use artificial neural networks to help analyse medical images, patient records and genomic data to identify patterns and make predictions, leading to more accurate diagnoses and tailored treatment plans. ## How do neural networks work? **Feedforward**, or forward propagation, is the backbone of how neural networks work, enabling them to make predictions and generate outputs. **Backpropagation** is the complementary process by which a neural network adjusts its weights in response to feedback received during training. ## How are neural networks trained? First, neural networks require datasets to learn and make accurate predictions. Because of their complexity, it can be challenging to understand and explain the decision-making process of a neural network. The ISO/IEC 24029 series takes a holistic approach by addressing both ethical concerns and emerging technology requirements to enable the responsible adoption of neural networks.
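Forward propagation is naturally written as a loop over layers, each layer's output becoming the next layer's input. A minimal sketch with invented layer sizes (4 to 3 to 2) and random weights, not ISO's material:

```python
import numpy as np

# Feedforward as a loop over (weights, biases) pairs; sizes are illustrative.
rng = np.random.default_rng(7)
layers = [
    (rng.normal(size=(3, 4)), np.zeros(3)),  # layer 1: 4 inputs -> 3 units
    (rng.normal(size=(2, 3)), np.zeros(2)),  # layer 2: 3 inputs -> 2 outputs
]

def feedforward(x):
    a = x
    for W, b in layers:
        a = np.tanh(W @ a + b)  # weighted sum, then activation
    return a

out = feedforward(np.ones(4))
```

Training would run this forward pass, compare `out` to a target, and then adjust every `W` and `b` via backpropagation.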

pmc.ncbi.nlm.nih.gov official

Artificial Neural Network: Understanding the Basic Concepts without Mathematics - PMC

https://pmc.ncbi.nlm.nih.gov/articles/PMC6428006/

Currently, artificial neural networks predominantly use a weight modification method in the learning process.[4](#B4),[5](#B5),[7](#B7) In the course of modifying the weights, the entire layer requires an activation function that can be differentiated. Biological neurons receive multiple inputs from pre-synaptic neurons.[11](#B11) Neurons in artificial neural networks (nodes) also receive multiple inputs, then they add them and process the sum with a sigmoid function.[5](#B5),[7](#B7) The value processed by the sigmoid function then becomes the output value. In conclusion, the learning process of an artificial neural network involves updating the connection strength (weight) of a node (neuron).
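The node described here (sum the weighted inputs, then pass the sum through a sigmoid) is a few lines of plain Python. The inputs and weights below are invented for the example:

```python
import math

# A single artificial node: weighted sum of inputs, then sigmoid activation.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def node_output(inputs, weights):
    total = sum(x * w for x, w in zip(inputs, weights))  # add the weighted inputs
    return sigmoid(total)                                # squash into (0, 1)

# Illustrative values: weighted sum = 0.5*0.8 - 1.0*0.2 + 2.0*0.1 = 0.4
out = node_output([0.5, -1.0, 2.0], [0.8, 0.2, 0.1])
```

Learning, as the snippet concludes, is then just repeatedly adjusting the `weights` so that `node_output` moves toward the desired values.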
