8 results
kaggle.com article

Neural network optimization techniques - Kaggle

https://www.kaggle.com/getting-started/396914

Gradient descent: This is a famous optimization method used to update the weights of a neural network. · Stochastic gradient descent (SGD): SGD is a variant of
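The weight update the snippet describes can be sketched in a few lines; the quadratic loss, starting point, and learning rate below are illustrative assumptions, not from the source.

```python
# Minimal sketch of gradient descent on a toy quadratic loss
# L(w) = (w - 3)^2, whose minimiser is w* = 3 (illustrative only).
def grad(w):
    # dL/dw for L(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w = 0.0    # initial weight (assumption)
lr = 0.1   # learning rate (assumption)
for _ in range(100):
    w -= lr * grad(w)   # update rule: w <- w - lr * dL/dw

# after enough steps, w approaches the minimiser 3.0
```

SGD, mentioned at the end of the snippet, follows the same update rule but estimates the gradient from a random mini-batch of examples rather than the full dataset.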

towardsdatascience.com article

One more approach to optimize neural networks

https://towardsdatascience.com/one-more-approach-to-optimize-neural-networks-…

The so-called Neural Architecture Search (NAS) is commonly used here. Generally speaking, all NAS algorithms belong to the AutoML category.

nature.com article

Architectural optimisation in deep neural networks. Tests of a theoretically inspired method | npj Artificial Intelligence

https://www.nature.com/articles/s44387-025-00034-6

We study how to optimise the architecture of a Deep Neural Network by rearranging the neurons within the hidden layers. In this study, we carried out different tests on a relatively small network (with three hidden layers and a total of 192 moving neurons) on three basic datasets: MNIST, the same dataset divided into even and odd digits, and Fashion MNIST. We denote by *L*(*p*) the *p*-th hidden layer within the set of all those composing the network, *p* ∈ {1, 2, ⋯ , *K*}, and the number of neurons inside the layer *L*(*p*) is indicated as *N*(*p*). We want to test the theoretical suggestion on a neural network with a finite number of neurons, using a classification-task experiment on the MNIST dataset. We aim at improving the performance metrics of accuracy and robustness described by Eqs. The network used in this experiment is introduced in Fig. 2a; it is composed of one fully-connected input layer of 784 neurons, three hidden layers of 64 neurons each, and an output layer of 10 neurons.
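The architecture described in the snippet (784 input neurons, three hidden layers of 64, 10 output neurons) can be sketched in PyTorch as below; the ReLU activation is an assumption, since the snippet does not name the nonlinearity used in the paper.

```python
import torch
import torch.nn as nn

# Sketch of the network from the snippet: 784 -> 64 -> 64 -> 64 -> 10.
# ReLU between layers is an assumption; the snippet does not specify it.
model = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(1, 784)   # dummy flattened 28x28 MNIST image
out = model(x)            # logits for the 10 digit classes
```

The 192 "moving neurons" the paper mentions correspond to the three hidden layers of 64 neurons each, which are the ones rearranged during the architectural optimisation.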

cs.columbia.edu research

COMS 4705: Lec 3.5: Neural Networks and Optimization

https://www.cs.columbia.edu/~johnhew/coms4705/lectures/lec3.5.html

The goal of this note is to discuss some concepts in optimization---algorithms for choosing the weights of a neural network in order to perform well on a given

dailydoseofds.com article

15 Ways to Optimize Neural Network Training (With Implementation)

https://www.dailydoseofds.com/15-ways-to-optimize-neural-network-training-wit…

In this article, let me walk you through 15 different ways you can optimize neural network training, from choosing the right optimizers to managing memory and hardware resources effectively. Setting `num_workers` in the PyTorch DataLoader is an easy way to speed up data loading during training: worker processes prepare batches in parallel, which helps prevent the GPU from waiting for data to be fed to it and thus ensures that your model trains faster. A second setting, `pin_memory`, allocates batches in page-locked host memory. Formally, this is known as **memory pinning**, and it speeds up data transfer from the CPU to the GPU by making the transfer asynchronous; even while the CPU is briefly idle, the GPU (which is the actual accelerator for our model training) always has data to work with. Overall, these two simple settings, `num_workers` and `pin_memory`, can drastically speed up your training procedure, ensuring your model is constantly fed with data and your GPU is fully utilized.
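The two DataLoader settings the snippet names can be sketched as follows; the toy tensor dataset, batch size, and worker count are illustrative assumptions, not values from the article.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for real training data (assumption):
# 256 flattened 28x28 images with integer class labels.
dataset = TensorDataset(torch.randn(256, 784), torch.randint(0, 10, (256,)))

loader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=2,    # worker processes load batches in parallel
    pin_memory=True,  # batches land in page-locked memory, enabling
                      # faster, asynchronous CPU-to-GPU transfer
)

for images, labels in loader:
    # with pinned memory, the copy to the GPU can be non-blocking:
    # images = images.to("cuda", non_blocking=True)
    pass  # training step would go here
```

`pin_memory=True` only pays off when a GPU transfer follows; combined with `non_blocking=True` on the `.to("cuda")` call, the copy overlaps with computation instead of stalling it.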

deeplearning.cs.cmu.edu research

[PDF] Neural Networks: Optimization Part 1 - Deep Learning, CMU

https://deeplearning.cs.cmu.edu/F22/document/slides/lec6.optimization.pdf

A remarkably simple first-order algorithm, that is frequently much more efficient than gradient descent. – And can even be competitive against some of the more
