Sensitivity Analysis for Neural Networks (Natural Computing Series)

By Daniel S. Yeung, Ian Cloete, Daming Shi, Wing W. Y. Ng

Artificial neural networks are used to model systems that receive inputs and produce outputs. The relationships between the inputs and outputs and the representation parameters are critical issues in the design of related engineering systems, and sensitivity analysis concerns methods for analyzing these relationships. Perturbations of neural networks are caused by machine imprecision, and they can be simulated by embedding disturbances in the original inputs or connection weights, allowing us to study the characteristics of a function under small perturbations of its parameters.
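To make the idea concrete, here is a minimal sketch (not from the book) of perturbation-based sensitivity analysis on a toy network: small Gaussian disturbances are injected into the inputs and, separately, into the connection weights, and the mean squared deviation of the output is measured. The tanh/linear architecture, the noise scale, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    """Forward pass of a toy one-hidden-layer network: tanh hidden, linear output."""
    return np.tanh(x @ W1) @ W2

# A toy network and a single input point.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
x = rng.normal(size=(1, 4))
y = mlp(x, W1, W2)

sigma, n_trials = 1e-3, 1000  # perturbation scale and Monte Carlo sample size

# Input sensitivity: mean squared output deviation under x + delta.
dev_in = np.mean([
    (mlp(x + rng.normal(scale=sigma, size=x.shape), W1, W2) - y) ** 2
    for _ in range(n_trials)
])

# Weight sensitivity: the same experiment with disturbances on W1.
dev_w = np.mean([
    (mlp(x, W1 + rng.normal(scale=sigma, size=W1.shape), W2) - y) ** 2
    for _ in range(n_trials)
])

print(f"input sensitivity ~ {dev_in:.3e}, weight sensitivity ~ {dev_w:.3e}")
```

Comparing the two deviations indicates whether this network's output is more fragile to input noise or to weight imprecision at the chosen operating point.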

This is the first book to present a systematic description of sensitivity analysis methods for artificial neural networks. It covers sensitivity analysis of multilayer perceptron neural networks and radial basis function neural networks, two widely used models in the machine learning field. The authors examine the applications of such analysis in tasks such as feature selection, sample reduction, and network optimization. The book will be useful for engineers applying neural network sensitivity analysis to solve practical problems, and for researchers interested in foundational problems in neural networks.

Similar artificial intelligence books

Stochastic Local Search: Foundations & Applications (The Morgan Kaufmann Series in Artificial Intelligence)

Stochastic local search (SLS) algorithms are among the most prominent and successful techniques for solving computationally hard problems in many areas of computer science and operations research, including propositional satisfiability, constraint satisfaction, routing, and scheduling. SLS algorithms have also become increasingly popular for solving hard combinatorial problems in many application areas, such as e-commerce and bioinformatics.

Neural Networks for Pattern Recognition

This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modeling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models.

Handbook of Temporal Reasoning in Artificial Intelligence, Volume 1

This collection represents the primary reference work for researchers and students in the area of temporal reasoning in artificial intelligence. Temporal reasoning has an essential role to play in many areas, particularly artificial intelligence. Yet, until now, there has been no single volume collecting together the breadth of work in this area.

Programming Multi-Agent Systems in AgentSpeak using Jason

Jason is an open source interpreter for an extended version of AgentSpeak – a logic-based agent-oriented programming language – written in Java™. It enables users to build complex multi-agent systems that are capable of operating in environments previously considered too unpredictable for computers to handle.

Additional info for Sensitivity Analysis for Neural Networks (Natural Computing Series)

Example text

This is an important problem to address because the performance and training of an RBF network depend very much on these parameters.

Related Work. There are three ways to construct an RBF network, namely clustering, pruning, and critical vector learning. Bishop (1991) and Xu (1998) followed the clustering method, in which the training examples are grouped and each neuron is then assigned to a cluster. The pruning method, as in Chen et al. (1991) and Mao (2002), creates a neuron for each training example and then prunes the hidden neurons by example selection.
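As a hedged illustration of the clustering route described above (assuming scikit-learn is available), the sketch below groups the training inputs with k-means, places one hidden unit per cluster, and sets each unit's width from its cluster's spread. The width heuristic is an assumption for demonstration, not the method of Bishop (1991) or Xu (1998).

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_centers_widths(X, n_units, seed=0):
    """Pick RBF centers by clustering; widths from within-cluster spread."""
    km = KMeans(n_clusters=n_units, random_state=seed, n_init=10).fit(X)
    centers = km.cluster_centers_
    widths = np.array([
        X[km.labels_ == i].std() + 1e-8  # avoid zero width for tiny clusters
        for i in range(n_units)
    ])
    return centers, widths

X = np.random.default_rng(0).normal(size=(200, 4))
centers, widths = rbf_centers_widths(X, n_units=10)
print(centers.shape, widths.shape)  # (10, 4) (10,)
```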

Each hidden neuron produces an activation, typically a Gaussian kernel:

\[ h_i = \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^2}{2\sigma_i^2} \right), \qquad i = 1, 2, \ldots, K, \]

where $\mathbf{c}_i$ and $\sigma_i^2$ are the center and width of the Gaussian basis function of the $i$th hidden unit, respectively. The units in the output layer have interconnections with all the hidden units, so the $j$th output is

\[ y_j = \sum_{i=1}^{K} w_{ij} h_i, \]

where $\mathbf{h} = (h_1, h_2, \ldots, h_K)$ is the input vector from the hidden layer and $w_{ij}$ is the interconnection weight between the $j$th output neuron and the $i$th hidden neuron.
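As a hedged illustration of these two equations, the sketch below computes the Gaussian hidden activations and the linear output layer in NumPy; the shapes, names, and example values are assumptions for demonstration, not the book's code.

```python
import numpy as np

def rbf_forward(x, centers, widths, W):
    """x: (d,), centers: (K, d), widths: (K,), W: (K, m) -> y: (m,)."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 per unit
    h = np.exp(-sq_dist / (2.0 * widths ** 2))     # Gaussian kernel activations
    return h @ W                                   # y_j = sum_i w_ij * h_i

rng = np.random.default_rng(0)
K, d, m = 10, 4, 3
y = rbf_forward(rng.normal(size=d), rng.normal(size=(K, d)),
                np.full(K, 0.5), rng.normal(size=(K, m)))
print(y.shape)  # (3,)
```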

By the law of large numbers, when the number of input features is not too small, $\phi_j(\mathbf{x})$ would have a lognormal distribution.

Characteristics of the Error Bound. From Eq. (7), one may notice that $R^*_{SM}$ consists of three major components: the training error ($R_{emp}$), the ST-SM ($E_{T_Q}(\Delta y)^2$), and the constants. The constants $A$ and $\varepsilon$ are preselected once the confidence of the bound $(1 - \eta)$ and the training dataset are fixed. Moreover, the constant $B$ in $\varepsilon$ can be preselected once the classifier type is selected, by fixing the maximum classifier output bound.
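For orientation, a localized generalization error bound with these three components typically takes the following schematic shape; this is a hedged reconstruction consistent with the component list above, not a verbatim copy of the book's Eq. (7), and the exact constants may differ:

```latex
% Schematic form only (assumed, not verbatim from the book): R*_SM combines
% the training error, the stochastic sensitivity term, and constants A, B
% fixed by the confidence (1 - eta) and the training set size N.
\[
  R^{*}_{SM} \;=\; \Big( \sqrt{R_{emp}}
      \;+\; \sqrt{E_{T_Q}\!\big((\Delta y)^2\big)}
      \;+\; A \Big)^{2} \;+\; \varepsilon,
  \qquad
  \varepsilon \;=\; B \sqrt{\frac{\ln \eta}{-2N}}.
\]
```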
