Explanation-Based Neural Network Learning: A Lifelong Learning Approach

By Sebastian Thrun

Lifelong learning addresses situations in which a learner faces a sequence of different learning tasks, offering the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge gathered in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess.
'The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.'
From the Foreword by Tom M. Mitchell.



Similar artificial intelligence books

Stochastic Local Search: Foundations & Applications (The Morgan Kaufmann Series in Artificial Intelligence)

Stochastic local search (SLS) algorithms are among the most prominent and successful techniques for solving computationally hard problems in many areas of computer science and operations research, including propositional satisfiability, constraint satisfaction, routing, and scheduling. SLS algorithms have also become increasingly popular for solving challenging combinatorial problems in many application areas, such as e-commerce and bioinformatics.

Neural Networks for Pattern Recognition

This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modeling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models.

Handbook of Temporal Reasoning in Artificial Intelligence, Volume 1

This collection represents the primary reference work for researchers and students in the area of Temporal Reasoning in Artificial Intelligence. Temporal reasoning has a vital role to play in many areas, particularly Artificial Intelligence. Yet, until now, there has been no single volume collecting together the breadth of work in this area.

Programming Multi-Agent Systems in AgentSpeak using Jason

Jason is an Open Source interpreter for an extended version of AgentSpeak – a logic-based agent-oriented programming language – written in Java™. It enables users to build complex multi-agent systems that are capable of operating in environments previously considered too unpredictable for computers to handle.

Extra info for Explanation-Based Neural Network Learning: A Lifelong Learning Approach

Example text

(See the Workshop on Combining Inductive and Analytical Learning [209] and the International Symposium on Integrating Knowledge and Neural Heuristics [71].) These approaches differ in the representations they employ (e.g., first-order versus propositional domain theories, or discrete-valued versus real-valued target functions) and in the particular mechanisms for interleaving inductive and analytical processes. Mechanisms for combining induction and analysis can be grouped roughly into three categories: • Analytical, then inductive. Here each training example is first generalized analytically, and inductive methods are then applied to the results.
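The first category can be illustrated with a toy sketch. Everything here is hypothetical (the invariance rule, the feature layout, and the nearest-centroid learner are invented for illustration): each example is first generalized analytically by a domain theory into a set of equivalent examples, and a simple inductive learner is then fit to the expanded set.

```python
# Toy sketch of "analytical, then inductive" (all names hypothetical).

def analytical_generalization(example):
    """Hypothetical domain theory: the label is invariant under small
    perturbations of the second (irrelevant) feature, so each observed
    example is generalized into a small set of equivalent examples."""
    x, label = example
    return [((x[0], x[1] + d), label) for d in (-1.0, 0.0, 1.0)]

def fit_nearest_centroid(examples):
    """Simple inductive step: compute one centroid per class."""
    sums, counts = {}, {}
    for (x, label) in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x[0], sy + x[1])
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, x):
    """Classify by nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - x[0]) ** 2
                             + (centroids[lab][1] - x[1]) ** 2)

raw = [((0.0, 5.0), "neg"), ((4.0, 5.0), "pos")]
expanded = [g for ex in raw for g in analytical_generalization(ex)]
centroids = fit_nearest_centroid(expanded)
print(predict(centroids, (3.5, 2.0)))  # -> pos
```

The analytical step injects the domain theory's invariance into the data before any induction happens, which is exactly what lets the inductive step generalize from so few raw examples.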

2. If the domain is stochastic, the observed prediction error may be large even though the network perfectly predicts the mean outcome. In such cases LOB* is expected to underestimate the slope accuracy; if the noise is significant, the utility of LOB* can be questionable.

EBNN: for each training example do begin
1. Determine the target value (inductive training information).
2. If no appropriate domain theory is available, goto 6.
3. Explain the training example in terms of the domain theory.
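The core idea of fitting both target values and target slopes can be sketched in miniature. This is an illustration of the general "value plus slope" loss only, not Thrun's implementation: the quadratic model, the training data, and the learning-rate choice are all invented.

```python
# Minimal sketch of value-plus-slope fitting (not Thrun's implementation).
# Each training point carries a target value AND a target slope (the
# latter would come from the domain-theory explanation in EBNN); both
# enter the squared-error loss minimized by gradient descent.
# Model: quadratic f(x) = a*x^2 + b*x + c, so f'(x) = 2*a*x + b.

def fit_values_and_slopes(points, lr=0.01, steps=20000):
    """points: list of (x, target_value, target_slope)."""
    a = b = c = 0.0
    for _ in range(steps):
        ga = gb = gc = 0.0
        for x, y, s in points:
            # value error: f(x) - y
            ev = a * x * x + b * x + c - y
            ga += 2 * ev * x * x
            gb += 2 * ev * x
            gc += 2 * ev
            # slope error: f'(x) - s
            es = 2 * a * x + b - s
            ga += 2 * es * 2 * x
            gb += 2 * es
        a -= lr * ga
        b -= lr * gb
        c -= lr * gc
    return a, b, c

# Training data drawn from y = x^2, with analytically derived slopes 2x.
data = [(x, x * x, 2 * x) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
a, b, c = fit_values_and_slopes(data)
print(round(a, 3), round(b, 3), round(c, 3))  # -> 1.0 0.0 0.0
```

The slope terms act as extra constraints per training point, which is why EBNN can generalize accurately from fewer examples than a purely inductive fit of the values alone.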

As is easy to see, slopes carry information about how to generalize this training instance. Even though the training instance happens to be liftable, it can be seen from the slope that a change of the feature is_light will decrease the liftability; a change of the feature open_vessel, however, will have approximately no effect on the liftability. Such domain theories are much more likely to yield helpful slopes. Equipped with a neural network domain theory, let us now turn to the problem of training the target network f.
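One way to see what such slopes look like is a finite-difference sketch. The "domain theory" below is a stand-in function invented for illustration (EBNN actually backpropagates through the domain-theory networks); only the feature names is_light and open_vessel mirror the example above.

```python
# Hedged sketch: reading slopes off a domain theory by finite differences.
# The liftability function is invented; it depends strongly on is_light
# and, by construction, not at all on open_vessel.

def domain_theory(features):
    """Hypothetical liftability predictor."""
    return 0.9 * features["is_light"] + 0.0 * features["open_vessel"]

def slopes(f, features, eps=1e-4):
    """Estimate the partial derivative of f w.r.t. each input feature."""
    base = f(features)
    grads = {}
    for name in features:
        bumped = dict(features)
        bumped[name] += eps
        grads[name] = (f(bumped) - base) / eps
    return grads

x = {"is_light": 1.0, "open_vessel": 1.0}
g = slopes(domain_theory, x)
print(g["is_light"] > 0.5, abs(g["open_vessel"]) < 1e-6)  # -> True True
```

The large slope on is_light and the near-zero slope on open_vessel are precisely the generalization hints described above: the target network is told which feature directions matter for liftability.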
