Summary: Ph.D. === National Chung Hsing University === Department of Applied Mathematics === 87 === Artificial neural networks (ANNs) are also referred to as neural networks, connectionism, adaptive networks, artificial neural systems, neurocomputers, and parallel distributed processors. ANNs are mathematical models of a theorized mind that exploit the massively parallel local processing and distributed representation believed to exist in the brain. The primary intent of ANNs is to explore and reproduce human information-processing tasks such as speech, vision, touch, and knowledge processing. In addition, ANNs are used for data compression, near-optimal solutions to combinatorial optimization problems, pattern matching, system modeling, and function approximation. ANN theory draws on many disciplines, including psychology, mathematics, neuroscience, physics, engineering, computer science, philosophy, biology, and linguistics. It is evident from this diverse listing that ANN technology represents a “universalization” among the sciences working toward a common goal: building an intelligent system. It is equally evident from the listing that an accurate and complete description of the work in all the listed disciplines is an impossible task. In light of this, we focus on ANN paradigms that have applications or application potential.
In this thesis, an improved supervised binary learning model for ANNs and some of its applications are proposed. Learning algorithms are divided into two kinds, supervised and unsupervised, according to the learning method, and the training data may likewise be divided into binary and continuous data.
The perceptron is the earliest practical supervised binary learning model, but it can solve only linearly separable problems. Linearly nonseparable problems have been resolved by the backpropagation learning algorithm using hidden units. Although the convergence of backpropagation has been proved, its learning may still fail. We therefore propose an improved learning structure and algorithm, based on functional-link units, that can learn successfully; a minimal sketch of the functional-link idea follows.
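As an illustration only, and not the thesis's exact algorithm, the Python sketch below augments binary inputs with pairwise product terms, one common choice of functional-link units. XOR is not linearly separable in the raw inputs, but the added product unit makes it so, and the ordinary perceptron rule then succeeds. The helper names functional_link_expand and train_perceptron are ours.

    import itertools
    import numpy as np

    def functional_link_expand(x):
        # Augment a binary pattern with all pairwise products
        # (one simple choice of functional-link units).
        pairs = [x[i] * x[j] for i, j in itertools.combinations(range(len(x)), 2)]
        return np.concatenate([x, pairs])

    def train_perceptron(patterns, targets, lr=1.0, max_epochs=100):
        # Classic perceptron rule on the expanded patterns; returns
        # (w, b) on success, or None if no separating hyperplane is
        # found within max_epochs.
        X = np.array([functional_link_expand(x) for x in patterns], dtype=float)
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(max_epochs):
            errors = 0
            for x, t in zip(X, targets):
                y = 1 if w @ x + b > 0 else 0
                if y != t:
                    w += lr * (t - y) * x
                    b += lr * (t - y)
                    errors += 1
            if errors == 0:
                return w, b
        return None

    # XOR is not linearly separable in the raw inputs, but becomes
    # separable once the product unit x1*x2 is added.
    xor_patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
    xor_targets = [0, 1, 1, 0]
    print(train_perceptron(xor_patterns, xor_targets))

Because the expanded XOR patterns are linearly separable, the perceptron convergence theorem guarantees that this loop terminates with a correct weight vector.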
However, the algorithm requires much more storage for the functional-link units, resulting in much more computation time. It is well known that if a given set of binary training patterns is linearly separable, then successful learning can be achieved by a perceptron alone, without any functional-link unit. Consequently, to reduce the number of functional-link units required for successful learning, it is crucial to have an easy method for checking the linear separability of each set of binary patterns in the training examples. A method for checking the linear separability of a given set of binary training patterns in binary supervised learning of an artificial neural network is also proposed in this thesis; if the training patterns are linearly separable, the mapping can be implemented by a perceptron. One such check is sketched below, and applications of the improved learning algorithm are presented in later chapters.
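One standard way to perform such a check, offered here only as a sketch and not necessarily the thesis's own method, is a linear-programming feasibility problem: the patterns are linearly separable exactly when some weights w and bias b satisfy y_i (w . x_i + b) >= 1 for every pattern (the margin of 1 is just a scaling convention). The Python sketch below uses scipy.optimize.linprog; the function name is_linearly_separable is ours.

    import numpy as np
    from scipy.optimize import linprog

    def is_linearly_separable(patterns, labels):
        # Feasibility LP: look for (w, b) with y_i (w.x_i + b) >= 1
        # for every pattern; such (w, b) exist iff the two classes
        # are linearly separable.
        X = np.asarray(patterns, dtype=float)
        y = np.where(np.asarray(labels) > 0, 1.0, -1.0)
        n, d = X.shape
        # Variables: w (d entries) and b; the objective is irrelevant.
        c = np.zeros(d + 1)
        # Constraint rows: -y_i * [x_i, 1] . [w, b] <= -1
        A = -y[:, None] * np.hstack([X, np.ones((n, 1))])
        rhs = -np.ones(n)
        res = linprog(c, A_ub=A, b_ub=rhs,
                      bounds=[(None, None)] * (d + 1), method="highs")
        return res.success

    # AND is linearly separable; XOR is not.
    pats = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print(is_linearly_separable(pats, [0, 0, 0, 1]))  # True  (AND)
    print(is_linearly_separable(pats, [0, 1, 1, 0]))  # False (XOR)

In this sketch the LP is feasible precisely for separable pattern sets, so the result tells us whether a plain perceptron suffices or functional-link units are needed.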