principle is intended for relatively large data sets, namely, when the ratio n/h is large and the empirical risk converges close to the true risk. However, if n/h is small, namely, when the ratio n/h is less than 20, then a modification of the ERM principle is necessary. The inductive principle called SRM provides a formal mechanism for choosing a model with optimal complexity in finite and small data sets. According to SRM, solving a learning problem with a finite data set requires a priori specification of a structure on a set of approximating functions. For example, a set of functions S1 is a subset of S2, and S2 is a subset of S3. The set of approximating functions S1 has the lowest complexity, but the complexity increases with each new superset S2, S3, … , Sk. A simplified graphical representation of the structure is given in Figure 4.6.

Figure 4.6. Structure of a set of approximating functions.

For a given data set, the optimal model estimation is performed in two steps:

1. selecting an element of a structure having optimal complexity, and

2. estimating the model based on the set of approximating functions defined in a selected element of the structure.

Through these two steps the SRM provides a quantitative characterization of the trade-off between the complexity of the approximating functions and the quality of fitting the training data. As the complexity increases (the index k of Sk grows), the minimum empirical risk decreases and the quality of fitting the data improves. The estimated true risk, however, measured on an additional test data set, has a convex form: at some point it begins to move in the direction opposite to that of the empirical risk, as shown in Figure 4.7. The SRM chooses the element of the structure that yields the minimal guaranteed bound on the true risk.

Figure 4.7. Empirical and true risk as a function of h (model complexity).

In practice, to implement the SRM approach, it is necessary to be able to

1. calculate or estimate the VC dimension for any element Sk of the structure, and then

2. minimize the empirical risk for each element of the structure (a simplified sketch of both steps is given below).
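As an informal illustration of these steps, the following sketch builds a nested structure of polynomial models of increasing degree (standing in for S1, S2, … , Sk), minimizes the empirical risk within each element, and uses the error on a held-out sample as a practical proxy for the guaranteed bound on the true risk. The data, the polynomial structure, and the held-out proxy are assumptions made only for this example; the formal SRM procedure relies on VC-based bounds rather than a validation set.

```python
import numpy as np

# Illustrative sketch only: a nested structure S1, S2, ..., SK of polynomial
# models of increasing degree, with a held-out sample used as a practical
# stand-in for the guaranteed bound on the true risk (the formal SRM bound
# is based on the VC dimension, not on validation error).

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, 60)        # noisy training data

# Split into a fitting part and a held-out part.
x_fit, y_fit = x[:40], y[:40]
x_val, y_val = x[40:], y[40:]

def empirical_risk(coeffs, xs, ys):
    """Mean squared error of a polynomial model on the given sample."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

results = []
for degree in range(1, 11):                           # elements S1 ... S10
    coeffs = np.polyfit(x_fit, y_fit, degree)         # minimize empirical risk within Sk
    results.append((degree,
                    empirical_risk(coeffs, x_fit, y_fit),   # decreases as k grows
                    empirical_risk(coeffs, x_val, y_val)))  # convex; proxy for true risk

best_degree = min(results, key=lambda r: r[2])[0]     # element of optimal complexity
print("selected element of the structure: degree", best_degree)
```

The fitting error decreases monotonically with the degree, while the held-out error typically follows the convex shape of Figure 4.7, so the selected degree corresponds to the element of the structure with the smallest estimated true risk.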

For most practical inductive-learning methods that use nonlinear approximating functions, finding the VC dimension analytically is difficult, as is the nonlinear optimization of the empirical risk. Therefore, rigorous application of the SRM principle can be not only difficult but, in many cases, impossible with nonlinear approximations. This does not, however, imply that the SLT is impractical. There are various heuristic procedures that are often used to implement SRM implicitly. Examples of such heuristics include early stopping rules and weight initialization, which are often used in artificial neural networks. These heuristics will be explained together with different learning methods in the following chapters. The choice of an SRM-optimization strategy suitable for a given learning problem depends on the type of approximating functions supported by the learning machine. There are three commonly used optimization approaches:

1. Stochastic Approximation (or Gradient Descent). Given an initial estimate of the approximating function's parameters w, the optimal parameter values are found by updating them repeatedly. At each step the gradient of the risk function is computed, and the parameters are moved a small amount in the direction of steepest descent along the risk (error) function (a minimal sketch of this approach follows the list).

2. Iterative Methods. Parameter values w are estimated iteratively so that at each iteration the value of the empirical risk is decreased. In contrast to stochastic approximation, iterative methods do not use gradient estimates; instead, they rely on a particular form of the approximating functions that admits a specialized iterative parameter-estimation procedure.

3. Greedy Optimization. The greedy method is used when the set of approximating functions is a linear combination of some basis functions. Initially, only the first term of the approximating function is used and the corresponding parameters are optimized. Optimization corresponds to minimizing the differences between the training data set and the estimated model. This term is then held fixed, and the next term is optimized. The optimization process is repeated until values are found for all parameters w and for all terms in the approximating functions.
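The first of these approaches can be made concrete with a short sketch: plain gradient descent on the empirical risk of a simple linear approximating function. The data, the learning rate, and the number of iterations are arbitrary choices made only for illustration, not values prescribed by the SRM framework.

```python
import numpy as np

# Gradient descent on the empirical risk R_emp(w) = mean((X @ w - y)^2)
# for a linear approximating function f(x) = w0 + w1 * x.

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, 50)      # data generated around y = 2 + 3x

X = np.column_stack([np.ones_like(x), x])       # design matrix with an intercept column
w = np.zeros(2)                                 # initial estimate of the parameters
learning_rate = 0.1

for step in range(2000):
    residual = X @ w - y
    gradient = 2.0 * X.T @ residual / len(y)    # gradient of the empirical risk
    w -= learning_rate * gradient               # small move along the steepest descent

print("estimated parameters:", w)               # should end up close to (2, 3)
```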

These typical optimization approaches, as well as other, more specialized techniques, have one or more of the following problems:

1. Sensitivity to Initial Conditions. The final solution is very sensitive to the initial values of the approximating function's parameters.

2. Sensitivity to Stopping Rules. Nonlinear approximating functions often have regions that are very flat, where some optimization algorithms can become “stuck” for a long time (for a large number of iterations). With poorly designed stopping rules, the optimization algorithm can falsely interpret these regions as local minima.

3. Sensitivity to Multiple Local Minima. Nonlinear functions may have many local minima, and optimization methods can find, at best, only one of them, with no guarantee of reaching the global minimum. Various heuristics can be used to explore the solution space and move from a local solution toward a globally optimal one.
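A common heuristic for the third problem is to restart a local optimizer from several random initial points and keep the best local solution found. The sketch below applies this multi-start idea to a one-dimensional risk surface with several local minima; the surface, the number of restarts, and the numerical gradient are illustrative assumptions, not part of the SRM framework itself.

```python
import numpy as np

def risk(w):
    """A one-dimensional 'risk' surface with several local minima."""
    return np.sin(3 * w) + 0.1 * w ** 2

def gradient(w, eps=1e-6):
    """Numerical gradient, so the sketch needs no analytic derivative."""
    return (risk(w + eps) - risk(w - eps)) / (2 * eps)

def local_descent(w0, lr=0.01, steps=500):
    """Plain gradient descent from one starting point; finds a local minimum."""
    w = w0
    for _ in range(steps):
        w -= lr * gradient(w)
    return w

rng = np.random.default_rng(2)
starts = rng.uniform(-4, 4, 10)                 # several random initializations
solutions = [local_descent(w0) for w0 in starts]
best = min(solutions, key=risk)                 # keep the best local solution found
print("best solution found: w =", best, "risk =", risk(best))
```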

Working with finite data sets, SLT reaches several conclusions that are important guidelines in a practical implementation of data-mining techniques. Let us briefly explain two of these useful principles. First, when solving a problem of inductive learning based on finite information, one should keep in mind the following general commonsense principle: Do not attempt to solve a specified problem by indirectly solving a harder general problem as an intermediate step. We are interested in solving a specific task, and we should solve it directly. Following SLT results, we stress that for estimation with finite samples, it is always better to solve a specific learning problem rather than attempt a general one. Conceptually, this means that posing the problem directly will then require fewer samples for a specified level of accuracy in the solution. This point, while obvious, has not been clearly stated in most of the classical textbooks on data analysis.

Second, there is a general belief that for inductive-learning methods with finite data sets, the best performance is provided by a model of optimal complexity, where the optimization is based
