
Nonlinear optimization training

Figure 16 Root-mean-squared error progression plot for Fletcher nonlinear optimization and back-propagation algorithms during training.
Since this study is restricted to a structural system that is operated in its linear region, the adaptive activation functions approach a linear function after sufficient training. Therefore the general nonlinear optimization problem given by (4.11)-(4.13) can be simplified to a linear case. After the neural network is sufficiently trained, (4.11) can be written as... [Pg.68]
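Equations (4.11)-(4.13) are not reproduced in this excerpt. As a hedged sketch of the linearization argument (the activation form f(s) = a·tanh(b·s), the weights, and all values below are illustrative assumptions, not the study's actual model), the following shows how a network whose adaptive activations settle into their linear region collapses to a single linear map:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network y = W2 @ f(W1 @ x) with a hypothetical adaptive
# activation f(s) = a * tanh(b * s).  When b is small, the activation
# operates in its linear region, f(s) ~= a * b * s, so the network
# reduces to the linear map y ~= (a * b) * W2 @ W1 @ x.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
a, b = 1.5, 1e-3          # small b: activation stays near its linear region

def net(x):
    return W2 @ (a * np.tanh(b * (W1 @ x)))

def linearized(x):
    return (a * b) * (W2 @ W1 @ x)

x = rng.normal(size=3)
print(net(x))         # nearly identical outputs ...
print(linearized(x))  # ... confirming the linear simplification
```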

Controller emulation A simple application in control is the use of neural networks to emulate the operation of existing controllers. It may be that a nonlinear plant requires several tuned PID controllers to operate over the full range of control actions. Or again, an LQ optimal controller has difficulty in running in real time. Figure 10.28 shows how the control signal from an existing controller may be used to train, and finally to be replaced by, a neural network controller. [Pg.361]
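A minimal sketch of this idea, under stated assumptions: the excerpt does not specify a network architecture or training method, so the example below assumes a hypothetical PID control law with made-up gains and uses scikit-learn's MLPRegressor as the emulating network. The network is trained on recorded controller inputs and outputs, then substituted for the controller.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical gains of the "existing controller" to be emulated.
Kp, Ki, Kd = 2.0, 0.5, 0.1

def pid(e, e_int, e_dot):
    # Classical PID law on (error, integrated error, error derivative).
    return Kp * e + Ki * e_int + Kd * e_dot

# Record controller behaviour over a range of operating conditions.
X = rng.uniform(-1.0, 1.0, size=(2000, 3))
u = pid(X[:, 0], X[:, 1], X[:, 2])

# Train a small network on the recorded control signal.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, u)

# The trained network can now stand in for the PID controller.
x_new = np.array([[0.3, -0.2, 0.05]])
print("PID output:", pid(*x_new[0]))
print("NN  output:", net.predict(x_new)[0])
```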

Several nonlinear QSAR methods have been proposed in recent years. Most are based on either ANN or machine-learning techniques. Both back-propagation (BP-ANN) and counterpropagation (CP-ANN) neural networks [33] were used in these studies. Because these techniques involve optimizing many parameters, the analysis is relatively slow. More recently, Hirst reported a simple and fast nonlinear QSAR method in which the activity surface is generated from the activities of training-set compounds using predefined mathematical functions [34]. [Pg.313]
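The excerpt does not state which predefined functions Hirst's method uses, so the sketch below stands in with an assumed Gaussian-weighted surface: a test compound's activity is interpolated from training-set activities in descriptor space. Descriptors, activities, and the bandwidth sigma are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: 50 compounds, 2 descriptors each,
# with known activities (a surrogate function stands in for assay data).
X_train = rng.normal(size=(50, 2))
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1]

def activity_surface(x, X, y, sigma=0.5):
    """Predict activity at descriptor vector x as a Gaussian-weighted
    average of training-set activities (one simple choice of predefined
    function, not necessarily the one used in [34])."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * y) / np.sum(w)

x_test = np.array([0.2, -0.4])
print("predicted activity:", activity_surface(x_test, X_train, y_train))
print("surrogate value:   ", np.sin(x_test[0]) + 0.5 * x_test[1])
```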

Off-line analysis, controller design, and optimization are now performed in the area of dynamics. The largest dynamic simulation has been about 100,000 differential algebraic equations (DAEs) for analysis of control systems. Simulations formulated with process models having over 10,000 DAEs are frequently considered, and detailed training simulators also have models with over 10,000 DAEs. On-line model predictive control (MPC) and nonlinear MPC using first-principles models are seeing a number of industrial applications, particularly in polymeric reactions and processes. At this point, systems with over 100 DAEs have been implemented for on-line dynamic optimization and control. [Pg.87]
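As a toy illustration of the kind of equations involved (a two-equation example, nothing like the industrial models the excerpt describes), the sketch below integrates a small semi-explicit index-1 DAE, x' = f(x, z) coupled with an algebraic constraint 0 = g(x, z), using implicit Euler with a few Newton iterations per step. The particular system and step size are assumptions for illustration.

```python
import numpy as np

# Toy semi-explicit index-1 DAE:
#   x' = -x + z        (differential equation)
#    0 =  z - x**2     (algebraic constraint, i.e. z = x**2)
# Implicit Euler: each step solves the coupled nonlinear system
# for (x_new, z_new) with Newton's method.

def residual(v, x_old, h):
    x, z = v
    return np.array([x - x_old - h * (-x + z),
                     z - x ** 2])

def jacobian(v, h):
    x, z = v
    return np.array([[1.0 + h, -h],
                     [-2.0 * x, 1.0]])

h, t_end = 0.01, 2.0
x, z = 0.5, 0.25          # consistent initial condition: z(0) = x(0)**2
for _ in range(int(t_end / h)):
    v = np.array([x, z])   # predictor: previous values
    for _ in range(5):     # Newton iterations
        v = v - np.linalg.solve(jacobian(v, h), residual(v, x, h))
    x, z = v

# Check: eliminating z gives x' = -x + x**2, whose solution is
# x(t) = x0*exp(-t) / (1 - x0 + x0*exp(-t)).
x0 = 0.5
exact = x0 * np.exp(-t_end) / (1 - x0 + x0 * np.exp(-t_end))
print("implicit Euler:", x, "  exact:", exact)
```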

While these optimization-based approaches have yielded very useful results for reactor networks, they have a number of limitations. First, proper problem definition for reactor networks is difficult, given the uncertainties in the process and the need to consider the interaction of other process subsystems. Second, all of the above-mentioned studies formulated nonconvex optimization problems for the optimal network structure and relied on local optimization tools to solve them. As a result, only locally optimal solutions could be guaranteed. Given the likelihood of extreme nonlinear behavior, such as bifurcations and multiple steady states, even locally optimal solutions can be quite poor. In addition, superstructure approaches are usually plagued by the question of completeness of the network, as well as the possibility that a better network may have been overlooked by a limited superstructure. This problem is exacerbated by reaction systems with many networks that have identical performance characteristics. (For instance, a single PFR can be approximated by a large train of CSTRs.) In most cases, the simpler network is clearly more desirable. [Pg.250]
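The CSTR-train remark can be checked numerically. For a first-order reaction with rate constant k and total residence time tau, a PFR gives conversion 1 - exp(-k*tau), while N equal CSTRs in series give 1 - (1 + k*tau/N)^(-N), which converges to the PFR value as N grows. A short sketch with illustrative values of k and tau:

```python
import numpy as np

# First-order reaction A -> B, rate constant k, total residence time tau.
k, tau = 1.0, 2.0

# Plug-flow reactor (PFR) conversion.
x_pfr = 1.0 - np.exp(-k * tau)

# N equal CSTRs in series: each stage reduces concentration
# by the factor 1 / (1 + k*tau/N).
for N in (1, 2, 5, 20, 100):
    x_cstr = 1.0 - (1.0 + k * tau / N) ** (-N)
    print(f"N = {N:4d}: CSTR train conversion = {x_cstr:.4f}"
          f"  (PFR = {x_pfr:.4f})")
```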

Compared with linear and nonlinear regression methods, the advantage of ANN is its ability to model a nonlinear function without assuming the form of that function beforehand, and the trained ANN can then be used to predict unknown samples. ANN has therefore been widely used in SAR data processing. But if ANN is used alone, the predictions are sometimes unreliable: experimental results indicate that some test samples predicted by ANN to be optimal are in fact not truly optimal. This is a typical example of so-called overfitting, which makes the predictions of a trained ANN insufficiently reliable. Since the data files in many practical problems have strong noise and non-uniform sample-point distributions, the overfitting problem can lead to even more serious mistakes in such problems. [Pg.195]
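The overfitting point can be made concrete with a small sketch on synthetic data (illustrative, not SAR data): an oversized polynomial fit reproduces noisy training samples almost exactly yet predicts held-out points from the same underlying relationship poorly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples of a simple underlying relationship.
x = np.sort(rng.uniform(-1, 1, size=20))
y = np.sin(np.pi * x) + rng.normal(scale=0.2, size=x.size)

# Held-out test points from the same relationship (noise-free target).
x_test = np.linspace(-0.9, 0.9, 50)
y_test = np.sin(np.pi * x_test)

# A modest model generalizes; an oversized one chases the noise.
for degree in (3, 15):
    coeffs = np.polyfit(x, y, degree)
    rmse_train = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    rmse_test = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree:2d}: train RMSE = {rmse_train:.3f}, "
          f"test RMSE = {rmse_test:.3f}")
```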

