Big Chemical Encyclopedia


Nonlinear transfer function

Kolmogorov's theorem thus effectively states that a three-layer net with N(2N + 1) neurons using continuously increasing nonlinear transfer functions can compute any continuous function of N variables. Unfortunately, the theorem tells us nothing about how to select the required transfer functions or set the weights in our net. [Pg.549]

It is interesting to note that the data processing that occurs during the operation of a PCR model is just a special case of that of an ANN feed-forward network, where the input weights (W) are the PC loadings (P, in Equation 12.19), the output weights (W2) are the y loadings (q, in Equation 12.30), and there is no nonlinear transfer function in the hidden layer [67]. [Pg.388]

Fig. 10.8. Stacked plate system and its transfer function. (a) A fivefold stacked-plate vibration isolation system, with four sets of viton pieces between metal plates. (b) Solid curve, measured transfer function. Dashed curve, transfer function calculated with a constant stiffness. Dash-dotted curve, transfer function calculated with the measured nonlinear transfer function. (After Okano et al., 1987.)
The architecture of a common NN is shown in Fig. 10.8. The design depends on the types of sensor responses, on their dynamic range, drift, and so on. In short, it depends on all the complexities of the transfer functions of different types of sensors. Once again there is an input layer containing m input elements. It is massively interconnected to the n nodes of the next hidden layer, at which the weighting factors Wn operate on the signal. There can be more than one hidden layer, if necessary. The connection to the output layer has the form of a nonlinear transfer function f_hid, for example,... [Pg.325]

The functions written in capital letters in (4.2.13)-(4.2.16) are the Fourier transforms of the functions written in small letters in (4.2.5)-(4.2.8). The superscript s indicates that the nonlinear transfer functions K^s(ω1, ..., ωn) in (4.2.15) and (4.2.16) are the Fourier transforms of impulse-response functions with indistinguishable time arguments, where the causal time order t1 > ... > tn is not respected. These transfer functions are invariant against permutation of frequency arguments. Equivalent expressions for the Fourier transforms of impulse-response functions with time-ordered arguments cannot readily be derived. [Pg.132]

This function is continuous, varies monotonically from a lower bound of 0 to an upper bound of 1, and has a continuous derivative. The transfer function in the output layer can be different from that used in the rest of the network. Often it is linear, f(NET) = NET, since this speeds up the training process. On the other hand, a sigmoid function has a high level of noise immunity, a feature that can be very useful. The majority of current CNNs use a nonlinear transfer function such as a sigmoid, since it provides a number of advantages. In theory, however, any nonpolynomial function that is bounded and differentiable (at least piecewise) can be used as a transfer function. [Pg.23]

In the case of an ANN, nonlinear transfer functions are used in the hidden neurons. The output of the k-th hidden neuron having a log-sigmoid transfer function can be expressed as follows ... [Pg.41]
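Since the excerpt stops short of the formula itself, a minimal sketch of a log-sigmoid hidden neuron may help; the input values, weights, and bias below are hypothetical.

```python
import math

def logsig(x):
    # Log-sigmoid transfer function: maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def hidden_neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs, passed through the nonlinear transfer function
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return logsig(net)

# Hypothetical two-input hidden neuron
print(hidden_neuron_output([0.5, -1.2], [0.8, 0.3], 0.1))
```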

The net signal is then modified by a so-called transfer function and sent as output to other neurons. The most widely used transfer function is sigmoidal: it has two plateau areas with the values zero and one, and between these an area in which it increases nonlinearly. Figure 9-15 shows an example of a sigmoidal transfer function. [Pg.453]

Transfer function models are linear in nature, but chemical processes are known to exhibit nonlinear behavior. One could use the same type of optimization objective as given in Eq. (8-26) to determine parameters in nonlinear first-principle models, such as Eq. (8-3) presented earlier. Also, nonlinear empirical models, such as neural network models, have recently been proposed for process applications. The key to the use of these nonlinear empirical models is having high-quality process data, which allows the important nonlinearities to be identified. [Pg.725]
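As a concrete illustration of determining parameters in a nonlinear model by minimizing a least-squares objective (in the spirit of Eq. (8-26)), here is a hedged sketch: the first-order step-response model, the synthetic data, and the crude grid-search optimizer are all stand-ins for a real identification workflow.

```python
import math

def step_response(t, K, tau):
    # First-order step response y(t) = K * (1 - exp(-t / tau))
    return K * (1.0 - math.exp(-t / tau))

def sse(params, data):
    # Least-squares objective: sum of squared prediction errors
    K, tau = params
    return sum((y - step_response(t, K, tau)) ** 2 for t, y in data)

# Synthetic "process data" generated with K = 2.0, tau = 5.0
data = [(t, step_response(t, 2.0, 5.0)) for t in range(21)]

# Crude grid search; a real fit would use a nonlinear optimizer
# such as Gauss-Newton or Levenberg-Marquardt
best = min(((K / 10.0, tau / 10.0)
            for K in range(10, 31) for tau in range(20, 81)),
           key=lambda p: sse(p, data))
print(best)  # -> (2.0, 5.0)
```

Because the true parameters lie on the grid, the search recovers them exactly; with noisy plant data the objective surface is what makes high-quality data so important.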

A key feature of MPC is that a dynamic model of the process is used to predict future values of the controlled outputs. There is considerable flexibility concerning the choice of the dynamic model. For example, a physical model based on first principles (e.g., mass and energy balances) or an empirical model could be selected. Also, the empirical model could be a linear model (e.g., transfer function, step response model, or state space model) or a nonlinear model (e.g., neural net model). However, most industrial applications of MPC have relied on linear empirical models, which may include simple nonlinear transformations of process variables. [Pg.740]

The significance of instrument bandwidth and modulation transfer function was discussed in connection with Equation (3) to characterize the roughness of nominally smooth surfaces. The mechanical (stylus) profilometer has a nonlinear response and, strictly speaking, has no modulation transfer function because of this. The smallest spatial wavelength which the instrument can resolve, λmin, is given in terms of the stylus radius r and the amplitude a of the structure as... [Pg.720]

Neural networks can also be classified by their neuron transfer function, which is typically either linear or nonlinear. The earliest models used linear transfer functions, wherein the output values were continuous. Linear functions are not very useful for many applications because most problems are too complex to be manipulated by simple multiplication. In a nonlinear model, the output of the neuron is a nonlinear function of the sum of the inputs. The output of a nonlinear neuron can have a very complicated relationship with the activation value. [Pg.4]
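One way to see the limitation of purely linear transfer functions is that stacking linear layers collapses into a single linear map, so depth adds nothing; this is an illustrative sketch with hypothetical weights.

```python
def matvec(W, x):
    # Multiply matrix W (list of rows) by vector x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(A, B):
    # Matrix product A @ B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, 2.0], [0.5, -1.0]]   # hypothetical layer-1 weights
W2 = [[0.3, 0.7], [-0.2, 0.4]]   # hypothetical layer-2 weights
x = [1.5, -0.5]

two_layers = matvec(W2, matvec(W1, x))   # linear layer applied twice
one_layer = matvec(matmul(W2, W1), x)    # single equivalent linear layer
print(two_layers, one_layer)             # agree up to rounding
```

Inserting a nonlinear transfer function between the two layers breaks this equivalence, which is what gives multilayer networks their extra representational power.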

Of the 112 equations, about 12% were ordinary differential equations, 75% were algebraic equations, 10% were integral equations, and 3% were transfer functions. About 40% were nonlinear equations. The detailed list of equations and variables is far too long to be repeated here; the equations take fourteen pages to list in the original reference. The interested reader will find the derivation of the equations carefully described and the equations themselves clearly arranged by subsystem in the original reference. [Pg.228]

A number of techniques have been proposed. We will discuss only the more conventional methods that are widely used in the chemical and petroleum industries. Only the identification of linear transfer-function models will be discussed. Nonlinear identification is beyond the scope of this book. [Pg.503]

ATV is a closedloop test, so the process will not drift away from the setpoint. This keeps the process in the linear region where we are trying to get transfer functions. This is precisely why the method works well on highly nonlinear processes. The process is never pushed very far away from the steadystate conditions. [Pg.521]
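For context, the ATV (relay feedback) test estimates the ultimate gain from the relay height h and the measured output amplitude a via the describing-function result Ku = 4h/(πa); the test numbers below, and the Tyreus-Luyben PI settings derived from them, are purely illustrative.

```python
import math

def atv_ultimate_gain(h, a):
    # Describing-function estimate of the ultimate gain from a relay test:
    # h = relay height, a = amplitude of the process-output oscillation
    return 4.0 * h / (math.pi * a)

def tyreus_luyben_pi(Ku, Pu):
    # Tyreus-Luyben PI settings from the ultimate gain Ku and period Pu
    return Ku / 3.2, 2.2 * Pu

Ku = atv_ultimate_gain(h=0.05, a=0.01)   # hypothetical relay-test results
Kc, tauI = tyreus_luyben_pi(Ku, Pu=8.0)
print(Ku, Kc, tauI)
```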

An analysis of the transfer function of this system can be made using the matrix method described by Okano et al. (1987). However, the stiffness of the rubber pieces is highly nonlinear. Okano et al. (1987) found that the measured transfer function does not fit theoretical predictions based on a constant stiffness. A nonlinear elastic behavior must be taken into account. Another problem with the metal-stack system is that the resonance frequency is around... [Pg.249]

For k → ∞, B(ω) approaches unity, provided that |1 − t(ω)| < 1. In this case, ô(k) is just the object estimated by inverse filtering. For k finite, the inverse-filter estimate is modified by a factor that suppresses frequencies for which t(ω) is small. The larger k is, the less is this suppression. For typical transfer functions t(ω) that suppress high frequencies, the factor B(ω) controls the high-frequency content of ô(k). In the spectrum domain, it is also possible to derive simple expressions for filters y(x) that are fully equivalent to an arbitrary number of relaxation iterations. Blass and Halsey (1981) have done so, but the highly useful nonlinear modifications of these methods cannot be incorporated. [Pg.84]
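Assuming the standard van Cittert form of the relaxation iteration, the factor discussed above is B_k(ω) = 1 − (1 − t(ω))^(k+1); this sketch simply shows how it behaves for strong and weak values of t(ω) as k grows.

```python
def suppression_factor(t_w, k):
    # Frequency-domain factor after k relaxation iterations (van Cittert form):
    # B_k = 1 - (1 - t(w))**(k + 1), which -> 1 as k grows when |1 - t(w)| < 1
    return 1.0 - (1.0 - t_w) ** (k + 1)

for t_w in (0.9, 0.1):          # strong vs weak transfer at this frequency
    for k in (1, 10, 100):
        print(t_w, k, suppression_factor(t_w, k))
```

Frequencies where t(ω) is small stay suppressed until k is large, which is exactly the behavior the text describes.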







© 2024 chempedia.info