Big Chemical Encyclopedia

Parameter vector

Transfer vector for parameters; vector of central difference increments for calculating derivatives w.r.t. the parameters; vector of central difference increments for... [Pg.252]

Let us consider that equivalent planar OSDs are completely characterized by a parameter vector p = ..., where ℓ is the ligament (the distance between the bottom of the OSD and the... [Pg.172]

Let us consider that the number of echoes M and the incident wavelet (e.g., a normalized corner echo) are known. The least squares approach for estimating the parameter vectors x and ... requires the solution of the nonlinear least squares problem ... [Pg.175]
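
A minimal sketch of this kind of fit, assuming a hypothetical Gaussian pulse as the known wavelet and M = 2 echoes, each parameterized by an amplitude and a delay; the function names and numbers are illustrative, not from the source:

```python
import numpy as np
from scipy.optimize import least_squares

def wavelet(t):
    """Stand-in for the known incident wavelet (a Gaussian pulse here)."""
    return np.exp(-0.5 * (t / 0.1) ** 2)

def model(params, t, M):
    """Superposition of M scaled and delayed copies of the wavelet."""
    y = np.zeros_like(t)
    for m in range(M):
        amp, delay = params[2 * m], params[2 * m + 1]
        y += amp * wavelet(t - delay)
    return y

def residuals(params, t, z, M):
    return model(params, t, M) - z

# Synthetic measurement with two echoes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 300)
z = model(np.array([1.0, 0.8, 0.5, 1.9]), t, 2) + 0.01 * rng.standard_normal(t.size)

fit = least_squares(residuals, x0=[0.9, 0.7, 0.4, 2.0], args=(t, z, 2))
print(fit.x)   # estimated [amp1, delay1, amp2, delay2]
```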

At this point we introduce the formal notation that is commonly used in the literature and is employed throughout this chapter. In the new notation we replace the parameter vector b in the calibration example by a vector x, which is called the state vector. In the multicomponent kinetic system the state vector x contains the concentrations of the compounds in the reaction mixture at a given time; thus x is the vector estimated by the filter. The response of the measurement device, e.g., the absorbance at a given wavelength, is denoted by z. The absorptivities at a given wavelength, which relate the measured absorbance to the concentrations of the compounds in the mixture, or the design matrix in the calibration experiment (x in eq. (41.3)), are denoted by h. [Pg.585]
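
In this notation, a single Kalman filter measurement update refines the state estimate x from a scalar measurement z using the row vector h. A minimal sketch with illustrative numbers (the absorptivity values and noise variance are assumptions, not from the source):

```python
import numpy as np

def kalman_update(x, P, z, h, r):
    """One measurement update: state x, covariance P, measurement z,
    measurement row vector h, measurement noise variance r."""
    h = h.reshape(1, -1)
    s = (h @ P @ h.T).item() + r        # innovation variance
    K = (P @ h.T) / s                   # Kalman gain (column vector)
    innovation = z - (h @ x).item()     # measured minus predicted response
    x_new = x + (K * innovation).ravel()
    P_new = (np.eye(len(x)) - K @ h) @ P
    return x_new, P_new

x = np.array([0.5, 0.5])                # prior concentration estimates
P = 0.1 * np.eye(2)                     # prior covariance
x, P = kalman_update(x, P, z=0.73, h=np.array([0.9, 0.4]), r=1e-4)
print(x)                                # updated concentration estimates
```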

What type of objective function should we minimize? This is the question we are always faced with before we can even start the search for the parameter values. In general, the unknown parameter vector k is found by minimizing a scalar function, often referred to as the objective function. We shall denote this function as S(k) to indicate its dependence on the chosen parameters. [Pg.13]
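
As a concrete illustration, a least squares objective S(k) for a hypothetical two-parameter model y = f(x, k); the exponential model form is an assumption made only for this example:

```python
import numpy as np

def f(x, k):
    """Hypothetical model: y = k1 * (1 - exp(-k2 * x))."""
    return k[0] * (1.0 - np.exp(-k[1] * x))

def S(k, x_data, y_data):
    """Least squares objective: sum of squared residuals for parameters k."""
    r = y_data - f(x_data, k)
    return float(r @ r)

x_data = np.linspace(0.1, 5.0, 40)
y_data = f(x_data, np.array([2.0, 0.7]))
print(S(np.array([1.0, 1.0]), x_data, y_data))   # positive; zero at the optimum
```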

Solution of the above linear equation yields the least squares estimate of the parameter vector k, ... [Pg.28]
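
For a linear model the estimate has the closed form obtained from the normal equations AᵀA k = Aᵀb. A small self-contained sketch with synthetic data (the matrix sizes and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))           # design (sensitivity) matrix
k_true = np.array([1.0, -2.0, 0.5])
b = A @ k_true + 0.01 * rng.standard_normal(20)

k_hat = np.linalg.solve(A.T @ A, A.T @ b)  # least squares estimate
print(k_hat)                               # close to k_true
```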

Given all the above, it can be shown that the (1 − α)100% joint confidence region for the parameter vector k is an ellipsoid given by the equation ... [Pg.33]
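
For the linear case this region takes the standard form (k − k̂)ᵀAᵀA(k − k̂) ≤ p·s²·F, with p parameters, residual variance s², and the appropriate F-distribution quantile. A sketch that checks whether a point lies inside a 95% region, built on the same kind of synthetic setup as above (all numbers illustrative):

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))                 # sensitivity matrix
k_true = np.array([1.0, -2.0, 0.5])
b = A @ k_true + 0.01 * rng.standard_normal(20)
k_hat = np.linalg.solve(A.T @ A, A.T @ b)        # LS estimate

N, p = A.shape
s2 = float((b - A @ k_hat) @ (b - A @ k_hat)) / (N - p)  # residual variance
F = f_dist.ppf(0.95, p, N - p)                   # 95% quantile of F(p, N-p)

def in_region(k):
    """True if k lies inside the 95% joint confidence ellipsoid."""
    d = k - k_hat
    return float(d @ (A.T @ A) @ d) <= p * s2 * F

print(in_region(k_true))    # typically True for the true parameters
```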

In this case, the unknown parameter vector k is the 2-dimensional vector [k₁, k₂]ᵀ. There is only one independent variable (x₁ = t) and only one output variable. Therefore, the model in our standard notation is... [Pg.55]

Table header: parameter vector, independent variables, output vector, model equation. [Pg.62]

The basic problem is to search for the parameter vector k that minimizes S(k) by following an iterative scheme, i.e.,... [Pg.67]
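
One common such scheme is the Gauss-Newton iteration k(j+1) = k(j) + μ·Δk(j). A minimal sketch under an assumed model form (the step size μ, the model, and the data are all illustrative):

```python
import numpy as np

def f(x, k):
    """Hypothetical model: y = k1 * (1 - exp(-k2 * x))."""
    return k[0] * (1.0 - np.exp(-k[1] * x))

def jacobian(x, k):
    """Parameter sensitivities df/dk at each data point."""
    e = np.exp(-k[1] * x)
    return np.column_stack([1.0 - e, k[0] * x * e])

def gauss_newton(x, y, k, mu=1.0, iters=20):
    for _ in range(iters):
        r = y - f(x, k)
        J = jacobian(x, k)
        dk = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
        k = k + mu * dk                          # damped parameter update
    return k

x = np.linspace(0.1, 5.0, 40)
y = f(x, np.array([2.0, 0.7]))                   # noiseless synthetic data
print(gauss_newton(x, y, k=np.array([1.0, 1.0])))  # approaches [2.0, 0.7]
```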

Since we have a minimization problem, significant computational savings can be realized in the implementation of the LJ optimization procedure by noting that, for each trial parameter vector, we do not need to complete the summation in Equation 5.23. Once the LS objective function exceeds the smallest value found up to that point, a new trial parameter vector can be selected. [Pg.81]
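
The idea in isolation, as a sketch (the function and variable names are hypothetical):

```python
def objective_with_cutoff(residuals, s_best):
    """Partial LS sum that stops as soon as it exceeds the incumbent s_best."""
    s = 0.0
    for r in residuals:
        s += r * r
        if s > s_best:      # this trial cannot beat the best vector so far
            break           # abandon the remaining terms of the summation
    return s

# Usage: any trial whose returned value exceeds s_best is rejected early.
print(objective_with_cutoff([0.5, 1.2, 0.1, 2.0], s_best=1.0))
```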

The only difference here is that it is further assumed that some or all of the components of the initial state vector x₀ are unknown. Let the q-dimensional vector p (0 < q ≤ n) denote the unknown components of the vector x₀. In this class of parameter estimation problems, the objective is to determine not only the parameter vector k but also the unknown vector p containing the unknown elements of the initial state vector x(t₀). [Pg.93]
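
A minimal sketch of the augmented formulation for a hypothetical first-order decay model, where the unknown initial state is simply appended to the parameter vector and estimated by the same least squares fit (the model and numbers are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def x_model(t, theta):
    """Hypothetical ODE solution of dx/dt = -k1*x: theta stacks (k1, x(0))."""
    k1, x_init = theta
    return x_init * np.exp(-k1 * t)

t = np.linspace(0.0, 4.0, 25)
rng = np.random.default_rng(2)
z = x_model(t, (0.8, 2.0)) + 0.01 * rng.standard_normal(t.size)

fit = least_squares(lambda th: x_model(t, th) - z, x0=[0.5, 1.0])
print(fit.x)    # estimated [k1, x(0)], close to [0.8, 2.0]
```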

An estimate k⁽ʲ⁾ of the unknown parameter vector is available at the jth iteration. Equation 6.1 then becomes... [Pg.111]

Having the smoothed values of the state variables at each sampling point, together with their time derivatives, we have essentially transformed the problem to a "usual" linear regression problem. The parameter vector is obtained by minimizing the following LS objective function... [Pg.117]
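
A sketch of this shortcut for a hypothetical model dx/dt = −k₁x, where polynomial smoothing supplies both the smoothed states and their derivatives (the model and data are illustrative):

```python
import numpy as np

t = np.linspace(0.0, 4.0, 25)
x = 2.0 * np.exp(-0.8 * t)                  # state data (noiseless for brevity)

coeffs = np.polyfit(t, x, deg=6)            # polynomial smoothing
x_smooth = np.polyval(coeffs, t)
dxdt = np.polyval(np.polyder(coeffs), t)    # derivative of the smooth fit

# Linear regression of dx/dt = -k1 * x gives the LS estimate of k1 directly.
k1_hat = -float(dxdt @ x_smooth) / float(x_smooth @ x_smooth)
print(k1_hat)                               # approximately 0.8
```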

Thus, the error in the solution vector is expected to be large for an ill-conditioned problem and small for a well-conditioned one. In parameter estimation, vector b is comprised of a linear combination of the response variables (measurements), which contain the error terms. Matrix A does not depend explicitly on the response variables; it depends only on the parameter sensitivity coefficients, which depend only on the independent variables (assumed to be known precisely) and on the estimated parameter vector k, which incorporates the uncertainty in the data. As a result, we expect most of the uncertainty in Equation 8.29 to be present in Δb. [Pg.142]
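
A small numerical illustration of the point (the matrix and perturbation are chosen for effect, not taken from the source):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular system matrix
b = np.array([2.0, 2.0001])
db = np.array([0.0, 1e-4])                  # tiny perturbation in b

k = np.linalg.solve(A, b)
k_pert = np.linalg.solve(A, b + db)
print(np.linalg.cond(A))                    # very large condition number
print(np.linalg.norm(k_pert - k) / np.linalg.norm(k))  # ~100% relative error
```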

In order to improve the convergence characteristics and robustness of the Gauss-Newton method, Levenberg in 1944 and later Marquardt (1963) proposed to modify the normal equations by adding a small positive number, γ², to the diagonal elements of A. Namely, at each iteration the increment in the parameter vector is obtained by solving the following equation... [Pg.144]
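
As a sketch, the modified step solves (A + γ²I)Δk = b with A = JᵀJ and b = Jᵀr; the names here are generic placeholders:

```python
import numpy as np

def lm_step(J, r, gamma):
    """Levenberg-Marquardt increment: solve (J^T J + gamma^2 I) dk = J^T r."""
    A = J.T @ J
    return np.linalg.solve(A + gamma**2 * np.eye(A.shape[0]), J.T @ r)

# gamma -> 0 recovers the Gauss-Newton step; a large gamma gives a short,
# steepest-descent-like step, which is what improves robustness.
```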

Given a set of data points (xᵢ, yᵢ), i = 1,...,N and a mathematical model of the form y = f(x, k), the objective is to determine the unknown parameter vector k by minimizing the least squares objective function subject to the equality constraint, namely... [Pg.159]
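
A minimal sketch of such a constrained fit using a general-purpose optimizer, with a hypothetical model and the assumed constraint k₁ + k₂ = 1:

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 20)
y = 0.6 * x + 0.4 * x**2               # synthetic data; true k = (0.6, 0.4)

def S(k):
    """Least squares objective for the model y = k1*x + k2*x^2."""
    r = y - (k[0] * x + k[1] * x**2)
    return float(r @ r)

res = minimize(S, x0=[0.5, 0.5],
               constraints={"type": "eq", "fun": lambda k: k[0] + k[1] - 1.0})
print(res.x)    # estimate satisfying k1 + k2 = 1
```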

Step 1. Generate/assume an initial guess for the parameter vector k. [Pg.161]

The most general case is covered by the well-known Kuhn-Tucker conditions for optimality. Let us assume the most general case where we seek the unknown parameter vector k that will... [Pg.165]

The unknown parameter vector k is obtained by minimizing the corresponding least squares objective function, where the weighting matrix Qᵢ is chosen based on the statistical characteristics of the error term eᵢ, as already discussed in Chapter 2. [Pg.169]

Following the same approach as in Chapter 6 for ODE models, we linearize the output vector around the current estimate of the parameter vector k⁽ʲ⁾ to yield... [Pg.169]

Let us assume that N experiments have been conducted up to now. Given an estimate of the parameter vector based on these experiments, we wish to design the next experiment so as to maximize the information it will yield. In other words, the problem we are attempting to solve is ... [Pg.187]

If we do not have any particular preference for a specific parameter or a particular subset of the parameter vector, we can minimize the variance of all parameters simultaneously by minimizing the volume of the joint 95% confidence region. Obviously a small joint confidence region is highly desirable. [Pg.188]
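
Since the volume of the ellipsoid shrinks as det(AᵀA) grows, this criterion amounts to choosing the next experiment that maximizes the determinant of the augmented information matrix (D-optimality). A sketch in which the candidate sensitivity vectors are made up for illustration:

```python
import numpy as np

def next_experiment(A, candidates):
    """Index of the candidate row maximizing det of the augmented A^T A."""
    dets = [np.linalg.det(A.T @ A + np.outer(c, c)) for c in candidates]
    return int(np.argmax(dets))

A = np.array([[1.0, 0.1], [1.0, 0.2]])            # sensitivities so far
candidates = np.array([[1.0, 0.3], [1.0, 2.0]])   # possible next runs
print(next_experiment(A, candidates))             # picks the informative run
```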

Step 6. Based on the additional measurement of the response variables, estimate the parameter vector and its covariance matrix. [Pg.190]

Whenever a new measurement, yₙ, becomes available, the parameter vector is updated to θ̂ by the formula... [Pg.219]
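
A minimal recursive least squares sketch of this kind of update, assuming a scalar measurement with regressor vector hₙ; the recursion shown is the standard RLS form, not necessarily the source's exact formula:

```python
import numpy as np

def rls_update(theta, P, h, y, r=1.0):
    """One recursive LS update for a scalar measurement y = h^T theta + e."""
    h = h.reshape(-1, 1)
    K = P @ h / (r + (h.T @ P @ h).item())            # gain vector
    theta = theta + (K * (y - (h.T @ theta).item())).ravel()
    P = P - K @ h.T @ P                               # covariance update
    return theta, P

theta, P = np.zeros(2), 100.0 * np.eye(2)             # diffuse prior
for h, y in [(np.array([1.0, 0.5]), 1.9), (np.array([1.0, 1.5]), 3.1)]:
    theta, P = rls_update(theta, P, h, y)
print(theta)    # parameter estimate refined after each new measurement
```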

