Big Chemical Encyclopedia


Optimality of the solution

Let us limit ourselves, at the beginning, to the linear objective function (12.2.2). A detailed discussion of this case is presented in Madron and Veverka (1992). It is shown there that the solution is the optimum one (i.e. the one with the minimum value of function (12.2.2)), except when some unobservable nonrequired quantities are present. In the latter case the optimality of the solution is not guaranteed, even though the method remains a good heuristic. The solution then does not represent a global optimum, but only a good feasible solution, applicable to the solution of practical problems. [Pg.452]

The situation is much more difficult in the case of a nonlinear objective function of the type (12.2.3). Numerous case studies have shown that the optimum solution according to the objective function (12.2.2), i.e. optimisation with respect to the cost of the measurement, yields a solution that is unacceptable from the point of view of the precision of the results. That is, the minimum-cost solution is at the same time a solution in which some unmeasured quantities are theoretically observable, but with unacceptably low precision (for example, a confidence interval wider than the value of the quantity itself). In this case, the following optimisation method can be recommended. The method is analogous to the direct search in graphs that proved efficient for the optimisation of measurement designs in single-component mass balancing; see Subsection 12.1.2. [Pg.452]

Let us have two measurement designs with the same number of measured quantities. Let us define the distance of these two measurement designs as the number of measured quantities in which the two designs differ. Designs at distance one differ in only one measured quantity (one measured quantity in one design is replaced by one unmeasured quantity in the other design). [Pg.452]
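To make the definition concrete, here is a minimal sketch in which a measurement design is represented simply as the set of quantities chosen to be measured; the representation and the quantity names are illustrative, not taken from Madron and Veverka.

```python
# Minimal sketch: a measurement design is represented as the set of quantities
# chosen to be measured.  The quantity names are illustrative only.
def design_distance(design_a, design_b):
    """Distance between two designs with the same number of measured quantities:
    the number of measured quantities in which they differ."""
    assert len(design_a) == len(design_b)
    return len(set(design_a) - set(design_b))

design_a = {"F1", "F2", "F3"}
design_b = {"F1", "F2", "F4"}   # F3 measured in design_a is replaced by F4
print(design_distance(design_a, design_b))   # -> 1, i.e. the designs are at distance one
```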

The proposed optimisation algorithm enables one to find a local optimum of a general objective function. The evaluation of the objective function requires only a modest amount of computation (in essence, the multiplication of the matrices Z2 and Z4b by the vectors of variances of the measured quantities). Even problems of realistic dimensionality can therefore be solved efficiently on personal computers. [Pg.453]
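The excerpt does not reproduce the algorithm itself; the sketch below shows one plausible reading of the direct search mentioned above: a greedy walk that repeatedly moves to a better design at distance one and stops when no such move improves the objective. The objective function, the candidate quantities, and all names are placeholders.

```python
from itertools import product

def local_search(initial_design, all_quantities, objective):
    """Greedy direct search over measurement designs: repeatedly replace one
    measured quantity by one currently unmeasured quantity (a distance-1 move)
    whenever that lowers the objective; stop at a local optimum."""
    current = set(initial_design)
    current_value = objective(current)
    improved = True
    while improved:
        improved = False
        unmeasured = set(all_quantities) - current
        for out_q, in_q in product(sorted(current), sorted(unmeasured)):
            candidate = (current - {out_q}) | {in_q}
            value = objective(candidate)
            if value < current_value:
                current, current_value = candidate, value
                improved = True
                break        # re-scan the neighbourhood of the improved design
    return current, current_value
```

In the balancing context, objective(design) would evaluate either a cost criterion of type (12.2.2) or a precision-based criterion of type (12.2.3) for the given set of measured quantities.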


We choose to carry out only a few numerical experiments to select the solution parameters. Detailed optimization of the solution parameters is difficult and often computationally expensive, so we do not recommend it. Finally, we must validate the model. Though detailed experimental data for the velocity and pressure profiles are not available for this particular RFR, we can employ the data on the overall pressure drop across the bed to validate the model to some extent. We find that the predicted overall pressure drop across the bed (10 kPa) shows good agreement with the available data. [Pg.819]

The data shown in this section demonstrate that the simultaneous optimization of the solute geometry and the solvent polarization is possible and provides the same results as the normal approach. In the case of CPCM it already performs better than the normal scheme, even with a simple optimization algorithm, and it will probably be the best choice when large molecules are studied (i.e. when the PCM matrices cannot be kept in memory). This functional can thus be used directly to perform MD simulations in solution without considering explicit solvent molecules, while still taking into account the dynamics of the solvent. On the other hand, the DPCM functional presents numerical difficulties that must be studied and overcome before it can be used for dynamic simulations in solution. [Pg.77]
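As a purely schematic illustration of why a simultaneous scheme can reproduce the nested ("normal") result, the toy model below minimizes a convex quadratic "free energy" either by relaxing the polarization variable at every geometry (nested) or over both variables at once (simultaneous). It is not the PCM functional, and all constants are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Toy coupled "free energy": x stands in for a geometric coordinate, q for the
# solvent polarization.  The constants are chosen only so that the joint
# Hessian is positive definite.
k, x_eq, c, alpha = 4.0, 1.0, 1.0, 2.0

def energy(x, q):
    return 0.5 * k * (x - x_eq) ** 2 - c * x * q + 0.5 * q ** 2 / alpha

# Nested ("normal") scheme: relax the polarization analytically at every geometry.
def relaxed_energy(x):
    q_opt = alpha * c * x            # inner minimization over q
    return energy(x, q_opt)

x_nested = minimize_scalar(relaxed_energy).x

# Simultaneous scheme: minimize over geometry and polarization together.
res = minimize(lambda v: energy(v[0], v[1]), x0=np.zeros(2))
x_joint, q_joint = res.x

print(x_nested, x_joint)             # both approaches find the same geometry (x = 2)
```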

There are myriad possibilities for the synthesis of a complex organic substance. Typically, with present computers, we can examine only one route out of about 10,000 reasonable routes. On a supercomputer, concurrent parallel exploration of the branches at the top of the decision tree would allow us to eliminate poorer routes near the top of the tree and discover, at a relatively early stage, the crucial last few steps of most or all of the really promising pathways. This would be an enormous advance toward optimality of the solutions. [Pg.112]
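A hedged sketch of the idea, using Python's standard process pool rather than any particular supercomputer framework: the top-level branches are scored concurrently and only the most promising ones are retained for deeper search. The scoring function and the route strings are placeholders.

```python
from concurrent.futures import ProcessPoolExecutor

def score_branch(route):
    # Placeholder: a real planner would estimate the promise of the synthetic
    # route rooted at this branch (expected yield, cost, number of steps, ...).
    return -route.count("->")                    # toy heuristic: fewer steps is better

def prune_top_level(branches, keep=2):
    """Score all top-level branches concurrently, then keep only the most
    promising ones so the expensive deep search is focused on them."""
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score_branch, branches))
    ranked = sorted(zip(scores, branches), reverse=True)
    return [branch for _, branch in ranked[:keep]]

if __name__ == "__main__":
    routes = ["A->B->target", "A->C->D->target", "A->E->target", "A->F->G->H->target"]
    print(prune_top_level(routes))
```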

Alternatively, reaction field calculations with the IPCM (isodensity surface polarized continuum model) [73,74] can be performed to model solvent effects. In this approach, an isodensity surface defined by a value of 0.0004 a.u. of the total electron density distribution is calculated at the level of theory employed. Such an isodensity surface has been found to define rather accurately the volume of a molecule [75] and, therefore, it should also define a reasonable cavity for the solute molecule within the polarizable continuum; the cavity can be adjusted iteratively as the wavefunction and electron density distribution are improved during a self-consistent field (SCF) calculation at the HF or DFT level. The IPCM method also has the advantage that geometry optimization of the solute molecule is easier than for the PISA model and, apart from this, electron correlation effects can be included in the IPCM calculation. For the investigation of Si compounds (either neutral or ionic) in solution, both the PISA and IPCM methods have been used [41-47]. [Pg.241]
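The isodensity criterion itself is easy to illustrate: every grid point where the total electron density exceeds 0.0004 a.u. is assigned to the molecular volume. The sketch below uses a toy Gaussian density rather than a real SCF density; in an actual IPCM calculation the surface is re-evaluated as the wavefunction converges.

```python
import numpy as np

# Toy illustration of the isodensity criterion: grid points where the total
# electron density exceeds 0.0004 a.u. belong to the molecular volume.
# The Gaussian "density" is a stand-in, not a real SCF density.
iso_value = 0.0004
grid = np.linspace(-6.0, 6.0, 121)                       # bohr
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
centres = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]            # two toy atomic centres
rho = sum(np.exp(-((X - a) ** 2 + (Y - b) ** 2 + (Z - c) ** 2))
          for a, b, c in centres)

inside = rho >= iso_value                                 # boolean cavity mask
cell_volume = (grid[1] - grid[0]) ** 3
print("cavity volume ~", inside.sum() * cell_volume, "bohr^3")
```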

Table 1 shows the optimum makespans computed with the monolithic model and the CPU times in seconds, including the time required to prove the optimality of the solution. The last two columns of the table list the makespans and the CPU times obtained with the decomposition approach. Out of the 28 instances, 17 could be solved to optimality, and the maximum relative optimality gap is less than 7 %. The results obtained for the larger instances indicate that our method scales quite well. The maximum CPU time is less than 3 seconds, versus more than two hours for the monolithic approach. In sum, the analysis shows that the decomposition approach is able to provide good feasible schedules at very modest computational expense. Hence, the method is well-suited... [Pg.161]
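For reference, the relative optimality gap quoted above is simply the excess of the heuristic makespan over the proven optimum, relative to that optimum; the instance values below are hypothetical, not taken from Table 1.

```python
def relative_gap(heuristic_makespan, optimal_makespan):
    """Relative optimality gap of a heuristic schedule (0.07 means 7 %)."""
    return (heuristic_makespan - optimal_makespan) / optimal_makespan

# Hypothetical instance: the monolithic model proves an optimum of 100 time
# units, the decomposition approach returns 105 -> a gap of 5 %.
print(f"{relative_gap(105, 100):.1%}")
```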

The simulation includes the definition of the trajectory to be simulated; the generation of the sensor output, i.e. the velocity and angle increments; the solution of the non-linear coupled system of differential equations describing the translational and rotational movement of the IMU; and the optimization of the solution by using external information about some parameters of the trajectory. The flow diagram of the... [Pg.31]
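The equations of motion themselves are not given in the excerpt; the sketch below only illustrates the structure of such a simulation with a planar stand-in (heading integrated from a turn rate, body-frame acceleration rotated into the navigation frame, then velocity and position integrated). All values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar stand-in for the coupled translational/rotational equations: the state
# is (x, y, vx, vy, heading); the "sensor output" is a constant body-frame
# acceleration and turn rate.
def imu_rhs(t, state, a_body=0.2, turn_rate=0.05):
    x, y, vx, vy, psi = state
    ax = a_body * np.cos(psi)        # rotate the body-frame acceleration
    ay = a_body * np.sin(psi)        # into the navigation frame
    return [vx, vy, ax, ay, turn_rate]

sol = solve_ivp(imu_rhs, (0.0, 60.0), [0.0, 0.0, 1.0, 0.0, 0.0], max_step=0.1)
print("final position:", sol.y[0, -1], sol.y[1, -1])
```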

For the calculation of the electrostatic term, the system under study is characterized by two dielectric constants: in the interior of the cavity the constant has a value of unity, and in the exterior it takes the value of the dielectric constant of the solvent. From this point the total electrostatic potential is evaluated. Beyond this purely classical formulation of the problem, quantum mechanics allows a deeper analysis of the solute immersed in the reaction field of the solvent, by making the relevant modifications to the quantum mechanical equations of the system under study so as to introduce a term due to the solvent reaction field. This extends the benefits of the continuum methods to the other facilities provided by a quantum mechanical treatment of the system, such as the optimization of the solute geometry, the analysis of its wave function, the calculation of its harmonic frequencies, etc., all in the presence of the solvent. In this way a full analysis of the solute-solvent interaction can be obtained at low computational cost. [Pg.23]
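The simplest closed-form instance of this electrostatic term is the Born model: a point charge in a spherical cavity with dielectric constant 1 inside and the solvent value outside. The snippet below evaluates it for illustrative parameter values (a monovalent cation with a cavity radius of roughly 2 Å in water); it is only meant to show the role of the two dielectric constants.

```python
# Born model: a point charge q in a spherical cavity of radius a, with
# dielectric constant 1 inside the cavity and eps outside (atomic units).
HARTREE_TO_KJ_MOL = 2625.5

def born_solvation_energy(q=1.0, a_bohr=3.8, eps=78.4):
    """Electrostatic solvation free energy of the Born model, in hartree."""
    return -(1.0 - 1.0 / eps) * q ** 2 / (2.0 * a_bohr)

dg = born_solvation_energy()                      # illustrative monovalent cation in water
print(f"{dg * HARTREE_TO_KJ_MOL:.0f} kJ/mol")     # roughly -340 kJ/mol
```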

As noted in [13], the process of extracting gates to form a subcircuit may suffer from complications when subpaths of combinatorial logic between peripheral nodes are not modeled. These subpaths introduce additional timing constraints that, if omitted from the model, could invalidate the optimality of the solution. [Pg.35]

Areas in which further developments are expected are related to the optimization of the solution of air and water pollution problems, gas purification (removal of oxides of sulfur and nitrogen, of hydrogen sulfide, of motor vehicle emissions, etc.), gas separation, mineral industries, regeneration, etc. Many of these areas will require the use of new forms of activated carbon such as cloths, felts, fibers, monoliths, etc., and consequently a search for the appropriate precursor and preparation mode is essential. Other areas in continuous progress will be gas storage, carbon molecular sieves and heterogeneous catalysis, all of them requiring considerable research effort in the next few years. [Pg.468]

In this paper we have presented a new heuristic method to solve complex problems. The method is inspired by the way rivers are formed in nature. By using it, we can obtain acceptable solutions to NP-hard problems in reasonable times. Our method obtains competitive results when compared with other, more mature schemes, such as ant colonies. In the TSP case study, our current experimental results show that the RFD method is preferable to ACO if the optimality of the solution is the main requirement. The fast reinforcement of shortcuts, the avoidance of local cycles, the punishment of wrong paths, and the creation of global direction tendencies motivate these results. [Pg.175]

