
Hybrid algorithms

The second approach to speeding up the stochastic simulation algorithm (SSA) involves separating the system into slow and fast subsets of reactions. In these methods, analytical or numerical approximations to the dynamics of the fast subset are computed, while the slow subset is simulated stochastically. In one of the first such methods, Rao and Arkin (see Further reading) applied a quasi-steady-state assumption to the fast reactions and treated the remaining slow reactions as stochastic events. [Pg.301]
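A minimal sketch of this slow/fast partitioning, using a hypothetical toy system (fast reversible switching A <-> A* and a slow conversion A* -> B; the species and rate constants are invented for illustration and are not taken from Rao and Arkin):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system: fast reversible switching A <-> A* (rates kf, kr)
# and a slow conversion A* -> B (rate ks). Under the quasi-steady-state
# assumption, the fast pair equilibrates between slow events, so the expected
# A* count is n_A_total * kf / (kf + kr).
kf, kr, ks = 50.0, 100.0, 0.1      # assumed rate constants (fast >> slow)
n_A_total, n_B = 100, 0            # copies of A + A*, and of B
t, t_end = 0.0, 50.0

while t < t_end and n_A_total > 0:
    a_star = n_A_total * kf / (kf + kr)    # QSSA mean of the fast species
    a_slow = ks * a_star                   # propensity of the slow reaction
    t += rng.exponential(1.0 / a_slow)     # SSA waiting time for slow events only
    n_A_total -= 1                         # fire A* -> B
    n_B += 1

print(f"t = {t:.2f}  B = {n_B}")
```

Only the slow reaction generates stochastic events; the fast pair enters solely through its quasi-steady-state average, which is what makes the simulation cheap.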

Salis and Kaznessis also separated the system into slow and fast reactions, achieving a substantial speed-up over the SSA while retaining accuracy. The fast reactions are approximated as a continuous Markov process through the chemical Langevin equation (CLE), discussed in Chapter 13, while the slow subset is simulated through jump equations derived by extending the Next Reaction variant of the SSA. [Pg.301]
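For the fast subset, one Euler-Maruyama step of the chemical Langevin equation can be sketched as follows. The propensity function and stoichiometry matrix are left generic; this is an illustrative sketch, not the Salis-Kaznessis implementation:

```python
import numpy as np

def cle_step(x, propensities, stoich, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation
    for the fast-reaction subset.

    x            -- species copy numbers, shape (n_species,)
    propensities -- function returning fast-reaction rates a(x), shape (n_fast,)
    stoich       -- stoichiometry matrix, shape (n_species, n_fast)
    """
    a = propensities(x)
    drift = stoich @ a * dt                                           # deterministic part
    noise = stoich @ (np.sqrt(a * dt) * rng.standard_normal(a.size))  # diffusion part
    return np.maximum(x + drift + noise, 0.0)                         # keep counts non-negative
```

In a hybrid scheme, steps like this advance the fast subset between the stochastically simulated slow-reaction events.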


Use of a Monte Carlo or a cluster (Hybrid) algorithm to calculate ionization constants of the titratable groups, net average charges, and electrostatic free energies as functions of pH. [Pg.188]

The hybrid algorithm is in general suitable for any two-stage stochastic mixed-integer linear program with integer requirements in the first-stage and in the... [Pg.212]

Jenny, P., S. B. Pope, M. Muradoglu, and D. A. Caughey (2001b). A hybrid algorithm for the joint PDF equation of turbulent reactive flows. Journal of Computational Physics 166, 218-252. [Pg.415]

How close to the bifurcation limits does your bisection program succeed when the graphics solutions are used as starting points for fzero? What are the sizes of the residues in the computed solutions near the bifurcation points? Which of the proposed steady-state finders of part (a) or (b) do you prefer? Be careful and monitor your hybrid algorithm's effort via clock and etime. [Pg.133]

Haaland DM, Melgaard DK. New classical least-squares/partial least-squares hybrid algorithm for spectral analyses. Applied Spectroscopy 2001, 55, 1-8. [Pg.353]

Keywords: Real-time Optimization, Genetic Algorithm, Sequential Quadratic Programming, Hybrid Algorithms, Hydrogenation Reactors. [Pg.483]

To illustrate the application of the developed hybrid algorithm, the optimization of a three-phase catalytic slurry hydrogenation reactor is considered. The study aims to determine the optimal operating conditions that maximize profit. [Pg.484]

The GA-SQP hybrid algorithm first runs a micro-GA with the same parameters as in Section 5.1, except for the maximum number of generations, which was set to 5. An SQP algorithm is then used to refine the best individual found by the GA. Table 3 reports the optimum found by the hybrid algorithm, as well as the computational time the search required. [Pg.487]
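A rough sketch of this two-stage GA-then-SQP idea, with a placeholder objective standing in for the reactor profit model; the population size, genetic operators, and bounds are invented for illustration, and SciPy's SLSQP is used as the SQP-type refiner:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
bounds = [(0.0, 1.0), (0.0, 1.0)]       # assumed normalized decision variables

def profit(x):
    # Placeholder for the reactor profit model (hypothetical quadratic surface).
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

# Stage 1: crude micro-GA-like global search (tiny population, 5 generations,
# as in the excerpt's stopping rule).
pop = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(5, 2))
for _ in range(5):
    fitness = np.array([profit(p) for p in pop])
    parents = pop[np.argsort(fitness)[-2:]]                           # elitism: keep two best
    children = parents.mean(0) + 0.05 * rng.standard_normal((3, 2))   # crossover + mutation
    pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)

best = pop[np.argmax([profit(p) for p in pop])]

# Stage 2: refine the best GA individual with a gradient-based SQP-type solver.
res = minimize(lambda x: -profit(x), best, method="SLSQP", bounds=bounds)
print(res.x, -res.fun)
```

The design intent is the one described above: the GA supplies a good basin of attraction cheaply, and the local SQP step polishes that point to the nearby optimum.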

The hybrid structure proved to be highly efficient. First, Tables 2 and 3 show that the profit and conversion are slightly greater at the optimal point found by the GA-SQP algorithm. Second, and most importantly, the computational time was significantly lower for the hybrid algorithm, on a scale compatible with real-time application in supervisory control. [Pg.487]

Knowles, J. (2006). ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation 10, 1, pp. 50-66. [Pg.149]

Abstract. Artificial neural networks (ANN) are useful components in today's data analysis toolbox. They were initially inspired by the brain but are today accepted to be quite different from it. ANN typically lack scalability and mostly rely on supervised learning, both of which are biologically implausible features. Here we describe and evaluate a novel cortex-inspired hybrid algorithm. It is found to perform on par with a Support Vector Machine (SVM) in classification of activation patterns from the rat olfactory bulb. On-line unsupervised learning is shown to provide significant tolerance to sensor drift, an important property of algorithms used to analyze chemo-sensor data. Scalability of the approach is illustrated on the MNIST dataset of handwritten digits. [Pg.34]

Fig. 2.1. Outline of the hybrid algorithm. The unstructured array of sensors is clustered using multi-dimensional scaling (MDS) with a mutual information (MI) based distance measure. Then Vector Quantization (VQ) is used to partition the sensors into correlated groups. Each such group provides input to one module of an associative memory layer. VQ is used again to provide each module unit with a specific receptive field, i.e. to become a feature detector. Finally, classification is done by means of BCPNN.
Fig. 2.3. Patches generated from the MNIST data by the MI + MDS + VQ + VQ steps of the hybrid algorithm. (a) The 12 different patches are colour coded. Note that some patches comprise more than one subfield. (b) Example of the specific receptive field of one of the 10 units in the patch marked with orange (with two subfields).
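The sensor-grouping front end outlined in Fig. 2.1 (mutual-information distance, MDS embedding, vector quantization) might be sketched roughly as follows. The binning-based MI estimate, the use of k-means as the vector quantizer, and all parameter values are simplifying assumptions, and the BCPNN classification stage is omitted:

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def cluster_sensors(X, n_groups=12, n_bins=8, seed=0):
    """Group correlated sensors: MI-based distance -> MDS embedding -> VQ (k-means).

    X -- data matrix of shape (n_samples, n_sensors)
    """
    n_sensors = X.shape[1]
    # Discretise each sensor so mutual information can be estimated by binning.
    binned = np.stack(
        [np.digitize(X[:, i], np.histogram_bin_edges(X[:, i], n_bins))
         for i in range(n_sensors)], axis=1)
    # Pairwise MI, converted into a distance (larger MI -> smaller distance).
    mi = np.array([[mutual_info_score(binned[:, i], binned[:, j])
                    for j in range(n_sensors)] for i in range(n_sensors)])
    dist = mi.max() - mi
    np.fill_diagonal(dist, 0.0)
    # Embed the sensors with MDS on the precomputed distances, then vector-quantise
    # the embedding so correlated sensors fall into the same group.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=seed).fit_transform(dist)
    groups = KMeans(n_clusters=n_groups, random_state=seed,
                    n_init=10).fit_predict(coords)
    return groups   # groups[i] = index of the module that sensor i feeds
```

Each resulting group would then drive one module of the associative memory layer, with a second VQ step inside each module assigning receptive fields to its units.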
This results section has three main parts, the first presenting a straightforward comparison of our novel hybrid algorithm with other methods, the second demonstrating the drift tolerance of this algorithm relative to other methods, and the third demonstrating its scaling performance. [Pg.41]

Fig. 2.5. Scaling performance of the new hybrid algorithm. Dependence of classification performance on the number of units in each hypercolumn (Johansson 2006).

See also:







Hybrid Intelligence Algorithm Process

Hybrid Intelligent Algorithm Design

Hybrid evolutionary algorithm

Hybrid static/dynamic algorithm

Hybrid stochastic algorithm

Onion-type Hybrid Multiscale Simulations and Algorithms

Process of Hybrid Genetic Algorithm Based on Stochastic Simulation

Steps for Hybrid Intelligent Algorithm
