Big Chemical Encyclopedia


Basic Algorithm

The component material balance and distribution coefficient relations, Equations 2.7 and 2.9, are combined to express the liquid and vapor compositions in terms of feed composition and vapor fraction  [Pg.93]

If the K-values were a function of temperature and pressure only and if a set of values was available at a specified T and P, a solution of Equation 2.14 would satisfy both the material balance and equilibrium relations. This may be demonstrated by a binary system where the feed or combined mixture composition is given, along with fixed K-values. This implies that the temperature and pressure are fixed, and that the K-values are composition-independent. [Pg.94]

The solution to this equation is a vapor fraction of 0.3333; that is, if the total mixture is 100 kmol, or the feed flow rate to the phase separator is 100 kmol/h, then the products are 33.33 kmol/h vapor and 66.67 kmol/h liquid. Once the vapor fraction is determined, the liquid and vapor compositions are calculated by Equations 2.13a and 2.13b  [Pg.94]

The other components may be calculated as above. For a binary, the second component may be calculated by difference  [Pg.94]
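The worked result above can be reproduced with a short script. The sketch below solves the Rachford-Rice form of Equation 2.14 by bisection, writing psi for the vapor fraction. The feed composition and K-values are illustrative values chosen so that psi comes out to 1/3; they are not the book's actual feed data, which are not reproduced here.

```python
def rachford_rice(psi, z, K):
    # Equation 2.14: sum_i z_i (K_i - 1) / (1 + psi (K_i - 1)) = 0
    return sum(zi * (Ki - 1.0) / (1.0 + psi * (Ki - 1.0)) for zi, Ki in zip(z, K))

def solve_vapor_fraction(z, K, tol=1e-10):
    # the function is monotonically decreasing in psi, so bisection is safe
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rachford_rice(mid, z, K) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

z = [1.0 / 3.0, 2.0 / 3.0]   # feed mole fractions (illustrative)
K = [4.0, 0.4]               # fixed, composition-independent K-values (illustrative)

psi = solve_vapor_fraction(z, K)                               # -> 0.3333...
x = [zi / (1.0 + psi * (Ki - 1.0)) for zi, Ki in zip(z, K)]    # liquid, Eq. 2.13a
y = [Ki * xi for Ki, xi in zip(K, x)]                          # vapor,  Eq. 2.13b

F = 100.0                              # kmol/h of feed
print(psi, psi * F, (1.0 - psi) * F)   # vapor fraction, vapor and liquid rates
```

With these values the vapor and liquid product rates are 33.33 and 66.67 kmol/h, matching the worked result in the text.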

In general, the K-values are also functions of the liquid and vapor compositions, themselves unknown until Equations 2.13a and 2.13b are solved. This entails the need for an iterative scheme, whereby a set of distribution coefficients is assumed and Equation 2.14 is solved. The resulting compositions are checked to determine whether the equilibrium condition, Equation 1.19, is met for all the components. The distribution coefficients are updated based on the calculated compositions, and the process is repeated until the equilibrium relations are met. The algorithm may be summarized in the following steps  [Pg.95]
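The iterative scheme described above can be sketched as a successive-substitution loop: assume K-values, solve Equation 2.14, recompute K from the resulting compositions, and repeat until the K-values stop changing. The composition dependence in k_model below is a made-up toy relation purely to exercise the loop; a real implementation would evaluate a thermodynamic model at the given T and P.

```python
def rachford_rice(psi, z, K):
    return sum(zi * (Ki - 1.0) / (1.0 + psi * (Ki - 1.0)) for zi, Ki in zip(z, K))

def solve_psi(z, K, tol=1e-12):
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rachford_rice(mid, z, K) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def k_model(x, K0):
    # hypothetical weak composition dependence, for illustration only;
    # a real code would call fugacity/activity models here
    return [K0i * (1.0 + 0.05 * xi) for K0i, xi in zip(K0, x)]

def flash(z, K0, tol=1e-10, max_iter=100):
    K = list(K0)
    for _ in range(max_iter):
        psi = solve_psi(z, K)                                  # inner problem (Eq. 2.14)
        x = [zi / (1.0 + psi * (Ki - 1.0)) for zi, Ki in zip(z, K)]
        K_new = k_model(x, K0)                                 # update from compositions
        if max(abs(a - b) for a, b in zip(K_new, K)) < tol:    # equilibrium check
            break
        K = K_new
    y = [Ki * xi for Ki, xi in zip(K, x)]
    return psi, x, y

psi, x, y = flash([0.4, 0.6], [3.0, 0.5])
```

Because the assumed composition dependence is weak, simple successive substitution converges; stiffer K-models typically need damping or a Newton scheme on the outer loop.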


We next introduce the basic algorithms and then describe some of the many variants upon them. We then discuss two methods, evolutionary algorithms and simulated annealing, which are generic methods for locating the globally optimal solution. Finally, we discuss some of the ways in which one might analyse the data from a conformational analysis in order to identify a representative set of conformations. [Pg.474]

The maximum dissimilarity algorithm works in an iterative manner: at each step one compound is selected from the database and added to the subset [Kennard and Stone 1969]. The compound selected is chosen to be the one most dissimilar to the current subset. There are many variants on this basic algorithm which differ in the way in which the first compound is chosen and how the dissimilarity is measured. Three possible choices for the initial compound are: (a) select it at random; (b) choose the molecule which is most representative (e.g. has the largest sum of similarities to the other molecules); or (c) choose the molecule which is most dissimilar (e.g. has the smallest sum of similarities to the other molecules). [Pg.699]
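The selection loop above can be sketched in a few lines. This version uses option (c) for the initial compound and Euclidean distance between hypothetical descriptor vectors as the dissimilarity; any other measure (e.g. 1 − Tanimoto on fingerprints) could be substituted.

```python
import math

def max_dissimilarity_select(db, n_pick, dissim=math.dist):
    # option (c): start from the compound most dissimilar to all others
    # (smallest sum of similarities == largest sum of dissimilarities)
    start = max(range(len(db)),
                key=lambda i: sum(dissim(db[i], db[j]) for j in range(len(db))))
    subset = [start]
    while len(subset) < n_pick:
        remaining = (i for i in range(len(db)) if i not in subset)
        # add the compound whose nearest neighbour in the subset is farthest away
        subset.append(max(remaining,
                          key=lambda i: min(dissim(db[i], db[j]) for j in subset)))
    return subset

# hypothetical 2-D descriptor vectors for six compounds
db = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.1), (0.0, 5.0), (2.5, 2.5)]
picked = max_dissimilarity_select(db, 3)
```

The "most dissimilar to the current subset" rule is interpreted here in the common MaxMin sense (maximize the distance to the nearest already-selected compound); other variants maximize the sum of distances instead.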

D. Frenkel, B. Smit. Understanding Molecular Simulation: From Basic Algorithms to Applications. San Diego, CA: Academic Press, 1996. [Pg.506]

Sphere exclusion algorithms are closely related to DECS methods. The basic algorithm operates by selecting a compound and then excluding from... [Pg.199]
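Although the snippet above is truncated, the sphere-exclusion step it begins to describe can be sketched as follows: select a compound, exclude every compound within a given dissimilarity radius of it, and repeat on the remainder. The selection rule (take the next surviving compound) and the distance function here are illustrative assumptions.

```python
def sphere_exclusion(db, radius, dist):
    selected, candidates = [], list(range(len(db)))
    while candidates:
        i = candidates[0]          # e.g. simply take the next surviving compound
        selected.append(i)
        # exclude every compound inside the sphere around the selected one
        candidates = [j for j in candidates if dist(db[i], db[j]) > radius]
    return selected

# hypothetical 1-D descriptor values
points = [0.0, 0.2, 1.0, 1.1, 2.5]
chosen = sphere_exclusion(points, radius=0.5, dist=lambda a, b: abs(a - b))
# chosen == [0, 2, 4]: 0.2 falls inside the sphere around 0.0, 1.1 inside that around 1.0
```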

The remainder of this chapter is structured as follows. In Section II the problem of deriving an estimate of an unknown function from empirical data is posed and studied at a theoretical level. Then, following Vapnik's original work (Vapnik, 1982), the problem is formulated in mathematical terms and the sources of the error related to any proposed solution to the estimation problem are identified. Considerations on how to reduce these errors show the inadequacy of the NN solutions and lead in Section III to the formulation of the basic algorithm, whose new element is the pointwise presentation of the data and the dynamic evolution of the solution itself. The algorithm is subsequently refined by incorporating the novel idea of structural adaptation guided by the use of the L-norm error measure. The need... [Pg.161]

We are now ready to propose the basic algorithm for the solution of the learning problem. [Pg.174]

With every specification of the above parameters, the basic algorithm will be refined and its properties will be studied, until the complete algorithm is revealed. However, each of the presented algorithms can be considered as a point of departure, where different solutions from the finally proposed can be obtained. [Pg.175]

Dolata, D. P., Carter, R. E. WIZARD: applications of expert system techniques to conformational analysis. 1. The basic algorithms exemplified on simple hydrocarbons. J. Chem. Inf. Comput. Sci. 1987, 27, 36-47. [Pg.203]

The optimization of value-added processes is a subject that scientists all over the world have been dealing with for more than 70 years. The first basic algorithms for so-called Linear Programming (LP) were developed at American and European universities as early as the 1930s, for the first time allowing the planning and simulation of simple business processes. LP soon became the basis of the first software systems, and even today almost all Supply Chain Management (SCM) or... [Pg.59]

This choice is made to simplify the discussion which follows. Any other model could be substituted without changing the basic algorithm. [Pg.359]

Dealing with ZᵀBZ directly has several advantages if n − m is small. Here the matrix is dense and the sufficient conditions for local optimality require that ZᵀBZ be positive definite. Hence, the quasi-Newton update formula can be applied directly to this matrix. Several variations of this basic algorithm... [Pg.204]
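A minimal sketch of the quasi-Newton (BFGS) update applied directly to a dense matrix such as the reduced Hessian ZᵀBZ; the variable names below are generic, not the text's notation.

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update  B+ = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s).
    Requires the curvature condition y^T s > 0 to preserve positive definiteness;
    B+ then satisfies the secant condition B+ s = y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

B = np.eye(3)                      # current reduced-space Hessian approximation
s = np.array([1.0, 0.0, 0.0])      # step in the reduced variables
y = np.array([2.0, 1.0, 0.0])      # change in the reduced gradient (y @ s > 0)
B_new = bfgs_update(B, s, y)
```

Because the update acts on the (n − m)-dimensional reduced matrix rather than the full Hessian, its cost is small precisely when n − m is small, which is the advantage the text alludes to.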

In RNA bioinformatics, only a few basic algorithms, namely those for structure prediction based on thermodynamic rules, are available as web tools. In addition, RNA structure prediction is a computationally rather demanding process, so that the sequence lengths that can be dealt with on the web are limited. Because of these limitations we recommend that you install the software locally on a computer in your lab (or your laptop). [Pg.177]

The methods for estimating paleoaltitudes based on thermodynamic methods have been summarized. The basic algorithm requires comparing the local paleoclimate from some elevated region to that at sea level and at approximately the same latitude. It is argued that comparing... [Pg.190]

Since in this work the process models used are in continuous-time form and the measurements are in discrete form, the EKF in discrete/continuous formulation [27] is used. The basic algorithm of the EKF can be summarized as follows ... [Pg.113]
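Since the snippet is truncated before the algorithm summary, here is a generic sketch of the EKF predict/correct cycle on a toy scalar system. This is not the process model of the text, and for simplicity the continuous model is integrated with a single Euler step per sample; the discrete/continuous form of [27] would integrate the model ODE between measurements.

```python
# Toy scalar system (illustrative only):
#   continuous model  dx/dt = -0.5 x
#   measurement       z = x**2  (nonlinear, so the measurement Jacobian is H = 2x)
dt, Q, R = 0.1, 1e-4, 1e-2
x_true, x_est, P = 2.0, 1.0, 1.0   # true state, initial estimate, covariance

for _ in range(100):
    # propagate the truth and take a (noise-free, for reproducibility) measurement
    x_true += dt * (-0.5 * x_true)
    z = x_true ** 2

    # predict: integrate the model state and covariance between measurements
    F = 1.0 + dt * (-0.5)          # Jacobian of the discretized dynamics
    x_est += dt * (-0.5 * x_est)
    P = F * P * F + Q

    # correct: linearize the measurement about the prediction
    H = 2.0 * x_est
    K = P * H / (H * P * H + R)    # Kalman gain
    x_est += K * (z - x_est ** 2)
    P = (1.0 - K * H) * P

print(x_true, x_est)
```

Despite starting from a wrong initial estimate, the filter tracks the true state within a few measurement updates; with real noisy data the residual error would be set by Q and R.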

The refinement procedure utilises the fact that if some query node Q(X) has another node Q(W) at some specific distance (and/or angle), and if some database node D(Z) matches with Q(W), then there must also be some node D(Y) at the appropriate distance(s) from D(Z) which matches with Q(X); this is a necessary, but not sufficient, condition for a subgraph isomorphism to be present (except in the limiting case of all the query nodes having been matched, when the condition is both necessary and sufficient). The refinement procedure is called before each possible assignment of a database node to a query node, and the matched substructure is increased by one node if, and only if, the condition holds for all nodes W, X, Y and Z. The basic algorithm terminates once a match has been detected or a mismatch has been confirmed [70]; it is easy to extend the algorithm to enable the detection of all matches between a query pattern and a database structure, as is required for applications such as those discussed here. [Pg.85]
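The necessary condition described above can be sketched as a small consistency check, here for distances only and with hypothetical variable names and distance matrices:

```python
def refine_ok(x, y, mapping, dq, dd, tol=1e-6):
    """Necessary condition for assigning database node y to query node x:
    every already-matched pair (w -> z) must show the same inter-node
    distance, dq[x][w] == dd[y][z], within tolerance."""
    return all(abs(dq[x][w] - dd[y][z]) <= tol for w, z in mapping.items())

# hypothetical 3-node query and database distance matrices
dq = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
dd = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
ok = refine_ok(1, 1, {0: 0}, dq, dd)    # distances agree -> extension allowed
bad = refine_ok(2, 1, {0: 0}, dq, dd)   # dq[2][0] = 2 vs dd[1][0] = 1 -> rejected
```

In a full subgraph-isomorphism search this check would be called before each tentative assignment, pruning branches that cannot lead to a match.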

This review discusses several approaches for the automatic identification of common structural features or structural similarity of organic molecules. The organization of the chapter is as follows. Section 2 gives an overview of the methods for structural feature analysis. Identification of common structural features is discussed in Sect. 3 with a few applications in structure-activity studies, which is subsequently followed by the identification of structural similarity in Sect. 4. The quantification of structural similarity is discussed in Sect. 5. The basic algorithms of these approaches and the relative software systems are also referred to with some illustrative examples. [Pg.106]

D.W. Clarke, C. Mohtadi, and P.S. Tuffs. Generalized predictive control—Part I. The basic algorithm. Automatica, 23 137-148, 1987. [Pg.118]

In the intervening decade since the inception of GRAM, numerous refinements to the basic algorithm have been published. Wilson, Sanchez, and Kowalski [25] proposed three initial improvements. First, inserting R1 + R2 for R1 in Equation 12.8 solves stability problems encountered when R2 contains components absent in R1. Here, the diagonal matrix Λ now contains the fractional contribution, e.g., Λtt = 0 if the tth species is absent in R2 and Λtt = 1 if the tth species is absent in R1. Second, the significant joint row and column spaces of R1 and R2 can be more rapidly calculated with a NIPALS-based algorithm than with the SVD. Finally, the joint row... [Pg.485]

PARAFAC refers both to the parallel factorization of the data set R by Equations 12.1a and 12.1b and to an alternating least-squares algorithm for determining X, Y, and Z in the two equations. The ALS algorithm is known as PARAFAC, emanating from the work by Kroonenberg [31], and as CANDECOMP, for canonical decomposition, based on the work of Harshman [32]. In either case, the two basic algorithms are practically identical. [Pg.491]
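A minimal sketch of the trilinear ALS iteration: with two factor matrices fixed, each update is an ordinary linear least-squares problem. This is an illustrative implementation, not the PARAFAC/CANDECOMP codes cited in the text; production versions add normalization, convergence tests, and careful initialization.

```python
import numpy as np

def parafac_als(T, F, n_iter=1000, seed=0):
    """Fit T[i,j,k] ~= sum_f X[i,f] * Y[j,f] * Z[k,f] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    X, Y, Z = (rng.random((n, F)) for n in (I, J, K))
    for _ in range(n_iter):
        # each update solves the normal equations of a linear LS problem
        X = np.einsum('ijk,jf,kf->if', T, Y, Z) @ np.linalg.pinv((Y.T @ Y) * (Z.T @ Z))
        Y = np.einsum('ijk,if,kf->jf', T, X, Z) @ np.linalg.pinv((X.T @ X) * (Z.T @ Z))
        Z = np.einsum('ijk,if,jf->kf', T, X, Y) @ np.linalg.pinv((X.T @ X) * (Y.T @ Y))
    return X, Y, Z

# recover a synthetic rank-2 trilinear data set
rng = np.random.default_rng(1)
X0, Y0, Z0 = rng.random((6, 2)), rng.random((5, 2)), rng.random((4, 2))
T = np.einsum('if,jf,kf->ijk', X0, Y0, Z0)
X, Y, Z = parafac_als(T, 2)
rel_err = np.linalg.norm(T - np.einsum('if,jf,kf->ijk', X, Y, Z)) / np.linalg.norm(T)
```

The Hadamard product of Gram matrices, (YᵀY)*(ZᵀZ), is the standard normal-equation matrix that arises from the Khatri-Rao structure of the unfolded problem.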

Exploratory data analysis (EDA). This analysis, also called "pretreatment of data", is essential to avoid wrong or obvious conclusions. The EDA objective is to obtain the maximum useful information from each piece of chemico-physical data, because the perception and experience of a researcher cannot be sufficient to single out all the significant information. This step comprises descriptive univariate statistical algorithms (e.g. mean, normality assumption, skewness, kurtosis, variance, coefficient of variation), detection of outliers, cleansing of the data matrix, measures of the analytical method quality (e.g. precision, sensitivity, robustness, uncertainty, traceability) (Eurachem, 1998) and the use of basic tools such as box-and-whisker and stem-and-leaf plots. [Pg.157]
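A few of the univariate checks listed above, sketched on a small made-up data set; the Tukey box-and-whisker fences (1.5 × IQR beyond the quartiles) flag the suspect point.

```python
import statistics as st

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.3, 14.9]  # last value suspect

mean, stdev = st.mean(data), st.stdev(data)
cv = stdev / mean                                  # coefficient of variation
# simple moment estimator of skewness (positive here: the tail is to the right)
skewness = sum((v - mean) ** 3 for v in data) / (len(data) * stdev ** 3)

# Tukey's box-and-whisker fences: flag points beyond 1.5 * IQR from the quartiles
q1, _, q3 = st.quantiles(data, n=4)                # default 'exclusive' quartiles
iqr = q3 - q1
outliers = [v for v in data if v < q1 - 1.5 * iqr or v > q3 + 1.5 * iqr]
print(outliers)
```

Because the quartiles are robust, the fences isolate the aberrant 14.9 even though it inflates the mean and standard deviation.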

The reader may note the absence of specific computer programs throughout the text. There are many ways to approach a solution and the assumption is made that the reader is computer literate and can handle the solution once a basic algorithm is laid out. Thus, our general approach to computer-related problems is to ... [Pg.21]

An estimator (or more specifically an "optimal state estimator") in this usage is an algorithm for obtaining approximate values of process variables which cannot be directly measured. It does this by using knowledge of the system and measurement dynamics, assumed statistics of measurement noise, and initial condition information to deduce a minimum-error state estimate. The basic algorithm is usually some version of the Kalman filter.14 In extremely simple terms, a stochastic process model is compared to known process measurements, the difference is minimized in a least-squares sense, and then the model values are used for unmeasurable quantities. Estimators have been tested on a variety of processes, including mycelial fermentation and fed-batch penicillin production,13 and baker's yeast fermentation.15 The... [Pg.661]

Once a new approximation matrix is defined, the search direction in the basic Algorithm [A4] is formulated accordingly. The vector p is computed from... [Pg.41]

After a demonstration of the method's abilities on simulated data,8 the algorithm was soon applied to several cases with real data such as metallothionein,32 tendamistat,33 and basic pancreatic trypsin inhibitor (BPTI).34 Furthermore, aside from the original program, DISMAN,8 the basic algorithm has been implemented in other programs such as DADAS35 and the... [Pg.150]

In the previous sections almost all the basic algorithms of the synthon model of organic chemistry were briefly sketched. In this subsection we present a more rigorous and detailed definition of the key algorithms, since the exact definitions are needed for a deep understanding of the possibilities, and limitations, of the synthon model. [Pg.165]

