Big Chemical Encyclopedia



Computational assumption

However, a number of biological findings could not be explained by the CPCA analysis. The authors attribute this to the fact that the ephrin-Eph kinase interactions are best described by an induced fit mechanism. The necessary computational assumption of keeping the protein structure rigid did not reflect the flexibility of the protein interfaces and their structural adaptability. The homology models, on the other hand, provided a rigid protein structure that is biased by the template protein EphB2. [Pg.77]

However, all existing security proofs are reduction proofs. To prove security against non-uniform attackers, one needs computational assumptions in the non-uniform model, too. Hence, all such theorems have a uniform and a non-uniform variant. As uniform reductions imply non-uniform reductions, but not vice versa, one has automatically proved both versions if one proves the uniform version [Pfit88, Gold91]. [Pg.38]

As mentioned in Section 5.2.9, "Combinations", it is useful in practice to add the strong requirement of the signer on disputes for the degree low to the minimal requirements. This means that under a computational assumption, the following is required even if the signer does not take part in a dispute ... [Pg.166]

Remark 7.30 (Existence of zero-knowledge proof schemes). Zero-knowledge proof schemes exist for all languages in NP under certain computational assumptions, but of course, those schemes have not been constructed and proved with respect to the definition made here. [GoMW91] works with arbitrarily powerful provers and in the non-uniform model, but makes a remark that the proof should also work for provers with arbitrary auxiliary inputs. [Gold93] is for polynomial-time provers and uniform, but, as mentioned, only proves the existence... [Pg.191]

The key computational assumption in Pinch Point Analysis is constant CP on the interval where the streams are matched. If not, stream segmentation is necessary. [Pg.432]

With the advent of electronic computers, it is no longer necessary to make drastic simplifying assumptions to reduce the... [Pg.25]

The economic model for evaluation of investment (or divestment) opportunities is normally constructed on a computer, using the techniques to be introduced in this section. The uncertainties in the input data and assumptions are handled by establishing a base case (often using the best-guess values of the variables) and then performing sensitivities on a limited number of key variables. [Pg.304]
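The base-case-plus-sensitivities workflow described above can be sketched as follows. The NPV model, variable names, and figures are all hypothetical, chosen only to illustrate flexing one input at a time around a best-guess base case:

```python
# Illustrative sketch: a toy NPV model evaluated at best-guess inputs,
# then re-evaluated with one key variable flexed at a time.
# All parameter names and values are invented for illustration.

def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def project_npv(price, volume, capex, opex, rate, years=5):
    """Base-case project: up-front capex, then constant yearly margin."""
    flows = [-capex] + [price * volume - opex] * years
    return npv(flows, rate)

base = dict(price=60.0, volume=1.0, capex=150.0, opex=20.0, rate=0.10)
base_npv = project_npv(**base)
print(round(base_npv, 2))

# One-at-a-time sensitivities: flex each key variable by +/-20%
for var in ("price", "capex", "opex"):
    for factor in (0.8, 1.2):
        case = dict(base, **{var: base[var] * factor})
        print(var, factor, round(project_npv(**case), 1))
```

A real evaluation would of course add inflation, tax, and decline profiles; the point here is only the structure of base case plus sensitivities.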

The reservoir model will usually be a computer-based simulation model, such as the 3D model described in Section 8. As production continues, the monitoring programme generates a database containing information on the performance of the field. The reservoir model is used to check whether the initial assumptions and description of the reservoir were correct. Where inconsistencies between the predicted and observed behaviour occur, the model is reviewed and adjusted until a new match (a so-called "history match") is achieved. The updated model is then used to predict future performance of the field, and as such is a very useful tool for generating production forecasts. In addition, the model is used to predict the outcome of alternative future development plans. The criterion used for selection is typically profitability (or any other stated objective of the operating company). [Pg.333]
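The history-matching loop can be sketched with a deliberately tiny stand-in for the reservoir simulator: a one-parameter exponential-decline model is adjusted until it reproduces observed rates, and the matched model is then used to forecast. All names and data below are invented:

```python
# Sketch of a history-matching loop: adjust an uncertain model parameter
# until predicted and observed production agree, then forecast.
import math

def predicted_rate(t, decline):
    """Toy 'reservoir model': exponential production decline (rate vs years)."""
    return 1000.0 * math.exp(-decline * t)

# Observed field performance: (time in years, observed rate)
observed = [(0, 1005.0), (1, 818.0), (2, 670.0), (3, 549.0)]

def mismatch(decline):
    """Sum of squared differences between predicted and observed rates."""
    return sum((predicted_rate(t, decline) - q) ** 2 for t, q in observed)

# Review and adjust the uncertain parameter until a history match is found;
# a brute-force scan stands in for the engineer's iterative adjustment.
best = min((d / 1000.0 for d in range(50, 500)), key=mismatch)
print(round(best, 3))                      # matched decline rate, ~0.2 per year
print(round(predicted_rate(5, best), 1))   # matched model reused as a forecast
```

A real history match adjusts many spatially distributed properties (permeability, aquifer strength, etc.) against pressures and watercuts, but the check-adjust-forecast structure is the same.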

Maxwell's equations are the basis for the calculation of electromagnetic fields. An exact solution of these equations can be given only in special cases, so numerical approximations are used. If the problem is two-dimensional, a considerable reduction of the computational effort can be obtained by introducing the magnetic vector potential A, defined by B = ∇ × A. With the assumption that all field variables are sinusoidal, the time dependence... [Pg.312]
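The two reductions mentioned above, introducing the vector potential and assuming sinusoidal time dependence, can be written compactly as:

```latex
% Magnetic vector potential and time-harmonic (sinusoidal) ansatz
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{A}(\mathbf{r}, t) = \operatorname{Re}\!\left[ \hat{\mathbf{A}}(\mathbf{r})\, e^{\,j\omega t} \right]
```

Under the time-harmonic ansatz every time derivative becomes a multiplication by jω, and in a two-dimensional problem only the out-of-plane component A_z of the vector potential survives, so the field computation reduces to a scalar equation for the complex amplitude.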

In general, the phonon density of states g(ω) dω is a complicated function which can be measured directly in experiments or computed from the results of computer simulations of a crystal. The explicit analytic expression of g(ω) for the Debye model is a consequence of the two assumptions that were made above for the frequency and velocity of the elastic waves. An even simpler assumption about g(ω) leads to the Einstein model, which first showed how quantum effects lead to deviations from the classical equipartition result, as seen experimentally. In the Einstein model, one assumes that only one level at frequency ωE is appreciably populated by phonons, so that g(ω) = δ(ω − ωE) and, for each of the Einstein modes, ... [Pg.357]
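The consequence of the Einstein assumption can be sketched numerically: with all modes at one frequency, each contributes the quantum harmonic-oscillator heat capacity, which recovers the classical equipartition value k_B only at high temperature. The Einstein temperature below is an illustrative choice:

```python
# Einstein-model heat capacity per mode, showing quantum suppression at
# low T and the classical equipartition limit k_B at high T.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K

def einstein_heat_capacity(T, theta_E):
    """Heat capacity per mode (J/K) at temperature T, where
    theta_E = hbar * omega_E / k_B is the Einstein temperature."""
    x = theta_E / T
    return K_B * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

theta_E = 300.0  # K, illustrative Einstein temperature
for T in (30.0, 300.0, 3000.0):
    print(T, einstein_heat_capacity(T, theta_E) / K_B)
# For T >> theta_E the ratio approaches 1 (classical equipartition);
# for T << theta_E it is exponentially suppressed (the quantum deviation).
```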

If we wish to know the number of (VpV)-collisions that actually take place in this small time interval, we need to know exactly where each particle is located and then follow the motion of all the particles from time t to time t + δt. In fact, this is what is done in computer-simulated molecular dynamics. We wish to avoid this exact specification of the particle trajectories, and instead carry out a plausible argument for the computation of r. To do this, Boltzmann made the following assumption, called the Stosszahlansatz, which we encountered already in the calculation of the mean free path ... [Pg.678]
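The kind of quantity this assumption lets one compute without tracking trajectories can be illustrated with the hard-sphere mean free path and collision frequency for a dilute gas. The molecular data below are approximate, nitrogen-like values chosen for illustration:

```python
# Hard-sphere mean free path and collision frequency, assuming molecular
# chaos (uncorrelated pre-collision velocities), as in the Stosszahlansatz.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_free_path(n, d):
    """Mean free path (m) for number density n (1/m^3) and diameter d (m)."""
    return 1.0 / (math.sqrt(2.0) * n * math.pi * d**2)

def collision_frequency(n, d, T, m):
    """Collisions per molecule per second; v_mean = sqrt(8 k T / (pi m))."""
    v_mean = math.sqrt(8.0 * K_B * T / (math.pi * m))
    return v_mean / mean_free_path(n, d)

# Nitrogen-like gas near ambient conditions (approximate input values):
n = 2.5e25          # molecules per m^3
d = 3.7e-10         # molecular diameter, m
m = 4.65e-26        # molecular mass, kg
print(mean_free_path(n, d))                  # of order 1e-7 m
print(collision_frequency(n, d, 300.0, m))   # of order 1e9-1e10 per second
```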

The two sources of stochasticity are conceptually and computationally quite distinct. In (A) we do not know the exact equations of motion and we solve instead phenomenological equations. There is no systematic way in which we can approach the exact equations of motion. For example, in the Langevin approach the friction and the random force are rarely extracted from a microscopic model. This makes it necessary to use a rather arbitrary selection of parameters, such as the amplitude of the random force or the friction coefficient. On the other hand, the equations in (B) are based on atomic information, and it is the solution that is approximate. For example, to compute a trajectory we make the ad hoc assumption of a Gaussian distribution of numerical errors. In the present article we also argue that, for practical reasons, it is not possible to ignore the numerical errors, even in approach (A). [Pg.264]
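A minimal sketch of approach (A) makes the arbitrariness concrete: the friction coefficient and random-force amplitude below are chosen by hand, not derived from a microscopic model, with the two linked only by the fluctuation-dissipation relation:

```python
# One-particle Langevin dynamics integrated with a simple Euler-Maruyama
# step. Parameters (gamma, kT, mass, dt) are illustrative hand-picked values.
import math, random

def langevin_step(x, v, dt, gamma, kT, mass, force):
    """One Euler-Maruyama step of m dv = F(x) dt - gamma v dt + random force,
    with random-force amplitude sqrt(2 gamma kT dt) (fluctuation-dissipation)."""
    sigma = math.sqrt(2.0 * gamma * kT * dt) / mass
    v += (force(x) - gamma * v) / mass * dt + sigma * random.gauss(0.0, 1.0)
    x += v * dt
    return x, v

# Harmonic well F = -k x; at equilibrium equipartition demands <v^2> = kT/m.
random.seed(0)
k, mass, kT, gamma, dt = 1.0, 1.0, 1.0, 2.0, 0.01
spring = lambda q: -k * q
x, v, v2_sum, nsteps = 0.0, 0.0, 0.0, 100_000
for _ in range(nsteps):
    x, v = langevin_step(x, v, dt, gamma, kT, mass, spring)
    v2_sum += v * v
avg_v2 = v2_sum / nsteps
print(avg_v2)    # close to kT/mass = 1.0 (equipartition)
```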

The constant 9 is the initial population of level Ek and thus computable from the initial data, Eq. (16). All this turns out to be true if the following assumption on the eigenspaces and eigenenergies of H q) is fulfilled ... [Pg.386]

Molecular enthalpies and entropies can be broken down into the contributions from translational, vibrational, and rotational motions as well as the electronic energies. These values are often printed out along with the results of vibrational frequency calculations. Once the vibrational frequencies are known, a relatively trivial amount of computer time is needed to compute these. The values that are printed out are usually based on ideal gas assumptions. [Pg.96]
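The harmonic-oscillator, ideal-gas bookkeeping described above can be sketched for the vibrational contribution; the frequencies below are illustrative, not from any real calculation:

```python
# Vibrational contributions to thermal energy and entropy from a list of
# harmonic frequencies (ideal-gas, harmonic-oscillator assumptions).
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e10     # speed of light, cm/s (converts wavenumbers to Hz)
R = 8.314462618       # gas constant, J/(mol K)

def vibrational_thermo(wavenumbers_cm, T):
    """Return (E_vib, S_vib) in J/mol and J/(mol K) in the harmonic
    approximation; zero-point energy is included in E_vib."""
    E, S = 0.0, 0.0
    for nu in wavenumbers_cm:
        x = H * C * nu / (K_B * T)        # h*nu / kT, dimensionless
        E += R * T * x * (0.5 + 1.0 / (math.exp(x) - 1.0))
        S += R * (x / (math.exp(x) - 1.0) - math.log(1.0 - math.exp(-x)))
    return E, S

# Two modes of a hypothetical molecule at 298.15 K:
E_vib, S_vib = vibrational_thermo([500.0, 1600.0], 298.15)
print(E_vib, S_vib)
```

The translational and rotational contributions follow similar closed-form ideal-gas expressions, which is why these terms cost almost no computer time once the frequencies are in hand.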

There are numerous articles and references on computational research studies. If none exist for the task at hand, the researcher may have to guess which method to use based on its assumptions. It is then prudent to perform a short study to verify the method's accuracy before applying it to an unknown. When an expert predicts an error or best method without the benefit of prior related research, he or she should have a fair amount of knowledge about available options. A savvy researcher must know the merits and drawbacks of various methods and software packages in order to make an informed choice. The bibliography at the end of this chapter lists sources for reviewing accuracy data. Appendix A of this book provides short reviews of many software packages. [Pg.135]

The solvent-accessible surface area (SASA) method is built around the assumption that the greatest amount of interaction with the solvent occurs in the area very close to the solute molecule. This is accounted for by determining a surface area for each atom or group of atoms that is in contact with the solvent. The free energy of solvation ΔG° is then computed by... [Pg.208]
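The bookkeeping behind this kind of SASA model can be sketched as a sum of per-atom terms, a surface-tension-like parameter times the exposed area. The parameters and areas below are purely illustrative, not values from any particular force field:

```python
# Toy SASA solvation energy: sum of sigma_i * A_i over exposed atoms.
# sigma values and areas are hypothetical, for illustration only.

def sasa_solvation_energy(atoms):
    """atoms: list of (atom_type, exposed_area_A2);
    returns a solvation free energy in kcal/mol."""
    sigma = {"C": 0.012, "O": -0.060, "N": -0.050}  # kcal/(mol A^2), invented
    return sum(sigma[atom_type] * area for atom_type, area in atoms)

# A toy solute: two exposed carbons and one exposed oxygen
dG = sasa_solvation_energy([("C", 30.0), ("C", 25.0), ("O", 20.0)])
print(round(dG, 3))   # 0.012*55 - 0.060*20 = -0.54
```

In a real implementation the areas come from a geometric calculation (rolling a probe sphere over the van der Waals surface); only the final weighted sum is shown here.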

The rotational isomeric state (RIS) model assumes that conformational angles can take only certain discrete values. This assumption is physically reasonable while allowing statistical averages to be computed easily. The model can be used to generate trial conformations, for which energies can be computed using molecular mechanics, and to derive simple analytic equations that predict polymer properties based on a few values, such as the preferred angle... [Pg.308]
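RIS-style trial-conformation generation can be sketched as follows: each backbone dihedral takes one of three discrete states (trans, gauche+, gauche-), chains are built with fixed bond length and bond angle, and a statistical average (here the mean-square end-to-end distance) is accumulated. The geometry and state populations are illustrative, not fitted RIS parameters:

```python
# Trial conformations from discrete dihedral states and a simple
# statistical average over them (mean-square end-to-end distance).
import math, random

L_BOND = 1.53                       # bond length, angstrom (illustrative)
ALPHA = math.radians(68.0)          # bend angle between successive bonds
STATES = [180.0, 60.0, -60.0]       # trans, gauche+, gauche- dihedrals, deg
WEIGHTS = [0.6, 0.2, 0.2]           # hypothetical state populations

def next_bond(d, p, phi):
    """Next unit bond direction: bend by ALPHA, with dihedral phi measured
    from the previous bend plane (phi = 180 deg continues a planar zig-zag).
    d is the current bond direction; p is an in-plane unit reference vector
    perpendicular to d."""
    c = (d[1]*p[2] - d[2]*p[1], d[2]*p[0] - d[0]*p[2], d[0]*p[1] - d[1]*p[0])
    ca, sa = math.cos(ALPHA), math.sin(ALPHA)
    cp, sp = math.cos(phi), math.sin(phi)
    d2 = tuple(ca*d[i] + sa*(cp*p[i] + sp*c[i]) for i in range(3))
    p2 = tuple((ca*d2[i] - d[i]) / sa for i in range(3))  # unit, perp. to d2
    return d2, p2

def mean_square_r(n_bonds, n_chains):
    """Average squared end-to-end distance over random RIS conformations."""
    total = 0.0
    for _ in range(n_chains):
        d, p = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
        x = y = z = 0.0
        for _ in range(n_bonds):
            x += L_BOND * d[0]; y += L_BOND * d[1]; z += L_BOND * d[2]
            phi = math.radians(random.choices(STATES, weights=WEIGHTS)[0])
            d, p = next_bond(d, p, phi)
        total += x*x + y*y + z*z
    return total / n_chains

random.seed(1)
ratio = mean_square_r(50, 500) / (50 * L_BOND**2)
print(round(ratio, 2))   # > 1: trans preference stiffens the chain relative
                         # to a freely jointed chain of the same bonds
```

A genuine RIS treatment would weight states with a statistical-weight (transfer) matrix that couples neighboring bonds; here each dihedral is drawn independently, which is the simplest version of the idea.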





© 2024 chempedia.info