Big Chemical Encyclopedia


Problem size

Iterative approaches, including time-dependent methods, are especially successful for very large-scale calculations because they generally involve the action of a very localized operator (the Hamiltonian) on a function defined on a grid. The effort increases relatively mildly with the problem size, since it is proportional to the number of points used to describe the wavefunction (and not to the cube of the number of basis functions, as is the case for methods involving matrix diagonalization). Present computational power allows calculations... [Pg.2302]
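The O(N) cost of applying a localized operator on a grid can be illustrated with a minimal sketch. The finite-difference Hamiltonian and harmonic potential below are illustrative assumptions, not taken from the original text; the point is that each grid point touches only its neighbours, whereas diagonalizing the full N x N matrix would cost O(N^3).

```python
import numpy as np

def apply_hamiltonian(psi, dx, v):
    """Apply a 1-D finite-difference Hamiltonian H = -1/2 d^2/dx^2 + V
    to a grid wavefunction. Cost is O(N): each point couples only to
    its nearest neighbours, because the operator is local."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[:-2] - 2.0 * psi[1:-1] + psi[2:]) / dx**2
    return -0.5 * lap + v * psi

n = 1000
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
v = 0.5 * x**2                      # harmonic potential (illustrative)
psi = np.exp(-x**2 / 2.0)           # trial wavefunction on the grid

h_psi = apply_hamiltonian(psi, dx, v)        # O(N) work per application
energy = psi @ h_psi / (psi @ psi)           # Rayleigh quotient, ~0.5 here
```

For this trial function (the exact harmonic-oscillator ground state) the Rayleigh quotient comes out close to 0.5 hartree, with only the O(dx^2) discretization error of the grid.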

It is a well-known fact that the Hartree-Fock model does not describe bond dissociation correctly. For example, the H2 molecule will dissociate to an H+ ion and an H- ion rather than two H atoms as the bond length is increased. Other methods will dissociate to the correct products; however, the difference in energy between the molecule and its dissociated parts will not be correct. There are several different reasons for these problems: size-consistency, size-extensivity, wave function construction, and basis set superposition error. [Pg.223]

Although the suite of programs can be run from the floppy disk, running from a hard disk is recommended for reasons of speed and problem size. To do this, in DOS, type MD directory-name and press Enter, then CD directory-name and press Enter. Type COPY A:*.* and the disk will be copied to that directory. Type FTAPSUIT and the screen shown in Figure 6.4-3 is presented. [Pg.240]

Problem: Size, Duration, and Flux from a BLEVE (CCPS, 1989)... [Pg.344]

SCF CPU and Disk Requirements by Problem Size for Linear CnH2n+2... [Pg.32]

The table on the next page indicates the relationship between problem size and resource requirements for various theoretical methods. Problem size is measured primarily as the total number of basis functions (N) involved in a calculation, which itself depends on both the system size and the basis set chosen; some items depend also on the number of occupied and virtual (unoccupied) orbitals (O and V, respectively), which again depend on both the molecular system and the basis set. The table lists both the formal, algorithmic dependence and the actual dependence as implemented in Gaussian (as of this writing), which may be somewhat better due to various computational techniques... [Pg.122]

The table indicates how resource usage varies by problem size. For example, it indicates that for direct MP2 energy calculations, CPU requirements scale roughly as the fourth power of the number of basis functions if the number of electrons stays the same. Using the table with timings from previous jobs (using the same method and executed on the same computer system) should enable you to estimate how long a potential job will run. [Pg.122]
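The estimation procedure just described can be sketched in a few lines. The reference timing below is hypothetical; the fourth-power exponent is the scaling quoted above for direct MP2 energies.

```python
def estimate_runtime(t_ref, n_ref, n_new, order=4):
    """Extrapolate a job's CPU time from a previous timing on the same
    method and machine, assuming cost scales as N**order in the number
    of basis functions N (order=4 for direct MP2 energies)."""
    return t_ref * (n_new / n_ref) ** order

# Hypothetical reference job: 120 basis functions took 600 s.
t = estimate_runtime(600.0, 120, 240)   # doubling N at N^4 -> 16x longer
print(t)                                # 9600.0 s
```

The same one-liner covers any entry in the table by changing `order` to that method's scaling exponent.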

The complexity of an object, thought of as a final state of a formal computational process, is then classified according to how fast the required computation time grows as a function of the problem size N. The first nontrivial class of problems, class P, for example, consists of problems for which the computation time increases as some polynomial function of N (i.e., at most O(N^a)) for some a < infinity. Problems that can be solved with... [Pg.623]
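A quick numerical contrast, using hypothetical step-count functions, shows why a polynomial bound O(N^a) marks out the tractable class P while exponential growth does not:

```python
def poly_steps(n, a=3):
    """Step count bounded by a polynomial, N**a (class P behaviour)."""
    return n ** a

def exp_steps(n):
    """Exponential step count, 2**N, which quickly becomes intractable."""
    return 2 ** n

for n in (10, 20, 40):
    print(n, poly_steps(n), exp_steps(n))
# at n=40 the polynomial count is 64,000 while 2**40 exceeds a trillion
```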

We use computational solution of the steady Navier-Stokes equations in cylindrical coordinates to determine the optimal operating conditions. Fortunately, in most CVD processes the active gases that lead to deposition are present in only trace amounts in a carrier gas. Since the active gases are present in such small amounts, their presence has a negligible effect on the flow of the carrier. Thus, for the purposes of determining the effects of buoyancy and confinement, the simulations can model the carrier gas alone (or with simplified chemical reaction models) - an enormous reduction in the problem size. This approach to CVD modeling has been used extensively by Jensen and his coworkers (cf. Houtman et al.)... [Pg.337]

In this chapter, a state sequence network (SSN) representation has been presented. Based on this representation, a continuous-time formulation for the scheduling of multipurpose batch processes is developed. This representation involves states only, which are characteristic of the units and tasks present in the process. Due to the elimination of the tasks and units that are encountered in formulations based on the state task network (STN), the SSN-based formulation leads to a much smaller number of binary variables and fewer constraints. This eventually leads to much shorter CPU times, as substantiated by both examples presented in this chapter. This advantage becomes more apparent as the problem size increases. In the second literature example, which involved a multipurpose plant producing two products, this formulation required 40 binary variables and gave a performance index of 1513.35, whilst other continuous-time formulations required between 48 (Ierapetritou and Floudas, 1998) and 147 binary variables (Zhang, 1995). [Pg.37]

Despite advances in MILP solution methods, problem size is still a major issue, since scheduling problems are known to be NP-hard (i.e., computation time increases exponentially with size in the worst case). While effective modeling can help to mitigate the issue of computational efficiency to some extent, special solution strategies such as decomposition and aggregation are needed in order to address the ever-increasing sizes of real-world problems. [Pg.182]

These combinatorial problems, and many others as well, have a finite number of feasible solutions, a number that increases rapidly with problem size. In a job-shop scheduling problem, the size is measured by the number of jobs. In a traveling salesman problem, it is measured by the number of arcs or nodes in the graph. For a particular problem type and size, each distinct set of problem data defines an instance of the problem. In a traveling salesman problem, the data are the travel times between cities. In a job sequencing problem the data are the processing and set-up times, the due dates, and the penalty costs. [Pg.390]
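The rapid growth of the feasible-solution count can be made concrete. The counting formulas below are standard (n! orderings for single-machine sequencing; (n-1)!/2 distinct tours for a symmetric traveling salesman problem), though the sizes chosen are arbitrary:

```python
import math

def job_sequences(n_jobs):
    """Feasible orderings in a single-machine job sequencing problem."""
    return math.factorial(n_jobs)

def tsp_tours(n_cities):
    """Distinct tours in a symmetric traveling salesman problem:
    fix the start city and identify each tour with its reversal."""
    return math.factorial(n_cities - 1) // 2

print(job_sequences(10))   # 3628800
print(tsp_tours(10))       # 181440
```

Doubling the problem size from 10 to 20 jobs multiplies the count by more than 10^11, which is why enumeration fails and the instance data (travel times, due dates, penalties) must be exploited by smarter algorithms.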

Scatter search has been implemented in software called OPTQUEST (see www.opttek.com). OPTQUEST is available as a callable library written in C, which can be invoked from any C program, or as a dynamic linked library (DLL), which can be called from a variety of languages including C, Visual Basic, and Java. The callable library consists of a set of functions that (1) input the problem size and data, (2) set options and tolerances, (3) perform steps 1 through 3 to create an initial reference set, (4) retrieve a trial solution from OPTQUEST to be input to the improvement method, and (5) input the solution resulting from the improvement method back into OPTQUEST, which uses it as the input to step 7 of the scatter search protocol. The improvement method is provided by the user. We use the term improvement loosely here because the user can simply provide an evaluation of the objective and constraint functions. [Pg.409]
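A hypothetical sketch of this caller protocol is given below. The function names, the reference-set size, and the midpoint-combination rule are all illustrative placeholders and do not correspond to OPTQUEST's actual library entry points; the sketch only mirrors the loop structure described above, including a trivial user "improvement" method that merely evaluates the objective.

```python
import random

def improve(x, objective):
    """User-supplied improvement method. Per the text, a plain
    evaluation of the objective is an acceptable 'improvement'."""
    return x, objective(x)

def scatter_search_driver(objective, bounds, n_iters=50, seed=0):
    rng = random.Random(seed)
    # Steps (1)-(3): input size/data and build a diverse reference set.
    ref_set = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
               for _ in range(10)]
    best_x, best_f = None, float("inf")
    for _ in range(n_iters):
        # Step (4): the library proposes a trial solution
        # (here: a simple midpoint combination of two reference points).
        a, b = rng.sample(ref_set, 2)
        trial = tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))
        # User improvement method, then step (5): feed the result back
        # so it can update the reference set.
        x, f = improve(trial, objective)
        if f < best_f:
            best_x, best_f = x, f
            ref_set[rng.randrange(len(ref_set))] = x
    return best_x, best_f

x, f = scatter_search_driver(lambda v: sum(t * t for t in v),
                             bounds=[(-5.0, 5.0)] * 2)
```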

Either approach results in gradient calculations with costs proportional to problem size; the effort for evaluating gradients with adjoint approaches is... [Pg.219]

We can prove that the above algorithm converges in polynomial time (i.e., the number of floating-point operations is proportional to a polynomial in the problem sizes m and n) by choosing the algorithm's constants appropriately. See Refs. [Pg.113]

Knipling R, Wang S. Revised Estimates of the U.S. Drowsy Driver Crash Problem Size Based on the General Estimates System Case Reviews, the 39th Annual Proceedings of the Association for the Advancement of Automotive Medicine, Chicago, IL, October 16-18, 1995. [Pg.248]

Execution times for the overall ammonia plant model, of which the CO2 capture system is a small part, are on the order of 30 s for the parameter estimation case, and less than a minute for an Optimize case. The model consists of over 65,000 variables, 60,000 equations, and over 300,000 nonzero Jacobian elements (partial derivatives of the equation residuals with respect to the variables). This problem size is moderate for RTO applications, since problems over four times as large have been deployed on many occasions. Residuals are solved to quite tight tolerances, with the tolerance for the worst scaled residual set at approximately 1.0 × 10⁻⁹ or less. A scaled residual is the residual equation imbalance times its Lagrange multiplier, a measure of its importance. Tight tolerances are required to assure that all equations (residuals) are solved well, even when they involve, for instance, very small but important numbers such as electrolyte molar balances. [Pg.146]
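The scaled-residual convergence test can be sketched as follows. The residual and multiplier values are invented for illustration and are not from the ammonia model; the sketch shows why a tiny imbalance (e.g., in an electrolyte molar balance) can still fail the test when its Lagrange multiplier is large.

```python
import numpy as np

def worst_scaled_residual(residuals, multipliers):
    """Scaled residual = equation imbalance times its Lagrange
    multiplier (a measure of importance); the convergence test
    bounds the worst of these."""
    scaled = np.abs(residuals) * np.abs(multipliers)
    return scaled.max()

# Illustrative numbers only: the third residual is the smallest in
# absolute terms but has the largest multiplier, so it dominates.
r = np.array([1e-12, 3e-11, 5e-13])
lam = np.array([1e2, 1e1, 1e4])
ok = worst_scaled_residual(r, lam) <= 1.0e-9   # fails: 5e-9 > 1e-9
```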

For each outer-loop function and gradient evaluation, 4 and 14 inner-loop problems were solved, respectively (a total of 124 inner-loop problems). For the inner-loop problems, 12-14 iterations for Tasks 1 and 3 and 5-7 iterations for Tasks 2 and 4 were usually required. For this problem size and level of detail in the dynamic and physical property models, the computation time of slightly over 5 hours (on a SPARC-1 workstation) is acceptable. Note that the optimum number of plates and optimum recovery for Task 1 (Table 7.2) are very close to the initial number of plates and recovery (Table 7.1). This is merely a coincidence. However, during the function evaluation step the optimisation algorithm hit the lower and upper bounds of the variables (shown in Table 7.1) a number of times. Note that the variable bounds were chosen through physical reasoning, as explained in detail in Chapter 6 and in Mujtaba and Macchietto (1993). [Pg.213]

We assume that the function value and gradient are evaluated together in αn operations (additions and multiplications), where n is the problem size and α is a problem-dependent number. The Hessian can then be computed in (α/2)n(n + 1) operations. When a sparse preconditioner M is used, we denote its number of nonzeros by m and the number of nonzeros in its Cholesky factor, L, by l. (We assume here that M either is positive-definite or is factored by a modified Cholesky factorization.) Thus M can be computed in about (α/2)nm operations. As discussed in the previous section, it is advantageous to reorder the variables a priori to minimize the fill-in for M. Alternatively, a precon-... [Pg.47]
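The operation-count estimates above can be tabulated with a small sketch; the values chosen for n, m, and the problem-dependent constant α are arbitrary placeholders.

```python
def cost_fg(n, alpha):
    """Operations for one combined function + gradient evaluation:
    alpha * n, with alpha a problem-dependent constant."""
    return alpha * n

def cost_hessian(n, alpha):
    """Hessian cost via the (alpha/2) * n * (n + 1) estimate."""
    return (alpha / 2.0) * n * (n + 1)

def cost_preconditioner(n, m, alpha):
    """Sparse preconditioner M with m nonzeros: about (alpha/2)*n*m."""
    return (alpha / 2.0) * n * m

n, alpha, m = 1000, 8, 5000
print(cost_fg(n, alpha))                 # 8000
print(cost_hessian(n, alpha))            # 4004000.0
print(cost_preconditioner(n, m, alpha))  # 20000000.0
```

The comparison makes the text's point explicit: the full Hessian grows quadratically in n, while a sparse M with m << n^2 nonzeros stays far cheaper.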


Equivalent size problem

Fleet sizing problem

Heat exchangers sizing problem

Particle size reduction milling problems

Problem of small sample size

Sample size problems with

Size Consistency Problem in the Energy

Sizes vapor pressure problem

Support Vector Machine Data Processing Method for Problems of Small Sample Size

The minimum habitat-size problem

Troubleshooting sizing problems
