Big Chemical Encyclopedia


Data-flow analysis

For an integer type, the maximum size is 32 bits and the number is assumed to be in two's-complement form. Optionally, a synthesis system may perform data-flow analysis of the model to determine the maximum size of an integer variable. For example,... [Pg.10]
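The width-narrowing idea above can be illustrated with a small sketch: once an analysis has bounded the range a variable can take, the minimal two's-complement width follows directly. The function name and range representation here are hypothetical, not taken from any particular synthesis tool.

```python
# Illustrative sketch: given a value range [lo, hi] proven by data-flow
# analysis, compute the smallest two's-complement bit width that holds it.

def twos_complement_width(lo: int, hi: int) -> int:
    """Smallest w such that [-2^(w-1), 2^(w-1)-1] covers [lo, hi]."""
    w = 1
    while not (-(1 << (w - 1)) <= lo and hi <= (1 << (w - 1)) - 1):
        w += 1
    return w

# A loop counter proven to stay in [0, 9] needs only 5 bits, not 32:
assert twos_complement_width(0, 9) == 5
assert twos_complement_width(-128, 127) == 8
```

In a synthesis flow this lets registers and datapath operators be sized to the proven range instead of the declared 32-bit maximum.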

A data flow analysis should be conducted to identify the creation and maintenance of electronic records. The life cycle of a record is shown in Figure 15.3 (based on GERM). [Pg.359]

Other attempts to automatically discover the structure and behavior of a software system come from the field of software reengineering. Cimitile et al. [600] describe an approach that uses data flow analysis to determine various properties of the source code to be wrapped. A necessary prerequisite for this - and for most of the other techniques in the area of software reengineering - is the availability of the source code that is to be analyzed. Again, a-posteriori integration, as presented in this section, is not constrained by this requirement. [Pg.589]

Cimitile, A., de Lucia, A., de Carlini, U.: Incremental migration strategies: Data flow analysis for wrapping. In: Proceedings of the 5th Working Conference on Reverse Engineering (WCRE '98), Hawaii, USA, pp. 59-68. IEEE Computer Society Press, Los Alamitos (1998)... [Pg.823]

To start architectural synthesis, an initial data flow analysis is required. It is also this process which resolves the very different nature of current designer interface languages (VHDL, HardwareC, Silage, ...). By standardizing on the result of this analysis, the input alternatives become available to all the synthesis projects. [Pg.25]

P. Feautrier. Data flow analysis of array and scalar references. Recherche Opérationnelle/Operations Research, 1991. [Pg.93]

Data-flow analysis A collection of techniques for reasoning, at compile time, about the flow of values at run-time. [Pg.12]

Analyses such as constant propagation are formulated as problems in data-flow analysis. Many data-flow problems have been developed for use in optimization. These include the following ... [Pg.17]
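Constant propagation, mentioned above, is a classic forward data-flow problem. As an illustrative sketch (restricted here to a single basic block of three-address code; a real framework would iterate to a fixed point over a CFG), the encoding of statements below is hypothetical:

```python
# Minimal sketch of constant propagation over straight-line three-address
# code: track which variables are provably constant after each statement.

def propagate_constants(block):
    """block: list of (dest, op, arg1, arg2); args are ints or variable names.
    Returns the variables known to be constant after the block."""
    env = {}

    def value(a):
        if isinstance(a, int):
            return a
        return env.get(a)  # None means "not a known constant"

    for dest, op, a1, a2 in block:
        v1, v2 = value(a1), value(a2)
        if v1 is not None and v2 is not None:
            env[dest] = v1 + v2 if op == '+' else v1 * v2
        else:
            env.pop(dest, None)  # dest can no longer be proven constant
    return env

block = [
    ('x', '+', 2, 3),      # x = 5
    ('y', '*', 'x', 4),    # y = 20
    ('z', '+', 'u', 1),    # u is unknown, so z stays unknown
]
assert propagate_constants(block) == {'x': 5, 'y': 20}
```

An optimizer would then replace uses of `x` and `y` with the computed constants and possibly delete the dead assignments.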

The remaining, rather technical correctness criteria are specific to every SLDNF refutation procedure. Checking them is usually a simple manual task. It can also be handled by data-flow analysis or abstract-interpretation techniques that reveal, for each directionality, a correct permutation of the program clauses, a correct permutation of the literals in the body of each clause, and sometimes the addition or deletion of literals. Multi-directionality is usually difficult to achieve. The corresponding tool of the Folon environment is described by [De Boeck and Le Charlier 90]. [Pg.62]

Static analysis is a technique (usually automated) which does not involve execution of code but consists of algebraic examination of source code. It involves a succession of procedures whereby the paths through the code, the use of variables, and the algebraic functions of the algorithms are analyzed. There are packages available which carry out the procedures and, indeed, modern compilers frequently carry out some of the static analysis procedures such as data flow analysis. [Pg.89]

Data-flow analysis is a technique to determine the lifetime of values, which is used extensively in compiler construction [2]. High-level synthesis often uses data-flow analysis to determine the intended behavior, for example, unfolding variables [15, 38]. In HIS, lifetime information is used during allocation and scheduling. [Pg.83]

HIS uses a modified global data-flow analysis based loosely on du-chaining (definition-use) [2]. Since most high-level synthesis literature does not address this step in detail and often restricts it to so-called basic blocks (e.g., [38]), and since data-flow analysis can be computationally expensive, we include the exact algorithm used below. It first computes the reachability of each assigned value and then its lifetime. HIS performs global data-flow analysis not restricted to basic blocks. In this case the control flow (conditional execution and iteration) must be taken into account ... [Pg.83]
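A global, du-chain-style analysis in the spirit of the step described above can be sketched as iterative reaching definitions over a flowchart-style CFG (one statement per node, so conditionals are handled rather than only basic blocks). The CFG encoding below is illustrative, not the exact HIS algorithm:

```python
# Sketch of global reaching-definitions analysis: for each node, which
# definitions can reach its entry. Pairing these with uses yields du-chains.

def reaching_definitions(nodes, succ):
    """nodes: {id: (defined_var_or_None, used_vars)}; succ: {id: [ids]}.
    Returns IN sets: node ids whose definitions reach each node's entry."""
    preds = {n: [] for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].append(n)
    IN = {n: set() for n in nodes}
    OUT = {n: set() for n in nodes}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for n, (d, _uses) in nodes.items():
            new_in = set().union(*(OUT[p] for p in preds[n])) if preds[n] else set()
            # kill earlier definitions of the same variable, add this one
            new_out = {m for m in new_in if nodes[m][0] != d} | ({n} if d else set())
            if new_in != IN[n] or new_out != OUT[n]:
                IN[n], OUT[n], changed = new_in, new_out, True
    return IN

# 1: x=..   2: if(c)   3: x=.. (one branch)   4: use x
nodes = {1: ('x', ()), 2: (None, ('c',)), 3: ('x', ()), 4: (None, ('x',))}
succ = {1: [2], 2: [3, 4], 3: [4], 4: []}
IN = reaching_definitions(nodes, succ)
assert IN[4] == {1, 3}   # both definitions of x can reach the use
```

Because node 3 sits on only one branch of the conditional, both definitions of `x` reach node 4 - exactly the control-flow effect a basic-block-only analysis would miss.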

Registers are allocated to store values which are generated in one control step and used in another control step. These values are determined using lifetime information generated during global data flow analysis. Each value alive at the... [Pg.91]
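The allocation step described above can be sketched with a left-edge-style algorithm: values whose lifetimes do not overlap may share a register. The interval encoding is illustrative, not the exact HIS binding procedure:

```python
# Sketch: bind values to registers from lifetime intervals produced by
# data-flow analysis; reuse a register once its previous value is dead.

def allocate_registers(lifetimes):
    """lifetimes: {value: (first_step, last_step)}, inclusive control steps.
    Returns {value: register_index}."""
    regs_last_use = []   # per register, the last control step it is occupied
    binding = {}
    for v, (start, end) in sorted(lifetimes.items(), key=lambda kv: kv[1][0]):
        for r, last in enumerate(regs_last_use):
            if last < start:             # no lifetime overlap: reuse register
                regs_last_use[r] = end
                binding[v] = r
                break
        else:                            # every register is busy: add one
            regs_last_use.append(end)
            binding[v] = len(regs_last_use) - 1
    return binding

b = allocate_registers({'a': (0, 2), 'b': (1, 3), 'c': (3, 5)})
assert b['a'] != b['b']      # overlapping lifetimes need distinct registers
assert b['c'] == b['a']      # 'a' is dead by step 3, so 'c' can reuse it
```

Three values thus fit into two registers; the lifetime information from the global data-flow analysis is what makes the sharing safe.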

Global data-flow analysis and path analysis are used for scheduling and allocation. AFAP (as-fast-as-possible) schedules are obtained by scheduling all... [Pg.100]

Incompatibility arises from overlapping lifetimes of variables. Data flow analysis is used to gather the information needed to detect this situation. We assume that the CFG is reducible. The data flow analysis calculates, mainly, four sets for each node q. LVTOP(q) is the set of variables live on entering node q. LVBOT(q) is the set of variables live on leaving node q. Secondly, RDTOP(q) is the set of variables that are assigned a value at the top of node q; RDBOT(q) is defined correspondingly. The sets are defined by recursive equations ... [Pg.375]
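The recursive equations for the live-variable sets can be solved iteratively. A minimal sketch, assuming LVBOT(q) is the union of LVTOP over q's successors and LVTOP(q) = (LVBOT(q) - defs(q)) ∪ uses(q); the node encoding is hypothetical:

```python
# Illustrative fixed-point computation of the LVTOP/LVBOT sets from the
# text: a backward live-variables analysis over the CFG.

def live_variables(nodes, succ):
    """nodes: {q: (defs, uses)} with sets of variable names;
    succ: {q: [successor ids]}. Returns (LVTOP, LVBOT)."""
    LVTOP = {q: set() for q in nodes}
    LVBOT = {q: set() for q in nodes}
    changed = True
    while changed:                               # iterate to a fixed point
        changed = False
        for q, (defs, uses) in nodes.items():
            bot = set().union(*(LVTOP[s] for s in succ[q])) if succ[q] else set()
            top = (bot - defs) | uses
            if bot != LVBOT[q] or top != LVTOP[q]:
                LVBOT[q], LVTOP[q], changed = bot, top, True
    return LVTOP, LVBOT

# 1: x=5    2: y=x+1    3: return y
nodes = {1: ({'x'}, set()), 2: ({'y'}, {'x'}), 3: (set(), {'y'})}
succ = {1: [2], 2: [3], 3: []}
LVTOP, LVBOT = live_variables(nodes, succ)
assert LVBOT[1] == {'x'}     # x is live when leaving node 1
assert LVTOP[1] == set()     # nothing is live at program entry
```

Two variables are then incompatible (cannot share storage) exactly when one is live at a point where the other is defined or live.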

Data flow analysis is used to compute data dependencies. We give a short account of it here, as our algorithm uses a particular data flow analysis. Details can be found in, e.g., [19]. [Pg.52]

Data flow analysis is often defined over a flowchart representation of the program: a flowchart is essentially a CFG whose nodes are individual statements, or conditions, rather than basic blocks. (For our model language in Section 2.1 the nodes are assignments, conditions, or skip statements.) The edges in the flowchart are the program points; a data flow analysis computes a set of data... [Pg.52]

The data dependence analysis uses SLV rather than RD. If the SLV set succeeding an assignment x = a contains x, then we know that some part of the slicing criterion will depend on the value of x produced there, and thus x = a can be put into the slice immediately, already during the data flow analysis. No DDG has to be built. [Pg.54]

Solving the local slicing problems amounts to computing both data and control dependencies to identify statements to slice. We use the SLV analysis, which is a backward data flow analysis, for computing data dependencies; the analysis of a block proceeds backwards from the statement(s) of the slicing criterion towards the condition. If the block is acyclic then the local analysis terminates there; if it is cyclic then it continues backwards, through the backedge, towards the condition. [Pg.56]
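The backward SLV pass described above can be sketched for a single acyclic block of assignments: walk backwards from the criterion, and whenever an assignment's target is in the current SLV set, add the statement to the slice on the spot and replace the target by the variables it uses. The statement encoding is illustrative:

```python
# Sketch of a backward strongly-live-variables (SLV) pass that builds the
# slice during the data flow analysis itself, with no DDG.

def slv_slice(block, criterion):
    """block: list of (dest, used_vars) assignments in program order;
    criterion: set of variables of interest at the end of the block.
    Returns the indices of statements included in the slice."""
    slv = set(criterion)
    in_slice = []
    for i in range(len(block) - 1, -1, -1):      # walk the block backwards
        dest, used = block[i]
        if dest in slv:                          # criterion depends on dest
            in_slice.append(i)                   # include immediately
            slv = (slv - {dest}) | set(used)     # its operands become live
    return sorted(in_slice)

block = [('a', ['u']),        # 0: a = u
         ('b', ['v']),        # 1: b = v   (irrelevant to criterion {x})
         ('x', ['a', 'w'])]   # 2: x = a + w
assert slv_slice(block, {'x'}) == [0, 2]
```

Statement 1 never enters the slice because `b` is never strongly live, which is exactly why no separate dependence graph has to be built.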


See other pages where Data-flow analysis is mentioned: [Pg.242]    [Pg.179]    [Pg.310]    [Pg.310]    [Pg.28]    [Pg.28]    [Pg.149]    [Pg.179]    [Pg.99]    [Pg.171]    [Pg.69]    [Pg.81]    [Pg.83]    [Pg.88]    [Pg.128]    [Pg.51]    [Pg.52]    [Pg.53]    [Pg.53]    [Pg.57]   
See also in source #XX -- [ Pg.9 , Pg.179 ]

See also in source #XX -- [ Pg.27 ]







Continuous flow method, data analysis

Data flow

Plug flow reactors isothermal data, analysis

Stirred-flow reactor data analysis using

© 2024 chempedia.info