Big Chemical Encyclopedia


Complexity metrics

The different classes of molecular skeletons are grouped into different boxes in the charts; each skeleton is accompanied by the value of the complexity metric, S, the specific complexity metric, S/H, and, in boldface characters, by a high-rank taxonomic assignment of the organismic source. [Pg.19]

Values of the skeletal complexity metric S for these alkaloids are small, in the range 36-49. In contrast, S/H values are relatively high, in particular for Strychnos alkaloids (0.88, Chart 6.1.3.A); this reflects a high molecular complexity that stems from the many cycles and chirality centers, in spite of the low molecular weights. [Pg.21]

S = complexity metric; H = bond count for molecular size (Whitlock 1998). [Pg.90]
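
The S metric described above is a linearly additive count over structural features. A minimal sketch follows; the weights (4 per ring, 2 per unsaturation, 1 per heteroatom, 2 per chirality center) follow the commonly quoted form of the metric and are an assumption here — consult Whitlock (1998) for the authoritative definition:

```python
def whitlock_complexity(rings, unsaturations, heteroatoms, chiral_centers):
    # Linearly additive skeletal complexity S; the weights are the
    # commonly quoted ones and should be checked against Whitlock (1998).
    return 4 * rings + 2 * unsaturations + 1 * heteroatoms + 2 * chiral_centers

def specific_complexity(s, bond_count):
    # S/H: complexity normalized by the bond count H, a molecular-size proxy.
    return s / bond_count
```

Because the count is linearly additive, it ignores interactions between functional groups — a limitation the excerpt below ([Pg.216]) makes explicit.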

Easily available advanced synthons, such as the carbohydrates, amino acids, hydroxyacids, and terpenoids, make the synthetic task easier than the complexity metrics of the target suggest; this is especially true for the glycosides, if the carbohydrate portion can be introduced intact. It must also be borne in mind that the S metric is counted in a linearly additive fashion, neglecting interactions between the functional groups (Whitlock 1998); such interactions are not treated adequately by any method so far proposed to calculate the molecular complexity. Moreover, no attention was paid here to the graphic analysis of the synthesis plan based on the molecular complexity of the intermediates; these aspects have recently been reviewed (Bertz 1993; Whitlock 1998; Chanon 1998). [Pg.216]

Another application using both functional and physical complexity theories was presented by Kim (1999). He found that in lean manufacturing, the system complexity, which is affected by increased product variety, is much less than in an equivalent mass-production system. He proposed a series of system complexity metrics based on a complexity model developed using systems theory. These measures are (1) relationships between system components (number of flow paths, number of crossings in the flow paths, total travel distance by a part, and number of combinations of product and machine assignments) and (2) number of elementary system components. These metrics include a mix of structural (static, time-independent) and operational (dynamic, time-dependent) factors. No suggestion was offered regarding their relative importance or how they might be combined into one system complexity metric. [Pg.236]
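
The structural measures listed above are simple counts and sums over a plant layout. A sketch of how they might be collected follows; the function name and input format are illustrative, not Kim's (1999) notation:

```python
def system_complexity_measures(flow_paths, crossings, travel_distances,
                               products, machines, components):
    # Collect the structural measures named in the text: flow paths,
    # crossings, part travel distance, product-machine assignment
    # combinations, and elementary component count.
    return {
        "n_flow_paths": len(flow_paths),
        "n_crossings": crossings,
        "total_travel_distance": sum(travel_distances),
        "n_product_machine_assignments": len(products) * len(machines),
        "n_elementary_components": components,
    }

# Hypothetical three-machine cell with two products.
m = system_complexity_measures(
    flow_paths=["A-B", "B-C", "A-C"], crossings=1,
    travel_distances=[4.0, 6.0, 2.5],
    products=["P1", "P2"], machines=["M1", "M2", "M3"], components=5)
```

As the excerpt notes, no weighting scheme for combining these counts into one metric was proposed, so the sketch deliberately returns them separately.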

Additionally, it is possible to use attribute models to estimate or predict software reliability. This means that software reliability is predicted from attributes other than failure data. For example, it may be estimated from different complexity metrics, particularly in early phases of a project. Then the estimates are based on experience from earlier projects, collected in a reliability reference model as outlined in Fig. 3. [Pg.319]
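
One simple way to turn such a reliability reference model into an early estimate is to regress historical defect counts against a complexity metric and extrapolate to a new module. The sketch below uses an ordinary least-squares line; the data and the choice of a linear model are illustrative assumptions, not the method of Fig. 3:

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical reference data from earlier projects:
# (module complexity metric, defects observed in test).
history = [(5, 2), (10, 4), (20, 8), (40, 16)]
a, b = fit_line([c for c, _ in history], [d for _, d in history])
predicted_defects = a * 15 + b  # early estimate for a new module
```

The point of the excerpt stands in the sketch: no failure data from the new module itself is needed, only its complexity metric and the calibrated reference model.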

Software vulnerability is a strong theme nowadays when someone speaks about software reliability. This paper makes some connections between software vulnerability and software reliability. It was shown that some software reliability models are not adequate to analyze the entire set of failure data. This is why a new generation of models is necessary, considering environmental factors and explanatory variables covering complexity metrics. Also, we need more collections of data for testing ideas. [Pg.1285]

Complexity metric (SX): it is generally considered that a correlation exists between a module's complexity and its reliability. Although complexity can be measured in different ways, the authors of the model define... [Pg.2298]

Two types of SCA are apparent, syntactic and semantic analysis, and these are used for two purposes: demonstration of code integrity and functional correctness. Syntactic analysis is based on the structure of the code and has the ability to detect control-flow errors (e.g. unreachable code), data-flow errors (e.g. an OUT parameter that is not always written) and coding-standard violations. Additionally, a syntactic technique, Information Flow analysis, can be used to support software partitioning arguments by demonstrating that sections of code are independent of each other. Syntactic analysis can also be used to measure software and generate software metrics, though these tend to be relatively simple metrics such as McCabe's complexity metric. [Pg.168]
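
McCabe's metric is indeed simple to compute from the control-flow graph: V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe's V(G) = E - N + 2P for a control-flow graph with
    # E edges, N nodes and P connected components.
    return edges - nodes + 2 * components

# A single if/else: entry, then-branch, else-branch, join
# -> 4 nodes, 4 edges, so V(G) = 2 (one decision point + 1).
v = cyclomatic_complexity(edges=4, nodes=4)
```

Straight-line code (no branches) yields V(G) = 1, and each decision point adds one, which is why the metric is often computed simply as decisions + 1.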

Data from Figure 1 in E. J. Chaisson, "Energy rate density as a complexity metric and evolutionary driver," Complexity 16(3), p. 27 (2011). DOI: 10.1002/cplx.20323. [Pg.246]

Select appropriate metrics for multidimensional optimization: use ligand efficiency and lipophilic efficiency metrics in hit-to-lead optimization, and change to more complex metrics emphasizing dosage to support lead optimization. [Pg.9]
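
The two hit-to-lead metrics named above have simple standard forms: ligand efficiency LE ≈ 1.37 × pIC50 / HA (kcal/mol per heavy atom, with 1.37 ≈ 2.303·RT at ~300 K) and lipophilic efficiency LipE (LLE) = pIC50 − cLogP. A minimal sketch:

```python
def ligand_efficiency(pic50, heavy_atoms):
    # LE ~ 1.37 * pIC50 / HA, in kcal/mol per heavy (non-hydrogen) atom.
    return 1.37 * pic50 / heavy_atoms

def lipophilic_efficiency(pic50, clogp):
    # LipE (LLE) = pIC50 - cLogP.
    return pic50 - clogp

# Hypothetical hit: pIC50 = 8.0, 32 heavy atoms, cLogP = 3.0.
le = ligand_efficiency(8.0, 32)
lipe = lipophilic_efficiency(8.0, 3.0)
```

Values of LE above ~0.3 and LipE above ~5 are commonly quoted as desirable during hit-to-lead work, which is why both are tracked before switching to the dosage-oriented metrics mentioned for lead optimization.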

In the second phase, questions of dealing with the long-term issues of having an analyzer need to be evaluated. This is a very complex metric, which can include variables such as the staffing responsible for day-to-day operation and the per-year running cost of having such an analyzer. Running costs themselves include a need to define hardware consumables, process downtime for hardware maintenance, technician time, and the process-model maintenance cost. [Pg.939]
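
The running-cost components listed above roll up into a per-year figure. The sketch below is only an illustrative cost model under assumed line items and rates; the breakdown and all numbers are hypothetical:

```python
def annual_running_cost(consumables, downtime_hours, downtime_cost_per_hour,
                        technician_hours, technician_rate, model_maintenance):
    # Illustrative per-year cost-of-ownership roll-up; the line items mirror
    # the text (consumables, maintenance downtime, technician time, process
    # model upkeep), but the structure and rates are assumptions.
    return (consumables
            + downtime_hours * downtime_cost_per_hour
            + technician_hours * technician_rate
            + model_maintenance)

cost = annual_running_cost(consumables=12000, downtime_hours=40,
                           downtime_cost_per_hour=500, technician_hours=120,
                           technician_rate=60, model_maintenance=8000)
```

Even this toy roll-up shows why the metric is called complex: downtime cost dominates here, but the balance shifts entirely with the assumed rates.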

Gianazza, D., Guittet, K. 2006. Evaluation of air traffic complexity metrics using neural networks and sector status. Proceedings of the 2nd International Conference on Research in Air Transportation (ICRAT 2006), Belgrade, Serbia and Montenegro. [Pg.987]

Assigning the CPTs was done in a session where experts expressed how they believed different nodes should be coupled, how the nodes would relate to each other, and how this in turn would impact the complexity metric node. Rather than coming up with complete CPTs directly, the session was structured so that the experts decided the CPT framework through the use of numbers, lead words and color maps; the CPTs were decided indirectly. Some benefits of this approach were that the experts were able to express their beliefs about the entire BBN, detailed discussions on the CPTs were avoided, and simplification of the node relations ensured that the experts had an easier time understanding each other. Based on the results from the session, the initial CPTs will be created and adjusted using a subset of 20 LDs. [Pg.62]

Early complexity metrics were based almost entirely on the amount of code produced. Therefore, a function which contained 10 lines of code was considered more complex than one which contained only 5. However, since the number of lines of code often varies due to stylistic differences, Halstead developed a set of measures based on the number of operators and operands present within the code (Kearney et al., 1986). [Pg.124]
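
Halstead's measures derive everything from four counts: n1 and n2 (distinct operators and operands) and N1 and N2 (their total occurrences). From these come the vocabulary n = n1 + n2, the length N = N1 + N2, the volume V = N·log2(n), the difficulty D = (n1/2)·(N2/n2), and the effort E = D·V. A minimal sketch:

```python
import math

def halstead_measures(n1, n2, N1, N2):
    # n1/n2: distinct operators/operands; N1/N2: total occurrences.
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty,
            "effort": difficulty * volume}

# Hypothetical snippet with 4 distinct operators and 4 distinct operands,
# each appearing 8 times in total.
h = halstead_measures(n1=4, n2=4, N1=8, N2=8)
```

Unlike a raw line count, these measures are insensitive to formatting style: reindenting or splitting lines changes neither the operator nor the operand counts.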

While this metric is very useful, it is obviously not the last word on complexity metrics. [Pg.129]

