Big Chemical Encyclopedia


Supporting Other Schemata

The presented synthesis mechanism is guided by version 3 of the divide-and-conquer logic algorithm schema. This preliminary restriction (made in Section 11.2) considerably simplifies the notation needed for the theoretical presentation. Supporting version 4 (relations of any non-zero arity) is a straightforward extension, as it requires only some additional vectorization; version 4 is in fact supported by the implementation of the synthesis mechanism. [Pg.198]
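The divide-and-conquer schema the text refers to is formulated over logic programs; purely as an illustration, the same template can be sketched in a functional style, where the parameters play the role of the schema's predicate-variables. The names `minimal`, `solve_directly`, `decompose`, and `compose` below are illustrative placeholders, not the predicate-variables of the actual schema:

```python
def divide_and_conquer(x, minimal, solve_directly, decompose, compose):
    """Generic divide-and-conquer template: the four parameters stand for
    the base-case test, the base-case solver, the decomposition of a
    non-minimal input, and the composition of the sub-solutions."""
    if minimal(x):
        return solve_directly(x)
    parts = decompose(x)
    subs = [divide_and_conquer(p, minimal, solve_directly, decompose, compose)
            for p in parts]
    return compose(subs)

# Instantiating the template yields, e.g., merge sort: a list of length
# <= 1 is minimal; otherwise split in half, sort the halves, merge them.
def merge(parts):
    left, right = parts
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

merge_sort = lambda xs: divide_and_conquer(
    xs,
    minimal=lambda s: len(s) <= 1,
    solve_directly=lambda s: list(s),
    decompose=lambda s: [s[:len(s) // 2], s[len(s) // 2:]],
    compose=merge,
)

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Instantiating the template's parameters mirrors what the synthesis mechanism does when it instantiates the predicate-variables of the schema.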

The support of any number of minimal or non-minimal forms, and of compound induction parameters, is left as future research, since it requires considerable extensions to the tasks already defined. Note, however, that Section 5.2.2 shows that single minimal and non-minimal forms are more frequent than one might believe at first sight. As outlined in Section 8.4, there is still considerable room for designing even more sophisticated divide-and-conquer schemas. [Pg.198]

However, version 3 of the divide-and-conquer schema is hardwired into our synthesis mechanism. This results from a hardwired sequence of instantiations of predicate-variables of that schema, as well as from a hardwired mapping between these predicate-variables and the methods of the developed tool-box. Parameterizing the synthesis mechanism on algorithm schemas and tool-boxes would thus be a first step towards supporting schemas reflecting design strategies other than divide-and-conquer. [Pg.198]

In other words, a Step 0 would be to select an appropriate schema, and the subsequent steps would be either a hardwired sequence (specific to the selected schema) of applications of methods, or a user-guided selection of variables and methods. Our grand view of algorithm synthesis systems is thus that of a large workbench with a disparate tool-box of highly specialized methods, for a set of schemas that covers as much as possible of the space of all possible algorithms. [Pg.198]

In defense of hardwiring the divide-and-conquer schema, we should however make the following two remarks. First, the hardwired sequence of predicate-variable instantiations is justified by our argument (see Section 11.3 and Section 12.3.2) that this sequence is probably the only one possible in the context of logic programming. Second, the hardwired mapping between the predicate-variables and the methods of the tool-box is justified by pure common sense. [Pg.198]


Transparency. Schema evolution should result in minimal or no degradation of the availability or performance of the changed system. Furthermore, applications and other schema consumers should largely be isolated from the changes, e.g., by support for backward compatibility, versioning, or views. [Pg.151]

One of the most widely used chemical structure-encoding schemas in the pharmaceutical industry is the MDL Connection Table (CT) File Format. Both the Molfile and the SD File are based on the MDL CT File Format to represent chemical structures. A Molfile represents a single chemical structure. An SD File contains one or more records, each of which holds a chemical structure together with other data associated with that structure. The MDL CT File Format family also includes the RG File, which describes a single Rgroup query; the rxnfile, which contains the structural information of a single reaction; the RD File, which holds one or more records, each consisting of a reaction and its associated data; and lastly the XD File, MDL's newly developed XML representation of the above. The CT File Format definition can be downloaded from the MDL website at http://www.mdl.com/downloads/public/ctfile/ctfile.jsp. [Pg.3]
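As a rough illustration of the SD File record structure (records separated by `$$$$` delimiter lines, with each associated data item introduced by a `>  <NAME>` header line and terminated by a blank line), here is a minimal reader sketch. It deliberately ignores the connection-table details (counts line, atom and bond blocks), and a real application should use a cheminformatics toolkit rather than this simplification:

```python
import re

def read_sdf_records(text):
    """Split SD File text into records; records are delimited by '$$$$'."""
    records = []
    for chunk in text.split("$$$$"):
        chunk = chunk.strip("\n")
        if chunk:
            records.append(chunk)
    return records

def parse_data_items(record):
    """Extract the associated-data section of one record: each item starts
    with a '>  <NAME>' header and runs until the next blank line."""
    items = {}
    lines = record.splitlines()
    i = 0
    while i < len(lines):
        m = re.match(r">\s*<(.+?)>", lines[i])
        if m:
            name = m.group(1)
            i += 1
            value = []
            while i < len(lines) and lines[i].strip():
                value.append(lines[i])
                i += 1
            items[name] = "\n".join(value)
        else:
            i += 1
    return items
```

For example, a two-record SD File would yield two entries from `read_sdf_records`, each carrying its own structure block and data items.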

Other structure-encoding schemas have been developed by software vendors and academia, such as Daylight SMILES, CambridgeSoft ChemDraw Exchange (CDX), and Chemical Markup Language (CML); each has its advantages and disadvantages. The MDL CT File Format, however, is the only one supported by almost all chemical informatics software vendors. [Pg.3]

The simulation is an affirmation of the basic premise underlying schema theory, namely, that the nature of the connections among situational knowledge elements is critical. The statistical analyses of chapter 7 provide additional support. On the one hand, the simulation allows us to look closely at individual differences by modeling each student's performance. On the other hand, the statistical analyses point to several group characteristics with respect to learning new concepts. Both analyses are based on the cognitive maps. [Pg.360]

The ArrayExpress implementation at the EBI will run on an Oracle 8i platform; however, the database schema will be easily portable to other RDBMSs. The supported data import format will be MAML, a MIAME-compliant XML language. Images will not be stored inside the database; they will be archived on tapes or on direct-access media such as CD-R or DVD-R. [Pg.137]

As indicated in Fig. 1.1b, the individual matchers may be executed sequentially, independently (in parallel), or in some mixed fashion. In the sequential approach, the matchers are not executed independently; rather, the results of initial matchers are used as input by subsequent matchers. A common strategy, used, e.g., in Cupid (Madhavan et al. 2001), is to first execute a linguistic matcher to compare the names of schema elements and then use the obtained similarities as input for structure-based matching. In the parallel matcher strategy, individual matchers are autonomous and can be executed independently of one another. This provides high flexibility in selecting matchers for execution and combination. Furthermore, these matchers may also be physically executed in parallel, e.g., on multicore or multiserver hardware. On the other hand, the autonomy of individual matchers may introduce redundant computations, e.g., of the name similarities to be used for structural matching. The mixed strategy combines sequential and parallel matcher execution and is thus the most complex. [Pg.8]
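The parallel and sequential combination strategies can be sketched as follows. Here `name_similarity` is a toy trigram matcher standing in for a real linguistic matcher, and the function names and signatures are illustrative assumptions, not taken from Cupid or any other system:

```python
def name_similarity(a, b):
    """Toy linguistic matcher: Jaccard similarity over character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)} or {s}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb)

def parallel_combine(pairs, matchers, weights):
    """Parallel strategy: run each matcher independently on every element
    pair, then aggregate the similarity values (here: weighted average)."""
    return {p: sum(w * m(*p) for m, w in zip(matchers, weights))
            for p in pairs}

def sequential_combine(pairs, first, second):
    """Sequential strategy: the second matcher receives the first matcher's
    similarities as input instead of starting from scratch."""
    initial = {p: first(*p) for p in pairs}
    return {p: second(p, initial) for p in pairs}
```

In the parallel variant each matcher computes its own similarity matrix (possibly redundantly), whereas in the sequential variant the structural step reuses the linguistic similarities, as the text describes for Cupid.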

On the other hand, there is little support for propagating schema changes to dependent schema objects, such as views, foreign keys, and indexes. When one alters a table, either the dependent objects must themselves be manually altered in some way, or the alteration must be aborted; the latter is what happens in the majority of cases. For instance, SQL Server aborts any attempt to alter a column if it is part of any index, unless the alteration is within strict limits, namely a widening of a text or binary column. Dropped columns simply cannot participate in any index. DB2 has similar restrictions. Oracle invalidates dependent objects such as views, so that they must be revalidated on next use, and fails to execute them if they do not compile against the new schema version. [Pg.159]
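The two propagation policies described above (abort the alteration when an index depends on the column, or invalidate dependent views for revalidation on next use) can be sketched with a toy catalog. The class and its behavior are illustrative only and do not model any real DBMS:

```python
class Catalog:
    """Toy schema catalog illustrating two propagation policies: abort the
    ALTER when an index depends on the column (SQL Server style, unless the
    change is a widening), or mark dependent views invalid so they must be
    revalidated on next use (Oracle style)."""

    def __init__(self):
        self.indexes = {}  # index name -> set of columns it covers
        self.views = {}    # view name -> (set of referenced columns, valid?)

    def alter_column(self, column, widening=False):
        # Abort policy: a column in any index may only be widened.
        for idx, cols in self.indexes.items():
            if column in cols and not widening:
                raise RuntimeError(f"abort: index {idx} depends on {column}")
        # Invalidate policy: dependent views are revalidated on next use.
        for name, (cols, _) in self.views.items():
            if column in cols:
                self.views[name] = (cols, False)

cat = Catalog()
cat.indexes["ix_price"] = {"price"}
cat.views["v_orders"] = ({"price", "qty"}, True)
cat.alter_column("price", widening=True)  # allowed; view marked invalid
cat.alter_column("qty")                   # no index on qty; succeeds
```

Attempting `cat.alter_column("price")` without `widening=True` would raise, mirroring the abort behavior the text attributes to SQL Server.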

First, for every statement expressed using the SMO language, there are formal semantics associated with it that describe the forward and reverse translation of schemas. The reverse translation defines, for each statement, the inverse action that effectively undoes the translation. The only SMO statements that lack these forward and reverse translations are the CREATE TABLE and DROP TABLE operations; a logical formalism for these statements is impossible, since one is effectively stating that a tuple satisfies a predicate in the before or after state, but the predicate itself does not exist in the other state. The work on PRISM describes quasi-inverses of such operations: for instance, if one had copied a table before dropping it, one could recover the dropped information from other sources. PRISM offers some support for allowing a user to manually specify such inverses. [Pg.162]
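The idea of schema modification statements carrying both a forward translation and an inverse can be sketched as follows. The operator names and the dictionary-based schema representation are illustrative assumptions, not the actual PRISM SMO language; note that the inverse of a drop is only a quasi-inverse, as the text explains:

```python
from dataclasses import dataclass

@dataclass
class RenameTable:
    old: str
    new: str
    def apply(self, schema):
        schema[self.new] = schema.pop(self.old)
    def inverse(self):
        return RenameTable(self.new, self.old)

@dataclass
class AddColumn:
    table: str
    column: str
    def apply(self, schema):
        schema[self.table].append(self.column)
    def inverse(self):
        return DropColumn(self.table, self.column)

@dataclass
class DropColumn:
    table: str
    column: str
    def apply(self, schema):
        schema[self.table].remove(self.column)
    def inverse(self):
        # Only a quasi-inverse: the column reappears, but its contents
        # would have to be recovered from a copy made before the drop.
        return AddColumn(self.table, self.column)

schema = {"orders": ["id", "price"]}
ops = [RenameTable("orders", "purchases"), AddColumn("purchases", "qty")]
for op in ops:
    op.apply(schema)
# Undo the sequence by applying the inverses in reverse order.
for op in reversed(ops):
    op.inverse().apply(schema)
print(schema)  # {'orders': ['id', 'price']}
```

Applying the inverses in reverse order restores the original schema, which is exactly the "undo" property the formal semantics are meant to guarantee.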

Abstract. The merging of schemas or other structured data occurs in many different data models and applications, including ontology merging, view integration, data integration, and computer-supported collaborative work. This paper describes some of the key works on merging schemas and discusses some of their commonalities and differences. [Pg.223]

However, the code is only one element of the complete strategy. The code supports the creation of consistent integration in a complex and heterogeneous system landscape, but it cannot replace an adequate process and system architecture strategy. The dilemma of optimizing discipline-related processes with highly specialized solutions on the one hand, while integrating all of these applications as closely as possible on the other, is not solvable with the COP alone. Today, the different methods, systems, and data models do not follow a corporate schema; incompatibility is thus all but built in. [Pg.568]

