
Standards, XML

CHEMOMENTUM is working with the XML standard. XML has been identified as the preferred standard within the information technology community for QSAR (see, for instance, the US EPA DSSTox database and the EC projects CHEMOMENTUM, DEMETRA, and OpenMolGRID). [Pg.196]

The POE data contain metadata MD(xi), independent claims IC(xi), dependent claims DC(xi), claim components CC(xi), component synonyms CS(xi), and key phrases KP(xi). The Microsoft Office Visio tool is used to build the patent ontology, which is then translated into standard XML in the IPDSS to analyze the patent context. The IPDSS then identifies potential patent infringement and evaluates patent validity. [Pg.535]
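As a rough illustration of how such ontology elements might be carried in standard XML, the Python sketch below serializes the elements named above (metadata, independent and dependent claims, claim components, component synonyms, key phrases) into an XML document. The element names and sample values are invented for illustration and do not represent the actual IPDSS schema.

    # Hypothetical sketch: serializing POE patent-ontology elements to XML.
    # Tag names and sample data are illustrative only, not the IPDSS schema.
    import xml.etree.ElementTree as ET

    patent = {
        "metadata": {"id": "US1234567", "title": "Example patent"},
        "independent_claims": ["An apparatus comprising ..."],
        "dependent_claims": ["The apparatus of claim 1, wherein ..."],
        "claim_components": ["apparatus", "sensor"],
        "component_synonyms": {"apparatus": ["device", "system"]},
        "key_phrases": ["signal processing"],
    }

    root = ET.Element("patent", id=patent["metadata"]["id"])
    ET.SubElement(root, "title").text = patent["metadata"]["title"]

    claims = ET.SubElement(root, "claims")
    for text in patent["independent_claims"]:
        ET.SubElement(claims, "claim", type="independent").text = text
    for text in patent["dependent_claims"]:
        ET.SubElement(claims, "claim", type="dependent").text = text

    components = ET.SubElement(root, "components")
    for name in patent["claim_components"]:
        comp = ET.SubElement(components, "component", name=name)
        for syn in patent["component_synonyms"].get(name, []):
            ET.SubElement(comp, "synonym").text = syn

    phrases = ET.SubElement(root, "keyPhrases")
    for phrase in patent["key_phrases"]:
        ET.SubElement(phrases, "phrase").text = phrase

    print(ET.tostring(root, encoding="unicode"))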

In the present situation, a paper version of the publication would in many cases be the only output accessible some time after the termination of the project. With the help of the system presented in this paper, the researcher first of all uses templates, enabling him/her to create standardized spreadsheets. A database for references together with document templates allows the researcher to create a publication that meets the required layout and at the same time serves as the basis for a standardized XML file (to be created automatically). The created files, as well as raw data files, are submitted to the central database, which validates the structure and asks for additional information about the resources supplied and their interconnections. The latter can be edited with a graphical user interface (figure 1), which provides access to the particular struc-... [Pg.332]
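The validation step can be pictured with a minimal sketch: a central repository checks a submitted XML file against a schema before accepting it. The schema and file names, and the use of the lxml library, are assumptions for illustration and not details of the system described above.

    # Hypothetical sketch: structural validation of a submitted XML file
    # against an XML Schema (XSD), as a central repository might do on upload.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("publication_schema.xsd"))  # assumed schema file
    document = etree.parse("submitted_publication.xml")              # assumed submission

    if schema.validate(document):
        print("Submission accepted: structure is valid.")
    else:
        # Report what is missing so additional information can be supplied.
        for error in schema.error_log:
            print(f"line {error.line}: {error.message}")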

Knowledge databases and intelligent agents assimilating sensor/measurement information data; data exchange standards (XML, etc.); representation of uncertainties in databases; building and supporting physical property archives (NIST WebBook, etc.); collaborative environments... [Pg.191]

Documents should be provided, where possible, as individual Portable Document Format (PDF) files, while Extensible Markup Language (XML) must be used to provide a user interface that enables navigation and viewing via a standard web browser. This offers the potential for an applicant to make a complete submission on... [Pg.100]

As stated on the OMG (Object Management Group) website (http://www.omg.org/), a lack of data standards results in data conversions, loss of information, lack of interoperability, etc. Current standards du jour are XML (Extensible Markup Language) [17], LSID (Life Sciences Identifiers), and now RDF (Resource Description Framework) from the W3C (World Wide Web Consortium), which is extensible though hard to implement. Substantial work on OO (Object-Oriented) modeling of life science data types takes place at the OMG's LSR (Life Sciences Research) group; this is discussed below. [Pg.174]

Because XML is an open standard, many industries are developing open standards for XML data exchange. CDISC (the Clinical Data Interchange Standards Consortium) is the organization leading XML data standardization for the clinical trial industry. [Pg.68]

XML will become more integral to the work of statistical programmers in the pharmaceutical industry as the standards, applications providers, and vendors make more use of this technology. Eventually you should expect the FDA to move away from SAS transport files to XML files as their standard data format for electronic data submission. [Pg.73]
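As a rough picture of what such a shift means in practice, the hedged sketch below converts a small tabular dataset of the kind usually shipped as a SAS transport file into a generic XML representation. The element and attribute names are invented for illustration and do not correspond to any specific CDISC or FDA schema.

    # Hypothetical sketch: a tabular clinical dataset expressed as generic XML
    # instead of a SAS transport (XPT) file. Tag names are illustrative only.
    import xml.etree.ElementTree as ET

    rows = [
        {"USUBJID": "001", "VISIT": "BASELINE", "AVAL": "5.2"},
        {"USUBJID": "001", "VISIT": "WEEK 4",   "AVAL": "4.8"},
    ]

    dataset = ET.Element("dataset", name="ADLB")
    for row in rows:
        record = ET.SubElement(dataset, "record")
        for variable, value in row.items():
            ET.SubElement(record, "value", variable=variable).text = value

    ET.ElementTree(dataset).write("adlb.xml", encoding="utf-8", xml_declaration=True)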

Stanislaus, R., Jiang, L.H., Swartz, M., Arthur, J., and Almeida, J.S. (2004). An XML standard for the dissemination of annotated 2D gel electrophoresis data complemented with mass spectrometry results. BMC Bioinformatics 5, 9. [Pg.90]

The SPL specification is a document markup standard that specifies the structure and semantics for the regulatory requirements and content of product labeling. The SPL is derived from the HL7 Clinical Document Architecture (CDA), which specifies the structure and semantics of "clinical documents" for the purpose of exchange. This specification includes a detailed description of the information model for structured product labeling objects as well as the XML representation of that model. The information model is based on the HL7 Reference Information Model (RIM) and uses the HL7 Version 3 Data Types (see footnote). This version of the specification focuses on drug product labeling. [Pg.482]
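To make the link between the information model and its XML representation concrete, the sketch below reads a product-labeling XML document and pulls out a couple of fields. The HL7 v3 namespace is real, but the file name and element paths are simplified assumptions rather than a faithful rendering of the SPL schema.

    # Hypothetical sketch: extracting a few fields from an SPL-style labeling
    # document. The element paths below are simplified assumptions, not a
    # complete treatment of the SPL specification.
    import xml.etree.ElementTree as ET

    NS = {"hl7": "urn:hl7-org:v3"}

    root = ET.parse("label.xml").getroot()   # assumed file name

    title = root.find("hl7:title", NS)       # document title, if present
    doc_code = root.find("hl7:code", NS)     # document-type code, if present

    print("Title:", title.text if title is not None else "(none)")
    if doc_code is not None:
        print("Document type code:", doc_code.get("code"))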

The text/plain example above demonstrates that HTTP networks can support distributed information systems when given appropriate languages, that is, languages that describe abstractions appropriate to that information system. Many other standard MIME types are useful. Most are very specific: for example, image/gif is a specific format for bitmapped images, application/pdf is a page description format, and application/tar is a 4.3 BSD archive. Some describe more general abstractions, for example application/xml. Private (unregistered) MIME types are also available, for example chemical/x-pdb and chemical/x-smiles. [Pg.250]
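The sketch below shows how an HTTP response carrying chemical content might declare one of these unregistered chemical MIME types so that a chemistry-aware client can dispatch on the Content-Type header. The server and payload are illustrative, not part of any system cited above.

    # Hypothetical sketch: an HTTP server that labels its response with the
    # unregistered MIME type chemical/x-smiles.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SmilesHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"c1ccccc1\n"  # benzene as a SMILES string
            self.send_response(200)
            self.send_header("Content-Type", "chemical/x-smiles")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), SmilesHandler).serve_forever()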

Frenkel M, Chirico RD, Diky V, et al. (2006) XML-based IUPAC standard for experimental, predicted, and critically evaluated thermodynamic property data storage and capture (ThermoML) (IUPAC Recommendations 2006). Pure Appl Chem 78:541-612 [Pg.146]

Service-oriented architecture (SOA) is a hot topic these days and is considered by many people to be the enterprise computing framework of the future. In SOA, each software unit runs on a piece of hardware as a service that can be called by many different consumers. For example, a compound registration service can be called by a library enumeration tool and by a chemistry e-notebook to fulfill compound registration tasks. The most popular form of SOA is Web services, based on HTTP and XML/SOAP standards, although SOA as a concept has existed for a while and is not limited to Web services. SOA has many advantages, the most important of which are code reuse and improved productivity. However, it also presents many challenges. [Pg.42]
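A minimal sketch of the consumer side of such a service follows, assuming a hypothetical compound-registration service reachable over plain HTTP with an XML payload. The endpoint URL, message elements, and response format are all assumptions for illustration; a real deployment might use SOAP or a REST interface instead.

    # Hypothetical sketch: calling a compound-registration service over HTTP
    # with an XML request body. The endpoint and message format are invented.
    import urllib.request
    import xml.etree.ElementTree as ET

    request_xml = (
        '<registerCompound>'
        '<structure format="smiles">CCO</structure>'
        '<project>LIB-001</project>'
        '</registerCompound>'
    ).encode("utf-8")

    req = urllib.request.Request(
        "http://registration.example.com/api",   # assumed endpoint
        data=request_xml,
        headers={"Content-Type": "application/xml"},
        method="POST",
    )

    with urllib.request.urlopen(req) as response:
        reply = ET.fromstring(response.read())
        print("Registration number:", reply.findtext("registrationNumber"))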

Before reaching the point of complete data integration as given above, there are intermediate levels of data integration that are beneficial to better analysis of data from process analyzers. The best case would be to have all the data in a human-readable form that is independent of the application data format. Over the years several attempts have been made to establish a universal format for spectroscopic data, including JCAMP-DX and Extensible Markup Language (XML). Because many instrument vendors use proprietary databases and there is no universal standard, the problem of multiple data formats persists. This has led to an entire business of data integration by third parties who aid in the transfer of data from one source to another, such as between instruments and the plant's distributed control system (DCS). [Pg.434]

In 2002, IUPAC initiated work on developing the terminology for a standard format for analytical data. The standard format, based on XML, is intended to be universal for all types of analytical instrumentation, without permutations for different techniques. The XML format is designed to have the information content of the data defined in several layers. The most generic information is in the first layer, or core. More specific information about the instrumentation, sample details, and experimental settings is stored in subsequent layers. The layers are defined as core, sample, technique, vendor, enterprise, and user [29]. The existence of a universal format will aid in the analysis of data from multiple sources, as well as in the archival and retrieval of data from historical processes. [Pg.434]
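A hedged sketch of what such a layered document might look like, and how the layers can be read, is shown below. The element names mirror the layer names given in the text and the attribute values are invented; neither is taken from the published schema.

    # Hypothetical sketch: a layered analytical-data document with generic
    # information in the core layer and more specific detail in deeper layers.
    import xml.etree.ElementTree as ET

    document = """
    <analyticalData>
      <core technique="UV-Vis" timestamp="2006-05-01T12:00:00Z"/>
      <sample id="BATCH-42" matrix="aqueous"/>
      <technique wavelengthRange="190-800 nm"/>
      <vendor instrument="ExampleSpec 100"/>
    </analyticalData>
    """

    root = ET.fromstring(document)
    for layer in root:
        print(layer.tag, dict(layer.attrib))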

XML technologies [22, 23] have become a general standard for storing and converting all kinds of data, and technically more robust XML-based alternatives to mmCIF, such as PROXIML [24], have been proposed but have still not been accepted by the Research Collaboratory for Structural Bioinformatics. [Pg.133]

This information is collected by the vulnerability assessment process. A vulnerability report is generated by a vulnerability assessment tool (for example, Nessus) as an XML file. The information in this file is imported into the events database as contextual information associated with hosts. Since vulnerability reports are associated with security references (Bugtraq, CVE, etc.) and IDS signatures are also associated with the same information, it is fairly straightforward to infer the events that create a serious risk for the information system. If an event targets a host associated with vulnerability X and its signature is also associated with vulnerability X, then the risk is serious. This is a standard process that is in use in most intrusion-detection products. [Pg.362]
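The correlation step can be sketched as follows. The XML layout of the report and the event fields are invented placeholders, not the actual Nessus report format or any particular IDS schema.

    # Hypothetical sketch: flag IDS events whose signature references the same
    # vulnerability (e.g. a CVE identifier) as a host in the vulnerability report.
    import xml.etree.ElementTree as ET

    report = ET.parse("vulnerability_report.xml").getroot()  # assumed file name
    host_vulns = {}  # host address -> set of CVE identifiers
    for host in report.findall("host"):
        address = host.get("address")
        host_vulns[address] = {cve.text for cve in host.findall("vulnerability/cve")}

    events = [
        {"target": "10.0.0.5", "signature_cves": {"CVE-2006-0001"}},
        {"target": "10.0.0.9", "signature_cves": {"CVE-2006-9999"}},
    ]

    for event in events:
        shared = host_vulns.get(event["target"], set()) & event["signature_cves"]
        if shared:
            print("Serious risk:", event["target"], "matches", ", ".join(sorted(shared)))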

