Data workflow

The central engine of this data workflow is the process of spectral deconvolution. During spectral deconvolution, sets of multiply charged ions associated with particular proteins are reduced to a simplified spectrum representing the neutral mass forms of those proteins. Our laboratory makes use of a maximum entropy-based approach to spectral deconvolution (Ferrige et al., 1992a and b) that attempts to identify the most likely distribution of neutral masses that accounts for all data within the m/z mass spectrum. With this approach, quantitative peak intensity information is retained from the source spectrum, and meaningful intensity differences can be obtained by comparison of LC/MS runs acquired and processed under similar conditions. [Pg.301]
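As a simple illustration of the relation this exploits (not the maximum entropy algorithm itself, which iterates over candidate neutral-mass distributions), the sketch below collapses a hypothetical charge-state series onto its neutral mass via M = z·(m/z) − z·m_proton; all peak values and charge assignments are invented for illustration.

```python
# Sketch only: collapsing a multiply charged ion series to a neutral mass.
# A real maximum entropy deconvolution searches for the most probable
# neutral-mass distribution; here we just apply M = z*(m/z) - z*m_proton
# to a hypothetical charge-state envelope for one protein.

PROTON_MASS = 1.00728  # Da

# Hypothetical (m/z, charge) pairs for one protein's charge-state envelope.
peaks = [(893.1, 19), (942.6, 18), (998.0, 17), (1060.3, 16)]

neutral_masses = [z * mz - z * PROTON_MASS for mz, z in peaks]
mean_mass = sum(neutral_masses) / len(neutral_masses)

print("Neutral mass estimates (Da):", [round(m, 1) for m in neutral_masses])
print(f"Mean neutral mass: {mean_mass:.1f} Da")
```

Because each charge state maps back to nearly the same neutral mass, the series collapses to a single peak in the deconvoluted spectrum, while the summed intensities preserve the quantitative information mentioned above.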

Sustainability of software solutions for LIMS and data workflow... [Pg.17]

Informatic environment: pivotal data management for each of the preceding steps, involving data capture, storage in a consistent way, querying, and streaming data analysis (data workflow). [Pg.241]
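As a rough, purely illustrative sketch of those steps (capture, consistent storage, query, and streaming analysis), the hypothetical class below is not tied to any particular LIMS or informatics product:

```python
# Hypothetical sketch of a data workflow: capture, consistent storage,
# query, and simple streaming analysis. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataWorkflow:
    records: list[dict] = field(default_factory=list)  # consistent storage

    def capture(self, record: dict) -> None:
        """Capture a data record from an instrument or upstream step."""
        self.records.append(record)

    def query(self, predicate: Callable[[dict], bool]) -> list[dict]:
        """Query the stored records."""
        return [r for r in self.records if predicate(r)]

    def stream_analyze(self, fn: Callable[[dict], float]) -> list[float]:
        """Apply an analysis function to each record, as a simple stream."""
        return [fn(r) for r in self.records]

wf = DataWorkflow()
wf.capture({"sample": "A1", "signal": 12.3})
wf.capture({"sample": "A2", "signal": 8.7})
print(wf.query(lambda r: r["signal"] > 10.0))
print(wf.stream_analyze(lambda r: r["signal"] * 2.0))
```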

Figure 12.4 Data workflow with statistical analysis. The data processing and analysis follow this sequence: (a) acquisition of files for the protein in the apo state and bound with the ligands of interest over several time points; (b) isotope measurement and %D determination; (c) the aggregate sample percent deuteration is the mean of the individual on-exchange time point mean values, and the standard deviation for the sample is the RMS error of each time point divided by the number of time points; (d) perturbation %D = ligand aggregate %D − apo aggregate %D, with standard error calculated as the root sum squared of the aggregate sample standard deviations; (e) results are subjected to t-test and Tukey comparison; (f) visualization components represent the results. Reproduced from Ref. [1], with permission from Elsevier.
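Steps (c) and (d) of the caption can be sketched numerically as follows; the per-time-point %D values and their errors are hypothetical, and the formulas follow one reading of the caption (mean of the time-point means, RMS of the per-time-point errors divided by the number of time points, and root-sum-squared propagation for the perturbation):

```python
# Sketch of the aggregation and error propagation in steps (c)-(d).
# All %D values and errors below are hypothetical.
import math

def aggregate(timepoint_means, timepoint_errors):
    """Aggregate %D = mean of the on-exchange time point means; the sample
    standard deviation is taken as the RMS of the per-time-point errors
    divided by the number of time points (one reading of the caption)."""
    n = len(timepoint_means)
    mean_d = sum(timepoint_means) / n
    rms_err = math.sqrt(sum(e * e for e in timepoint_errors) / n)
    return mean_d, rms_err / n

apo_d, apo_sd = aggregate([42.1, 55.3, 61.8], [1.2, 0.9, 1.4])
lig_d, lig_sd = aggregate([35.6, 48.2, 54.9], [1.1, 1.0, 1.3])

perturbation = lig_d - apo_d                  # step (d): ligand - apo
std_err = math.sqrt(apo_sd**2 + lig_sd**2)    # root sum squared of the SDs

print(f"Perturbation %D = {perturbation:.1f} +/- {std_err:.1f}")
```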
Robust Design Optimization for Earthquake Loads, Fig. 7 Data workflow in uncertainty analysis with -r quantities... [Pg.2370]

Time Reduction and Increased Efficiencies. Time reduction and the corollary of increased efficiencies appear to be the main factors driving the short-term benefits deriving from implementation of an electronic notebook system. The argument is fairly simple, and there are good data [1] to show that the benefits are real and realistic. Most studies and projects associated with implementation of ELN within a research discipline focus on the reduction in time taken to set up a typical experiment and to document the experiment once completed. Further time savings are evident when examining workflows such as report or patent preparation, or when thinking about time taken to needlessly repeat previously executed experiments. [Pg.219]

Contextual design is a flexible software design approach that collects multiple customer-centered techniques into an integrated design process [7]. The approach is centered around contextual inquiry sessions in which detailed information is gathered about the way individual people work and use systems and the associated information flow. The data from each contextual inquiry session are used to create sequence models that map the exact workflow in a session along with any information breakdowns, flow models that detail the flow of information between parties and systems (much akin to but less formal... [Pg.234]

Most e-clinical software consists of integrated suites of applications that support the clinical research process, including various modes of data entry such as in-house data entry, remote data capture, batch data load, and scan forms. These suites enable customers to quickly and easily design studies, capture clinical data, and automate workflow. Some e-clinical software systems are also Internet based. [Pg.614]

The ultimate goal of all scientists is to analyze their data thoroughly until they are confident it is valid, and then to analyze it in a more global context and discuss it with their colleagues. This workflow requires enterprise-level IT tools that can effectively compare and correlate multiple HTS campaigns that generated millions of results from hundreds of thousands of compounds, recognize and chart trends and hierarchies of association, help the scientist visualize and annotate them, and render the visualizations in media that can be used to share that vision with other members of the team. [Pg.63]

We have discussed individual analyses and the demands of optimizing instrumentation. However, an analytical laboratory must deal with series of samples, and another factor must be considered if we want to optimize complete workflows: cycle time. Cycle time is defined as the time from finishing the analysis of one sample to the time the next sample is finished. It can easily be determined on Microsoft Windows-based operating systems by examining the data-file creation time stamps of two consecutive samples; a better approach is to average over a reasonable number of samples. [Pg.108]
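A minimal sketch of this timestamp-based estimate follows, assuming the data files can be addressed as plain paths (the file names are hypothetical); os.path.getctime returns the creation time on Windows:

```python
# Sketch: estimate cycle time from data-file creation time stamps,
# averaged over a series of consecutive samples. File names are hypothetical.
import os

def mean_cycle_time(data_files):
    """Average time (s) between creation of consecutive sample data files."""
    times = sorted(os.path.getctime(f) for f in data_files)
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical usage:
# print(mean_cycle_time(["sample_001.raw", "sample_002.raw", "sample_003.raw"]))
```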

As vitally important as the capabilities for experimental planning, screening, and data analysis are the procedures for preparation of inorganic catalysts. In contrast to the procedures usually applied in conventional catalyst synthesis, the synthetic techniques have to be adapted to the number of catalysts required in the screening process. Catalyst production can become a bottleneck, so HTE- and CombiChem-capable synthesis technologies must be applied to keep the workflow seamless. [Pg.385]

Figure 16.5 summarizes our approach to using validated QSAR models for virtual screening as applied to the anticonvulsant dataset. It presents a practical example of the drug discovery workflow that can be generalized to any dataset for which sufficient data are available to develop reliable QSAR models. [Pg.448]

An important aspect in all pipelines is the evolution of the decision-making. As more data become available, the structure determination paths can be scrutinized thoroughly in order to increase the efficiency of the overall workflow. [Pg.166]

