
Research data sources

Fig. 9.3 Trends of method selection in nanocapsule research (data source from [5])... [Pg.253]

The control chart is set up to answer the question of whether the data are in statistical control, that is, whether the data may be regarded as random samples from a single population of data. Because of this feature of testing for randomness, the control chart may be useful in searching out systematic sources of error in laboratory research data as well as in evaluating plant-production or control-analysis data. ... [Pg.211]
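
As a rough, simplified illustration of the idea (a minimal sketch, not drawn from the cited text), the 3-sigma limits of a Shewhart-style chart can be computed directly from a series of replicate measurements; observations falling outside those limits suggest a systematic source of error rather than random sampling from a single population. The class and method names below are hypothetical, and for brevity the standard deviation is taken from the pooled sample, whereas practical charts often estimate it from moving ranges or rational subgroups.

    // Minimal sketch (illustrative only): flag observations lying outside the
    // 3-sigma control limits computed from the sample mean and standard deviation.
    import java.util.ArrayList;
    import java.util.List;

    public class ControlChartSketch {

        /** Returns indices of observations outside mean +/- 3 standard deviations.
         *  Assumes at least two observations. */
        public static List<Integer> outOfControl(double[] x) {
            double mean = 0.0;
            for (double v : x) mean += v;
            mean /= x.length;

            double ss = 0.0;
            for (double v : x) ss += (v - mean) * (v - mean);
            double sd = Math.sqrt(ss / (x.length - 1));   // sample standard deviation

            double ucl = mean + 3.0 * sd;                 // upper control limit
            double lcl = mean - 3.0 * sd;                 // lower control limit

            List<Integer> flagged = new ArrayList<>();
            for (int i = 0; i < x.length; i++) {
                if (x[i] > ucl || x[i] < lcl) flagged.add(i);
            }
            return flagged;
        }
    }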

The major water desalination processes that are currently in use or in advanced research stages are described herein. Information on detailed modeling can be found in the literature cited. The major texts on water desalination written since the 1980s are those by Spiegler and Laird (47), Khan (48), which contains many practical design aspects, Lior (49) on the measurements and control aspects, Heitman (40) on pretreatment and chemistry aspects, and Spiegler and El-Sayed (50), an overview primer. Extensive data sources are provided in References 39 and 51. [Pg.242]

A client can research the above topics by using publicly available data sources or commercial data collection services to obtain details about a site. The commercial data collection services can compile public records of remediation activities, citations, or fines from governmental entities and other data of interest for any property. Examples of these types of firms are Vistainfo, IAO Environmental Services and Lexis/Nexis. [Pg.28]

I will refer to the analyses based on these sources collectively as the fact book coding and analysis (FCA) portion of the Evergreen studies. It is believed that these analyses represent the largest exploration of text-based secondary source information concerning the causes of firm performance to date. Furthermore, they are based on data sources that have generally not been drawn upon by organizational researchers because of the formidable methodological issues that have to be confronted and addressed for their utilization. [Pg.91]

Weight percent of opal in surface marine sediments (generally 0 to 5 cm). White areas indicate no data. Source: From Seiter, K., et al. (2004). Deep-Sea Research I 51, 2001-2026. [Pg.415]

Zonally averaged, 10-y temperature trend in °C/y for 1993 to 2003 calculated using a least squares fit from in situ data. Source: After Willis, J. K., et al. (2004). Journal of Geophysical Research 109, C12036. (See companion website for color version.)... [Pg.749]

For other data sources, see Fang, T.-H., et al. (2006). Marine Environmental Research 61, 224-243. [Pg.813]

Fig. 4.9 Scheme showing possible sources and pathways for the occurrence of pharmaceutical residues in the aquatic environment. Reprinted from Heberer T (2002) Occurrence, fate, and removal of pharmaceutical residues in the aquatic environment: a review of recent research data. Toxicology Letters 131: 5-17. Copyright 2002, with permission of Elsevier... [Pg.87]

In 2002, the European Exposure Factors (ExpoFacts) database started as a 2-year project funded by CEFIC-LRI (European Chemical Industry Council, Long Range Research Initiative) to create a European database of factors affecting exposure to environmental contaminants. The aim was to create a public access data source, similar to the US-EPA Exposure Factors Handbook (US-EPA 1997), which has been widely used by European researchers, but with European data. Since 2006, the project has been hosted by the European Commission's Joint Research Centre (JRC 2007). [Pg.325]

A key issue will be the integration and federation of various data sources that are already available. One can imagine a data environment where all data, from toxicogenomics to every biochemical assay around a compound, are streamlined and easily accessible for every researcher. Having this kind of global molecule profile will also make it possible to describe the differences between compounds in detail and finally lead to novel ideas on how to improve lead compounds to ultimately make better drugs. [Pg.316]

Problem of Verification. Much of the data used in this study were gathered by interviews with R&D personnel from the sample firms. These individuals provided both subjective and objective information about their companies and the manner in which environmental protection regulations impact their R&D activities. Given the size and complexity of these sample firms, these data were difficult to verify. However, to help substantiate the validity of the data provided, the researchers analyzed the responses for consistencies or possible contradictions. Comparisons were made between individual responses and data gathered from trade journals, annual reports, and other secondary data sources. Further, in selected instances, the researchers made plant tours to personally observe the manner in which the companies had been affected. [Pg.75]

The core of the EntityDictionaryDao is in the retrieve...() methods. Here we assume the entity dictionaries are stored in a relational database. They can also be accessed from other types of data sources, such as web services, XML, and flat files. The point is to transform them into something that can be accessed easily and quickly by CRS. Take a closer look at the retrievePersonnel() method. Like most other retrieve...() methods, retrievePersonnel() returns a Map. What is in the Map depends on what kind of lookups the clients want to use to access the personnel dictionary. In the context of CRS, the personnel data can be accessed in its entirety or looked up by the research site where the person is located, person id, person's full name, or person's username. Therefore, the Map that retrievePersonnel() returns holds five collections: an entire personnel list, a site-people map, a person id-person map, a person's full name-person map, and a username-person map. [Pg.155]
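
As a rough sketch of the lookup structure described above (illustrative only: the Person fields, map keys, and method shape below are assumptions, not taken from the CRS code; only the method name, the Map return type, and the five lookups come from the text), a retrievePersonnel()-style method might build its result as follows. In the actual DAO the input would presumably come from the relational database query rather than an in-memory list, which is omitted here to keep the sketch self-contained.

    // Minimal sketch of the five lookups returned by a retrievePersonnel()-style method.
    // Person fields and map keys are hypothetical.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class EntityDictionaryDaoSketch {

        /** Placeholder for a personnel dictionary entry (hypothetical fields). */
        public static class Person {
            public final String id;
            public final String fullName;
            public final String username;
            public final String siteId;
            public Person(String id, String fullName, String username, String siteId) {
                this.id = id;
                this.fullName = fullName;
                this.username = username;
                this.siteId = siteId;
            }
        }

        /** Builds one Map holding: the entire personnel list, a site-to-people map,
         *  an id-to-person map, a full-name-to-person map, and a username-to-person map. */
        public Map<String, Object> retrievePersonnel(List<Person> rows) {
            List<Person> all = new ArrayList<>(rows);
            Map<String, List<Person>> bySite = new HashMap<>();
            Map<String, Person> byId = new HashMap<>();
            Map<String, Person> byFullName = new HashMap<>();
            Map<String, Person> byUsername = new HashMap<>();

            for (Person p : rows) {
                bySite.computeIfAbsent(p.siteId, k -> new ArrayList<>()).add(p);
                byId.put(p.id, p);
                byFullName.put(p.fullName, p);
                byUsername.put(p.username, p);
            }

            Map<String, Object> lookups = new HashMap<>();
            lookups.put("all", all);
            lookups.put("bySite", bySite);
            lookups.put("byId", byId);
            lookups.put("byFullName", byFullName);
            lookups.put("byUsername", byUsername);
            return lookups;
        }
    }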

Also, many research papers and data sources express these emissions differently as ... [Pg.376]

There are advantages and disadvantages to using different data sources. Primary data collection, in which data are collected solely for the purposes of the project, is the best source of data because the researcher can request and generate exactly the information that... [Pg.477]

A secondary data source, such as a hospital or HMO database, can be a useful resource. Secondary data sources are particularly useful when gathering economic data because there is usually a direct correlation between how much an insurance company is charged and how much the product or service actually costs to provide. A researcher or clinician can also examine the data on a large pool of patients to see if an intervention actually affects a population of patients. A disadvantage of data such as these is that they are completely anonymous. There are no patient identifiers, and the data cannot be linked to individual patients. As a result, if one sample of patients patronizing one pharmacy is provided an intervention and another sample of patients is not, there is no way to detect economic change through the secondary data source. [Pg.478]

Other research activities related to residential exposure assessment currently being sponsored by the USEPA include the National Human Exposure Assessment Survey (NHEXAS) (website: http://www.epa.gov/heasd/edrb/nhexas.htm). In addition, the USEPA concluded a Co-operative Agreement, referred to as the Residential Exposure Assessment Project (REAP), with the Society for Risk Analysis (SRA) and the International Society of Exposure Analysis (ISEA), which resulted in a reference textbook (Baker et al., 2001) describing relevant methodologies, data sources and research needs for residential exposure assessment. The REAP and other efforts complement other USEPA initiatives, such as the development of the Series 875 guidelines, and will facilitate a sharing of information and other resources between the USEPA, other Federal and State agencies, industry, academia and other interested parties. [Pg.150]

How can pharmaceutical companies convert the results of this productivity into knowledge? Data need to be captured, processed, and interpreted for immediate use, as well as stored and managed to support future product development. The value of data increases when all researchers are able to access, share, and leverage each other's knowledge. Software and databases that can bridge all instruments, data sources, and information centers to meet these challenges head-on are encouraged. [Pg.517]

A more precautionary approach should be informed by the 'most appropriate science', which can be understood as a framework for choosing methods and tools to fit the nature and complexity of the problem (Kriebel et al., 2003). Critical to this framework are the flexibility to integrate a variety of research methods and data sources into the problem evaluation, and to consult with many constituencies to understand the diversity of views on a problem and seek input on alternative solutions. Appropriate science is solutions-based, focused on broadly understanding risks, but also on finding ways to prevent them in the first place. Under this approach, the limitations of science to fully characterize complex risks are openly acknowledged, making it more difficult to use incomplete knowledge as a justification for delaying preventive actions. [Pg.50]

