Big Chemical Encyclopedia


Data input

Input data for the most detailed soil model include parameters describing atmospheric deposition, precipitation, evapotranspiration, litterfall, foliar uptake, root uptake, weathering, adsorption and complexation of Pb, Cd, Cu, Zn, Ni, Cr and Hg. The input data mentioned above vary as a function of location (receptor area) and receptor (the combination of land and soil type) as shown in Table 12. [Pg.524]

The receptors of interest are soils of agricultural (arable lands, grasslands) and non-agricultural (forests, steppes, heath lands, savanna, etc.) ecosystems. In non-agricultural ecosystems, atmospheric deposition is the only input of heavy metals. For forest ecosystems, a distinction should at least be made between coniferous and deciduous forest ecosystems. When the detailed information on the [Pg.524]

Table 12 column headings: Input data | Location | Land use | Soil type [Pg.525]

In order to obtain data for all receptors within all receptor areas (grids), a first good approach is to interpret and extrapolate data by deriving relationships (transfer functions) between the data mentioned before and basic land and climate characteristics, such as land use, soil type, elevation, precipitation, temperature, etc. A summarizing overview of the data acquisition approach is given in Table 13. [Pg.525]
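A "transfer function" in this sense is simply an empirical relationship fitted to whatever measurements are available and then applied to receptors for which only the basic land and climate data are known. A minimal sketch of the idea, with entirely hypothetical variables and made-up numbers, could look like this:

```python
import numpy as np

# Hypothetical illustration only: fit a simple "transfer function" relating an
# input-data item (here an invented weathering input, g/ha/yr) to basic site
# characteristics, then apply it to a receptor area where only those basics are known.
sites = np.array([
    # sandy soil (0/1), precipitation (mm/yr), temperature (deg C), observed value
    [1, 700,  8.5, 0.9],
    [0, 850,  9.0, 1.6],
    [1, 600,  7.5, 0.8],
    [0, 950, 10.0, 1.9],
])
X = np.hstack([sites[:, :3], np.ones((len(sites), 1))])   # add an intercept column
y = sites[:, 3]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)               # fitted transfer function

new_receptor = np.array([1, 800, 8.0, 1.0])                # basics known, target unknown
print("estimated value:", new_receptor @ coef)
```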

Open models of total heterogeneous equilibrium require additional input data on the host rocks and the properties of their minerals, which do not [Pg.565]

Complex formation (hydration, hydrolysis, association, dissociation): concentrations of basis components, C_m,i; full stability constants, K [Pg.565]

The geological submodel includes additional parameters, which do not change with time and characterize the structure, properties and composition of the geological medium, namely  [Pg.566]

outlines of the forecast object, which determine its bounds, configuration, area or volume  [Pg.566]

topographic data, which characterize the surface topography  [Pg.566]

The inputs into the system are the pH and the concentration of enzyme, measured across all forms, both active and inactive. From these pieces of data, the system is required to provide an estimate of the reaction rate. Let us assume that the total concentration of enzyme is 3.5 mmol dm⁻³ and that the pH is 5.7, and use these values to estimate the rate of reaction. [Pg.252]
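The text's own estimation system is not reproduced in this excerpt. Purely as an illustrative stand-in, one might assume a bell-shaped pH-activity profile and a rate proportional to the active enzyme concentration; every parameter below (pH optimum, width, k_cat) is hypothetical:

```python
import math

def estimated_rate(total_enzyme_mM, pH, pH_opt=6.5, pH_width=1.2, k_cat=150.0):
    """Rough rate estimate in mol dm^-3 s^-1; pH_opt, pH_width and k_cat are assumed values."""
    active_fraction = math.exp(-((pH - pH_opt) / pH_width) ** 2)   # bell-shaped pH dependence
    return k_cat * (total_enzyme_mM * 1e-3) * active_fraction      # mmol dm^-3 -> mol dm^-3

print(estimated_rate(3.5, 5.7))   # the inputs given in the text
```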


Subroutine VLDTA2. VLDTA2 loads the binary vapor-liquid equilibrium data to be correlated. If the data are in units other than those used internally, the correct conversions are made here. This subroutine also reads the estimated standard deviations for the measured variables and the initial parameter estimates. All input data are printed for verification. [Pg.217]

LOADS PURE COMPONENT AND BINARY DATA FOR USE IN THE VARIOUS CORRELATIONS FOR LIQUID AND VAPOR PHASE NONIDEALITIES, THEN DOCUMENTS THE INPUT DATA. [Pg.232]

THE SUBROUTINE ACCEPTS BOTH A LIQUID FEED OF COMPOSITION XF AT TEMPERATURE TL(K) AND A VAPOR FEED OF COMPOSITION YF AT TV(K), THE VAPOR FRACTION OF THE FEED BEING VF (MOL BASIS). FOR AN ISOTHERMAL FLASH THE TEMPERATURE T(K) MUST ALSO BE SUPPLIED. THE SUBROUTINE DETERMINES THE V/F RATIO A, THE LIQUID AND VAPOR PHASE COMPOSITIONS X AND Y, AND FOR AN ADIABATIC FLASH, THE TEMPERATURE T(K). THE EQUILIBRIUM RATIOS K ARE ALSO PROVIDED. IT NORMALLY RETURNS ERF=0, BUT IF COMPONENT COMBINATIONS LACKING DATA ARE INVOLVED IT RETURNS ERF=1, AND IF NO SOLUTION IS FOUND IT RETURNS ERF=2. FOR FLASH T.LT.TB OR T.GT.TD FLASH RETURNS ERF=3 OR 4 RESPECTIVELY, AND FOR BAD INPUT DATA IT RETURNS ERF=5. [Pg.322]
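The FLASH subroutine itself is FORTRAN and is not reproduced here. For orientation, the core of an isothermal flash of this kind is the Rachford-Rice equation for the V/F ratio; a minimal sketch in Python, with hypothetical feed composition and K-values, is shown below (the adiabatic case and the K-value correlations, where the actual input data enter, are omitted):

```python
import numpy as np
from scipy.optimize import brentq

def isothermal_flash(z, K):
    """Rachford-Rice flash: return the V/F ratio and liquid/vapor compositions x, y."""
    z, K = np.asarray(z, float), np.asarray(K, float)
    f = lambda beta: np.sum(z * (K - 1.0) / (1.0 + beta * (K - 1.0)))
    if f(0.0) <= 0.0:            # feed is all liquid at these conditions
        beta = 0.0
    elif f(1.0) >= 0.0:          # feed is all vapor
        beta = 1.0
    else:
        beta = brentq(f, 0.0, 1.0)
    x = z / (1.0 + beta * (K - 1.0))
    y = K * x
    return beta, x / x.sum(), y / y.sum()

# hypothetical three-component feed and equilibrium ratios
beta, x, y = isothermal_flash(z=[0.40, 0.35, 0.25], K=[2.3, 0.8, 0.4])
print(beta, x, y)
```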

ERROR RETURN FOR DISCREPANCY IN INPUT DATA FILE 900 ERIN>5... [Pg.343]

The economic model for evaluation of investment (or divestment) opportunities is normally constructed on a computer, using the techniques to be introduced in this section. The uncertainties in the input data and assumptions are handled by establishing a base case (often using the best guess values of the variables) and then performing sensitivities on a limited number of key variables. [Pg.304]

In order to test the sensitivity of the project's economic performance to variations in the base case estimates of the input data, sensitivity analysis is performed. This shows how robust the project is to variations in one or more parameters, and also highlights which of the inputs the project economics are most sensitive to. These inputs can then be addressed more specifically. For example, if the project economics are highly sensitive to a delay in first production, then the scheduling should be more critically reviewed. [Pg.325]
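A minimal sketch of such a sensitivity run, using an entirely hypothetical cash-flow model and base case (all figures invented), might look like:

```python
import numpy as np

def npv(capex, price, rate, opex_frac, discount=0.10, delay_years=0, life=10):
    """Toy net-present-value model: constant annual production starting after a delay."""
    years = np.arange(1, life + 1)
    revenue = np.where(years > delay_years, price * rate, 0.0)
    return -capex + np.sum(revenue * (1.0 - opex_frac) / (1.0 + discount) ** years)

base = dict(capex=500.0, price=60.0, rate=2.0, opex_frac=0.30)   # $MM, $/bbl, MMbbl/yr
print("base case NPV:", round(npv(**base), 1))
for key in base:                                  # vary one input at a time by +/-20 %
    for factor in (0.8, 1.2):
        case = {**base, key: base[key] * factor}
        print(f"{key} x {factor}: NPV = {npv(**case):7.1f}")
```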

A novel optimization approach based on the Newton-Kantorovich iterative scheme applied to the Riccati equation describing the reflection from the inhomogeneous half-space was proposed recently [7]. The method works well with complicated, highly contrasted dielectric profiles and retains stability with respect to noise in the input data. However, like other such algorithms, it requires the measurement data to be given in a broad frequency band. In this work, the method is improved to be valid for input data obtained in an essentially restricted frequency band, i.e. when both low and high frequency data are not available. This... [Pg.127]

The described approach is suitable for the reconstruction of complicated dielectric profiles of high contrast and demonstrates good stability with respect to the noise in the input data. However, the convergence and the stability of the solution deteriorate if the low-frequency information is lacking. Thus, the method needs to be modified before using in practice with real microwave and millimeter wave sources and antennas, which are usually essentially band-limited elements. [Pg.129]

In this section, two illustrative numerical results, obtained by means of the described reconstruction algorithm, are presented. Input data are calculated in the frequency range of 26 to 38 GHz using matrix formulas [8], describing the reflection of a normally incident plane wave from the multilayered half-space. [Pg.130]
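The matrix formulas of [8] are not reproduced in this excerpt; as a rough illustration of how such input data can be generated, the following sketch computes the normal-incidence reflection coefficient of a layered half-space with the standard characteristic-matrix method (the layer permittivities and thicknesses are hypothetical):

```python
import numpy as np

def reflection_coefficient(freq_hz, eps_layers, d_layers, eps_substrate, eps_incident=1.0):
    """Normal-incidence reflection from a stack of homogeneous layers on a half-space."""
    c0 = 299792458.0
    k0 = 2.0 * np.pi * freq_hz / c0
    n_in, n_sub = np.sqrt(eps_incident + 0j), np.sqrt(eps_substrate + 0j)
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_layers, d_layers):
        n = np.sqrt(eps + 0j)
        delta = k0 * n * d                                # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), -1j * np.sin(delta) / n],
                          [-1j * n * np.sin(delta), np.cos(delta)]])
    num = n_in * (M[0, 0] + M[0, 1] * n_sub) - (M[1, 0] + M[1, 1] * n_sub)
    den = n_in * (M[0, 0] + M[0, 1] * n_sub) + (M[1, 0] + M[1, 1] * n_sub)
    return num / den

# hypothetical three-layer profile, swept over the 26-38 GHz band mentioned in the text
freqs = np.linspace(26e9, 38e9, 121)
r = np.array([reflection_coefficient(f, [2.1, 4.5, 3.0], [1.5e-3, 2.0e-3, 1.0e-3], 6.0)
              for f in freqs])
```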

Fig. 2. Reconstruction of three-layered profile using exact and noisy input data.
As we have mentioned, the particular characterization task considered in this work is to determine attenuation in composite materials. At hand we have a data acquisition system that can provide us with data from both PE and TT testing. The approach is to treat the attenuation problem as a multivariable regression problem where our target values, y_n, are the measured attenuation values (at different locations n) and where our input data are the (preprocessed) PE data vectors, u_n. The problem is to find a function ŷ = f(u) such that ŷ_n ≈ y_n, based on measured data, the so-called training data. [Pg.887]
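The regressor itself is not specified in this excerpt (the surrounding discussion of feature relevance suggests a neural network). As a generic stand-in for ŷ = f(u), a regularized linear fit to the training pairs (u_n, y_n) can be sketched as follows:

```python
import numpy as np

def fit_linear_regressor(U, y, lam=1e-2):
    """Fit y ≈ U w + b by ridge (Tikhonov-regularized) least squares; returns (w, b)."""
    U, y = np.asarray(U, float), np.asarray(y, float)
    X = np.hstack([U, np.ones((U.shape[0], 1))])           # absorb the intercept b
    A = X.T @ X + lam * np.eye(X.shape[1])
    wb = np.linalg.solve(A, X.T @ y)
    return wb[:-1], wb[-1]

def predict(U, w, b):
    return np.asarray(U, float) @ w + b

# U: (n_locations, n_features) preprocessed PE vectors; y: measured attenuation values
```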

Because of the double sound path involved in PE measurements of the back wall echo, we approximate the corresponding attenuation at a certain frequency to be twice as large as the attenuation that would be obtained by an ordinary TT measurement. We propose to use the logarithm of the absolute value of the Fourier transform of the back wall echo as input data, i.e... [Pg.889]

The back wall echoes were sampled at 100 MHz and their length was 70 samples, yielding input data vectors of size 35. An example of such an echo is shown in Figure 2 together with its log spectral amplitude. [Pg.890]
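Assuming the 35 components are magnitude-spectrum bins of the 70-sample back-wall echo (the exact bin selection, any windowing, and the factor-of-two path correction are assumptions not spelled out in this excerpt), the input vectors could be formed roughly as:

```python
import numpy as np

def log_spectral_features(echo, n_keep=35):
    """Log magnitude spectrum of a back-wall echo, used as the input data vector u."""
    spec = np.abs(np.fft.rfft(np.asarray(echo, float)))   # 70 samples -> 36 bins (0..35)
    spec = np.maximum(spec, 1e-12)                        # guard against log(0)
    return np.log(spec[1:n_keep + 1])                     # drop DC, keep 35 bins

fs = 100e6                             # 100 MHz sampling rate, as stated in the text
echo = np.random.randn(70)             # placeholder for a measured 70-sample echo
u = log_spectral_features(echo)        # len(u) == 35
```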

In Figure 3 we see how the logarithm of the spectral amplitude affects the estimation results. For each component of the input data vector, u, we have defined the feature relevance, F_n(d), as [Pg.890]

It would also be of interest to investigate whether the attenuation estimates can be further improved by extending our input data vectors. Since attenuation (and porosity) is spatially correlated, we should expect improvements when including data from A-scans in a neighbourhood around the point of interest. This is also a topic for future work. [Pg.893]

A year later, a novel method of encoding chemical structures via typewriter input (punched paper tape) was described by Feldmann [42]. The constructed typewriter had a special character set and recorded on the paper tape the character struck and the position (coordinates) of the character on the page. These input data made it possible to produce tabular representations of the structure. [Pg.44]

The ciphered code has a defined length, i.e., a fixed bit/byte length. A hash code of 32 bits can take 2^32 (or 4 294 967 296) possible values, whereas one of 64 bits can take 2^64 values. However, because of the fixed length, several different data entries could be assigned the same hash code ("address collision"). The probability of collision rises as the number of input data increases relative to the range of values (bit length). In fact, the limits of hash coding are reached at about 10 000 compounds with 32 bits and over 100 million with 64 bits, if collisions in databases are to be avoided [97]. ... [Pg.73]
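The quoted limits can be checked with the birthday bound on the collision probability of an ideal (uniformly distributed) hash function:

```python
import math

def collision_probability(n_entries, n_bits):
    """Approximate probability that at least two of n_entries share an n_bits hash code."""
    n_codes = 2.0 ** n_bits
    return 1.0 - math.exp(-n_entries * (n_entries - 1) / (2.0 * n_codes))

for n, bits in [(10_000, 32), (100_000_000, 64)]:
    print(f"{n:>11,} entries, {bits}-bit hash: P(collision) ≈ {collision_probability(n, bits):.3%}")
```

Already at 10 000 entries a 32-bit code gives a collision probability of roughly 1 %, which is why larger databases move to 64-bit or longer codes.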

The data analysis module of ELECTRAS is twofold. One part was designed for general statistical data analysis of numerical data. The second part offers a module for analyzing chemical data. The difference between the two modules is that the module for mere statistics applies the statistical methods or neural networks directly to the input data, while the module for chemical data analysis also contains methods for the calculation of descriptors for chemical structures (cf. Chapter 8). Descriptors, and thus structure codes, are calculated for the input structures and then the statistical methods and neural networks can be applied to the codes. [Pg.450]

Just like humans, ANNs learn from examples. The examples are delivered as input data. The learning process of an ANN is called training. In the human brain, the synaptic connections, and thus the connections between the neurons. [Pg.454]

In unsupervised learning, the network tries to group the input data on the basis of similarities between these data. Those data points which are similar to each other are allocated to the same neuron or to closely adjacent neurons. [Pg.455]

In this illustration, a Kohonen network has a cubic structure where the neurons are columns arranged in a two-dimensional system, e.g., in a square of n x n neurons. The number of weights of each neuron corresponds to the dimension of the input data. If the input for the network is a set of m-dimensional vectors, the architecture of the network is n x n x m-dimensional. Figure 9-18 plots the architecture of a Kohonen network. [Pg.456]

An input vector is fed into the network and that neuron is determined whose weights are most similar to the input data vector. [Pg.456]

This is done by calculating the Euclidean distance between the input data vector x_s and the weight vectors w_j of all neurons ... [Pg.457]
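The elided expression is presumably the ordinary Euclidean distance between the input vector x_s and the weight vector w_j of each neuron j, with m the dimension of the input data:

d_j = sqrt( Σ_i=1..m (x_si − w_ji)² )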

The weights of the winning neuron are further adapted to the input data. The neurons within a certain distance surrounding the winning neuron are also adapted. Their weight adaptation is performed such that the closer a neuron is to the winning neuron, the more its weights will be adapted. [Pg.457]

The Kohonen network adapts its values only with respect to the input values and thus reflects the input data. This approach is unsupervised learning as the adaptation is done with respect merely to the data describing the individual objects. [Pg.458]

Training cycles One training cycle is completed when all the input data have once been fed into the network. [Pg.464]
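The excerpts above describe one training cycle of a Kohonen (self-organizing) map: find the winning neuron by Euclidean distance, then adapt it and its neighbours, more strongly the closer they lie to the winner. A minimal sketch, with an assumed Gaussian neighbourhood function and a simple linear decay of learning rate and neighbourhood radius (neither of which is specified in the excerpts), is:

```python
import numpy as np

def train_kohonen(data, grid=(10, 10), n_cycles=20, lr0=0.5, sigma0=3.0, seed=0):
    """Unsupervised Kohonen-map training; data is an (n_samples, m) array of input vectors."""
    rng = np.random.default_rng(seed)
    m = data.shape[1]
    weights = rng.uniform(data.min(0), data.max(0), size=(*grid, m))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)   # neuron positions in the 2-D layer

    for cycle in range(n_cycles):                  # one cycle = every input fed in once
        lr = lr0 * (1.0 - cycle / n_cycles)        # learning rate decays with training
        sigma = sigma0 * (1.0 - cycle / n_cycles) + 0.5
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=-1)               # Euclidean distances
            winner = np.unravel_index(np.argmin(d), grid)          # most similar neuron
            grid_dist2 = np.sum((coords - np.array(winner)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))           # neighbourhood weighting
            weights += lr * h[..., None] * (x - weights)           # adapt toward the input
    return weights
```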

Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a net architecture of 16-25-1, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to the training model. [Pg.494]
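Assuming bias weights are included in the count, the figure of 451 adjustable parameters follows directly from the 16-25-1 architecture: (16 + 1) x 25 = 425 weights into the hidden layer plus (25 + 1) x 1 = 26 weights into the output neuron, giving 451 in total.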


See other pages where Data input is mentioned: [Pg.325]    [Pg.327]    [Pg.331]    [Pg.339]    [Pg.341]    [Pg.341]    [Pg.344]    [Pg.344]    [Pg.346]    [Pg.130]    [Pg.465]    [Pg.690]    [Pg.739]    [Pg.887]    [Pg.888]    [Pg.893]    [Pg.1030]    [Pg.513]    [Pg.441]    [Pg.441]    [Pg.455]    [Pg.156]    [Pg.192]    [Pg.209]   

SEARCH



Additional input data

Aspen Plus simulating input data

Availability of Software and Data Input

Computer modeling input data problem with

Computer simulations input data

Data analysis input mapping

Data analysis input-output mapping

Data input tracking

Data input verification

Design input data

Diagnostic system data inputs

DryLab data input

Function input data type

General input data for the MOREHyS model

Getting input data for the calculations

Industrial data input gas temperatures

Input Data File

Input analysis, process data

Input analysis, process data definition

Input analysis, process data example

Input analysis, process data filter

Input analysis, process data loadings

Input analysis, process data multivariate methods

Input analysis, process data steps

Input analysis, process data univariate methods

Input data EXAMS

Input data problem

Input of Instrument Data into Excel

Input-output analysis, process data

Input-output analysis, process data regression

Multiple Input Data Acquisition System

Normalization software typical data inputs

Piping input data

Preparing input data

Preprocessing of Input Data

Project input data

Receptor input data

Requirements on Input Data

Rock-mass input data

Suitability of Input Data
