Refined polygon

One application of the rules (a step) leads to the construction of a refined polygon. When we are talking about a later step, the input to that step is called the old polygon and the output from it the new polygon. [Pg.49]
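As a concrete illustration (ours, not from the original text), here is one step of Chaikin's corner-cutting scheme in Python. Each edge of the old polygon contributes two vertices of the new polygon, at 1/4 and 3/4 of the way along the edge:

    def chaikin_step(old):
        # One application of Chaikin's corner-cutting rules: the old
        # polygon is a list of (x, y) points; the new polygon gets two
        # vertices per old edge, at 1/4 and 3/4 along it.
        new = []
        for (x0, y0), (x1, y1) in zip(old, old[1:]):
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        return new

    old_polygon = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
    new_polygon = chaikin_step(old_polygon)   # the refined polygon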

By looking at the extent of influence of one control point after 0, 1, 2, ..., ∞ refinements in the cubic B-spline scheme, we can see that the refined polygons converge towards the basis function, and the last non-zero entry converges towards the end of the support region. [Pg.66]
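This convergence can be watched numerically. The sketch below (our own, using the standard binary cubic B-spline rules: stencil (1, 6, 1)/8 at vertices and (4, 4)/8 at mid-edges) refines cardinal data, a single 1 among zeros, and the non-zero entries settle towards samples of the basis function over its four-span support:

    def cubic_bspline_step(old):
        # One binary refinement with the cubic B-spline rules.
        # End points are simply dropped (no special end conditions).
        new = []
        for i in range(1, len(old) - 1):
            new.append((old[i - 1] + 6 * old[i] + old[i + 1]) / 8)  # vertex rule
            new.append((old[i] + old[i + 1]) / 2)                   # mid-edge rule
        return new

    poly = [0.0] * 5 + [1.0] + [0.0] * 5   # cardinal data
    for k in range(4):
        poly = cubic_bspline_step(poly)
    # The non-zero entries approximate the cubic B-spline basis function;
    # their extent, measured in original spans, approaches its support of 4.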

The interpolation degree can also be expressed in terms of the presentation of alternate terms of the product of the unit row eigenvector with the mask as a polynomial in z². For non-primal schemes this approach has to be taken, since for such a scheme interpolation only means that the limit curve interpolates the data: vertices of refined polygons do not coincide with original vertices. [Pg.145]

We therefore have an important property, that of step-independence, to consider. A scheme is step-independent if, for all values of n, the original polygon and the n-times refined polygon have the same limit curve, without the implementation needing to know the value of n. [Pg.160]

A univariate subdivision scheme is a set of rules by which a denser polygon is defined in terms of a sparser one. The same set of rules can then be applied again to make an even denser one, and this can be repeated indefinitely to make such a dense polygon that it looks like a curve. In principle an infinite number of such refinements would indeed give a continuous curve, and it is possible to deduce from the rules some properties of that curve without actually taking an infinite number of steps. [Pg.47]
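In code, applying the same rules repeatedly is just a loop; a minimal sketch, reusing the chaikin_step function from the sketch above:

    def refine(polygon, step, times):
        # Apply one set of refinement rules repeatedly,
        # producing an ever denser polygon.
        for _ in range(times):
            polygon = step(polygon)
        return polygon

    # After half a dozen steps the polygon is usually dense enough
    # to be drawn as if it were the limit curve itself.
    dense = refine(old_polygon, chaikin_step, 6)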

This first example has approximately twice as many vertices in the new polygon as in the old. We call it a binary scheme. If there had been three times as many it would have been a ternary scheme, and such generalisations will be discussed in a few pages' time. In principle at each refinement we can multiply the number of vertices by whatever factor we choose, and this number is called the arity and denoted by the letter a. It is also called the dilation factor, a term which stems from generating function usage. [Pg.50]

Both of the above examples approximately double the number of vertices in the polygon with each step of refinement. They are binary schemes. It is also possible to have schemes in which the number of vertices trebles or quadruples or is multiplied by a still higher factor. As mentioned above, we call that factor the arity, so that binary schemes have an arity of 2, ternary of 3, quaternary of 4, and so on. Some of the mathematics applies to all arities, and in such cases we will denote the arity by the letter a. [Pg.52]
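To make the arity concrete, here is a sketch (ours) of a ternary (a = 3) step for the degree-1 B-spline scheme: every old vertex is kept and each old edge contributes two new vertices, at 1/3 and 2/3 of the way along it, so the vertex count roughly trebles at each step:

    def ternary_linear_step(p):
        # Ternary refinement of the degree-1 B-spline scheme.
        out = []
        for (x0, y0), (x1, y1) in zip(p, p[1:]):
            out.append((x0, y0))
            out.append(((2 * x0 + x1) / 3, (2 * y0 + y1) / 3))
            out.append(((x0 + 2 * x1) / 3, (y0 + 2 * y1) / 3))
        out.append(p[-1])
        return out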

Because each vertex of the refined control polygon is a weighted mean of vertices of the original, the construction of a refined control polygon can be expressed in the form... [Pg.81]
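In matrix terms this is P_new = S P_old, each row of the subdivision matrix S holding the weights of one new vertex. A sketch for the cubic B-spline rules (interior rules only; this is our illustration, not the book's code):

    def subdivision_matrix(n):
        # Subdivision matrix S for one binary cubic B-spline step on
        # n control points. Row r holds the weights expressing new
        # vertex r as a weighted mean of the old vertices.
        rows = []
        for i in range(1, n - 1):
            vertex = [0.0] * n
            vertex[i - 1], vertex[i], vertex[i + 1] = 1 / 8, 6 / 8, 1 / 8
            rows.append(vertex)
            edge = [0.0] * n
            edge[i], edge[i + 1] = 4 / 8, 4 / 8
            rows.append(edge)
        return rows

    S = subdivision_matrix(5)
    # Every row sums to 1, as a weighted mean must.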

What the choice of a diagonal does is to imply a labelling, giving a correspondence between a sequence of points of the old polygon and a sequence of the refined one. In particular it implies a mark point which is an abscissa value which maps into itself under the map from old abscissa values to new ones. In the case of a primal binary scheme, the mark point is at a point of both new and old polygons. In the case of a dual scheme the mark point is at a mid-edge in both old and new. [Pg.82]

The sequence of δx used for taking this limit is conveniently the sequence of polygon edges at successive refinements of the original polygon. [Pg.95]

Note that this doubling of the density is not the refinement of the subdivision step, but a doubling of the density of the original polygon, i.e., a doubling of the amount of work done in collecting data. [Pg.123]

For example, we show here the first three refinements of cardinal data using the mask whose generating function is 2((1+z)/2)²((1+z²)/2)², and the argument above says that this has the same limit curve as applying the mask 2((1+z)/2)⁴ to the polygon with vertices [1, 2, 1]/4. [Pg.134]
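This equivalence can be checked numerically. In the sketch below (our own check, not from the text) masks are coefficient lists, and one binary refinement is "insert a zero after each vertex, then convolve with the mask". After any number of steps the two routes give the same polygon up to a single fine-scale (1, 2, 1)/4 averaging, which vanishes in the limit:

    def convolve(a, b):
        # Polynomial product of coefficient sequences.
        c = [0.0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
        return c

    def mask_step(poly, mask):
        # One binary refinement: upsample by 2, convolve with the mask.
        up = []
        for v in poly:
            up.extend([v, 0.0])
        return convolve(up[:-1], mask)

    h = [0.5, 0.5]                    # (1 + z)/2
    h2 = [0.5, 0.0, 0.5]              # (1 + z^2)/2
    maskA = [2 * c for c in convolve(convolve(h, h), convolve(h2, h2))]
    maskB = [2 * c for c in convolve(convolve(h, h), convolve(h, h))]

    A = [1.0]                         # cardinal data
    B = [0.25, 0.5, 0.25]             # the polygon [1, 2, 1]/4
    for _ in range(3):
        A = mask_step(A, maskA)
        B = mask_step(B, maskB)
    smoothed = convolve(A, [0.25, 0.5, 0.25])
    print(max(abs(x - y) for x, y in zip(smoothed, B)))   # ~0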

We consider first how to implement the refinement process itself, then how to draw the curve defined for a given scheme and a given initial polygon, and then how to compute the primitive operations used within, for example, a Computer Aided Design software system. [Pg.165]

The simplest way of doing this is, of course, to apply enough refinements and then just send the edges of the polygon to the routines which do the actual drawing. Simplicity of implementation is important, and this approach is recommended for any first implementation of subdivision software. [Pg.171]
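A sketch of that first implementation (hypothetical names; draw_line stands in for whatever the graphics system provides): refine until every edge is shorter than a pixel-sized tolerance, then send the edges off to be drawn.

    import math

    def draw_by_refinement(polygon, step, tol, draw_line):
        # Simplest rendering: refine until every edge is shorter
        # than tol, then hand the edges to the drawing routine.
        while max(math.dist(a, b) for a, b in zip(polygon, polygon[1:])) > tol:
            polygon = step(polygon)
        for a, b in zip(polygon, polygon[1:]):
            draw_line(a, b)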

This approach can give a smooth-looking curve with many fewer spans (and so many fewer refinement steps) than the polygon approach. [Pg.172]

However, the B-splines do not converge to the limit curve any faster than the polygon. The order is quadratic in both cases. If actual accuracy of rendering is important, rather than just beauty, there is another way of making a smooth curve out of cubics which is significantly more accurate for a given number of refinement steps. [Pg.172]
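One such construction, sketched here under the assumption that the scheme is the cubic B-spline one (whose standard limit stencils are (1, 4, 1)/6 for position and (-1, 0, 1)/2 for the derivative), evaluates limit points and tangents at the refined vertices and joins them with Hermite cubics; for the cubic B-spline this reproduces the limit curve exactly, and for smooth curves generally it is fourth-order accurate.

    def hermite_spans(points):
        # Limit position and tangent at each interior control point,
        # using the cubic B-spline limit stencils.
        spans = []
        for i in range(1, len(points) - 1):
            a, b, c = points[i - 1], points[i], points[i + 1]
            pos = tuple((u + 4 * v + w) / 6 for u, v, w in zip(a, b, c))
            tan = tuple((w - u) / 2 for u, w in zip(a, c))
            spans.append((pos, tan))
        return spans

    def hermite_eval(p0, m0, p1, m1, t):
        # Cubic Hermite interpolation on one span, t in [0, 1].
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        return tuple(h00 * a + h10 * b + h01 * c + h11 * d
                     for a, b, c, d in zip(p0, m0, p1, m1))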

A non-stationary scheme does not have the necessary eigenvectors to apply the above directly to the original polygon. However, in cases where the scheme converges adequately fast towards its own limit, the eigenvectors of the limit scheme can be used with good accuracy after a relatively small number of refinements. How many such refinements are needed has to be determined for each scheme individually. [Pg.172]

The number of refinements is first worked out from the required precision and the initial control polygon. These refinements are then carried out, but only in the smallest possible region around the place where the evaluation is to be made. Doing it everywhere requires excessive computation and storage space. The number of control points needed is only the number required for the evaluation of the points and derivatives at the end of the required span. [Pg.173]
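A sketch of the locality idea, ours and deliberately simplified (binary cubic B-spline rules, target abscissa t measured in control-point indices and assumed to be well away from the ends):

    def local_refine(points, t, steps, reach=2):
        # Refine only near abscissa t, keeping just the window of
        # control points whose stencils can influence it.  reach = 2
        # suits the cubic B-spline stencils.
        for _ in range(steps):
            lo = max(0, int(t) - reach)
            hi = min(len(points) - 1, int(t) + 1 + reach)
            points = cubic_bspline_step(points[lo:hi + 1])  # sketch above
            # A binary step doubles the index scale, and our step
            # drops one old point at each end, hence the -2.
            t = 2 * (t - lo) - 2
        return points, t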

This problem is much more typical than evaluation at a known abscissa. There are two approaches that we can use. The first is to scan along the polygon to find the region likely to be relevant. Then that region is refined as... [Pg.173]

At each subsequent refinement step, a further shortening takes place, and the limit curve is significantly shorter in parameter space than the original polygon. [Pg.176]

Because the second and third approaches rely on the modification of the polygon and the shortening effect during subdivision cancelling each other out, the first approach is numerically preferable. The second gives the best separation between the end-condition code and the actual refinement, and so is preferable from the point of view of software robustness. [Pg.178]

The idea of adjusting the original control polygon before starting any refinement was introduced in the previous chapter in the context of end conditions, but it can be used more widely. In particular we can often use an approximating subdivision scheme to interpolate a set of given points. [Pg.181]
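For the cubic B-spline scheme this reduces to a tridiagonal solve: the limit stencil is (1, 4, 1)/6, so control ordinates x with (x[i-1] + 4 x[i] + x[i+1])/6 = p[i] give a limit curve through the data p. A sketch (ours), pinning the end control points to the end data and applying the Thomas algorithm; for 2-D points, apply it to each coordinate separately:

    def interpolate_with_cubic_bspline(data):
        # Solve (x[i-1] + 4 x[i] + x[i+1]) / 6 = data[i] for the
        # interior control ordinates, with x pinned to data at both ends.
        rhs = [6.0 * d for d in data[1:-1]]
        rhs[0] -= data[0]             # known end values move to the
        rhs[-1] -= data[-1]           # right-hand side
        diag = [4.0] * len(rhs)
        for i in range(1, len(rhs)):  # forward elimination
            f = 1.0 / diag[i - 1]
            diag[i] -= f
            rhs[i] -= f * rhs[i - 1]
        x = [0.0] * len(rhs)          # back substitution
        x[-1] = rhs[-1] / diag[-1]
        for i in range(len(rhs) - 2, -1, -1):
            x[i] = (rhs[i] - x[i + 1]) / diag[i]
        return [data[0]] + x + [data[-1]]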

