Computational Statistics with R

Derek S. Young, in Handbook of Statistics, 2014

1 Introduction

Statistical intervals provide a way to quantify the uncertainty about a scalar quantity of a population based on a random sample of data. For example, confidence intervals are used to obtain bounds for an unknown scalar population parameter of interest, such as the mean or variance. Prediction intervals provide bounds on one or more future observations from the sampled population. The multivariate analogues to these intervals are confidence regions and prediction regions, respectively. A third type of interval is a tolerance interval. Tolerance intervals provide bounds that capture at least a specified proportion (P) of the sampled population with a given confidence level (γ). The quantity P is referred to as the content of the tolerance interval, while the confidence level γ reflects the sampling variability. The chosen levels are typically reflected by writing that one seeks to calculate a "P/γ tolerance interval." Both P and γ lie between 0 and 1 and are typically chosen from the set {0.90, 0.95, 0.99}.

Tolerance intervals have been studied quite extensively in the literature, with some of the earliest works being Wilks (1941, 1942) and Wald (1943). Jílek (1981) and Jílek and Ackermann (1989) provide extensive bibliographies on tolerance intervals and regions, while a thorough literature review is available in Patel (1986). A few texts also treat the topic of tolerance intervals. Guttman (1970) and Krishnamoorthy and Mathew (2009) are texts devoted to the theoretical development and application of tolerance intervals and regions. The text by Hahn and Meeker (1991) also handles the topic, but in the broader context of statistical intervals.

Closed-form solutions to tolerance intervals exist for very few settings. As such, approximations and numerical methods often need to be employed. Computing tolerance intervals and regions in R (R Development Core Team, 2014) is quite easy with the tolerance add-on package (Young, 2010). The tolerance package has functions for computing tolerance intervals for numerous univariate parametric distributions, the nonparametric setting, and some regression settings. It also has functions for constructing multivariate tolerance regions. After installing the package, one can type the following for a listing of all functions:

> help(package="tolerance")

The focus of this chapter is to demonstrate some of these functions applied to real datasets. But first, we provide formal definitions of one- and two-sided tolerance intervals.

1.1 Formal Definition

In this section, we provide a basic definition of one- and two-sided tolerance intervals. A more rigorous development of the definitions can be found in Krishnamoorthy and Mathew (2009).

Consider the random variable X with x the corresponding realization. Let 𝐗 = (X_1, X_2, …, X_n) denote a random sample of size n from the distribution F_X(·). A P/γ two-sided tolerance interval, say (L(𝐗), U(𝐗)), depends on the random sample 𝐗 and satisfies the following:

(1) P_𝐗{ P_X[ L(𝐗) ≤ X ≤ U(𝐗) | 𝐗 ] ≥ P } = γ.

The quantities L(X) and U(X) are referred to as tolerance limits. Limits for one-sided upper and lower tolerance intervals satisfy similar expressions, namely

(2) P_𝐗{ P_X[ X ≤ U(𝐗) | 𝐗 ] ≥ P } = γ

and

(3) P_𝐗{ P_X[ L(𝐗) ≤ X | 𝐗 ] ≥ P } = γ,

respectively. Note that one-sided tolerance limits are equivalent to one-sided confidence limits on the corresponding quantiles.

For the two-sided tolerance interval defined in Eq. (1), note that many such limits could be constructed from a given sample. Thus, one may desire to compute a central (or equal-tailed) tolerance interval. Let q_P denote the P-th quantile of F_X(x). A P/γ equal-tailed tolerance interval, written as (L_e(𝐗), U_e(𝐗)), satisfies the following:

(4) P_𝐗{ L_e(𝐗) ≤ q_(1−P)/2 and q_(1+P)/2 ≤ U_e(𝐗) } = γ.

As noted in Krishnamoorthy and Mathew (2009), equal-tailed tolerance intervals are often used to calculate reference limits, which are limits used by clinicians to identify the expected measurement range from a reference population. See Harris and Boyd (1995) for a detailed treatment of reference limits and regions.
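To make the one-sided definitions concrete, the classical distribution-free construction takes an order statistic of the sample as the tolerance limit. The following sketch (in Python rather than R, for illustration; the function name upper_tol_index is ours, not part of the tolerance package) finds the smallest order-statistic index r such that X_(r) is a one-sided upper P/γ tolerance limit, using the fact that F_X(X_(r)) follows a Beta(r, n − r + 1) distribution:

```python
from math import comb

def upper_tol_index(n, P, gamma):
    """Smallest order-statistic index r such that X_(r), the r-th smallest
    of n observations, is a one-sided upper P/gamma tolerance limit.
    Since F_X(X_(r)) ~ Beta(r, n - r + 1), the coverage requirement
    reduces to Pr[Binomial(n, P) >= r] <= 1 - gamma.  Returns None when
    even the sample maximum (r = n) does not reach confidence gamma."""
    for r in range(1, n + 1):
        upper_tail = sum(comb(n, k) * P**k * (1 - P)**(n - k)
                         for k in range(r, n + 1))
        if upper_tail <= 1 - gamma:
            return r
    return None
```

For P = 0.90 and γ = 0.95, the sample maximum first qualifies at n = 29, matching the familiar distribution-free sample-size requirement ⌈log(1 − γ)/log P⌉.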


URL:

https://www.sciencedirect.com/science/article/pii/B9780444634313000085

Structural Elements

François Axisa, Philippe Trompette, in Modelling of Mechanical Systems, 2005

6.3.1.2 Natural modes of vibration and buckling load

A closed-form solution does not exist for the natural modes of vibration of a rectangular plate with various standard boundary conditions such as those defined by the relations [6.73] to [6.77]. It is thus found convenient to discuss the modal properties of rectangular plates by first restricting the analysis to the particular case of four hinged edges, for which the analytical solution is easily derived. In subsection 6.3.1.4, an approximate solution based on the Rayleigh–Ritz method will be presented to deal with other boundary conditions.

Let us consider a rectangular plate with hinged edges and subjected to a compressive force −F_xx^(0). The modal equation is thus written as:

[6.94] D(∂⁴Z/∂x⁴ + 2 ∂⁴Z/∂x²∂y² + ∂⁴Z/∂y⁴) + F_xx^(0) ∂²Z/∂x² − ω²ρhZ = 0

all edges: Z = 0; lateral edges: ∂²Z/∂x² = 0; longitudinal edges: ∂²Z/∂y² = 0

The system [6.94] can be solved by separation of variables, Z(x, y) = A(x)B(y), which gives:

(1/A) d⁴A/dx⁴ + 2[(1/A) d²A/dx²][(1/B) d²B/dy²] + (1/B) d⁴B/dy⁴ + (F_xx^(0)/D)(1/A) d²A/dx² = (ρh/D) ω²

A(0) = A(L_x) = 0;  B(0) = B(L_y) = 0;  d²A/dx² = 0 at x = 0, L_x;  d²B/dy² = 0 at y = 0, L_y

A priori, a difficulty arises here, since the second term is still a function of both x and y. However, by looking at the prestress term, it is recognized that the method can still be applied, because the condition:

(1/A) d²A/dx² = −k_x²

where k_x² is a constant, necessarily implies that:

(1/B) d²B/dy² = −k_y²

where k_y² is another constant.

The general solution of equation [6.94] is thus found to be of the type:

A(x) = a e^(i k_x x) + b e^(−i k_x x);  B(y) = a′ e^(i k_y y) + b′ e^(−i k_y y)

By applying the hinged-edge conditions, these take the form:

A_n(x) = α_n sin(nπx/L_x);  B_m(y) = β_m sin(mπy/L_y),  n, m = 1, 2, 3, …

which leads immediately to the normalized mode shapes:

[6.95] φ_nm(x, y) = sin(nπx/L_x) sin(mπy/L_y),  n, m = 1, 2, 3, …

and to the related natural pulsations:

[6.96] ω_nm = { (1/(ρh)) [ D((nπ/L_x)² + (mπ/L_y)²)² − F_xx^(0) (nπ/L_x)² ] }^(1/2)

From [6.96] it can be immediately concluded that the mode indexed by n and m buckles as soon as the compressive load exceeds the critical value:

[6.97] F_c^(n,m) (L_y/(mπ))² = D (n/(mη) + mη/n)²,  where η = L_x/L_y

Obviously, if the compression load is increased from zero, the buckling load corresponds to the lowest modal critical load. In contrast with the beam case, the buckling load is found to depend upon the aspect ratio η of the plate and does not necessarily correspond to the first n (or m) mode, as illustrated in Figure 6.10, which refers to m = 1 and where L_x is varied and L_y is kept constant. The buckling mode shape is clearly dependent on η. In this simple example, it is possible to find the least stable mode by setting the derivative of [6.97] with respect to n (or m) to zero; for instance, if η = 5, the plate will buckle according to the m = 1, n = 4 mode, see Figure 6.11. Such features are clearly a consequence of the 2D nature of the problem.
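The mode selection just described can be reproduced numerically. The following sketch (our own illustration, with D and L_y normalized to 1; function names are assumptions, not from the text) evaluates the modal critical load of Eq. [6.97] and scans n at m = 1 for the lowest value:

```python
import math

def critical_load(n, m, eta, D=1.0, Ly=1.0):
    """Modal critical load F_c^(n,m) from Eq. [6.97], rearranged as
    F_c = D * (m*pi/Ly)**2 * (n/(m*eta) + m*eta/n)**2, eta = Lx/Ly."""
    return D * (m * math.pi / Ly) ** 2 * (n / (m * eta) + m * eta / n) ** 2

def buckling_mode(eta, n_max=20):
    """Index n (at m = 1) of the lowest modal critical load: the mode in
    which the plate actually buckles as the load grows from zero."""
    return min(range(1, n_max + 1), key=lambda n: critical_load(n, 1, eta))
```

For a square plate (η = 1) the scan returns n = 1 with F_c = 4π²D/L_y²; as η grows, the index of the buckling mode increases, in line with Figure 6.10.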

Figure 6.10. Buckling load as function of the plate aspect ratio and of the mode order

Figure 6.11. Mode shape: n  =   4, m  =   1

Thermal buckling is of practical interest in many applications. Let us consider the case of a square plate subjected to a uniform temperature increase Δθ. The formula [6.96] can be easily adapted to the present problem by using the results established in Chapter 5, subsection 5.3.5.2. We obtain:

[6.98] ω_nm = { [Eπ²(n² + m²)/(ρ(1 − ν)L²)] [ π²(n² + m²)h²/(12(1 + ν)L²) − αΔθ ] }^(1/2)

Here the plate is found to buckle according to the first mode n  = m  =   1. It corresponds to the critical temperature increase:

[6.99] Δθ_C = (1/(6α(1 + ν))) (πh/L)²

Figure 6.12 refers to a steel plate with L = 1 m. The natural frequency of the (1,1) mode is plotted versus Δθ, together with the variation of the critical temperature increase as a function of the plate thickness.
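Eq. [6.99] is easy to evaluate; the sketch below is our own illustration (the function name and the material values in the usage note are assumptions, typical textbook figures for steel rather than values from the text):

```python
import math

def critical_temperature_increase(alpha, nu, h, L):
    """Critical uniform temperature increase of Eq. [6.99] for a hinged
    square plate: dtheta_C = (pi*h/L)**2 / (6*alpha*(1 + nu))."""
    return (math.pi * h / L) ** 2 / (6.0 * alpha * (1.0 + nu))
```

With steel-like values α = 1.2 × 10⁻⁵ K⁻¹, ν = 0.3, L = 1 m, and h = 10 mm, this gives Δθ_C ≈ 10.5 K; since Δθ_C scales as (h/L)², halving the thickness divides the critical increase by four.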

Figure 6.12. Thermal buckling of a square steel plate


URL:

https://www.sciencedirect.com/science/article/pii/S1874705105800088

The Scattering Problem

Giorgio Franceschetti, Daniele Riccio, in Scattering, Natural Surfaces, and Fractals, 2007

1.7 Analytic versus Numerical Evaluation of the Scattered Field

An exact closed-form solution for the electromagnetic field scattered from natural surfaces does not exist: any approach relies on some approximations, and the corresponding solution is approximate as well. In spite of these limitations, classical analytic methods often shed light on the scattering mechanism and sometimes provide the rationale for interpreting scattered data; however, they are rarely satisfactory with respect to the applications reported in Section 1.3. Hence, new scattering methods are continuously proposed with the aim of improving their applicability; but usually, small improvements over classical approaches are paid for by a large increase in method complexity.

In this book, analytic methods to evaluate the scattered field in closed form, tailored to fractal models, are presented. These methods are adapted to the fractal models, and the approximations involved are discussed: each method expresses the scattered field as a function of the surface fractal and electromagnetic parameters, as well as the sensor geometric and electromagnetic parameters. A complete list of these parameters is now presented in detail: this is crucial to identifying the potential inherent in any scattering model.

Surface parameters come first. In Chapter 3, it is shown that fractal models involve two fractal parameters: for instance, fractal dimension and topothesy or, alternatively, the Hurst exponent and the standard deviation of surface increments. If spectral representations are considered, spectral amplitude and slope are used instead. The electromagnetic parameters, the permittivity and conductivity of the homogeneous medium below the surface, are grouped into the complex dielectric constant in the usual nonmagnetic-material case. Coming to the sensor parameters, the narrowband bistatic radar is considered: the usual parameters are the geometric coordinates of the transmitter and the receiver, as well as the radar carrier frequency.

A comment on classical approaches is in order. In Chapter 2, it is shown that classical models for the scattering surface involve at least two parameters, usually the surface standard deviation and the correlation length. However, these two parameters lead to models that account very poorly for the properties of natural surfaces, and this limitation can be mitigated only by increasing the number of parameters. This (marginal) improvement in the model is paid for by a significant complication in the evaluation of the scattered field.

Classical procedures to evaluate the scattered electromagnetic field are available in several excellent books. In some cases, these analytic classical procedures are also briefly referred to, and this is for two reasons. First, to underline why they cannot be applied to the fractal surfaces; and second, to provide a comparison with the results based on fractal geometry.

Numerical solutions to the problem are also available. In this case, the scattered field is not obtained in closed form, and most of the motivations discussed in Section 1.3 cannot be met. However, numerical procedures can be used to test the analytic ones. It is important to underline that numerical procedures also lead to approximate solutions of the scattering problem. Hence, the numerical approach likewise requires theoretical discussion to assess its agreement with actual data.


URL:

https://www.sciencedirect.com/science/article/pii/B9780122656552500015

Surface Fractal Models

Giorgio Franceschetti, Daniele Riccio, in Scattering, Natural Surfaces, and Fractals, 2007

3.8 A Chosen Reference Fractal Surface for the Scattering Problem

In the following chapters, closed-form solutions to the scattering problem are derived. Each solution makes use of a fractal model to represent the geometric properties of natural surfaces and allows the scattered electromagnetic field to be evaluated directly in terms of the surface fractal parameters and the illumination conditions. In those chapters, some comments are devoted to explaining the influence of each single fractal parameter on the scattered field; sometimes this is supported by inspection of the scattering diagrams. It is therefore convenient to introduce a reference surface to be employed in a parametric study of the influence of each fractal parameter on the scattered field: this analysis is implemented by changing the fractal parameters with respect to those of the reference surface.

The selected fractal and illumination parameters identify a reference case somehow typical of the Earth environment as illuminated at microwave frequencies. In particular, all pertinent parameters selected for the reference case are reported in Table 3.2.

TABLE 3.2. Parameters of the illumination conditions and of the reference surface considered in the following chapters to study the influence of the fractal parameters on the scattered field.

Incidence angle ( ϑ i ) 45°
Illuminated area (X, Y) (1 m, 1 m)
Electromagnetic wavelength (λ) 0.1 cm
Hurst exponent (H) 0.8
Tone wave-number spacing coefficient (ν) e
Overall amplitude-scaling factor (B) 0.01 m


URL:

https://www.sciencedirect.com/science/article/pii/B9780122656552500039

Numerical Methods for Roots of Polynomials - Part II

J.M. McNamee, V.Y. Pan, in Studies in Computational Mathematics, 2013

Abstract

We deal here with low-degree polynomials, mostly closed-form solutions. We describe early and modern solutions of the quadratic, and potential errors in these. Again we give the early history of the cubic, and details of Cardan's solution and Vieta's trigonometric approach. We consider the discriminant, which decides what type of roots the cubic has. Then we describe several ways (both old and new) of solving the quartic, most of which involve first solving a "resolvent" cubic.

The quintic cannot in general be solved by radicals, but can be solved in terms of elliptic or related functions. We describe an algorithm due to Kiepert, which transforms the quintic into a form having no x 4 or x 3 term; then into a form where the coefficients depend on a single parameter; and later another similar form. This last form can be solved in terms of Weierstrass elliptic and theta functions, and finally the various transformations reversed.
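One of the "potential errors" in modern solutions of the quadratic mentioned above is catastrophic cancellation when b² ≫ 4ac. A standard remedy, sketched below in Python (the function name is ours), computes the larger-magnitude root first and recovers the other from the product of the roots, c/a:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 (a != 0, nonnegative
    discriminant), avoiding the catastrophic cancellation of the naive
    formula when b**2 >> 4*a*c: the larger-magnitude root is computed
    first, and the other is recovered from the product of roots c/a."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex roots")
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    if q == 0.0:           # only when b == 0 and c == 0: double root at 0
        return 0.0, 0.0
    return q / a, c / q
```

For x² − 10⁸x + 1 = 0 the naive formula returns 0.0 for the small root in double precision, whereas this version recovers it to full accuracy (≈ 10⁻⁸).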


URL:

https://www.sciencedirect.com/science/article/pii/B9780444527301000060

GLOBAL OPTIMIZATION OF HIGH-PERFORMANCE CONCENTRATORS

Narkis Shatz, John C. Bortz, in Nonimaging Optics, 2005

11.1 INTRODUCTION

Many nonimaging optical design problems encountered in practice have no known closed-form solutions. When this is the case, it is possible to obtain high-performance designs by means of global optimization. When applied to the design problem, this process is referred to as inverse engineering. This computationally intensive numerical approach sequentially modifies the reflective and/or refractive surfaces of the optical system, within a given parameterization scheme and constraint set, until global optimality of a performance objective, evaluated on a radiometric model of the system, is achieved. Global optimization can be used to determine reflector and lens configurations that achieve maximal flux transfer to a given target or that produce a desired radiometric distribution, such as an irradiance or intensity pattern.

This chapter provides an overview of the application of inverse engineering to the problem of nonimaging optical design. We begin with a brief summary of the behavior of nonimaging optical systems in terms of properties of mappings. An understanding of this behavior is central to the design problem, since it affects the limits on system performance, the choice of parameterization schemes, and the choice of the class of global-optimization algorithms to be used. We review the various factors affecting the performance of nonimaging optical systems and present generalized formulations of étendue limits for use with inhomogeneous sources and/or targets. Following this preparatory material, we review the formalism of inverse engineering and include references to global-optimization algorithms that are applicable in the domain of nonimaging optical design. Finally, we provide five case studies of globally optimized designs, including three designs that use symmetry breaking to overcome the limitations imposed by skewness mismatches between the source and target.


URL:

https://www.sciencedirect.com/science/article/pii/B9780127597515500117

Statistical Methods for Physical Science

Harry F. Martz, Ray A. Waller, in Methods in Experimental Physics, 1994

14.2.4 Conjugate Priors

It is often difficult to perform the analysis required to obtain a closed-form solution for the posterior distribution P(A_i | D) or π(θ | t). Numerical approximations and computer simulations are sometimes used to provide approximate answers. However, so-called conjugate families of distributions are an exception to the general case. Loosely stated, conjugate families are those for which the prior and posterior distributions are members of the same family. For a more precise definition, discussion, and further references, see [3]. When conjugate priors exist for a Bayesian analysis, it is usually trivial to obtain moments (or other summary measures of interest) from the posterior distribution, as they are often a simple function of the prior moments and the data. The following example illustrates a common conjugate family of distributions.

Consider a gamma prior distribution on λ, the unknown rate of occurrence of events in a Poisson process, where the Poisson data consist of observing x events in a given period of time t. Bayes's theorem gives a gamma posterior distribution on λ as follows:

Gamma prior: π(λ; α, β) = [β^α / Γ(α)] λ^(α−1) e^(−βλ),  λ, α, β > 0
Poisson data: p(x; λ, t) = e^(−λt) (λt)^x / x!,  x = 0, 1, …;  λ, t > 0
Gamma posterior: π(λ | x; α, β, t) = [(t + β)^(x+α) / Γ(x + α)] λ^(x+α−1) e^(−(β+t)λ),  λ > 0

For convenience, we denote a gamma distribution on λ with parameters α and β as Gamma(λ; α, β). The moments of the gamma prior are functions of the parameters α and β; for example, the mean is α/β. Similarly, the parameters of the gamma posterior distribution are (x + α) and (β + t), and the posterior mean is (x + α)/(β + t). Note that the posterior parameters are a function of the prior parameters α and β and the data x and t.
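Because the update only shifts the two parameters, it is trivial to code. A minimal sketch (the function name is ours, chosen for illustration):

```python
def gamma_poisson_update(alpha, beta, x, t):
    """Conjugate update: a Gamma(alpha, beta) prior on a Poisson rate
    lambda, combined with x observed events in exposure time t, yields a
    Gamma(alpha + x, beta + t) posterior.  Returns the posterior shape,
    rate, and mean."""
    post_alpha = alpha + x
    post_beta = beta + t
    return post_alpha, post_beta, post_alpha / post_beta
```

Starting from a Gamma(2, 1) prior and observing x = 5 events in t = 4 time units gives a Gamma(7, 5) posterior with mean 7/5 = 1.4.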

It may be shown that the beta family of distributions is the conjugate family for binomial sample data, while the gamma family is conjugate for exponential data [4].


URL:

https://www.sciencedirect.com/science/article/pii/S0076695X08602641

Handbook of Statistics

David H. Wolpert, ... Dev G. Rajnarayan, in Handbook of Statistics, 2013

1.3 Relevant literature and roadmap of this paper

Pedagogical examples with closed-form solutions for the q produced by an elementary delayed sampling PC optimization algorithm can be found in Wolpert (2004a). Also presented there is a proof that in the infinitesimal limit, many techniques for updating q become identical; in that limit, these techniques all become variants of evolutionary game theory's replicator dynamics. See Wolpert and Bieniawski (2004a) for an explicit formulation of how to apply delayed sampling PC to scenarios where the underlying variable is the trajectory of a multi-dimensional variable through time, i.e., to policy-optimization scenarios. Related connections between game theory, statistical physics, information theory, and PC are discussed in Wolpert (2004b).

See Bieniawski et al. (2004, 2005), Macready et al. (2004), Fan Lee and Wolpert (2004), Wolpert and Bieniawski (2004a,b), and Antoine et al. (2004) for other work on delayed sampling PC, including both software and hardware experiments, for both optimization and control. In particular, see Wolpert and Lee (2004), Wolpert (2004), and Wolpert and Bieniawski (2004) for work showing, respectively, how to use delayed sampling PC to improve Metropolis-Hastings sampling, how to relate delayed sampling PC to the mechanism design work in Wolpert et al. (1999), Wolpert and Tumer (2001, 2002), Wolpert (2003), and how to extend it to continuous move spaces and time-extended strategies.

There are other probability-based approaches to optimization and control, including (Meginniss, 1976; Fudenberg and Levine, 1998; Shamma and Arslan, 2004; Jaynes, 1957; De Bonet, 1997; Rubinstein and Kroese, 2004; Sabes and Jordan, 1995). See (Wolpert, 2004b) for a discussion that relates delayed sampling PC (without using that name) to many of these. See Wolpert et al. (2004) and Wolpert and Tumer (2001) for earlier, less formal work related to delayed sampling PC.

Finally, see Wolpert et al. (2006) for an overview of delayed sampling PC, and for many extensions of the basic delayed sampling PC algorithms, including extensions for the case where we have constraints on X , and extensions that overcome the restrictions imposed by product distributions, while still retaining the distributed nature of the algorithm.

Since immediate sampling is a more recent body of techniques, fewer papers have been produced so far on it. See Rajnarayan et al. (2011, 2007, 2008) and Wolpert and Rajnarayan (2007).

In Section 2 we present a quick summary of the theory of delayed sampling PC. Section 3 then presents some implementation details and experiments. In Section 4 we present a quick summary of the theory of immediate sampling PC. Section 5 then presents some implementation details and experiments.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444538598000047

Systems of nonlinear differential equations

Henry J. Ricardo, in A Modern Introduction to Differential Equations (Third Edition), 2021

Summary

Nonlinear differential equations and systems of nonlinear equations are rarely handled satisfactorily by finding closed-form solutions. In particular, we can't analyze the stability of systems of nonlinear equations as easily as we analyzed the stability of linear systems in Chapter 6. The modern study of nonlinear phenomena relies heavily on the qualitative methods pioneered by H. Poincaré and others at the end of the nineteenth century and the beginning of the twentieth. Current technology puts the power of these qualitative techniques at our disposal.

One of the differences between linear and nonlinear equations is that a nonlinear equation may have more than one equilibrium solution. Another difference is that a solution of a nonlinear equation may "blow up in finite time"; that is, it may become unbounded as t approaches some finite value. A third difference is that a nonlinear equation or system may be extremely sensitive to initial conditions. A slight change in an initial value may lead to drastic changes in the behavior of the solution or solutions.

A point ( a , b ) is an equilibrium point of the general nonlinear autonomous system

ẋ = F(x, y),  ẏ = G(x, y)

if F(a, b) = 0 = G(a, b). If the origin is an equilibrium point, and the functions F and G are "nice" enough, we may be able to write our system in the form

ẋ = ax + by + f(x, y),  ẏ = cx + dy + g(x, y),

where f and g are nonlinear functions and a = F_x(0, 0), b = F_y(0, 0), c = G_x(0, 0), and d = G_y(0, 0). More generally, if (a, b) ≠ (0, 0) is an equilibrium point for the system, we can rewrite the system as

ẋ = A(x − a) + B(y − b) + f(x, y),  ẏ = C(x − a) + D(y − b) + g(x, y),

where f and g are nonlinear and A = F_x(a, b), B = F_y(a, b), C = G_x(a, b), and D = G_y(a, b).

Another way to look at this general situation is to realize that we are translating the equilibrium point (a, b) to the origin by using the change of variables u = x − a and v = y − b. Of course, this means that x = u + a and y = v + b, so we can rewrite the last system as

u̇ = Au + Bv + f(u, v),  v̇ = Cu + Dv + g(u, v),

which has (0, 0) as an equilibrium point. Note that this says that any equilibrium point (a, b) ≠ (0, 0) can be transformed to the origin for the purpose of analyzing the stability of the system.

A nonlinear autonomous system

ẋ = ax + by + f(x, y),  ẏ = cx + dy + g(x, y),

where ad − bc ≠ 0, where lim_{(x,y)→(0,0)} f(x, y)/√(x² + y²) = lim_{(x,y)→(0,0)} g(x, y)/√(x² + y²) = 0, and where the origin is an equilibrium point, is called an almost linear system; the reduced system

ẋ = ax + by,  ẏ = cx + dy

is called the associated linear system (or linear approximation) about the origin.

The important Hartman–Grobman theorem is a qualitative result describing how the equilibrium points of the nonlinear and linear systems are related according to the nature of the eigenvalues of the associated linear system. When the linearized system has zero as an eigenvalue, or when it has pure imaginary eigenvalues (so that the equilibrium solution is a center), further analysis is necessary to determine the nature of the equilibrium points.
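A Lotka–Volterra system illustrates exactly this caveat: its linearization at the nontrivial equilibrium has purely imaginary eigenvalues, so the linear approximation alone is inconclusive. The sketch below is our own minimal illustration (function names are assumptions); the eigenvalues of a 2 × 2 matrix follow directly from its trace and determinant:

```python
def jacobian_lv(x, y, a, b, c, d):
    """Jacobian of the Lotka-Volterra system x' = x*(a - b*y),
    y' = y*(-c + d*x), evaluated at the point (x, y)."""
    return [[a - b * y, -b * x],
            [d * y, d * x - c]]

def eigenvalues_2x2(J):
    """Eigenvalues of a 2x2 matrix from its trace and determinant;
    complex arithmetic handles centers and spirals uniformly."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    root = complex(tr * tr - 4 * det) ** 0.5
    return (tr + root) / 2, (tr - root) / 2

# At the nontrivial equilibrium (c/d, a/b) the trace is 0 and the
# determinant is a*c, so the eigenvalues are +-i*sqrt(a*c): purely
# imaginary, and the linearization (a center) is inconclusive.
```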

The Lotka–Volterra equations, the undamped pendulum, and the van der Pol equation provide important examples of nonlinear systems and their analyses. In particular, the van der Pol oscillator exhibits uniquely nonlinear behavior in having a stable limit cycle, an isolated closed trajectory that (in this case) serves as an asymptotic limit for all other trajectories as t → ∞. Some limit cycles, called unstable limit cycles, repel nearby trajectories. Finally, if trajectories near a limit cycle approach it from one side while being repelled from the other side, the cycle is called semistable.

The concept of a bifurcation that was analyzed in Section 2.7 for first-order equations can be extended to two-dimensional systems. Examples of transcritical, pitchfork, and saddle-node bifurcations are provided. A Hopf bifurcation occurs when an equilibrium solution loses its stability and a limit cycle appears as a parameter varies. Supercritical and subcritical Hopf bifurcations are discussed.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128182178000142

Computer-Generated Proofs of Mathematical Theorems

David M. Bressoud, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.D Petkovšek and Others

In his Ph.D. thesis of 1991, Marko Petkovšek showed how to find a closed-form solution, or to show that no such solution exists, for any recursive formula of the form

∑_{j=0}^{J} a_j(n) f(n + j) = g(n),

in which g(n) and the a_j(n) are polynomials in n. By closed form, we mean a linear combination of a fixed number of hypergeometric terms.

Combined with the WZ method, Petkovšek's algorithm implies that, in theory if not in practice, given any summation of proper hypergeometric terms, there is a completely automated computer procedure that will either find a closed form for the summation or prove that no such closed form exists. A full account of the WZ method and Petkovšek's algorithm is given in the book A = B by Petkovšek, Wilf, and Zeilberger.
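As a toy illustration of what "hypergeometric term" means (our example, not from the text): the central binomial coefficient f(n) = C(2n, n) has consecutive ratio f(n + 1)/f(n) = (4n + 2)/(n + 1), a rational function of n, so it satisfies a first-order recurrence with polynomial coefficients, exactly the kind of term from which Petkovšek's closed forms are built:

```python
from math import comb

def satisfies_recurrence(n_max=60):
    """Verify that f(n) = C(2n, n) satisfies the first-order recurrence
    (n + 1) * f(n + 1) - (4n + 2) * f(n) = 0, i.e. that its consecutive
    ratio is the rational function (4n + 2)/(n + 1)."""
    return all(
        (n + 1) * comb(2 * n + 2, n + 1) - (4 * n + 2) * comb(2 * n, n) == 0
        for n in range(n_max)
    )
```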

Others have worked on implementing and extending the ideas of the WZ method. One of the centers for this work has been a group headed by Peter Paule at the University of Linz in Austria. Ira Gessel has been at the forefront of those who have used this algorithm to implement computer searches that both discovered and proved a large number of new identities for hypergeometric series.


URL:

https://www.sciencedirect.com/science/article/pii/B0122274105008814