Page "Astronomical seeing" ¶ 80
from Wikipedia
Some Related Sentences

For and Gaussian
For a Gaussian response system (or a simple RC roll-off), the rise time is approximated by:
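The sentence above refers to the familiar rule of thumb t_r ≈ 0.35/f_c. A minimal sketch deriving that constant for a single-pole RC low-pass (the function name is illustrative):

```python
import math

def rc_rise_time(cutoff_hz: float) -> float:
    """10-90% rise time of a single-pole RC low-pass with cutoff f_c.

    For a unit step input, v(t) = 1 - exp(-t/RC), so the 10-90% rise
    time is RC*ln(9).  With f_c = 1/(2*pi*RC) this gives
    t_r = ln(9) / (2*pi*f_c), i.e. the quoted t_r ~= 0.35 / f_c rule.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    return rc * math.log(9.0)

# ln(9)/(2*pi) ~= 0.3497, usually rounded to 0.35.
print(rc_rise_time(1e6))  # rise time for a 1 MHz cutoff, ~350 ns
```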
For general matrices, Gaussian elimination is usually considered stable in practice when partial pivoting is used, even though there are examples for which it is unstable.
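A minimal, self-contained sketch of Gaussian elimination with partial pivoting (row swaps on the largest available pivot), as described in the sentence above:

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    At each elimination step the row with the largest absolute entry in
    the current column is swapped into the pivot position, which keeps
    the elimination stable for almost all practical matrices.
    A is a list of row lists; b is a list.  Inputs are copied, not modified.
    """
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: choose the row with the largest pivot candidate.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x
```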
For example, if one assumes that data arise from a univariate Gaussian distribution, then one has assumed a Gaussian model.
For example, a mixture of Gaussians with one Gaussian at each data point is dense in the space of distributions.
For example, a common choice for Aₒ is a Gaussian wave packet:
For example, a Gaussian wavefunction ψ might take the form:
For an ideal single-mode Gaussian beam, the D4σ, D86 and 1/e² width measurements would give the same value.
For both families, the lowest-order solution describes a Gaussian beam, while higher-order solutions describe higher-order transverse modes in an optical resonator.
For a Gaussian beam propagating in free space, the spot size (radius) w(z) will be at a minimum value w₀ at one place along the beam axis, known as the beam waist.
For a Gaussian beam, the BPP is the product of the beam's divergence and waist size.
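The two beam statements above can be illustrated with the standard free-space Gaussian-beam formulas, w(z) = w₀√(1 + (z/z_R)²) with Rayleigh range z_R = πw₀²/λ, and far-field divergence λ/(πw₀). The function names below are illustrative:

```python
import math

def spot_size(z, w0, wavelength):
    """Radius w(z) of a Gaussian beam whose waist w0 sits at z = 0.

    Uses the standard free-space result w(z) = w0*sqrt(1 + (z/zR)**2),
    where zR = pi*w0**2/wavelength is the Rayleigh range.
    """
    zr = math.pi * w0 ** 2 / wavelength
    return w0 * math.sqrt(1.0 + (z / zr) ** 2)

def bpp(w0, wavelength):
    """Beam parameter product: waist radius times far-field divergence.

    For an ideal Gaussian beam the divergence is wavelength/(pi*w0), so
    the BPP collapses to wavelength/pi regardless of the waist size.
    """
    divergence = wavelength / (math.pi * w0)
    return w0 * divergence
```

Note that at z = z_R the spot size grows to w₀√2, and the BPP of an ideal beam depends only on the wavelength.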
For a Gaussian intensity (i.e. power density, W/m²) distribution in a single-mode optical fiber, the mode field diameter is that at which the electric and magnetic field strengths are reduced to 1/e of their maximum values, i.e., the diameter at which power density is reduced to 1/e² of the maximum power density, because the power density is proportional to the square of the field strength.
For large numbers the Poisson distribution approaches a normal distribution, typically making shot noise in actual observations indistinguishable from true Gaussian noise except when the elementary events (photons, electrons, etc.) are observed individually.
For example, the inverse of the cumulative Gaussian distribution
For sufficiently nice prior probabilities, the Bernstein–von Mises theorem gives that in the limit of infinite trials the posterior converges to a Gaussian distribution independent of the initial prior, under conditions first outlined and rigorously proven by Joseph Leo Doob in 1948, namely if the random variable in consideration has a finite probability space.
For any affine transformation of the Gaussian plane, z ↦ az + b with a ≠ 0, a triangle is transformed into a triangle of the same shape.
For a Gaussian distribution this is the best unbiased estimator (that is, it has the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution.
For a Gaussian distribution, this means the MSE is minimized when dividing the sum by n + 1, whereas for a Bernoulli distribution with p = 1/2 (a coin flip) the MSE is minimized by a different divisor.
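The divide-by-(n + 1) claim for Gaussian data can be checked with a small Monte-Carlo sketch (function name and sample sizes are illustrative, not from the source):

```python
import random

def mse_of_variance_estimator(divisor_offset, n=10, trials=20000, seed=1):
    """Monte-Carlo MSE of the estimator sum((x - xbar)**2) / (n + offset)
    for samples of size n from a standard normal (true variance = 1).

    offset = -1 gives the familiar unbiased estimator; offset = +1 is
    the MSE-optimal divisor for Gaussian data.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)
        est = ss / (n + divisor_offset)
        total += (est - 1.0) ** 2  # squared error against true variance
    return total / trials

# Dividing by n + 1 should give a smaller MSE than the unbiased n - 1:
print(mse_of_variance_estimator(+1), mse_of_variance_estimator(-1))
```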
For example, for a laser producing pulses with a Gaussian temporal shape, the minimum possible pulse duration Δt is given by
For the HeNe laser with a 1.5 GHz spectral width, the shortest Gaussian pulse consistent with this spectral width would be around 300 picoseconds; for the 128 THz bandwidth Ti:sapphire laser, the corresponding minimum pulse duration would be only 3.4 femtoseconds.
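Both figures follow from the Gaussian time-bandwidth product, Δt·Δν ≈ 0.441 (FWHM quantities); a quick sketch:

```python
# Transform-limited Gaussian pulse: FWHM time-bandwidth product ~= 0.441.
TBP_GAUSSIAN = 0.441

def min_pulse_duration(spectral_width_hz: float) -> float:
    """Minimum possible pulse duration for a Gaussian spectral width."""
    return TBP_GAUSSIAN / spectral_width_hz

print(min_pulse_duration(1.5e9))   # HeNe: ~294 ps, "around 300 picoseconds"
print(min_pulse_duration(128e12))  # Ti:sapphire: ~3.4 fs
```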
For large α, the shape of the Kaiser window ( in both time and frequency domain ) tends to a Gaussian curve.
For example, the Gaussian curvature of a cylindrical tube is zero, the same as for the "unrolled" tube (which is flat).
* For an orthogonal parametrization (i.e., F = 0), Gaussian curvature is:
* For a surface described as the graph of a function, Gaussian curvature is:

For and random
For every real number x, the cumulative distribution function of a real-valued random variable X is given by
For each puzzler, one correct answer is chosen at random, with the winner receiving a $26 gift certificate to the Car Talk store, referred to as the "Shameless Commerce Division".
For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O ( n log log n ) while still maintaining worst-case performance.
For the quantitative analysis, ten articles were selected at random – circumcision, Charles Drew, Galileo, Philip Glass, heart disease, IQ, panda bear, sexual harassment, Shroud of Turin and Uzbekistan – and letter grades of A – D or F were awarded in four categories: coverage, accuracy, clarity, and recency.
For elliptic-curve-based protocols, it is assumed that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible.
For a random sample the harmonic mean is calculated as above.
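The harmonic mean mentioned above is just n divided by the sum of reciprocals; a minimal sketch:

```python
def harmonic_mean(xs):
    """Harmonic mean of a sample: n divided by the sum of reciprocals."""
    return len(xs) / sum(1.0 / x for x in xs)

# Classic average-speed example: 40 km/h out, 60 km/h back -> 48 km/h overall.
print(harmonic_mean([40.0, 60.0]))
```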
For example, consider a model which gives the probability density function of observable random variable X as a function of a parameter θ.
For example, children should hold a greater love for their parents than for random strangers.
For example, it is common for digital balances to exhibit random error in their least significant digit.
For example, seven shuffles of a new deck leaves an 81% probability of winning New Age Solitaire, where the probability is 50% with a uniform random deck (Mann, especially section 10).
For example, one can assign a random number to each card, and then sort the cards in order of their random numbers.
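The assign-a-random-number-and-sort shuffle described above can be sketched in a few lines (function name is illustrative):

```python
import random

def shuffle_by_sort_keys(cards, rng=random):
    """Shuffle by giving each card a random key and sorting on the keys.

    With high-resolution random keys, ties are vanishingly rare, so every
    permutation of the cards is essentially equally likely.
    """
    return [card for _, card in sorted((rng.random(), card) for card in cards)]

deck = list(range(52))
shuffled = shuffle_by_sort_keys(deck, random.Random(0))
print(sorted(shuffled) == deck)  # still a permutation of the original deck
```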
Informally speaking, the prime number theorem states that if a random integer is selected in the range of zero to some large integer N, the probability that the selected integer is prime is about 1/ln(N), where ln(N) is the natural logarithm of N. For example, among the positive integers up to and including N = 10³ about one in seven numbers is prime, whereas up to and including N = 10¹⁰ about one in 23 numbers is prime (where ln(10³) = 6.90775528… and ln(10¹⁰) = 23.0258509…).
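The 1/ln(N) density estimate above can be compared against an exact count with a small sieve (the helper name is illustrative):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: list all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

n = 10 ** 3
count = len(primes_up_to(n))  # exact count of primes up to 1000
print(count, n / count)       # actual density, roughly one in six
print(math.log(n))            # PNT estimate 1/ln(N): about one in seven
```

The rough agreement (one in six actual versus one in seven estimated at N = 10³) is typical: the 1/ln(N) approximation improves as N grows.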
For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses, which are systems with the usual spin degrees of freedom for i = 1, …, N, with special fixed "random" couplings. Here the εᵢ and εₖ quantities can independently and "randomly" take the values ±1, which corresponds to a most simple gauge transformation. This means that thermodynamic expectation values of measurable quantities, e.g. of the energy, are invariant.
#* For security purposes, the integers p and q should be chosen at random, and should be of similar bit-length.
For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of non-zero length) may be positive.
For the most part, statistical inference makes propositions about populations, using data drawn from the population of interest via some form of random sampling.
For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by ' simple ' random sampling.
For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges-Lehmann-Sen estimator, which has good properties when the data arise from simple random sampling.
For n independent and identically distributed continuous random variables X₁, X₂, …, Xₙ with cumulative distribution function G(x) and probability density function g(x), the range of the Xᵢ is the range of a sample of size n from a population with distribution function G(x).
For n nonidentically distributed independent continuous random variables X₁, X₂, …, Xₙ with cumulative distribution functions G₁(x), G₂(x), …, Gₙ(x) and probability density functions g₁(x), g₂(x), …, gₙ(x), the range has cumulative distribution function
For n independent and identically distributed discrete random variables X₁, X₂, …, Xₙ with cumulative distribution function G(x) and probability mass function g(x), the range of the Xᵢ is the range of a sample of size n from a population with distribution function G(x).
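The sample-range statements above can be illustrated numerically. A small Monte-Carlo sketch (function name and trial count are illustrative) compares the mean range of n iid Uniform(0, 1) draws with the exact order-statistics value (n − 1)/(n + 1):

```python
import random

def expected_range_uniform(n, trials=20000, seed=0):
    """Monte-Carlo estimate of E[max - min] for n iid Uniform(0,1) draws.

    The exact value is (n - 1)/(n + 1), a standard order-statistics result.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        total += max(xs) - min(xs)
    return total / trials

print(expected_range_uniform(5))  # exact value is 4/6 ~= 0.667
```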
For the univariate case, indeed, for random variables X and Y, the expectation of their product is an inner product.
For every finite subset, the k-tuple is a random variable taking values in.
For example, a pianist might simply sit and start playing chords, melodies, or random notes that come to mind in order to find some inspiration, then build on the discovered lines to add depth.
