Uncertainty Propagation for Common Operations (fiducia.stats)
Created on Tue Jul 21 12:11:12 2020
Common statistical operations
@author: Myles T. Brophy
Functions

- Propagates variance (\(\sigma^2\)) through Simpson's rule numerical integration.
- Propagates error through trapezoidal-rule integration using uniform or non-uniform grids.
- Propagates uncertainty through the gradient operator of an array with a given step size.
- Propagates uncertainty through the dot product of matrix a and 1D vector b.
- Propagates uncertainty through linear interpolation.
Derivations for propagating uncertainty for some common operations.
Weighted Summation

Given a vector \(X_i\) with \(N\) elements and weights \(a_i\), the variance of the weighted sum is

\[\operatorname{Var}\left(\sum_{i=1}^{N} a_i X_i\right) = \sum_{i=1}^{N} a_i^2 \operatorname{Var}(X_i) + 2 \sum_{i=1}^{N} \sum_{j=i+1}^{N} a_i a_j \operatorname{Cov}(X_i, X_j).\]

If the \(X_i\) are independent, the covariance terms vanish and a simplified version of this can be written as

\[\operatorname{Var}\left(\sum_{i=1}^{N} a_i X_i\right) = \sum_{i=1}^{N} a_i^2 \sigma_i^2,\]

where \(\sigma_i^2 = \operatorname{Var}(X_i)\).
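The independent case above can be sketched in a few lines of NumPy. This is a minimal illustration, not the fiducia implementation; the function name `weighted_sum_variance` is hypothetical.

```python
import numpy as np

def weighted_sum_variance(weights, sigmas):
    """Variance of sum(a_i * X_i) for independent X_i.

    Hypothetical sketch: weights are the a_i, sigmas are the
    standard deviations of the X_i.
    """
    weights = np.asarray(weights, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    # Independent terms: Var = sum(a_i^2 * sigma_i^2);
    # all covariance terms vanish.
    return np.sum(weights**2 * sigmas**2)
```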
Trap Rule Variance

For \(N\) steps of \(\Delta x_{k} = x_{k+1} - x_{k}\) where \(f(x_k) = y_k\), and where each \(y_k\) is independent, an integral can be approximated as

\[I \approx \sum_{k=1}^{N} \frac{\Delta x_k}{2}\left(y_k + y_{k+1}\right) = \frac{1}{2}\left(\sum_{k=1}^{N} \Delta x_k\, y_k + \sum_{k=1}^{N} \Delta x_k\, y_{k+1}\right).\]

To find the variance in the general case, apply the variance of a sum of two correlated terms,

\[\operatorname{Var}(A + B) = \operatorname{Var}(A) + \operatorname{Var}(B) + 2\operatorname{Cov}(A, B),\]

with \(A = \frac{1}{2}\sum_{k} \Delta x_k\, y_k\) and \(B = \frac{1}{2}\sum_{k} \Delta x_k\, y_{k+1}\). The first two \(\operatorname{Var}\) terms are simple:

\[\operatorname{Var}(A) = \frac{1}{4}\sum_{k=1}^{N} \Delta x_k^2\, \sigma_k^2, \qquad \operatorname{Var}(B) = \frac{1}{4}\sum_{k=1}^{N} \Delta x_k^2\, \sigma_{k+1}^2,\]

where \(\sigma_k\) is the uncertainty of \(y_k\). Now for the covariance of the two summations: each interior point \(y_k\) (\(2 \le k \le N\)) appears in both sums, with weight \(\Delta x_k\) in the first and \(\Delta x_{k-1}\) in the second, so

\[\operatorname{Cov}(A, B) = \frac{1}{4}\sum_{k=2}^{N} \Delta x_k\, \Delta x_{k-1}\, \sigma_k^2.\]

Combining all of this gives

\[\operatorname{Var}(I) = \frac{1}{4}\left[\sum_{k=1}^{N} \Delta x_k^2\left(\sigma_k^2 + \sigma_{k+1}^2\right) + 2\sum_{k=2}^{N} \Delta x_k\, \Delta x_{k-1}\, \sigma_k^2\right].\]

In the simple case of a uniform grid, where all \(\Delta x_k = \Delta x\), this can be expanded to

\[\operatorname{Var}(I) = \frac{\Delta x^2}{4}\left(\sigma_1^2 + \sigma_{N+1}^2\right) + \Delta x^2 \sum_{k=2}^{N} \sigma_k^2.\]
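The general-case result can be sketched directly in NumPy. This is an illustrative implementation under the stated assumptions (independent \(y_k\)), not the fiducia code itself; the name `trapz_variance` is hypothetical.

```python
import numpy as np

def trapz_variance(x, sigma_y):
    """Variance of the trapezoidal-rule integral of y(x).

    Hypothetical sketch assuming each y_k is independent with
    standard deviation sigma_y[k]; works on non-uniform grids.
    """
    x = np.asarray(x, dtype=float)
    s2 = np.asarray(sigma_y, dtype=float) ** 2
    dx = np.diff(x)  # Delta x_k, length N for N+1 points
    # Variance terms of the two shifted sums.
    var = np.sum(dx**2 * (s2[:-1] + s2[1:]))
    # Covariance: each interior y_k appears in two consecutive
    # trapezoids, with weights Delta x_{k-1} and Delta x_k.
    cov = 2.0 * np.sum(dx[1:] * dx[:-1] * s2[1:-1])
    return 0.25 * (var + cov)
```

On a uniform grid this reduces to the simplified expression above, which provides a quick sanity check.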
Gradient Variance

From the numpy.gradient documentation, the gradient for discrete step sizes is approximated by

\[\nabla f(x_i) \approx \frac{h_s^2 f(x_i + h_d) + (h_d^2 - h_s^2) f(x_i) - h_d^2 f(x_i - h_s)}{h_s h_d (h_d + h_s)},\]

with the gradient at the first and the last data point being the one-sided differences

\[\nabla f(x_0) \approx \frac{y_1 - y_0}{h_0}, \qquad \nabla f(x_{N-1}) \approx \frac{y_{N-1} - y_{N-2}}{h_{N-2}},\]

for a list of \(x_i\) data points and \(h_d = x_{i+1} - x_i = h_i\) and \(h_s = x_i - x_{i-1} = h_{i-1}\). From this we can say that \(f(x_i) = y_i\), \(f(x_i + h_d) = y_{i+1}\), and \(f(x_i - h_s) = y_{i-1}\). Using the variance of weighted sums, where the weights are the \(h\) terms, we get

\[\operatorname{Var}(\nabla f(x_i)) = \frac{h_s^4\, \sigma_{i+1}^2 + (h_d^2 - h_s^2)^2\, \sigma_i^2 + h_d^4\, \sigma_{i-1}^2}{\left[h_s h_d (h_d + h_s)\right]^2},\]

where \(\sigma_i\) corresponds to the uncertainty in \(y_i\). Keep in mind that there are \(N\) \(y\) values and \(N-1\) \(h\) values, because each \(h_i\) is the difference between adjacent data points. Note that, unlike in the trapezoidal-rule integration, the covariance term is zero because no uncertainty term appears more than once in the sum. At the borders \(i = 0\) and \(i = N-1\), the variance is

\[\operatorname{Var}(\nabla f(x_0)) = \frac{\sigma_0^2 + \sigma_1^2}{h_0^2}, \qquad \operatorname{Var}(\nabla f(x_{N-1})) = \frac{\sigma_{N-2}^2 + \sigma_{N-1}^2}{h_{N-2}^2}.\]
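A NumPy sketch of this propagation, mirroring numpy.gradient's interior/boundary treatment, might look like the following. This is an assumed illustration, not the fiducia implementation; the name `gradient_variance` is hypothetical.

```python
import numpy as np

def gradient_variance(y_sigma, x):
    """Variance of numpy.gradient(y, x) given std devs of y.

    Hypothetical sketch assuming independent y values on a
    possibly non-uniform grid x.
    """
    s2 = np.asarray(y_sigma, dtype=float) ** 2
    x = np.asarray(x, dtype=float)
    h = np.diff(x)            # h_i = x_{i+1} - x_i, length N-1
    out = np.empty_like(s2)
    hs, hd = h[:-1], h[1:]    # h_{i-1} and h_i at interior points
    denom = (hs * hd * (hd + hs)) ** 2
    # Interior: squared weights of y_{i-1}, y_i, y_{i+1} from the
    # non-uniform central-difference formula, applied to variances.
    out[1:-1] = (hd**4 * s2[:-2]
                 + (hd**2 - hs**2) ** 2 * s2[1:-1]
                 + hs**4 * s2[2:]) / denom
    # One-sided differences at the boundaries.
    out[0] = (s2[0] + s2[1]) / h[0] ** 2
    out[-1] = (s2[-2] + s2[-1]) / h[-1] ** 2
    return out
```

On a uniform grid with unit spacing and unit uncertainties, interior points give \(2/4 = 0.5\) and the borders give \(2\), matching the formulas above.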
Dot Product Variance

The dot product of vectors \(X, Y\) is defined as

\[X \cdot Y = \sum_{i=1}^{N} x_i y_i.\]

Uncertainty propagation when multiplying two variables, \(f(u,v) = a u v\), with \(a\) as a constant, is given by

\[\sigma_f^2 = f^2\left[\left(\frac{\sigma_u}{u}\right)^2 + \left(\frac{\sigma_v}{v}\right)^2 + 2\,\frac{\operatorname{Cov}(u, v)}{u v}\right].\]

For the dot product we assume there is no covariance between \(X\) and \(Y\) because they are independent. This gets us

\[\operatorname{Var}(X \cdot Y) = \sum_{i=1}^{N} (x_i \sigma_{y_i})^2 + (y_i \sigma_{x_i})^2.\]
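This sum can be written as a one-liner in NumPy. A minimal sketch under the independence assumption above; the name `dot_variance` is hypothetical, not the fiducia API.

```python
import numpy as np

def dot_variance(x, y, sigma_x, sigma_y):
    """Variance of dot(x, y) for independent x_i and y_i.

    Hypothetical sketch: sigma_x, sigma_y are the elementwise
    standard deviations of x and y.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sx2 = np.asarray(sigma_x, dtype=float) ** 2
    sy2 = np.asarray(sigma_y, dtype=float) ** 2
    # Each term x_i*y_i contributes (x_i sigma_{y_i})^2 + (y_i sigma_{x_i})^2.
    return np.sum(x**2 * sy2 + y**2 * sx2)
```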
Linear Interpolation

Linear interpolation at a point \(x_0 < x < x_1\) is given by

\[y = \frac{y_0 (x_1 - x) + y_1 (x - x_0)}{x_1 - x_0}.\]

The first thing we can do to make this calculation easier is assume that there is no uncertainty in the \(x_i\) terms. This is a shortcut, but it's all that's required for the application in which this linear interpolation is being implemented. Starting with the numerator, we can use the weighted-summation rules to say

\[\operatorname{Var}\left(y_0 (x_1 - x) + y_1 (x - x_0)\right) = (x_1 - x)^2 \sigma_{y_0}^2 + (x - x_0)^2 \sigma_{y_1}^2,\]

where there is no covariance term because we assume \(y_0\) and \(y_1\) are independent. Then, including the denominator as a constant (because we assume all \(x\) values have no uncertainty), we get a variance of

\[\operatorname{Var}(y) = \frac{(x_1 - x)^2 \sigma_{y_0}^2 + (x - x_0)^2 \sigma_{y_1}^2}{(x_1 - x_0)^2}.\]
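The resulting rule is just a weighted sum of the two endpoint variances. A minimal sketch under the same assumptions (exact \(x\) values, independent \(y_0, y_1\)); the name `interp_variance` is hypothetical.

```python
def interp_variance(x, x0, x1, sigma_y0, sigma_y1):
    """Variance of linearly interpolating at x between (x0, y0) and (x1, y1).

    Hypothetical sketch assuming the x values are exact and
    y0, y1 are independent with the given standard deviations.
    """
    w0 = (x1 - x) / (x1 - x0)   # interpolation weight on y0
    w1 = (x - x0) / (x1 - x0)   # interpolation weight on y1
    return w0**2 * sigma_y0**2 + w1**2 * sigma_y1**2
```

At the midpoint both weights are \(1/2\), so equal endpoint uncertainties \(\sigma\) give a variance of \(\sigma^2/2\), as expected from averaging two independent values.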