Approximation error
Approximation error (also called uncertainty or numerical error) is a measure of how closely a calculated or measured value agrees with the real or theoretical value of the quantity in question. An important aspect of approximation errors is their numerical stability: how the approximation error is propagated through a numerical analysis algorithm as it executes.
The concept of error is inherent in numerical computation. In every problem it is essential to keep track of the errors committed in order to estimate how good an approximation the computed solution is.
Types of errors
The errors associated with all numerical calculations have their origin in two main factors:
Inherent in the formulation of the problem
$$\mathrm{error}_{\text{absolute}} = \left|\, \mathrm{value}_{\text{real}} - \mathrm{value}_{\text{approximate}} \,\right|$$
By definition, the absolute error is always positive, since it is the absolute value of the difference between the real value and the approximate value. The signed difference between the real value and the approximate or measured value (usually called the real error) can, however, be positive when the approximation falls short of the real value (error by defect) or negative when it exceeds it (error by excess).
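As a minimal illustration of these definitions (the function names and sample values here are our own, not from the source), in Python:

```python
def absolute_error(real_value: float, approximate_value: float) -> float:
    # Absolute error: |real value - approximate value|, always non-negative.
    return abs(real_value - approximate_value)

def real_error(real_value: float, approximate_value: float) -> float:
    # Signed ("real") error: positive if the approximation falls short of
    # the real value, negative if it exceeds it.
    return real_value - approximate_value

# Illustrative values: approximating pi by 22/7.
PI = 3.141592653589793
print(absolute_error(PI, 22 / 7))  # about 1.26e-3
print(real_error(PI, 22 / 7))      # negative: 22/7 exceeds pi
```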
$$\mathrm{error}_{\text{relative}} = \frac{\mathrm{error}_{\text{absolute}}}{\mathrm{value}_{\text{real}}}$$
If the absolute error is denoted ε, the relative error ε_r is:
$$\varepsilon_r = \frac{\varepsilon}{\mathrm{value}_{\text{real}}} \approx \frac{\varepsilon}{\mathrm{value}_{\text{measured}}}$$
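A matching Python sketch of the relative error (again with illustrative names and values of our own):

```python
def relative_error(real_value: float, approximate_value: float) -> float:
    # Relative error: absolute error divided by the real value.
    return abs(real_value - approximate_value) / abs(real_value)

# In practice the real value is unknown, so the measured (approximate)
# value is often used in the denominator instead, as in the text.
PI = 3.141592653589793
print(relative_error(PI, 22 / 7))  # about 4.0e-4
```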
One source of this type of error is the imprecision of physical data: physical constants and empirical measurements. Errors in the measurement of empirical data are generally random in nature, and their analytical treatment is essential in order to validate any result obtained computationally.
Consequence of the method used to find the solution to the problem
Regarding the second type of error (computational error), there are three main sources:
1. Mistakes in carrying out the operations (gross errors or blunders). This source of error is well known to anyone who has performed calculations by hand or with a calculator. The use of computers has greatly reduced the likelihood of such errors occurring. However, the probability that the programmer makes one of these mistakes (correctly calculating the wrong result) is not negligible. Furthermore, the presence of undetected bugs in the compiler or system software is not unusual. When it is not possible to verify that the computed solution is reasonably correct, the probability that a gross error has been made cannot be ignored. Nevertheless, this is not the source of error that will concern us the most.
2. The error caused by solving not the problem as formulated, but an approximation of it. It is generally caused by replacing an infinite process (a summation or an integration) or an infinitesimal one (differentiation) with a finite approximation. Some examples are:
- The calculation of an elementary function (for example, sin x) using only n terms of the infinitely many that make up its Taylor series expansion (see the sketch below).
- Approximation of the integral of a function by a finite sum of values of the function, as in the trapezoidal rule (also sketched below).
- Solution of a differential equation by replacing the derivatives with approximations (finite differences).
- Solution of the equation f(x) = 0 by the Newton-Raphson method: an iterative process that, in general, converges only as the number of iterations tends to infinity.
We will call this error, in all its forms, truncation error, since it results from truncating an infinite process to obtain a finite one. Naturally, we are interested in estimating, or at least bounding, this error in any numerical procedure.
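As an illustration of the first example above, the following sketch (our own; the function name sin_taylor is illustrative) approximates sin x with the first n terms of its Taylor series and shows the truncation error shrinking as n grows:

```python
import math

def sin_taylor(x: float, n_terms: int) -> float:
    # Partial sum of the Taylor series of sin around 0:
    # x - x^3/3! + x^5/5! - ...
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

x = 1.0
for n in range(1, 6):
    approx = sin_taylor(x, n)
    # Truncation error: what is lost by keeping only n terms.
    print(n, approx, abs(math.sin(x) - approx))
```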
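Similarly, a minimal sketch of the trapezoidal rule mentioned in the second example (the interval and integrand are chosen here purely for illustration):

```python
import math

def trapezoidal(f, a: float, b: float, n: int) -> float:
    # Replace the exact integral of f over [a, b] by a finite sum:
    # the trapezoidal rule with n subintervals.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# The integral of sin over [0, pi] is exactly 2; the truncation error
# shrinks roughly like 1/n^2 as the number of subintervals grows.
for n in (4, 16, 64):
    approx = trapezoidal(math.sin, 0.0, math.pi, n)
    print(n, approx, abs(2.0 - approx))
```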
3. Finally, the other major source of error originates from the fact that arithmetic calculations cannot be performed with unlimited precision. Many numbers require infinitely many digits to be represented exactly, yet to operate with them they must be rounded. Even when a number can be represented exactly, some arithmetic operations can introduce errors (division can produce numbers that need to be rounded, and multiplication can produce more digits than can be stored). The error introduced when rounding a number is called rounding error.
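A short Python demonstration of rounding error (this is standard double-precision floating-point behavior, not specific to this text):

```python
# 0.1 has no exact binary floating-point representation, so rounding
# error appears even in trivial arithmetic.
print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False

# Repeated addition accumulates the rounding error.
total = 0.0
for _ in range(10):
    total += 0.1
print(total, total == 1.0)  # 0.9999999999999999 False
```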
Physical experiments and approximation errors
The measurements of the different physical quantities involved in a given experiment, whether obtained directly or derived through a formula from other directly measured quantities, can never be exact. Because of the limited precision of every measuring instrument, as well as other external factors, it must be accepted that the exact value of a quantity cannot be known: there will always be an error, however small. Therefore, any numerical result obtained experimentally must be presented together with a number that indicates how far that result can be from the exact value. This is a margin or range of error; for example, reporting a length as 12.3 ± 0.1 cm states that the true value is expected to lie between 12.2 cm and 12.4 cm.