Volume 7, Issue 4
Linear Regression to Minimize the Total Error of the Numerical Differentiation

Jengnan Tzeng

East Asian J. Appl. Math., 7 (2017), pp. 810-826.

Published online: 2018-02

  • Abstract

It is well known that a numerical derivative contains two types of error: truncation error and rounding error. By evaluating the variables affected by rounding error, together with the step size and the unknown coefficient of the truncation error, the total error can be determined. The step size strongly affects the truncation error, especially when the step size is large, whereas the rounding error dominates the numerical error when the step size is too small. Choosing a suitable step size is therefore an important task in numerical differentiation: to obtain an accurate numerical derivative, one should estimate the best step size. A Taylor expansion can be used to analyze the order of the truncation error, which is usually expressed in big-O notation as $E(h)=Ch^k$. Since the leading coefficient $C$ contains the factor $f^{(k)}(\xi)$ for a high order $k$ and an unknown $\xi$, the truncation error is often estimated only by a rough upper bound, and any attempt to estimate the high-order difference $f^{(k)}(\xi)$ usually introduces a larger error. Hence, the uncertainty of $\xi$ and the rounding errors hinder an accurate numerical derivative.
We introduce a statistical process into traditional numerical differentiation. The new method estimates the truncation error and the rounding error simultaneously for a given step size. Once these two types of error are estimated successfully, much better corrected results can be obtained. We also propose a genetic approach to obtain a confident numerical derivative.
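The step-size trade-off described in the abstract can be illustrated with a short sketch (this is only an illustration of the trade-off, not the paper's regression method; the test function $\sin$, the evaluation point, and the step sizes are our own choices):

```python
import math

def central_diff(f, x, h):
    # Central difference approximation of f'(x): truncation error is O(h^2),
    # while floating-point rounding contributes an error of roughly eps/h.
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1

# The total error first shrinks as h decreases (truncation-dominated regime),
# then grows again once rounding error takes over at very small h.
errors = {h: abs(central_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-3, 1e-5, 1e-12)}

# In the truncation-dominated regime, the log-log slope of the error curve
# recovers the order k in E(h) = C h^k (k = 2 for the central difference).
slope = ((math.log(errors[1e-1]) - math.log(errors[1e-3]))
         / (math.log(1e-1) - math.log(1e-3)))
```

Here the error at $h = 10^{-5}$ is smaller than at either $h = 10^{-1}$ (large truncation error) or $h = 10^{-12}$ (rounding error dominates), which is exactly the non-monotonic behavior that makes choosing the step size nontrivial.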

  • Keywords

Truncation error, leading coefficient, asymptotic constant, rounding error.

  • AMS Subject Headings

65M10, 78A48

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
@Article{EAJAM-7-810,
  author  = {Tzeng, Jengnan},
  title   = {Linear Regression to Minimize the Total Error of the Numerical Differentiation},
  journal = {East Asian Journal on Applied Mathematics},
  year    = {2018},
  volume  = {7},
  number  = {4},
  pages   = {810--826},
  issn    = {2079-7370},
  doi     = {10.4208/eajam.161016.300517a},
  url     = {http://global-sci.org/intro/article_detail/eajam/10722.html}
}

Jengnan Tzeng. (2018). Linear Regression to Minimize the Total Error of the Numerical Differentiation. East Asian Journal on Applied Mathematics. 7 (4). 810-826. doi:10.4208/eajam.161016.300517a