Integrate supports a variety of numerical integration techniques:

- Newton-Cotes methods:
  - Rectangle Rule.
  - Trapezoidal Rule.
  - Simpson's Rule.
  - Newton's 3/8 Rule.
- Gauss quadrature methods:
  - Gauss-Legendre.
  - Gauss-Laguerre.
  - Gauss-Hermite.
  - Gauss-Chebyshev First Kind.
  - Gauss-Chebyshev Second Kind.
- Adaptive Methods:
  - Adaptive Simpson's method.
  - Romberg's method.
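To make the list concrete, here is a minimal plain-Rust sketch of the simplest entry, the composite trapezoidal rule. This is not the crate's API, just the underlying idea:

fn trapezoidal(f: impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    // width of each of the n subintervals
    let h = (b - a) / n as f64;
    // endpoints are weighted 1/2, interior points are weighted 1
    let endpoints = 0.5 * (f(a) + f(b));
    let interior: f64 = (1..n).map(|i| f(a + i as f64 * h)).sum();
    h * (endpoints + interior)
}

fn main() {
    // ∫ x² dx over [0, 1] is exactly 1/3
    let approx = trapezoidal(|x| x * x, 0.0, 1.0, 1_000);
    println!("{approx}"); // close to 1/3
}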
Nyquist limit, but for numerical integration?

BTW, that is not a serious suggestion; it is just that Wolfram Language (aka Mathematica) has both `Integrate` and `NIntegrate` for symbolic and numeric integration, respectively.
Additionally, compile-time floating-point evaluation is limited. When I looked around recently, I didn't see a Rust equivalent of gcem; any kind of transcendental function evaluation (which finding Gaussian quadrature points absolutely would require) cannot currently be done at compile time.
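To illustrate what a gcem-style crate would have to provide: std's f64::exp is not a const fn, so a compile-time version has to be hand-rolled. Below is a rough sketch, assuming a toolchain recent enough to allow floating-point arithmetic in const fn (this was unstable for a long time); the truncated Taylor series is purely illustrative and nowhere near what computing quadrature nodes would actually need.

// Hand-rolled compile-time exp via a truncated Taylor series.
// Illustration only: real quadrature-node computation also needs
// const-evaluable roots of orthogonal polynomials, which is much harder.
const fn const_exp(x: f64) -> f64 {
    let mut term = 1.0_f64;
    let mut sum = 1.0_f64;
    let mut k = 1.0_f64;
    while k < 30.0 {
        term = term * x / k;
        sum += term;
        k += 1.0;
    }
    sum
}

// Forced to evaluate at compile time:
const E: f64 = const_exp(1.0);

fn main() {
    println!("e ≈ {E}"); // ≈ 2.718281828459045
}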
This method is much faster and simpler.
In R:

> integrate(dnorm, -Inf, +Inf)
1 with absolute error < 9.4e-05

Can we do the same in this library?

For what it's worth,
use integrate::adaptive_quadrature::simpson::adaptive_simpson_method;
use statrs::distribution::{Continuous, Normal};

// Standard normal density, matching R's dnorm.
fn dnorm(x: f64) -> f64 {
    Normal::new(0.0, 1.0).unwrap().pdf(x)
}

fn main() {
    // Integrate over [-100, 100] as a stand-in for (-Inf, +Inf).
    let result = adaptive_simpson_method(dnorm, -100.0, 100.0, 1e-2, 1e-8);
    println!("Result: {:?}", result);
}
which prints Result: Ok(1.000000000053865).

It does seem to be a usability hazard that the function being integrated is defined as a fn rather than a Fn, as you can't pass closures that capture variables, requiring the weird dnorm definition.
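For comparison, here is a sketch of what a closure-friendly signature could look like. integrate_with and the midpoint rule inside it are hypothetical stand-ins, not the crate's API:

use statrs::distribution::{Continuous, Normal};

// Hypothetical signature: a generic F: Fn(f64) -> f64 accepts capturing closures.
fn integrate_with<F: Fn(f64) -> f64>(f: F, a: f64, b: f64, n: usize) -> f64 {
    // composite midpoint rule as a stand-in integrator
    let h = (b - a) / n as f64;
    (0..n).map(|i| f(a + (i as f64 + 0.5) * h) * h).sum()
}

fn main() {
    // Build the distribution once and capture it, instead of recreating it
    // inside a free fn on every call.
    let normal = Normal::new(0.0, 1.0).unwrap();
    let result = integrate_with(|x| normal.pdf(x), -100.0, 100.0, 10_000);
    println!("Result: {result}");
}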
https://reference.wolfram.com/language/tutorial/NIntegrateOv...
use integrate::gauss_quadrature::hermite::gauss_hermite_rule;
use statrs::distribution::{Continuous, Normal};

// Gauss-Hermite quadrature approximates ∫ e^(-x²) g(x) dx, so to integrate
// the plain density f(x) we pass g(x) = f(x) * e^(x²).
fn dnorm(x: f64) -> f64 {
    Normal::new(0.0, 1.0).unwrap().pdf(x) * x.powi(2).exp()
}

fn main() {
    let n: usize = 170;
    let result = gauss_hermite_rule(dnorm, n);
    println!("Result: {:?}", result);
}
I got Result: 1.0000000183827922.

Or, probably, dnorm is a probability distribution which includes a likelihood function, a cumulative distribution function, etc. I bet it doesn't work on arbitrary functions.
I didn't check, but I wonder if it is not better to do x = a + (i+0.5)*h... My reasoning is that if "i" is very big, then i * h can be much bigger than h/2 and thus h/2 may be eaten by numerical precision when added to i*h... And then you have to convince the optimizer to do it the way you want. But well, it's late...
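A quick way to poke at that concern, with throwaway values (not taken from the crate), is to count how often the two formulations disagree:

fn main() {
    let a = 0.0_f64;
    let h = 0.1_f64;

    // Count how often the two ways of forming the midpoint disagree.
    let disagreements = (0..1_000_000)
        .filter(|&i| {
            let i = i as f64;
            let x1 = a + i * h + h / 2.0; // small h/2 added onto a large i*h
            let x2 = a + (i + 0.5) * h;   // the 0.5 folded in before the multiply
            x1 != x2
        })
        .count();

    // (i + 0.5).mul_add(h, a) would compute the multiply-and-add with a
    // single rounding, which is another way to keep the h/2 contribution.
    println!("disagreements: {disagreements} out of 1000000");
}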
There are common library extensions for that.
ymmv
Or zmmv if AVX-512 is supported?

You can email me if you want to; I'll be happy to help.