Feynman vs. Computer
60 points by cgdl 8 hours ago | 8 comments | entropicthoughts.com
JKCalhoun
6 hours ago
As a hobbyist, I'm playing with analog computer circuits right now. If you can match your curve with a similar voltage profile, a simple analog integrator (an op-amp with a capacitor connected in feedback) will also give you the area under the curve (also as a voltage of course).

Analog circuits (and op-amps just generally) are surprisingly cool. I know, I'm kind of off on a tangent here, but I have integration on the brain lately. You say "4 lines of Python", and I say "1 op-amp".
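
A minimal digital sketch of what that integrator computes, with made-up component values (R, C, and the test waveform are illustrative, not from the comment):

    import numpy as np

    # Ideal inverting op-amp integrator: Vout(t) = -(1/(R*C)) * integral of Vin dt.
    R, C = 10e3, 1e-6                      # 10 kOhm, 1 uF -> RC = 10 ms (made up)
    t = np.linspace(0, 0.1, 10_000)
    vin = np.sin(2 * np.pi * 50 * t)       # 50 Hz test input

    dt = t[1] - t[0]
    vout = -np.cumsum(vin) * dt / (R * C)  # running area under the curve, scaled
    print(vout[-1])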

reply
dreamcompiler
6 hours ago
Yep. This is also how you solve differential equations with analog computers. (You need to recast them as integral equations because real-world differentiators are not well-behaved, but it still works.)

https://i4cy.com/analog_computing/
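
A minimal digital sketch of the integrator-loop idea, solving y'' = -y the way an analog patch would, with two chained integrators (step size and initial conditions are made up):

    # Two integrators wired in a feedback loop: integrate -y to get y',
    # then integrate y' to get y. The solution should track cos(t).
    import math

    dt, n = 1e-3, 10_000
    y, yp = 1.0, 0.0          # y(0) = 1, y'(0) = 0
    for _ in range(n):
        yp += -y * dt         # first integrator accumulates -y
        y  += yp * dt         # second integrator accumulates y'

    print(y, math.cos(n * dt))  # both ~cos(10)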

reply
ogogmad
4 hours ago
How does this compare to the Picard-Lindelöf theorem and the technique of Picard iteration?
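
For reference, a minimal sketch of Picard iteration for y' = y, y(0) = 1 (illustrative; the iterates converge to exp, and each step is itself just an integration):

    import numpy as np

    # Picard iteration: y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds,
    # here with f(s, y) = y, computed by a cumulative trapezoid rule.
    t = np.linspace(0, 1, 1_001)
    dt = t[1] - t[0]
    y = np.ones_like(t)        # y_0(t) = 1
    for _ in range(10):
        y = 1 + np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2) * dt))

    print(y[-1], np.e)         # both ~2.71828
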
reply
addaon
5 hours ago
One of my favorite circuits from Korn & Korn [0] is an implementation of an arbitrary function of a single variable. Take an oscilloscope-style display tube. Put your input on the X axis as a deflection voltage. Close a feedback loop on the Y axis with a photodiode, and use the Y axis deflection voltage as your output. Cut your function of one variable out of cardboard and tape it to the front of the tube.

[0] https://www.amazon.com/Electronic-Analog-Computers-D-c/dp/B0...
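
A toy simulation of that servo loop, assuming a hypothetical mask edge f(x) = x^2 (gain and step count are made up):

    # The photodiode reports whether the beam at height y is above or
    # below the mask edge f(x); feedback drives y toward f(x).
    def photoformer(x, f, gain=0.2, steps=200):
        y = 0.0
        for _ in range(steps):
            y += gain * (f(x) - y)   # stand-in for the photodiode error signal
        return y

    print(photoformer(1.5, lambda x: x**2))  # ~2.25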

reply
nakamoto_damacy
16 minutes ago
Speaking of analog computation, a single artificial neuron could be implemented as follows.

Weighted sum, using an inverting summing amplifier:

    net = -Σ_i (Rf/Ri) · xi

where the resistor ratios Rf/Ri set the synaptic weights (a second inverting stage undoes the sign flip; a numeric sketch follows below).

Activation function. Common op-amp activation circuits:

- Saturating function: op-amp with clipping diodes → approximate sigmoid

- Hard limiter: comparator behavior for a step activation

- Tanh-like response: differential pair circuits

Learning. Early analog systems often lacked on-device learning; weights were manually set with potentiometers or stored using:

- Memristive elements (more recently)

- Floating-gate MOSFETs

- Programmable resistor networks
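
A minimal numeric sketch of that neuron, with made-up resistor values (the second inverting stage is assumed, so the weights come out positive):

    import numpy as np

    # Inverting summing amplifier: Vout = -Rf * sum(xi / Ri), so the
    # weight on input i is Rf/Ri (sign restored by a second inverter).
    Rf = 100e3                           # feedback resistor (illustrative)
    Ri = np.array([100e3, 50e3, 200e3])  # per-input resistors -> weights 1, 2, 0.5
    x  = np.array([0.5, -0.2, 1.0])      # input voltages (illustrative)

    net = np.sum((Rf / Ri) * x)          # weighted sum = 0.6
    out = np.tanh(net)                   # tanh-like activation (differential pair)
    print(net, out)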

reply
bananaflag
6 hours ago
> I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things.

I'd be interested in this. So is finding classical closed-form solutions the actual thing desired there?

reply
morcus
6 hours ago
I think what the author was alluding to was the path integral formulation [of quantum mechanics], which was advanced in large part by Feynman.

It's not that finding closed-form solutions is what matters (I don't think most path integrals have closed-form solutions), but that the integration is done over a space of functions, not over Euclidean space (or a manifold in Euclidean space, etc.).

reply
8bitsrule
2 hours ago
Cool how the computer versions seem to work well as long as renormalization isn't involved.
reply
Animats
5 hours ago
Good numerical integration is easy, because summing smooths out noise. Good numerical differentiation is hard, because noise is amplified.

Conversely, good symbolic integration is hard, because you can get stuck and have to try another route through a combinatoric maze. Good symbolic differentiation is easy, because just applying the next obvious operation usually converges.

Huh.

Mandatory XKCD: [1]

[1] https://xkcd.com/2117/
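
A minimal sketch of that asymmetry, using a noisy sine (noise level and step size are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 1_000)
    dt = t[1] - t[0]
    y = np.sin(t) + rng.normal(0, 0.01, t.size)   # 1% noise

    deriv = np.diff(y) / dt     # finite differences amplify the noise
    integ = np.cumsum(y) * dt   # summation averages it out

    print(np.std(deriv - np.cos(t[:-1])))   # large error (~2)
    print(np.std(integ - (1 - np.cos(t))))  # small error (~3e-3)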

reply
kkylin
4 hours ago
That's exactly right. A couple more things:

- Differentiating a function composed of simpler pieces always "converges" (the process terminates): one just keeps applying the chain rule. Among other things, this is why automatic differentiation is a thing.

- If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically.
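
A minimal sketch of the contour trick (radius and sample count are arbitrary; the trapezoid rule is spectrally accurate on this periodic integrand):

    import numpy as np
    from math import factorial

    def cauchy_deriv(f, a, order=1, r=0.5, n=64):
        # f^(m)(a) = m!/(2*pi*i) * contour integral of f(z)/(z-a)^(m+1) dz,
        # evaluated on a circle of radius r by the trapezoid rule.
        t = 2 * np.pi * np.arange(n) / n
        z = r * np.exp(1j * t)
        return factorial(order) * np.mean(f(a + z) * np.exp(-1j * order * t)) / r**order

    print(cauchy_deriv(np.exp, 0.0, order=3).real)  # ~1.0, the 3rd derivative of exp at 0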

reply
messe
6 hours ago
An integral trick I picked up from a lecturer at university: if you know the result has to be of the form ax^n for some a that's probably rational and some integer n but you're feeling really lazy and/or it's annoying to simplify (even for mathematica), just plug in a transcendental value for x like Zeta[3].

Then just divide by powers of that transcendental number until you have something that looks rational. That'll give you a and n. It's more or less numerical dimensional analysis.

It's not that useful for complicated integrals, but when you're feeling lazy it's a fucking godsend to know what the answer should be before you've proven it.
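
A minimal version of the trick with mpmath (the integrand is a made-up example whose answer is x^3):

    from mpmath import mp, zeta, quad

    mp.dps = 30
    x = zeta(3)                            # transcendental test point

    val = quad(lambda t: 3 * t**2, [0, x]) # pretend we don't know it's x^3

    # Divide out powers of x until the quotient looks rational.
    for n in range(6):
        print(n, val / x**n)               # n = 3 prints 1.0 -> a = 1, n = 3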

reply
eig
7 hours ago
What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)?
reply
kens
6 hours ago
I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
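
A minimal sketch of that error estimate (function and sample count are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def mc_integrate(f, a, b, n=100_000):
        # Monte Carlo estimate of the integral of f over [a, b], plus the
        # standard error: sample std of the draws divided by sqrt(n).
        x = rng.uniform(a, b, n)
        y = (b - a) * f(x)
        return y.mean(), y.std(ddof=1) / np.sqrt(n)

    est, se = mc_integrate(np.sin, 0, np.pi)  # true value: 2
    print(f"{est:.4f} +/- {se:.4f}")
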
reply
ogogmad
4 hours ago
Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way.
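
A crude sketch of that with mpmath's interval type (subinterval count is arbitrary; a real implementation would also track endpoint rounding):

    from mpmath import iv

    def enclose_integral(f, a, b, n=500):
        # Darboux-style enclosure: width * f(subinterval) is an interval
        # guaranteed to contain that slice of the integral; sum them all.
        total = iv.mpf(0)
        for k in range(n):
            lo = a + (b - a) * k / n
            hi = a + (b - a) * (k + 1) / n
            total += (iv.mpf(hi) - iv.mpf(lo)) * f(iv.mpf([lo, hi]))
        return total

    print(enclose_integral(iv.sin, 0, 3.141592653589793))  # interval around 2
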
reply
edschofield
5 hours ago
Grid-based numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points in higher dimensions. Monte Carlo integration has a convergence rate, O(N^(-1/2)) in the number of samples, that is independent of dimension, so it scales much better.

See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf
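
A minimal sketch of the scaling difference, estimating the volume of the unit ball in d = 5 dimensions (sample count is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    def ball_volume_mc(d, n=1_000_000):
        # Fraction of uniform points in the cube [-1, 1]^d that land in the
        # unit ball, times the cube volume 2^d. Error ~ 1/sqrt(n) in any d.
        pts = rng.uniform(-1, 1, (n, d))
        inside = (pts**2).sum(axis=1) <= 1
        return 2**d * inside.mean()

    print(ball_volume_mc(5))  # true value: 8*pi^2/15 ~ 5.2638
    # A grid with 100 points per axis in d = 5 would need 100^5 = 1e10 points.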

reply
a-dub
4 hours ago
as i understand it: numerical methods -> analytically inspired and computationally efficient, smoothing out noise from sampling/floating-point error/etc., whereas monte carlo -> computationally expensive brute-force random sampling, where you can improve accuracy by throwing more compute at the problem.
reply
MengerSponge
6 hours ago
Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.
reply
adrianN
6 hours ago
At least if you can sample the relevant space reasonably accurately; otherwise it becomes really slow.
reply
jgalt212
6 hours ago
The writer would have been well served to discuss why he chose Monte Carlo rather than summing up all the small trapezoids.
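
For contrast, a minimal trapezoid-sum sketch (function and point count are illustrative):

    import numpy as np

    def trapezoid_integrate(f, a, b, n=1_000):
        # Sum the areas of n thin trapezoids under f on [a, b].
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

    print(trapezoid_integrate(np.sin, 0, np.pi))  # ~2, error O(1/n^2)
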
reply
ForOldHack
1 hour ago
I would bet on Feynman any day of the week. Numerical methods came up in 'Hidden Figures', too: Katherine Johnson's solution was to use Euler's method to move from an elliptical orbit to a parabolic descent.
reply
ogogmad
4 hours ago
The use of confidence intervals here reminds me of the clearest way to see that integration is a computable operator, to the same degree that a function like sin() or sqrt() is computable: it follows from a natural combination of (i) interval arithmetic and (ii) the "Darboux integral" approach to defining integration. So, intervals can do magic.
reply