Probability and statistics as fields with an exploratory medium

Something I've noticed about probability and statistics is that when I'm working on something (most recently, I was thinking about [1]) and I'm not quite sure what the answer is, I can often code up a simple Python program to test my ideas and check my work. There are fancier tools, like probabilistic programming, but so far I haven't even needed them. Simple Monte Carlo-style sampling plus computing values of things gets me quite far as an "exploratory medium" or sanity checker.
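
To make this concrete, here is a minimal sketch of the kind of Monte Carlo sanity check I mean. The question it checks, the expected value of the maximum of two independent Uniform(0,1) draws (analytically 2/3), is a made-up stand-in, not the problem from [1]:

    import random

    # Hypothetical question (not the one from [1]): what is E[max(X, Y)]
    # for independent X, Y ~ Uniform(0, 1)? Pencil-and-paper says 2/3;
    # the sampler below checks that guess.

    def estimate_expected_max(num_samples=1_000_000):
        total = 0.0
        for _ in range(num_samples):
            total += max(random.random(), random.random())
        return total / num_samples

    estimate = estimate_expected_max()
    print(f"Monte Carlo estimate: {estimate:.4f}")  # should be close to 0.6667
    assert abs(estimate - 2/3) < 0.01, "estimate disagrees with the guessed answer"

If the assertion fails, the guessed answer (or my model of the problem) is probably wrong, which is exactly the kind of feedback that, as discussed below, other areas of math lack.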

Programming itself, of course, has something similar: you get immediate feedback about syntax errors, bugs (e.g. failed assertions), and so on.

Other fields of math don't seem to have such a good medium. For example, you can draw things on a piece of paper while learning about sequences or metric spaces, and this is often helpful. But there is no feedback, no intelligent process that takes your work and says "you failed" or "looks good to me". To get that feedback, you have to come up with your own examples to test your ideas, you have to manually check consistency with a theorem's assumptions, and so on. When I'm doing math, I often have multiple high-level strategies/ideas about what to do, but my low-level "crunching ability" isn't so good, so I end up spending a lot of time fumbling through the details.[notes 1] Of course, all of this is "good for your soul", but I do feel that, in the same way a programmer has access to code completion, syntax checking, and so on, a mathematician would also benefit from some sort of assistant that could automate some of this straightforward grunt work.