Probability/Archive1

The first paragraph is pure poetry.
Some of the second paragraph is not accurate. Probability 0 is not the same as impossible, and probability 1 is not the same as certainty. I fixed the page, then somehow all the changes got lost. Grrr. I don't have the heart to go back through it right now.

No mention of subjective probability? It's been too long since I've studied this, so I won't change this article, but clearly we need to say something about that. Someone's also clearly got to write about the different philosophical theories of what probability is.

I've often heard probability "0" described as "impossibility" and "1" as "certainty." It is of no explanatory advantage at all to say that "number 0" means "probability 0." The reader already knew ''that.'' :-) [[User:LMS|LMS]]



That's the whole problem. What the reader "knows", versus what the reader thinks he or she knows, is not the same here. (I just read your peer-review comments, so you can't slither away on a "common sense" argument.) The problem really is in the fact that probability is based on measure theory.

In a nutshell, the probability of any particular outcome, including the random variable taking the value 0 or 1, is 0. That is, a single point forms a set of measure zero. For a concrete example, integrating f from a to a equals 0, and that integral is exactly how random variables assign probabilities. So the "certain" and "impossible" events have no measure, and we cannot speak of them within probability theory. If this seems strange, it's related to the notion of denumerability (countability). Denumerable sets have measure zero; the easiest example of a denumerable set is the set of all rational numbers.
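A small worked illustration of the measure-zero point (my own example, assuming X is a continuous uniform random variable on [0,1], so the density is f(x) = 1):

:\Pr(X = a) = \int_a^a f(x)\,dx = 0 \quad \text{for any single point } a \in [0,1],
:\Pr(X \in \mathbb{Q} \cap [0,1]) = \sum_{q \in \mathbb{Q} \cap [0,1]} \Pr(X = q) = 0 \quad \text{(countably many terms, each equal to 0).}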


Ok, I just reread the main text. I concede that I did not make the point clear, and it could, should, and probably will be redone by someone smarter than me by the time I track down the relevant texts and ponder them a bit. If not, I will be back after I think about it some more.


Well, at least one reader now knows he doesn't know anything about probability, a nice improvement over merely thinking it. ;-)

Perhaps you are not understanding the point of my comments, so let me try to be clearer. When speaking of ''subjective probability,'' surely "1" could be interpreted as "certainty." See [http://www.google.com/search?hl=ru&safe=off&q=%22subjective+probability%22+certainty]. You might deny that there is such a thing as subjective probability (that it is properly called a type of probability), but that's another kettle of fish; if you deny it in the article, you do not take the [[neutral point of view]]. Moreover, you should ''explain'' the fact that very often, "0" is taken to be impossibility and "1" certainty. Anyway, do please make your point clearer in the text! If not you, who? If not now, when? :-) Also, what's this about "slithering away" on a "common sense" argument? What do you mean? :-) Slitheringly, [[User:LMS|LMS]]


Yes, I think the overly careful avoidance of "certainty" and "impossibility" here is more confusing than useful. How about something along the lines of "probability 0 is generally understood to represent 'impossibility', while 1 is understood to represent 'certainty'"? That makes it clear what the intended meaning is in common use, while still leaving open the possibility of more advanced interpretations that other articles can cover. Furthermore, the common usage is not at all "incorrect" as you imply; it's completely correct in most of the contexts in which probability is used. If I am "drawing dead" in a poker game, that means I have calculated the probability of winning as 0, which means ''I'm going to lose, with absolute certainty.'' Even if there are contexts in which probability theory is useful with other definitions of what 0 and 1 mean, that doesn't change their definition in the far more common contexts our readers are likely to care about. [[User:Lee Daniel Crocker|LDC]]


''Note: Most mathematicians would claim that 1/2 = 0.5 and that no computation is needed to convert them. Likewise, 50% is merely a notational abbreviation for 50/100 and needs no conversion to be a real number. TedDunning''

Of course, but I'm not writing for mathematicians. To ordinary human beings, converting from one notation to another is indeed a "computation", although typing "1÷2=" into a calculator is a pretty simple one. I think it's important to show laymen the different ways they might see probabilities expressed and how they relate. That information is more knowledge about language than about math, but again I see no reason why a page about "probability" should be limited to strict mathematics. Perhaps there is a better way to word the above to be more rigorous and also useful to a lay audience, but I can't think of it offhand. [[User:Lee Daniel Crocker|LDC]]
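For what it's worth, the conversions under discussion are mechanical enough to script; a throwaway sketch, purely illustrative (Python, not proposed article text):

 from fractions import Fraction

 p = Fraction(1, 2)             # probability written as a ratio
 as_decimal = float(p)          # 0.5, the "calculator" form
 as_percent = 100 * as_decimal  # 50.0, i.e. 50%
 print(p, as_decimal, f"{as_percent:g}%")   # 1/2 0.5 50%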



So the main page was changed back. Fine. wiki wiki wiki

The most I will concede is Bremaud's definition (P. Bremaud, An Introduction to Probabilistic Modeling, p. 4). The event E such that P(E) = 1 is the "almost certain" event; the event E' such that P(E') = 0 is the "almost impossible" event. I stress the technical aspect of these definitions: "almost" refers to the fact that these events both occur (as does any other event) with probability 0, that is, with no measure. This stuff is really sticky, no doubt.
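A standard concrete instance of Bremaud's terminology (my own example, not his): take X uniform on [0,1]. Then

:\Pr(X \neq 1/2) = 1 \quad \text{(almost certain, yet } X = 1/2 \text{ can still occur),}
:\Pr(X = 1/2) = 0 \quad \text{(almost impossible, yet not impossible).}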

I agree with LDC that the page should be accessible to lay audiences, but let's not insult
their intelligence. Just because the distinctions are subtle (subtile heh) is no
reason to hide them, and these fine points really are part of the story of how and why
probability works, in both the pure and applied sense.

Further complicating everything is the distinction between the continuous and
discrete aspects of probability. I notice that Probability is (lately) classified
as discrete math, but that's not entirely correct, and is a topic for a different
day. :)

Oh yeah... common sense would say that P(E) = 0 means impossible, etc., but
that doesn't lead to useful definitions for a theory of probability (if it did,
the introductory texts would teach it that way instead of studiously avoiding
the topic). I am really hoping someone currently teaching a grad class in this
will set us all straight.



I just want to point out that if you dogmatically state ''what'' probability is and how probability claims are to be interpreted, as though this were "known by scientists," you fail to do justice to the ''fact'' that there are academics, from a wide variety of fields, who dispute the very questions on which you are dogmatic. Moreover, an encyclopedia article called "probability" should do justice to all sides of this disputation. Anyway, I totally agree with your last sentence! [[User:LMS|LMS]]



The beauty of it all is that the dogma is "mathematical truth". One starts with definitions, then develops a theory standing on those definitions. I hold very tightly to this dogma. There may be academics from a wide variety of fields who dispute the origin, foundations and claims of probability theory, but I rather suspect none of these academicians are practicing, contemporary mathematicians (unless they be constructivists... and even they would likely uphold the discrete aspect. But let's not go there :).


Lest this be dismissed as something other than a neutral point of view, open any text on the foundations of probability and measure theory. That's where I learned it.


My entire point really is that the current theory of probability confounds common sense in some respects. The places where this happens ("impossibility" etc.) are a result of the construction of the theory. We should embrace this, not gloss over it.

Perhaps I should stop defending and ask some of my own questions, to wit: what does "impossible" mean, and how can we say with any "certainty" (whatever that means) what is or is not impossible? "Almost everywhere", "almost impossible" and "almost certain" have precise mathematical definitions in probability. Attributing any other meaning is philosophy, not mathematics.




The question of what "certainty" means is a philosophical one, indeed it is the whole subject of [[epistemology]], and is irrelevant here. The question of how to ''apply'' the mathematics of probability to real-world situations is also a philosophical one, and is the same as the question of how to apply scientific findings to real life. That, too, is a subject entirely irrelevant here. Putting links to articles about those philosophical questions ''is'' appropriate here, so feel free to do so. But this article is about probability itself as a subject. The mathematics of probability, which, like all mathematics, exists entirely independently of any interpretation or application thereof, assigns the number "1" to mean "certainty", ''by definition'', and "0" to mean "impossibility", by definition, without taking any philosophical position on what those terms mean. The ordinary interpretation of those terms in ordinary circumstances of life (like rolling dice) is entirely obvious, useful, and clear. The fact that some philosophers argue about it is a subject for some other article; I, and our readers, are well served in our ordinary lives by the simple understanding that the probability of drawing the 17 of hearts from a deck of cards is 0, and the probability of drawing a card with two sides is 1. If you want to enlighten them with some deeper understanding, write about it and put a link here. Until then, let's keep the text here practical and useful. [[User:Lee Daniel Crocker|LDC]]



Lee, you are speaking for all of our readers?
Well, then, by all means I bow to your authority.
My participation in this conversation is necessarily
over.


Oh, grow up. I am expressing my opinion about what would be useful here; if you disagree, express yours. If you think I'm full of crap, say so, but explain ''why''. A thick skin is a useful tool for this place. LDC


Here is a practical example that shows how that "almost" does have an effect on reality; it's neither epistemology nor metaphysics... Draw a number between 1 and 10. What is the probability of drawing one particular number? 1/10? Wrong, because I was talking about an irrational number between 1 and 10! So, you could have drawn 4.1234234156839201248324234... How many of these numbers are there? Infinitely many. So the probability that you draw exactly the one I wrote is zero, i.e. it's almost impossible. But it's still possible! Thus P=0 means "almost impossible" rather than impossible. The misconception that P=0 means impossibility comes from the "rule" that P(empty set)=0, which says that an impossible event has zero probability; the misconception reads that implication backwards. The implication itself is true; its converse is not. At least for infinite sets, no matter whether they are countable or not.
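To put that in symbols (my own sketch, taking X uniform on the interval [1, 10], so the density is f(x) = 1/9): for any particular value x_0,

:\Pr(X = x_0) = \int_{x_0}^{x_0} \tfrac{1}{9}\,dx = 0,

even though x_0 is a perfectly possible draw.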

That depends on whether your definition of "=" implies that 0 = 1/∞ (or more exactly, 1/ℵ1 since we're talking reals). When dealing with infinitesimals, it may not be appropriate to define it that way. And it ''is'' just a matter of definition, as is all of mathematics. LDC

Maybe this example is a tiny bit more down to Earth: say you flip a fair coin repeatedly, keep doing it forever. How likely is it that you get Tail all the way through, forever? Pretty unlikely. In fact, the probability can be shown to be 0. But truly ''impossible'' it is not. It's just not going to happen. And if you don't like zeros all the way through: how likely is it that you flip the binary expansion of π (say Tail = 0, Head = 1)? Again, the probability is 0. In fact, ''every'' sequence you produce this way has probability zero, even though one of them will happen.

However, these effects only show up if you do "artificial things" such as flipping a coin forever or picking a random real number. For everyday probabilities, which are always discrete probabilities, the notions of zero probability and impossibility are indeed identical. [[user:AxelBoldt|AxelBoldt]]
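A quick numerical sketch of the coin example (my own illustration, assuming a fair coin and independent flips): the chance of Tails on every one of the first n flips is (1/2)^n, which shrinks toward 0 as n grows, even though no finite prefix ever rules the all-Tails sequence out.

 # probability that the first n flips of a fair coin are all Tails
 for n in (10, 100, 1000):
     print(n, 0.5 ** n)
 # roughly 9.8e-4, 7.9e-31, 9.3e-302: vanishing, but never a logical impossibility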




I'd like the article to emphasize that mathematical probability has ''absolutely nothing'' to do with the intuitive notions of probability people use in everyday life. In everyday life, a set of events each have a probability of happening and one of those events happens.

In mathematics, the notion of only one of those events occurring is sheer nonsense. (Mathematics is incapable of distinguishing between "can" happen and "does" happen, so if only one event "does" happen then that's because it has probability 1.)

In everyday life, it takes many iterations of an experiment, each with its singular outcome, to be able to reconstruct a ''guess'' about the probability. In mathematics, all of the outcomes happen and the probability distribution is exact.

This is not a trivial issue but has enormous impact on people's (even many physicists') incomprehension of probability as it is used in physics. In particular, the notion of "probability" used by the Copenhagen interpretation of QM is literal nonsense. It's an intuition about the everyday world which people have transported into physics without justification; an intuition with ''no mathematical basis''. The only kind of interpretation of QM compatible with mathematical probability is the one given by Many-Worlds. Again, this is not a trivial issue.

By the way, is it only my impression or have people stopped teaching the mathematical definition of probability in stats courses? The only textbook I found which contained a formal definition of probability (as opposed to relying on people's intuition) was pretty old. I base my understanding of probability on the formal definition, of course.

:Regular stats courses usually don't give the sigma-algebra definition of probability, but courses called "Probability and Statistics" or "Stochastics" do. I disagree with your statement that mathematical probability has nothing to do with everyday notions of probability. It's an axiomatic system, and as such devoid of meaning: you just prove what the axioms allow you to prove. But of course the whole point of the axiomatic system is that there are ''interpretations'' and ''models'' of the axiomatic system in the real world. For instance, the long-term relative frequencies often used in everyday life are a model of the probability axioms (or so we believe: this is a statement outside of mathematics and cannot be proven); the probability axioms were constructed to model intuitive notions of probability. So whatever you prove from the axioms of probability theory will then apply to the long-term relative frequency notion of probability as well. So a question such as "if you flip a fair coin 5 times, how likely is it that you get at least 4 heads", initially posed using the intuitive probability notion, can be translated into the language of mathematical probability and solved there. If mathematical probability and everyday probability had nothing to do with each other, then the latter would be useless. [[user:AxelBoldt|AxelBoldt]], Wednesday, May 22, 2002
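Incidentally, that sample question has a short exact answer once translated into the formal setting; a sketch of my own, assuming a fair coin and independent flips:

 from math import comb
 # P(at least 4 heads in 5 independent flips of a fair coin)
 p = sum(comb(5, k) for k in (4, 5)) / 2 ** 5
 print(p)   # 0.1875, i.e. 6/32 = 3/16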


I suggest that the discussion above, which appears to be inactive, be archived. I'll do so in a week or two unless I hear otherwise. [[User:Wile E. Heresiarch|Wile E. Heresiarch]] 13:48, 3 Jan 2004

This article needs some work. Different topics are mixed together and there are some inaccuracies. The edits listed below (second list) are intended to separate the article into four parts: (1) concept of probability, (2) an overview of formalization (let [[probability theory]] handle it mostly), (3) interpretations of formal theory, and (4) applications. If there is interest, I'll carry out the edits on a trial version so that you can see what the effect of such edits would be. [[User:Wile E. Heresiarch|Wile E. Heresiarch]] 13:48, 3 Jan 2004

* para 1: probable == "likely to..." is problematic: what is "likely"? This will lead to a circular definition.

* para 1: "probability theory" doesn't, and can't, say anything about what words mean; "probability theory" is out of place in para 1.

* para 2: what mathematicians think probability is, is not too important. Disclaimer: I have an undergraduate math degree.

* para 2: that probabilities are numbers (such as zero or one) is adopted as an axiom in order to construct a mathematical theory of probability; the bit about zero impossible and one certain seems out of place in para 2.

* para 3: appears to be a link collection; also contains a misrepresentation of "statistics".

* "Probability in mathematics" identifying probability with coin flipping is extremely limiting. The law of large numbers is a theorem. The limit definition is a feature of a particular school of thought (frequentism). I like "Rosencrantz & Guildenstern" but the coin flipping scenario from R&G doesn't shed any light on "Probability in mathematics. The mention of Bayesianism was clearly an afterthought.

* "Formalization" section recapitulates content of darwin told probability theory article; doesn't mention Cox formulation as alternative to Kolmogorov formulation.

* "Representation and interpretation of probability values" doesn't mention schools of interpretation; has stuff like "To use the probability in math we must perform the division and express it as 0.5." bizarre; odds discussion good idea.

* "Distributions" definition is incorrect; doesn't state any connection to probability formalization; doesn't mention important continuous distributions

* "Remarks on probability calculations" "The difficulty of probability calculations lie in..." this statement describes one very specific kind of difficulty. para 2, "To learn more..." wednesday thielen probability axioms article doesn't have any computational stuff, and careful watch Bayes' theorem is pretty light on computation.

* Quotations don't have citations of the works in which they originally appeared.

Proposed edits to address issues above:

* para 1: describe "probability" in terms of expectation or betting.

* para 1: strike mention of "probability theory"; there are links to [[probability theory]] later on.

* para 2: strike sentence 1 and modify remainder of paragraph accordingly.

* para 2: strike last sentence and move discussion of numerical values of "impossible" and "certain" to a math-oriented part.

* para 3: strike this para; put link collection at end of introductory section.

* "Probability in mathematics" strike this section and move some pieces of it elsewhere: move coin flipping & law of large numbers to an application section, move limit definition to discussion of frequentism, strike R&G, mention Bayesianism under a section on interpretation.

* "Formalization" strike para 1; rephrase existing discussion as brief overview of Kolgorov formulation and let forego their probability theory article do most of the work; mention density & distribution functions and connect those with Kolmogorov formulation; mention Cox formulation and give cross ref.

* "Representation and interpretation of probability values" mention schools of interpretation (Bayesian, frequentist) here; strike "1/2 == 0.5" stuff.

* "Distributions" restate definition distribution as model; state connection to formal theory cdf measure, pdf basis for constructing cdf; mention Gaussian, t, chi-square, gamma distributions.

* "Remarks on probability calculations" generalize remark about numbers of possible events, etc., to general modeling considerations; expand remark about Bayes' theorem to mention general strategy (integration) and potential problems.

* "See also" add links to gambling and decision theory there should probably be an article devoted entirely to gambling calculations, but that's beyond the scope of this article.


Law of Large Numbers

First, good job on the entry to all....

Now, the Law of Large Numbers: though many texts butcher this deliberately to avoid tedious explanations to the average freshman, the law of large numbers is not the limit stated in this entry. Upon reflection, one can even see that the limit stated isn't well-defined. Fortunately, you present Stoppard's scene that so poignantly illustrates the issues involved with using the law of large numbers to interpret probabilities; namely, the exact issue of it not being a guarantee of convergence. I have an edit which I present for discussion:

...
As ''N'' gets larger and larger, we ''expect'' that in our example the ratio ''N''<sub>H</sub>/''N'' will get closer and closer to the probability of a single coin flip being heads. Most casual observers are even willing to ''define'' the probability Pr(''H'') of flipping heads as the utterly uncontroversial mathematical limit, as ''N'' approaches infinity, of this sequence of ratios:

:\Pr(H) = \lim_{N \to \infty} \frac{N_H}{N}

In actual practice, of course, we cannot flip a coin an infinite number of times; so in general, this formula most accurately applies to situations in which we have already assigned an ''a priori'' probability to a particular outcome (in this case, our ''assumption'' that the coin was a "fair" coin). Furthermore, mathematically speaking, the above limit is not well-defined; the law of large numbers is a little more convoluted and dependent upon already having some definition of probability. The theorem states that, given Pr(''H'') and any arbitrarily small probability ε and difference δ, there exists some number ''n'' such that for all ''N'' > ''n'',

:\Pr\left( \left| \frac{N_H}{N} - \Pr(H) \right| > \delta \right) < \epsilon

In other words, by saying that "the probability of heads is 1/2", this law asserts that if we flip our coin often enough, it becomes more and more likely that the number of heads over the number of total flips will become arbitrarily close to 1/2. Unfortunately, this theorem says that the ratio will probably get close to the stated probability, and provides no guarantees of convergence.

This aspect of the law of large numbers is sometimes troubling when applied to real world situations.
...

[[User:Tlee|Tlee]] 03:44, 13 Apr 2004


:Since ''N''<sub>H</sub>/''N'' is just the sample mean of a [[Bernoulli distribution|Bernoulli]] random variable, the [[Law of large numbers#The strong law|strong law of large numbers]] should guarantee the convergence of ''N''<sub>H</sub>/''N'' to the mean, Pr(''H''). That is, convergence will occur [[almost surely]], or equivalently

::\Pr\left( \lim_{N \to \infty} \frac{N_H}{N} = \Pr(H) \right) = 1.

:Nonetheless, I agree that the way probability is `defined' in the current version of the article needs some refinement, and other than the above comment, I like what you've got, Tlee. I'd say just go ahead and make the edit!

:[[User:Bjcairns|Ben Cairns]] 23:47, 15 Apr 2004
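For anyone who wants to see the convergence both comments describe, here is a throwaway simulation (my own sketch; the random seed and flip counts are arbitrary choices):

 import random
 # track the empirical ratio N_H / N for a simulated fair coin
 random.seed(0)
 heads = 0
 for n in range(1, 100001):
     heads += random.random() < 0.5
     if n in (10, 100, 1000, 10000, 100000):
         print(n, heads / n)
 # the printed ratios drift toward 0.5 as n grows, as the strong law predicts,
 # though no finite run can guarantee how close any particular ratio will be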