# Probability Theory


A branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs; it may be any one of several possible outcomes, and the actual outcome is considered to be determined by chance. The word probability has several meanings in ordinary conversation, two of which are particularly important for the development and applications of the mathematical theory of probability. One is the interpretation of probabilities as relative frequencies, for which simple games involving coins, cards, dice, and roulette wheels provide examples. Probability is the likelihood of an event happening, based on all the possible outcomes. The probability of an event is given by the ratio P(event) = number of favorable outcomes / number of possible outcomes. Example: A coin is tossed onto a standard 8 × 8 chessboard. What is the theoretical probability that the coin lands on a black square? Choices: A. 0.5 B. 0.25 C. 0.42 D. 0.6

Solution (A):

Step 1: Theoretical probability = number of favorable outcomes / number of possible outcomes.

Step 2: Number of favorable outcomes (black squares) = 32.

Step 3: Total number of possible outcomes = 64.

Step 4: P(event) = 32/64.

Step 5: 32/64 = 0.5.

Step 6: The theoretical probability that the coin lands on a black square is 0.5.
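The arithmetic above can be checked with a short Python snippet (a minimal sketch; the variable names are illustrative):

```python
# Theoretical probability = favorable outcomes / possible outcomes,
# for a coin landing on a black square of a standard 8 x 8 chessboard.
favorable = 32        # black squares on the board
possible = 8 * 8      # total squares = 64
probability = favorable / possible
print(probability)    # 0.5
```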

## Permutation and Combination

The various ways in which objects from a set may be selected, generally without replacement, to form subsets. This selection of subsets is called a permutation when the order of selection is a factor, a combination when order is not a factor.

By considering the ratio of the number of desired subsets to the number of all possible subsets for many games of chance in the 17th century, the French mathematicians Blaise Pascal and Pierre de Fermat gave impetus to the development of combinatorics and probability theory.

• Permutation: A permutation is an arrangement of things. The word arrangement is used when the order of things is considered.
• Combination: A combination is a selection of things. The word selection is used when the order of things has no importance.

### Example of Permutation

1. Suppose we have to form a number consisting of three digits using the digits 1, 2, 3, 4. To form this number, the digits have to be arranged, and different numbers are formed depending on the order in which we arrange the digits.

2. How many different signals can be made with 5 flags chosen from 8 flags of different colors?

Ans. Number of ways of taking 5 flags out of 8 flags = 8P5 = 8! / (8 − 5)! = 8 × 7 × 6 × 5 × 4 = 6720.

3. How many words can be made by using the letters of the word “SIMPLETON”, taken all at a time?

Ans. There are 9 different letters in the word “SIMPLETON”.

Number of permutations taking all the letters at a time = 9P9 = 9! = 362880.

### Example of Combination

1. Suppose we have to make a team of 11 players out of 20 players. The order of players in the team does not result in a change in the team: no matter in which order we list the players, the team remains the same. For a different team to be formed, at least one player has to be changed.

2. Find the number of different choices that can be made from 3 apples, 4 bananas, and 5 mangoes, if at least one fruit is to be chosen.

Ans: Number of ways of selecting apples = (3 + 1) = 4. Number of ways of selecting bananas = (4 + 1) = 5. Number of ways of selecting mangoes = (5 + 1) = 6. Total number of ways of selecting fruits = 4 × 5 × 6, but this includes the case in which no fruit (i.e., zero fruits) is selected, so the number of ways of selecting at least one fruit = (4 × 5 × 6) − 1 = 119.
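The counts in the examples above can be verified with Python's standard library (a minimal sketch; `math.perm` requires Python 3.8+):

```python
from math import factorial, perm

# 8P5: signals made with 5 of 8 differently colored flags
signals = perm(8, 5)                 # 8! / (8 - 5)! = 6720

# arrangements of the 9 distinct letters of "SIMPLETON"
words = factorial(9)                 # 9! = 362880

# at least one fruit from 3 apples, 4 bananas, 5 mangoes
choices = (3 + 1) * (4 + 1) * (5 + 1) - 1   # subtract the "no fruit" case

print(signals, words, choices)       # 6720 362880 119
```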

## Types of Probability

### Classical theory of probability

The classical approach to probability is to count the number of favorable outcomes, the number of total outcomes (outcomes are assumed to be mutually exclusive and equiprobable), and express the probability as a ratio of these two numbers.

Here, “favorable” refers not to any subjective value given to the outcomes, but is rather the classical terminology used to indicate that an outcome belongs to a given event of interest. What is meant by this will be made clear by an example, and formalized with the introduction of axiomatic probability theory. The classical definition of probability: if the number of outcomes belonging to an event E is m, and the total number of outcomes is n, then the probability of event E is defined as P(E) = m/n. Example: A standard deck of cards (without jokers) has 52 cards.

If we randomly draw a card from the deck, we can think of each card as a possible outcome; therefore, there are 52 total outcomes. We can now look at various events and calculate their probabilities. Out of the 52 cards, there are 13 clubs; therefore, if the event of interest is drawing a club, there are 13 favorable outcomes, and the probability of this event is 13/52 = 1/4. There are 4 kings (one of each suit), so the probability of drawing a king is 4/52 = 1/13. What is the probability of drawing a king OR a club? This example is slightly more complicated.

We cannot simply add together the number of outcomes for each event separately (13 + 4 = 17), as this inadvertently counts one of the outcomes twice (the king of clubs). The correct count is 13 + 4 − 1 = 16 favorable outcomes, so the probability is 16/52 = 4/13. Classical probability suffers from a serious limitation. The definition implicitly requires all outcomes to be equiprobable. While this might be useful for drawing cards, rolling dice, or pulling balls from urns, it offers no method for dealing with outcomes of unequal probabilities. This limitation can even lead to mistaken statements about probabilities.

An often given example goes like this: I could be hit by a meteor tomorrow. There are two possible outcomes: I will be hit, or I will not be hit. Therefore, the probability I will be hit by a meteor tomorrow is 1/2. Of course, the problem here is not with the classical theory, merely the attempted application of the theory to a situation to which it is not well adapted. This limitation does not, however, mean that the classical theory of probability is useless. At many points in the development of the axiomatic approach to probability, classical theory is an important guiding factor.
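The deck example can be sketched in Python; the rank and suit labels below are illustrative choices, and `Fraction` keeps the ratios exact:

```python
from fractions import Fraction

# Build a 52-card deck as (rank, suit) pairs; labels are illustrative.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(r, s) for r in ranks for s in suits]

p_club = Fraction(sum(1 for r, s in deck if s == "clubs"), len(deck))  # 13/52 = 1/4
p_king = Fraction(sum(1 for r, s in deck if r == "K"), len(deck))      # 4/52 = 1/13

# Inclusion-exclusion: counting with "or" avoids double-counting the
# king of clubs, giving 13 + 4 - 1 = 16 favorable outcomes.
favorable = sum(1 for r, s in deck if r == "K" or s == "clubs")
p_king_or_club = Fraction(favorable, len(deck))                        # 16/52 = 4/13

print(p_club, p_king, p_king_or_club)  # 1/4 1/13 4/13
```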

### Empirical or Statistical Probability (Frequency of Occurrence)

This approach to probability is well-suited to a wide range of scientific disciplines. It is based on the idea that the underlying probability of an event can be measured by repeated trials.

Let n(E) be the number of times an event E occurs in n trials. We define the probability of event E as P(E) = lim(n → ∞) n(E)/n. It is of course impossible to conduct an infinite number of trials.

However, it usually suffices to conduct a large number of trials, where the standard of “large” depends on the probability being measured and how accurate a measurement we need. A note on this definition of probability: how do we know the sequence n(E)/n will converge to the same result every time, or that it will converge at all? The unfortunate answer is that we don’t. To see this, consider an experiment consisting of flipping a coin an infinite number of times, where we are interested in the probability of heads coming up. Imagine the result is the following sequence: HTHHTTHHHHTTTTHHHHHHHHTTTTTTTTHHHHHHHHHHHHHHHHTTTTTTTTTTTTTTTT…

With each run of heads and tails being followed by another run twice as long, the relative frequency of heads in this example oscillates between roughly 1/2 and 2/3 and doesn’t converge. We might expect such sequences to be unlikely, and we would be right. It will be shown later that the probability of such a run is 0, as is that of a sequence converging to anything other than the underlying probability of the event. However, such examples make it clear that the limit in the definition above does not express convergence in the more familiar sense, but rather some kind of convergence in probability.
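The oscillating sequence can be generated programmatically. This sketch assumes the run structure just described (runs of length 1, 1, 2, 2, 4, 4, …) and tracks the running relative frequency of heads at the end of each run:

```python
from fractions import Fraction

def running_fractions(num_runs):
    """Relative frequency of heads after each run of the sequence
    H, T, HH, TT, HHHH, TTTT, ... (each pair of runs doubles in length)."""
    heads = total = 0
    result = []
    for k in range(num_runs):
        length = 2 ** (k // 2)   # runs come in pairs of equal length
        if k % 2 == 0:           # even-numbered runs are heads
            heads += length
        total += length
        result.append(Fraction(heads, total))
    return result

fracs = running_fractions(40)
# After a heads run the frequency is near 2/3; after a tails run it is exactly 1/2.
print(float(fracs[-2]), float(fracs[-1]))
```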

The problem of formulating exactly what this means belongs to axiomatic probability theory.

### Axiomatic probability theory

Axiomatic probability theory, although it is often frightening to beginners, is the most general approach to probability, and has been employed in tackling some of the more difficult problems in probability. We start with a set of axioms, which serve to define a probability space. Although these axioms may not be immediately intuitive, be assured that the development is guided by the more familiar classical probability theory. Let S be the sample space of a random experiment.

The probability P is a real-valued function whose domain is the power set of S and whose range is the interval [0, 1], satisfying the following axioms:

(i) For any event E, P(E) ≥ 0.
(ii) P(S) = 1.
(iii) If E and F are mutually exclusive events, then P(E ∪ F) = P(E) + P(F).

It follows from (iii) that P(∅) = 0. To prove this, we take F = ∅ and note that E and ∅ are disjoint events. Therefore, from axiom (iii), we get P(E ∪ ∅) = P(E) + P(∅), or P(E) = P(E) + P(∅), i.e., P(∅) = 0.

Let S be a sample space containing outcomes ω1, ω2, …, ωn, i.e., S = {ω1, ω2, …, ωn}. It follows from the axiomatic definition of probability that:

(i) 0 ≤ P(ωi) ≤ 1 for each ωi ∈ S.
(ii) P(ω1) + P(ω2) + … + P(ωn) = 1.
(iii) For any event A, P(A) = Σ P(ωi), taken over all ωi ∈ A.
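The axioms can be checked mechanically on a small finite sample space. This minimal sketch assumes a fair six-sided die, with P(A) computed as the sum of the outcome probabilities:

```python
from fractions import Fraction

# Sample space of a fair six-sided die; each outcome has probability 1/6.
S = frozenset(range(1, 7))
p = {w: Fraction(1, 6) for w in S}

def P(event):
    """P(A) = sum of P(w) over outcomes w in A."""
    return sum(p[w] for w in event)

assert all(P({w}) >= 0 for w in S)   # axiom (i): P(E) >= 0
assert P(S) == 1                     # axiom (ii): P(S) = 1
E, F = {1, 2}, {5}                   # mutually exclusive events
assert P(E | F) == P(E) + P(F)       # axiom (iii): additivity
assert P(set()) == 0                 # consequence: P(empty set) = 0
```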

## Hypothesis Testing

Hypothesis testing is the use of statistics to assess the evidence that observed data provide against a given hypothesis. The usual process of hypothesis testing consists of four steps.

1. Formulate the null hypothesis H0 (commonly, that the observations are the result of pure chance) and the alternative hypothesis H1 (commonly, that the observations show a real effect combined with a component of chance variation). The null hypothesis is a statistical hypothesis that is tested for possible rejection under the assumption that it is true; the concept was introduced by R. A. Fisher. The alternative hypothesis is the hypothesis contrary to the null hypothesis, usually that the observations are the result of a real effect (with some amount of chance variation superposed).
2. Identify a test statistic that can be used to assess the truth of the null hypothesis.
3. Compute the P-value, which is the probability that a test statistic at least as extreme as the one observed would be obtained, assuming the null hypothesis is true. The smaller the P-value, the stronger the evidence against the null hypothesis.
4. Compare the P-value to an acceptable significance value α (sometimes called an alpha value). If P ≤ α, the observed effect is statistically significant, the null hypothesis is rejected, and the alternative hypothesis is accepted.
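The four steps can be illustrated with a hand-rolled one-sided binomial test. The scenario (60 heads in 100 flips of a possibly biased coin) is a made-up example, not drawn from the text above:

```python
from math import comb

# Step 1: H0 says the coin is fair (p = 0.5); H1 says it favors heads.
# Step 2: the test statistic is the number of heads in n flips.
n, observed = 100, 60

# Step 3: one-sided P-value = P(X >= 60) under H0, using the binomial
# distribution with p = 0.5: sum of C(n, k) / 2^n for k = 60..100.
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n

# Step 4: compare against a significance level alpha.
alpha = 0.05
print(p_value, p_value <= alpha)   # P-value is about 0.028, so H0 is rejected
```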