
This is the most popular co-ordinate system.

Axis of \[x\] : The line \[XOX'\] is called the axis of \[x\].

Axis of \[y\] : The line \[YOY'\] is called the axis of \[y\].

Co-ordinate axes : The \[x\]-axis and the \[y\]-axis together are called the axes of co-ordinates or the axes of reference.

Origin : The point \[O\] is called the origin of co-ordinates, or simply the origin.

Let \[OL=x\] and \[OM=y\]; these are respectively called the abscissa (or x-coordinate) and the ordinate (or y-coordinate) of the point P. The co-ordinates of P are \[(x,\,y)\].

The co-ordinates of the origin are (0, 0). The y-coordinate of every point on the x-axis is zero, and the x-coordinate of every point on the y-axis is zero.

Oblique axes : If the two axes are not perpendicular to each other, they are called oblique axes.

The co-ordinates of a point are an ordered pair of real numbers that describe its location in some space; here the space is the two-dimensional plane.

The two lines \[XOX'\] and \[YOY'\] divide the plane into four quadrants. \[XOY,\text{ }YOX',\text{ }X'OY',\text{ }Y'OX\] are respectively called the first, the second, the third and the fourth quadrants. The directions \[OX,\,OY\] are taken as positive, while the directions \[OX',\text{ }OY'\] are taken as negative.
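As a quick illustration of this sign convention, here is a minimal Python sketch (the function name `quadrant` is an illustrative choice, not from the text) that classifies a point by the signs of its co-ordinates.

```python
def quadrant(x, y):
    """Classify the point (x, y) by the signs of its co-ordinates."""
    if x == 0 and y == 0:
        return "origin"
    if y == 0:
        return "on the x-axis"      # ordinate is zero
    if x == 0:
        return "on the y-axis"      # abscissa is zero
    if x > 0 and y > 0:
        return "first quadrant"     # region XOY
    if x < 0 and y > 0:
        return "second quadrant"    # region YOX'
    if x < 0 and y < 0:
        return "third quadrant"     # region X'OY'
    return "fourth quadrant"        # region Y'OX

print(quadrant(3, -2))   # fourth quadrant
print(quadrant(0, 5))    # on the y-axis
```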

(1) Geometrical method for probability : When the number of points in the sample space is infinite, it becomes difficult to apply the classical definition of probability. For instance, if we wish to find the probability that a point selected at random from the interval [1, 6] lies either in the interval [1, 2] or in [5, 6], we cannot apply the classical definition of probability. In this case we define the probability as follows:

\[P\{x\in A\}=\frac{\text{Measure of region }A}{\text{Measure of the sample space }S}\]

where measure stands for length, area or volume, depending upon whether S is a one-dimensional, two-dimensional or three-dimensional region.

(2) Probability distribution : Let S be a sample space. A random variable X is a function from the set S to R, the set of real numbers.

For example, the sample space for a throw of a pair of dice is

\[S=\{11,\ 12,\ \ldots ,\ 16,\ 21,\ 22,\ \ldots ,\ 26,\ \ldots ,\ 61,\ 62,\ \ldots ,\ 66\}\]

where the outcome \[ij\] denotes \[i\] on the first die and \[j\] on the second. Let X be the sum of the numbers on the dice. Then \[X(12)=3,\,X(43)=7\], etc. Also, \[\{X=7\}\] is the event \[\{61,\text{ }52,\text{ }43,\text{ }34,\text{ }25,\text{ }16\}\]. In general, if X is a random variable defined on the sample space S and r is a real number, then \[\{X=r\}\] is an event.

If the random variable X takes n distinct values \[{{x}_{1}},\,{{x}_{2}},\,....,\,{{x}_{n}}\], then \[\{X={{x}_{1}}\},\ \{X={{x}_{2}}\},\,....,\,\{X={{x}_{n}}\}\] are mutually exclusive and exhaustive events.

Now, since \[(X={{x}_{i}})\] is an event, we can talk of \[P(X={{x}_{i}})\]. If \[P(X={{x}_{i}})={{p}_{i}}\,(1\le i\le n)\], then the system of numbers

\[\left( \begin{matrix}{{x}_{1}} & {{x}_{2}} & \cdots  & {{x}_{n}}  \\{{p}_{1}} & {{p}_{2}} & \cdots  & {{p}_{n}}  \\\end{matrix} \right)\]

is said to be the probability distribution of the random variable X.

The expectation (mean) of the random variable X is defined as \[E(X)=\sum\limits_{i=1}^{n}{{{p}_{i}}{{x}_{i}}}\] and the variance of X is defined as \[\operatorname{var}(X)=\sum\limits_{i=1}^{n}{{{p}_{i}}{{({{x}_{i}}-E(X))}^{2}}}=\sum\limits_{i=1}^{n}{{{p}_{i}}x_{i}^{2}}-{{(E(X))}^{2}}\].

(3) Binomial probability distribution : A random variable X which takes the values \[0,\text{ }1,\text{ }2,\text{ }\ldots ,n\] is said to follow the binomial distribution if its probability distribution function is given by \[P(X=r)={{\,}^{n}}{{C}_{r}}{{p}^{r}}{{q}^{n-r}},\,r=0,\,1,\,2,\,.....,\,n\], where \[p,\,\,q>0\] are such that \[p+q=1\].

The notation \[X\sim B\,(n,\,\,p)\] is generally used to denote that the random variable X follows the binomial distribution with parameters n and p.

We have \[P(X=0)+P(X=1)+...+P(X=n)\] \[={{\,}^{n}}{{C}_{0}}{{p}^{0}}{{q}^{n-0}}+{{\,}^{n}}{{C}_{1}}{{p}^{1}}{{q}^{n-1}}+...+{{\,}^{n}}{{C}_{n}}{{p}^{n}}{{q}^{n-n}}={{(q+p)}^{n}}={{1}^{n}}=1\].

Now the probability of

(a) Occurrence of the event exactly r times : \[P(X=r)={{\,}^{n}}{{C}_{r}}{{q}^{n-r}}{{p}^{r}}\].

(b) Occurrence of the event at least r times : \[P(X\ge r)={{\,}^{n}}{{C}_{r}}{{q}^{n-r}}{{p}^{r}}+...+{{p}^{n}}=\sum\limits_{X=r}^{n}{^{n}{{C}_{X}}{{p}^{X}}{{q}^{n-X}}}\].

(c) Occurrence of the event at most r times : \[P(0\le X\le r)={{q}^{n}}+{{\,}^{n}}{{C}_{1}}{{q}^{n-1}}p+...+{{\,}^{n}}{{C}_{r}}{{q}^{n-r}}{{p}^{r}}=\sum\limits_{X=0}^{r}{^{n}{{C}_{X}}{{p}^{X}}{{q}^{n-X}}}\].
  • If the probability of happening of an event in one trial is p, then the probability of that event happening successively in r independent trials is \[{{p}^{r}}\].
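The definitions in (2) and (3) above are easy to verify numerically. Below is a minimal Python sketch (the helper name `binom_pmf` and the choice n = 10, p = 0.3 are purely illustrative, not from the text) that builds the distribution of the sum of two dice, computes its mean and variance, and evaluates the binomial probabilities of "exactly r", "at least r" and "at most r" successes.

```python
from math import comb
from itertools import product

# Distribution of X = sum of two dice, as in (2) above.
outcomes = list(product(range(1, 7), repeat=2))            # 36 equally likely sample points
values = sorted({i + j for i, j in outcomes})
dist = {x: sum(1 for i, j in outcomes if i + j == x) / 36 for x in values}

mean = sum(p * x for x, p in dist.items())                 # E(X) = sum p_i x_i
var = sum(p * (x - mean) ** 2 for x, p in dist.items())    # var(X) = sum p_i (x_i - E(X))^2
print(mean, var)                                           # 7.0  5.8333...

# Binomial probabilities, as in (3): X ~ B(n, p).
def binom_pmf(n, p, r):
    return comb(n, r) * p**r * (1 - p)**(n - r)            # P(X = r) = nCr p^r q^(n-r)

n, p = 10, 0.3
exactly_3 = binom_pmf(n, p, 3)                                   # exactly r = 3 successes
at_least_3 = sum(binom_pmf(n, p, r) for r in range(3, n + 1))    # at least 3 successes
at_most_3 = sum(binom_pmf(n, p, r) for r in range(0, 4))         # at most 3 successes
print(exactly_3, at_least_3, at_most_3)
```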

(1) The law of total probability : Let S be the sample space and let \[{{E}_{1}},\,{{E}_{2}},\,.....{{E}_{n}}\] be n mutually exclusive and exhaustive events associated with a random experiment. If A is any event which occurs with \[{{E}_{1}}\] or \[{{E}_{2}}\] or … or \[{{E}_{n}},\] then

\[P(A)=P({{E}_{1}})\,P(A/{{E}_{1}})+P({{E}_{2}})\,P(A/{{E}_{2}})+...+P({{E}_{n}})\,P(A/{{E}_{n}})\].

(2) Bayes' rule : Let S be a sample space and \[{{E}_{1}},\,{{E}_{2}},\,.....{{E}_{n}}\] be n mutually exclusive events such that \[\bigcup\limits_{i=1}^{n}{{{E}_{i}}}=S\] and \[P({{E}_{i}})>0\] for \[i=\text{ }1,\text{ }2,\text{ }\ldots \ldots ,n\]. We can think of the \[{{E}_{i}}'s\] as the causes that lead to the outcome of an experiment. The probabilities \[P({{E}_{i}}),\,\,i=1,\,\,2,\,\,...,\,n\] are called prior probabilities. Suppose the experiment results in an outcome of event A, where \[P(A)>0\]. We have to find the probability that the observed event A was due to cause \[{{E}_{i}},\] that is, we seek the conditional probability \[P({{E}_{i}}/A)\]. These probabilities are called posterior probabilities, and are given by Bayes' rule as \[P({{E}_{i}}/A)=\frac{P({{E}_{i}})\,P(A/{{E}_{i}})}{\sum\limits_{k=1}^{n}{P({{E}_{k}})\,P(A/{{E}_{k}})}}\].
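A small numerical sketch of both rules in Python; the three causes and their probabilities below are made up purely for illustration.

```python
# Hypothetical causes E1, E2, E3 with prior probabilities, and the
# conditional probabilities P(A/Ei) of observing event A under each cause.
priors = [0.5, 0.3, 0.2]          # P(E1), P(E2), P(E3) -- must sum to 1
likelihoods = [0.1, 0.4, 0.7]     # P(A/E1), P(A/E2), P(A/E3)

# Law of total probability: P(A) = sum of P(Ei) * P(A/Ei)
p_a = sum(pe * pae for pe, pae in zip(priors, likelihoods))

# Bayes' rule: posterior P(Ei/A) = P(Ei) * P(A/Ei) / P(A)
posteriors = [pe * pae / p_a for pe, pae in zip(priors, likelihoods)]

print(p_a)          # 0.31
print(posteriors)   # [0.161..., 0.387..., 0.451...]  -- the posteriors sum to 1
```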

Let A and B be two events associated with a random experiment. Then the probability of occurrence of A under the condition that B has already occurred, where \[P(B)\ne 0,\] is called the conditional probability of A given B, and is denoted by \[P(A/B)\].

Thus, \[P(A/B)=\] Probability of occurrence of A, given that B has already happened \[=\frac{P(A\cap B)}{P(B)}=\frac{n(A\cap B)}{n(B)}\].

Similarly, \[P(B/A)=\] Probability of occurrence of B, given that A has already happened \[=\frac{P(A\cap B)}{P(A)}=\frac{n(A\cap B)}{n(A)}\].
  • Sometimes, \[P(A/B)\] is also used to denote the probability of occurrence of A when B occurs. Similarly, \[P(B/A)\] is used to denote the probability of occurrence of B when A occurs.
(1) Multiplication theorems on probability

(i) If A and B are two events associated with a random experiment, then \[P(A\cap B)=P(A)\,.\,P(B/A)\], if \[P(A)\ne 0\], or \[P(A\cap B)=P(B)\,.\,P(A/B)\], if \[P(B)\ne 0\].

(ii) Extension of multiplication theorem : If \[{{A}_{1}},\,{{A}_{2}},\,....,\,{{A}_{n}}\] are \[n\] events related to a random experiment, then \[P({{A}_{1}}\cap {{A}_{2}}\cap {{A}_{3}}\cap ....\cap {{A}_{n}})=P({{A}_{1}})P({{A}_{2}}/{{A}_{1}})P({{A}_{3}}/{{A}_{1}}\cap {{A}_{2}})\]\[....P({{A}_{n}}/{{A}_{1}}\cap {{A}_{2}}\cap ...\cap {{A}_{n-1}})\], where \[P({{A}_{i}}/{{A}_{1}}\cap {{A}_{2}}\cap ...\cap {{A}_{i-1}})\] represents the conditional probability of the event \[{{A}_{i}}\], given that the events \[{{A}_{1}},\,{{A}_{2}},\,.....,\,{{A}_{i-1}}\] have already happened.

(iii) Multiplication theorem for independent events : If A and B are independent events associated with a random experiment, then \[P(A\cap B)=P(A)\,.\,P(B)\], i.e., the probability of simultaneous occurrence of two independent events is equal to the product of their probabilities. By the multiplication theorem, we have \[P(A\cap B)=P(A)\,.\,P(B/A)\]. Since A and B are independent events, \[P(B/A)=P(B)\]. Hence, \[P(A\cap B)=P(A)\,.\,P(B)\].

(iv) Extension of multiplication theorem for independent events : If \[{{A}_{1}},\,{{A}_{2}},\,....,\,{{A}_{n}}\] are independent events associated with a random experiment, then \[P({{A}_{1}}\cap {{A}_{2}}\cap {{A}_{3}}\cap ...\cap {{A}_{n}})=P({{A}_{1}})P({{A}_{2}})...P({{A}_{n}})\].

By the multiplication theorem, we have \[P({{A}_{1}}\cap {{A}_{2}}\cap {{A}_{3}}\cap ...\cap {{A}_{n}})=P({{A}_{1}})P({{A}_{2}}/{{A}_{1}})P({{A}_{3}}/{{A}_{1}}\cap {{A}_{2}})\]\[...P({{A}_{n}}/{{A}_{1}}\cap {{A}_{2}}\cap ...\cap {{A}_{n-1}})\].

Since \[{{A}_{1}},\,{{A}_{2}},\,....,\,{{A}_{n-1}},\,{{A}_{n}}\] are independent events, \[P({{A}_{2}}/{{A}_{1}})=P({{A}_{2}}),\,P({{A}_{3}}/{{A}_{1}}\cap {{A}_{2}})=P({{A}_{3}}),\,....,\]\[\,P({{A}_{n}}/{{A}_{1}}\cap {{A}_{2}}\cap ...\cap {{A}_{n-1}})=P({{A}_{n}})\].

Hence, \[P({{A}_{1}}\cap {{A}_{2}}\cap ...\cap {{A}_{n}})=P({{A}_{1}})P({{A}_{2}})....P({{A}_{n}})\].

(2) Probability of at least one of n independent events : If \[{{p}_{1}},\,{{p}_{2}},\,{{p}_{3}},\,........,\,{{p}_{n}}\] are the probabilities of happening of n independent events \[{{A}_{1}},\,{{A}_{2}},\,{{A}_{3}},\,........,\,{{A}_{n}}\] respectively, then

(i) Probability of happening of none of them \[=P({{\bar{A}}_{1}}\cap {{\bar{A}}_{2}}\cap {{\bar{A}}_{3}}......\cap {{\bar{A}}_{n}})=P({{\bar{A}}_{1}}).P({{\bar{A}}_{2}}).P({{\bar{A}}_{3}}).....P({{\bar{A}}_{n}})=(1-{{p}_{1}})(1-{{p}_{2}})(1-{{p}_{3}})....(1-{{p}_{n}})\].

(ii) Probability of happening of at least one of them \[=P({{A}_{1}}\cup {{A}_{2}}\cup {{A}_{3}}....\cup {{A}_{n}})=1-P({{\bar{A}}_{1}})P({{\bar{A}}_{2}})P({{\bar{A}}_{3}})....P({{\bar{A}}_{n}})=1-(1-{{p}_{1}})(1-{{p}_{2}})(1-{{p}_{3}})...(1-{{p}_{n}})\].

(iii) Probability of happening of the first event and not happening of the remaining \[=P({{A}_{1}})P({{\bar{A}}_{2}})P({{\bar{A}}_{3}}).....P({{\bar{A}}_{n}})={{p}_{1}}(1-{{p}_{2}})(1-{{p}_{3}}).......(1-{{p}_{n}})\].
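The multiplication rule for independent events and the "at least one" formulas in (2) can be checked with a short sketch; the values p1 = 0.2, p2 = 0.5, p3 = 0.4 below are arbitrary illustrative probabilities for three independent events.

```python
# Three independent events with illustrative probabilities.
p = [0.2, 0.5, 0.4]

# Multiplication theorem for independent events:
# P(A1 and A2 and A3) = P(A1) P(A2) P(A3)
p_all = p[0] * p[1] * p[2]                              # 0.04

# Probability that none of them happens: (1-p1)(1-p2)(1-p3)
p_none = (1 - p[0]) * (1 - p[1]) * (1 - p[2])           # 0.24

# Probability that at least one happens: 1 - (1-p1)(1-p2)(1-p3)
p_at_least_one = 1 - p_none                             # 0.76

# First event happens and the remaining do not: p1 (1-p2)(1-p3)
p_first_only = p[0] * (1 - p[1]) * (1 - p[2])           # 0.06

print(p_all, p_none, p_at_least_one, p_first_only)
```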

Notations : (i) \[P(A+B)\text{ or }P(A\cup B)=\] Probability of happening of A or B = Probability of happening of the events A or B or both = Probability of occurrence of at least one of the events A and B.

(ii) \[P(AB)\] or \[P(A\cap B)=\] Probability of happening of the events A and B together.

(1) When events are not mutually exclusive : If A and B are two events which are not mutually exclusive, then

\[P(A\cup B)=P(A)+P(B)-P(A\cap B)\]  or  \[P(A+B)=P(A)+P(B)-P(AB)\].

For any three events A, B, C,

\[P(A\cup B\cup C)=P(A)+P(B)+P(C)-P(A\cap B)\]\[-P(B\cap C)-P(C\cap A)+P(A\cap B\cap C)\]

or \[P(A+B+C)=P(A)+P(B)+P(C)-P(AB)-P(BC)\]\[-P(CA)+P(ABC)\].

(2) When events are mutually exclusive : If A and B are mutually exclusive events, then \[n(A\cap B)=0\] \[\Rightarrow \] \[P(A\cap B)=0\],

\[\therefore \] \[P(A\cup B)=P(A)+P(B)\].

For any three events A, B, C which are mutually exclusive,

\[P(A\cap B)=P(B\cap C)=P(C\cap A)=P(A\cap B\cap C)=0\],

\[\therefore \] \[P(A\cup B\cup C)=P(A)+P(B)+P(C)\].

The probability of happening of any one of several mutually exclusive events is equal to the sum of their probabilities, i.e. if \[{{A}_{1}},\,{{A}_{2}}.....{{A}_{n}}\] are mutually exclusive events, then

\[P({{A}_{1}}+{{A}_{2}}+...+{{A}_{n}})=P({{A}_{1}})+P({{A}_{2}})+.....+P({{A}_{n}})\], i.e. \[P(\sum{{{A}_{i}}})=\sum{P({{A}_{i}})}\].

(3) When events are independent : If A and B are independent events, then \[P(A\cap B)=P(A).P(B)\],

\[\therefore \] \[P(A\cup B)=P(A)+P(B)-P(A).P(B)\].

(4) Some other theorems

(i) Let A and B be two events associated with a random experiment. Then

(a) \[P(\bar{A}\cap B)=P(B)-P(A\cap B)\]     (b) \[P(A\cap \bar{B})=P(A)-P(A\cap B)\]

If \[B\subset A,\] then

(a) \[P(A\cap \bar{B})=P(A)-P(B)\]     (b) \[P(B)\le P(A)\]

Similarly, if \[A\subset B,\] then

(a) \[P(\bar{A}\cap B)=P(B)-P(A)\]     (b) \[P(A)\le P(B)\]
  • Probability of occurrence of neither A nor B is
\[P(\bar{A}\cap \bar{B})=P(\overline{A\cup B})=1-P(A\cup B)\]

(ii) Generalization of the addition theorem : If \[{{A}_{1}},\,{{A}_{2}},.....,\,{{A}_{n}}\] are \[n\] events associated with a random experiment, then \[P\left( \bigcup\limits_{i=1}^{n}{{{A}_{i}}} \right)=\sum\limits_{i=1}^{n}{P({{A}_{i}})}-\sum\limits_{i<j}{P({{A}_{i}}\cap {{A}_{j}})}+\sum\limits_{i<j<k}{P({{A}_{i}}\cap {{A}_{j}}\cap {{A}_{k}})}\]\[-...+{{(-1)}^{n-1}}P({{A}_{1}}\cap {{A}_{2}}\cap .....\cap {{A}_{n}})\].

If all the events \[{{A}_{i}}\ (i=1,\,2,...,\,n)\] are mutually exclusive, then \[P\left( \bigcup\limits_{i=1}^{n}{{{A}_{i}}} \right)=\sum\limits_{i=1}^{n}{P({{A}_{i}})}\], i.e. \[P({{A}_{1}}\cup {{A}_{2}}\cup ....\cup {{A}_{n}})=P({{A}_{1}})+P({{A}_{2}})+....+P({{A}_{n}})\].

(iii) Boole's inequality : If \[{{A}_{1}},\,{{A}_{2}},\,....{{A}_{n}}\] are n events associated with a random experiment, then

(a) \[P\left( \bigcap\limits_{i=1}^{n}{{{A}_{i}}} \right)\ge \sum\limits_{i=1}^{n}{P({{A}_{i}})}-(n-1)\]     (b) \[P\left( \bigcup\limits_{i=1}^{n}{{{A}_{i}}} \right)\le \sum\limits_{i=1}^{n}{P({{A}_{i}})}\]

These results can easily be established by using the principle of mathematical induction.
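As a quick numerical check of the addition theorem and of "neither A nor B", the sketch below enumerates a single throw of a die with two illustrative events (an even number, and a multiple of 3).

```python
from fractions import Fraction

S = set(range(1, 7))                    # sample space for one throw of a die
A = {2, 4, 6}                           # event: an even number shows up
B = {3, 6}                              # event: a multiple of 3 shows up

def P(E):
    """Classical probability of an event E (a subset of S)."""
    return Fraction(len(E), len(S))

lhs = P(A | B)                          # P(A or B)
rhs = P(A) + P(B) - P(A & B)            # addition theorem for two events
print(lhs, rhs, lhs == rhs)             # 2/3 2/3 True

# Neither A nor B: P(not A and not B) = 1 - P(A or B)
print(P(S - (A | B)), 1 - P(A | B))     # 1/3 1/3
```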

As a result of an experiment, if \[a\] of the outcomes are favourable to an event E and \[b\] of the outcomes are against it, then we say that the odds are a to b in favour of E, or that the odds are b to a against E.

Thus odds in favour of an event E \[=\frac{\text{Number of favourable cases}}{\text{Number of unfavourable cases}}=\frac{a}{b}=\frac{a/(a+b)}{b/(a+b)}=\frac{P(E)}{P(\bar{E})}\].

Similarly, odds against an event E \[=\frac{\text{Number of unfavourable cases}}{\text{Number of favourable cases}}=\frac{b}{a}=\frac{P(\bar{E})}{P(E)}\].
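A tiny sketch converting between odds and probability (the values a = 3, b = 2 are illustrative only).

```python
from fractions import Fraction

a, b = 3, 2                                   # a outcomes favour E, b are against it
p_e = Fraction(a, a + b)                      # P(E) = a/(a+b)
odds_in_favour = Fraction(a, b)               # a : b  =  P(E) / P(E bar)
odds_against = Fraction(b, a)                 # b : a  =  P(E bar) / P(E)

print(p_e, odds_in_favour, odds_against)      # 3/5 3/2 2/3
print(p_e / (1 - p_e) == odds_in_favour)      # True
```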

(1) Problems based on combination or selection : To solve such kind of problems, we use \[^{n}{{C}_{r}}=\frac{n\,!}{r!(n-r)!}\].     (2) Problems based on permutation or arrangement : To solve such kind of problems, we use \[^{n}{{P}_{r}}=\frac{n\,!}{(n-r)!}\].
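Python's standard math module exposes both counts directly (math.comb and math.perm, available since Python 3.8); the sketch below also recomputes them from the factorial formulas above.

```python
from math import comb, perm, factorial

n, r = 5, 2
print(comb(n, r))        # nCr = 5! / (2! * 3!) = 10
print(perm(n, r))        # nPr = 5! / 3!        = 20

# The same values from the factorial formulas in the text:
print(factorial(n) // (factorial(r) * factorial(n - r)))   # 10
print(factorial(n) // factorial(n - r))                    # 20
```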

If a random experiment results in n mutually exclusive, equally likely and exhaustive outcomes, out of which m are favourable to the occurrence of an event A, then the probability of occurrence of A is given by

\[P(A)=\frac{m}{n}=\frac{\text{Number of outcomes favourable to }A}{\text{Number of total outcomes}}\]

It is obvious that \[0\le m\le n\]. If an event A is certain to happen, then \[m=n,\] and thus \[P(A)=1\]. If A is impossible to happen, then \[m=0\] and so \[P(A)=0\]. Hence we conclude that \[0\le P(A)\le 1\].

Further, if \[\bar{A}\] denotes the negation of A, i.e. the event that A does not happen, then for the above m and n we have \[P(\bar{A})=\frac{n-m}{n}=1-\frac{m}{n}=1-P(A)\], \[\therefore \] \[P(A)+P(\bar{A})=1\].

Notations : For two events A and B,

(i) \[A'\] or \[\bar{A}\] or \[{{A}^{C}}\] stands for the non-occurrence or negation of A.

(ii) \[A\cup B\] stands for the occurrence of at least one of A and B.

(iii) \[A\cap B\] stands for the simultaneous occurrence of A and B.

(iv) \[A'\cap B'\] stands for the non-occurrence of both A and B.

(v) \[A\subseteq B\] stands for "the occurrence of A implies the occurrence of B".
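A minimal sketch of the classical definition, for the illustrative event "an even number shows up on one throw of a die".

```python
from fractions import Fraction

S = range(1, 7)                                   # n = 6 equally likely outcomes
favourable = [x for x in S if x % 2 == 0]         # m = 3 outcomes favour the event A

p_A = Fraction(len(favourable), len(S))           # P(A) = m/n = 1/2
p_not_A = 1 - p_A                                 # P(A bar) = 1 - P(A)
print(p_A, p_not_A, p_A + p_not_A)                # 1/2 1/2 1
```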

(1) Sample space : The set of all possible outcomes of a trial (random experiment) is called its sample space. It is generally denoted by S, and each outcome of the trial is said to be a sample point.

(2) Event : An event is a subset of a sample space.

(i) Simple event : An event containing only a single sample point is called an elementary or simple event.

(ii) Compound events : Events obtained by combining together two or more elementary events are known as compound events or decomposable events.

(iii) Equally likely events : Events are equally likely if there is no reason for one event to occur in preference to any other event.

(iv) Mutually exclusive or disjoint events : Events are said to be mutually exclusive or disjoint or incompatible if the occurrence of any one of them prevents the occurrence of all the others.

(v) Mutually non-exclusive events : Events which are not mutually exclusive are known as compatible or mutually non-exclusive events.

(vi) Independent events : Events are said to be independent if the happening (or non-happening) of one event is not affected by the happening (or non-happening) of the others.

(vii) Dependent events : Two or more events are said to be dependent if the happening of one event affects (partially or totally) the happening of the others.

(3) Exhaustive number of cases : The total number of possible outcomes of a random experiment in a trial is known as the exhaustive number of cases.

(4) Favourable number of cases : The number of cases favourable to an event in a trial is the total number of elementary events such that the occurrence of any one of them ensures the happening of the event.

(5) Mutually exclusive and exhaustive system of events : Let S be the sample space associated with a random experiment. Let \[{{A}_{1}},{{A}_{2}},\text{ }\ldots ..,{{A}_{n}}\] be subsets of S such that

(i) \[{{A}_{i}}\cap {{A}_{j}}=\varphi \] for \[i\ne j\], and (ii) \[{{A}_{1}}\cup {{A}_{2}}\cup ....\cup {{A}_{n}}=S\].

Then the collection of events \[{{A}_{1}},\,{{A}_{2}},\,.....,\,{{A}_{n}}\] is said to form a mutually exclusive and exhaustive system of events.

If \[{{E}_{1}},\,{{E}_{2}},\,.....,\,{{E}_{n}}\] are the elementary events associated with a random experiment, then

(i) \[{{E}_{i}}\cap {{E}_{j}}=\varphi \] for \[i\ne j\], and (ii) \[{{E}_{1}}\cup {{E}_{2}}\cup ....\cup {{E}_{n}}=S\].

So, the collection of elementary events associated with a random experiment always forms a mutually exclusive and exhaustive system of events. In this system, \[P({{A}_{1}}\cup {{A}_{2}}.......\cup {{A}_{n}})=P({{A}_{1}})+P({{A}_{2}})+.....+P({{A}_{n}})=1\].
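As an illustration of (5), the six elementary events of one throw of a die form a mutually exclusive and exhaustive system; the sketch below checks both conditions and that the probabilities add up to 1.

```python
from fractions import Fraction
from itertools import combinations

S = {1, 2, 3, 4, 5, 6}                        # sample space for one die
events = [{x} for x in S]                     # elementary events E1, ..., E6

pairwise_disjoint = all(a & b == set() for a, b in combinations(events, 2))
exhaustive = set().union(*events) == S
total = sum(Fraction(len(E), len(S)) for E in events)

print(pairwise_disjoint, exhaustive, total)   # True True 1
```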

