Current Affairs JEE Main & Advanced

(1) Linearly independent vectors : A set of non-zero vectors \[{{\mathbf{a}}_{1}},\,{{\mathbf{a}}_{2}},.....{{\mathbf{a}}_{n}}\] is said to be linearly independent, if \[{{x}_{1}}{{\mathbf{a}}_{1}}+{{x}_{2}}{{\mathbf{a}}_{2}}+.....+{{x}_{n}}{{\mathbf{a}}_{n}}=\mathbf{0}\Rightarrow {{x}_{1}}={{x}_{2}}=.....={{x}_{n}}=0\].

(2) Linearly dependent vectors : A set of vectors \[{{\mathbf{a}}_{1}},\,{{\mathbf{a}}_{2}},.....{{\mathbf{a}}_{n}}\] is said to be linearly dependent if there exist scalars \[{{x}_{1}},\,{{x}_{2}},......,{{x}_{n}}\], not all zero, such that \[{{x}_{1}}{{\mathbf{a}}_{1}}+{{x}_{2}}{{\mathbf{a}}_{2}}+.....+{{x}_{n}}{{\mathbf{a}}_{n}}=\mathbf{0}\].

Three vectors \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}+{{a}_{3}}\mathbf{k}\], \[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}+{{b}_{3}}\mathbf{k}\] and \[\mathbf{c}={{c}_{1}}\mathbf{i}+{{c}_{2}}\mathbf{j}+{{c}_{3}}\mathbf{k}\] will be linearly dependent iff \[\left| \,\begin{matrix} {{a}_{1}} & {{a}_{2}} & {{a}_{3}}  \\ {{b}_{1}} & {{b}_{2}} & {{b}_{3}}  \\ {{c}_{1}} & {{c}_{2}} & {{c}_{3}}  \\ \end{matrix}\, \right|\,\,=\,\,0\].

Properties of linearly independent and dependent vectors

(i) Two non-zero, non-collinear vectors are linearly independent.

(ii) Any two collinear vectors are linearly dependent.

(iii) Any three non-coplanar vectors are linearly independent.

(iv) Any three coplanar vectors are linearly dependent.

(v) Any four vectors in 3-dimensional space are linearly dependent.
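The determinant test above is easy to check numerically. The sketch below is illustrative only; the helper names `det3` and `linearly_dependent` are not from the text, and vectors are modelled as plain 3-tuples of components.

```python
def det3(a, b, c):
    """3x3 determinant with rows a, b, c (cofactor expansion along the first row)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def linearly_dependent(a, b, c, tol=1e-9):
    """Three vectors are linearly dependent iff the determinant of their components vanishes."""
    return abs(det3(a, b, c)) < tol

# Here c = a + 2b is a linear combination, so the three are dependent:
a = (1.0, 2.0, 3.0)
b = (0.0, 1.0, 1.0)
c = (1.0, 4.0, 5.0)
print(linearly_dependent(a, b, c))   # True

# i, j, k are non-coplanar, hence linearly independent:
print(linearly_dependent((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False
```

This mirrors property (iv): dependence of three vectors is the same as coplanarity, which the vanishing determinant detects.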

A vector \[\mathbf{r}\] is said to be a linear combination of vectors \[\mathbf{a},\,\mathbf{b},\,\mathbf{c}.....\] etc., if there exist scalars \[x,y,z\] etc., such that \[\mathbf{r}=x\mathbf{a}+y\mathbf{b}+z\mathbf{c}+....\]

Examples : Vectors \[{{\mathbf{r}}_{\text{1}}}=2\mathbf{a}+\mathbf{b}+3\mathbf{c},\,{{\mathbf{r}}_{2}}=\mathbf{a}+3\mathbf{b}+\sqrt{2}\mathbf{c}\] are linear combinations of the vectors \[\mathbf{a},\,\mathbf{b},\,\mathbf{c}\].

(1) Collinear and non-collinear vectors : Let \[\mathbf{a}\] and \[\mathbf{b}\] be two collinear vectors and let \[\mathbf{\hat{x}}\] be the unit vector in the direction of \[\mathbf{a}\]. Then the unit vector in the direction of \[\mathbf{b}\] is \[\mathbf{\hat{x}}\] or \[-\mathbf{\hat{x}}\] according as \[\mathbf{a}\] and \[\mathbf{b}\] are like or unlike parallel vectors. Now, \[\mathbf{a}=|\mathbf{a}|\mathbf{\hat{x}}\] and \[\mathbf{b}=\pm |\mathbf{b}|\mathbf{\hat{x}}\].

\[\therefore \] \[\mathbf{a}=|\mathbf{a}|\left( \pm \frac{\mathbf{b}}{|\mathbf{b}|} \right)\] \[\Rightarrow \] \[\mathbf{a}=\left( \pm \frac{|\mathbf{a}|}{|\mathbf{b}|} \right)\,\mathbf{b}\] \[\Rightarrow \] \[\mathbf{a}=\lambda \mathbf{b}\], where \[\lambda =\pm \frac{|\mathbf{a}|}{|\mathbf{b}|}\]. Thus, if \[\mathbf{a},\,\mathbf{b}\] are collinear vectors, then \[\mathbf{a}=\lambda \,\mathbf{b}\] or \[\mathbf{b}=\lambda \,\mathbf{a}\] for some scalar \[\lambda \].

(2) Relation between two parallel vectors

(i) If \[\mathbf{a}\] and \[\mathbf{b}\] are two parallel vectors, then there exists a scalar \[k\] such that \[\mathbf{a}=k\,\mathbf{b}\], i.e., there exist two non-zero scalars \[x\] and \[y\] such that \[x\,\mathbf{a}+y\,\mathbf{b}=\mathbf{0}\].

If \[\mathbf{a}\] and \[\mathbf{b}\] are two non-zero, non-parallel vectors, then \[x\mathbf{a}+y\mathbf{b}=\mathbf{0}\] \[\Rightarrow \] \[x=0\] and \[y=0\].
Obviously \[x\mathbf{a}+y\mathbf{b}=\mathbf{0}\] \[\Rightarrow \] \[\left\{ \begin{matrix} \mathbf{a}=\mathbf{0},\,\mathbf{b}=\mathbf{0}  \\ \text{or}  \\ x=\text{0,}\,y=\text{0}  \\ \text{or}  \\ \mathbf{a}||\mathbf{b}  \\ \end{matrix} \right.\]

(ii) If \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}+{{a}_{3}}\mathbf{k}\] and \[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}+{{b}_{3}}\mathbf{k}\], then from the property of parallel vectors, we have \[\mathbf{a}||\mathbf{b}\Rightarrow \frac{{{a}_{1}}}{{{b}_{1}}}=\frac{{{a}_{2}}}{{{b}_{2}}}=\frac{{{a}_{3}}}{{{b}_{3}}}\].

(3) Test of collinearity of three points : Three points with position vectors \[\mathbf{a},\,\mathbf{b,}\,\mathbf{c}\] are collinear iff there exist scalars \[x,y,z\], not all zero, such that \[x\mathbf{a}+y\mathbf{b}+z\mathbf{c}=\mathbf{0}\], where \[x+y+z=0\]. If \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}\], \[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}\] and \[\mathbf{c}={{c}_{1}}\mathbf{i}+{{c}_{2}}\mathbf{j}\], then the points with position vectors \[\mathbf{a},\,\mathbf{b},\,\mathbf{c}\] will be collinear iff \[\left| \,\begin{matrix} {{a}_{1}} & {{a}_{2}} & 1  \\ {{b}_{1}} & {{b}_{2}} & 1  \\ {{c}_{1}} & {{c}_{2}} & 1  \\ \end{matrix}\, \right|\,=0\].

(4) Test of coplanarity of three vectors : Let \[\mathbf{a}\] and \[\mathbf{b}\] be two given non-zero, non-collinear vectors. Then any vector \[\mathbf{r}\] coplanar with \[\mathbf{a}\] and \[\mathbf{b}\] can be uniquely expressed as \[\mathbf{r}=x\mathbf{a}+y\mathbf{b}\] for some scalars \[x\] and \[y\].

(5) Test of coplanarity of four points : Four points with position vectors \[\mathbf{a},\,\mathbf{b},\,\mathbf{c},\,\mathbf{d}\] are coplanar iff there exist scalars \[x,y,z,u\], not all zero, such that \[x\mathbf{a}+y\mathbf{b}+z\mathbf{c}+u\mathbf{d}=\mathbf{0}\], where \[x+y+z+u=0\].
Four points with position vectors   \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}+{{a}_{3}}\mathbf{k}\],\[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}+{{b}_{3}}\mathbf{k}\], \[\mathbf{c}={{c}_{1}}\mathbf{i}+{{c}_{2}}\mathbf{j}+{{c}_{3}}\mathbf{k}\], \[\mathbf{d}={{d}_{1}}\mathbf{i}+{{d}_{2}}\mathbf{j}+{{d}_{3}}\mathbf{k}\] will be coplanar, iff \[\left| \,\begin{matrix} {{a}_{1}} & {{a}_{2}} & {{a}_{3}} & 1  \\ {{b}_{1}} & {{b}_{2}} & {{b}_{3}} & 1  \\ {{c}_{1}} & {{c}_{2}} & {{c}_{3}} & 1  \\ {{d}_{1}} & {{d}_{2}} & {{d}_{3}} & 1  \\ \end{matrix}\, \right|\,=0\].
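A hedged numerical sketch of this coplanarity test: subtracting the first row of the 4×4 determinant from the others and expanding along the column of 1s reduces it (up to sign) to the 3×3 determinant of the edge vectors b−a, c−a, d−a, which is what the code below evaluates. The helper names `det3` and `coplanar` are illustrative, not from the text.

```python
def det3(r1, r2, r3):
    """3x3 determinant with rows r1, r2, r3."""
    return (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
            - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
            + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))

def coplanar(a, b, c, d, tol=1e-9):
    """Row-reducing the 4x4 determinant with its column of 1s leaves
    det[b-a; c-a; d-a], i.e. the scalar triple product of the edges."""
    sub = lambda u, v: tuple(ui - vi for ui, vi in zip(u, v))
    return abs(det3(sub(b, a), sub(c, a), sub(d, a))) < tol

# Four points in the plane z = 0 are coplanar:
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 3, 0)))   # True
# The origin and the three unit points span 3-space, so they are not:
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))   # False
```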

If a point \[O\] is fixed as the origin in space (or in a plane) and \[P\] is any point, then \[\overrightarrow{OP}\] is called the position vector of \[P\] with respect to \[O\].

If we say that \[P\] is the point \[\mathbf{r}\], then we mean that the position vector of \[P\] is \[\mathbf{r}\] with respect to some origin \[O\].

(1) \[\overrightarrow{AB}\] in terms of the position vectors of points A and B : If \[\mathbf{a}\] and \[\mathbf{b}\] are the position vectors of points A and B respectively, then \[\overrightarrow{OA}=\mathbf{a},\,\overrightarrow{OB}=\mathbf{b}\]

\[\therefore \] \[\overrightarrow{AB}\] = (Position vector of B) – (Position vector of A) \[=\overrightarrow{OB}-\overrightarrow{OA}=\mathbf{b}-\mathbf{a}\]

(2) Position vector of a dividing point : The position vectors of the points dividing the line \[AB\] in the ratio \[m:n\] internally or externally are \[\frac{m\mathbf{b}+n\mathbf{a}}{m+n}\] or \[\frac{m\mathbf{b}-n\mathbf{a}}{m-n}\] respectively.
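The section formula above can be sketched numerically. `section_point` is an illustrative name; points are plain tuples of components.

```python
def section_point(a, b, m, n, internal=True):
    """Position vector of the point dividing AB in the ratio m : n,
    internally (m*b + n*a)/(m + n) or externally (m*b - n*a)/(m - n)."""
    if internal:
        return tuple((m * bi + n * ai) / (m + n) for ai, bi in zip(a, b))
    return tuple((m * bi - n * ai) / (m - n) for ai, bi in zip(a, b))

a, b = (0.0, 0.0, 0.0), (4.0, 8.0, 0.0)
# Internal division 1 : 3 lands a quarter of the way from A to B:
print(section_point(a, b, 1, 3))                    # (1.0, 2.0, 0.0)
# External division 3 : 1 lands beyond B:
print(section_point(a, b, 3, 1, internal=False))    # (6.0, 12.0, 0.0)
```

With m = n the internal formula reduces to the midpoint \((\mathbf{a}+\mathbf{b})/2\), a quick sanity check.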

(1) Addition of vectors

(i) Triangle law of addition : If in a \[\Delta ABC\], \[\overrightarrow{AB}=\mathbf{a}\], \[\overrightarrow{BC}=\mathbf{b}\] and \[\overrightarrow{AC}=\mathbf{c}\], then \[\overrightarrow{AB}+\overrightarrow{BC}=\overrightarrow{AC}\] i.e., \[\mathbf{a}+\mathbf{b}=\mathbf{c}\].

(ii) Parallelogram law of addition : If in a parallelogram \[OACB\], \[\overrightarrow{OA}=\mathbf{a},\,\overrightarrow{OB}=\mathbf{b}\] and \[\overrightarrow{OC}\,=\mathbf{c}\], then \[\overrightarrow{OA}+\overrightarrow{OB}=\overrightarrow{OC}\] i.e., \[\mathbf{a}+\mathbf{b}=\mathbf{c}\], where OC is a diagonal of the parallelogram OACB.

(iii) Addition in component form : If the vectors are defined in terms of \[\mathbf{i,}\,\,\,\mathbf{j}\] and \[\mathbf{k}\], i.e., if \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}+{{a}_{3}}\mathbf{k}\] and \[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}+{{b}_{3}}\mathbf{k}\], then their sum is defined as \[\mathbf{a}+\mathbf{b}=({{a}_{1}}+{{b}_{1}})\mathbf{i}+({{a}_{2}}+{{b}_{2}})\mathbf{j}+({{a}_{3}}+{{b}_{3}})\mathbf{k}\].

Properties of vector addition : Vector addition has the following properties.

(a) Binary operation : The sum of two vectors is always a vector.

(b) Commutativity : For any two vectors \[\mathbf{a}\] and \[\mathbf{b}\], \[\mathbf{a}+\mathbf{b}=\mathbf{b}+\mathbf{a}\].

(c) Associativity : For any three vectors \[\mathbf{a},\,\mathbf{b}\] and \[\mathbf{c}\], \[\mathbf{a}+(\mathbf{b}+\mathbf{c})=(\mathbf{a}+\mathbf{b})+\mathbf{c}\].

(d) Identity : The zero vector is the identity for addition. For any vector \[\mathbf{a}\], \[\mathbf{0}+\mathbf{a}=\mathbf{a}=\mathbf{a}+\mathbf{0}\].

(e) Additive inverse : For every vector \[\mathbf{a}\], its negative vector \[-\mathbf{a}\] exists such that \[\mathbf{a}+(-\mathbf{a})=(-\mathbf{a})+\mathbf{a}=\mathbf{0}\] i.e., \[(-\mathbf{a})\] is the additive inverse of the vector \[\mathbf{a}\].
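Componentwise addition and its properties can be verified directly on tuples. This is a minimal sketch; `add` is an illustrative helper name.

```python
def add(u, v):
    """Componentwise sum: (a1+b1)i + (a2+b2)j + (a3+b3)k."""
    return tuple(ui + vi for ui, vi in zip(u, v))

a, b, c = (1, 2, 3), (4, -1, 0), (2, 2, 2)
assert add(a, b) == add(b, a)                  # commutativity
assert add(a, add(b, c)) == add(add(a, b), c)  # associativity
assert add(a, (0, 0, 0)) == a                  # zero vector is the identity
assert add(a, (-1, -2, -3)) == (0, 0, 0)       # additive inverse
print(add(a, b))   # (5, 1, 3)
```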
(2) Subtraction of vectors : If \[\mathbf{a}\] and \[\mathbf{b}\] are two vectors, then their difference \[\mathbf{a}-\mathbf{b}\] is defined as \[\mathbf{a}-\mathbf{b}=\mathbf{a}+(-\mathbf{b})\], where \[-\mathbf{b}\] is the negative of \[\mathbf{b}\], having magnitude equal to that of \[\mathbf{b}\] and direction opposite to \[\mathbf{b}\]. If \[\mathbf{a}={{a}_{1}}\mathbf{i}+{{a}_{2}}\mathbf{j}+{{a}_{3}}\mathbf{k}\] and \[\mathbf{b}={{b}_{1}}\mathbf{i}+{{b}_{2}}\mathbf{j}+{{b}_{3}}\mathbf{k}\], then \[\mathbf{a}-\mathbf{b}=({{a}_{1}}-{{b}_{1}})\mathbf{i}+({{a}_{2}}-{{b}_{2}})\mathbf{j}+({{a}_{3}}-{{b}_{3}})\mathbf{k}\].

Properties of vector subtraction

(i) \[\mathbf{a}-\mathbf{b}\ne \mathbf{b}-\mathbf{a}\]

(ii) \[(\mathbf{a}-\mathbf{b})-\mathbf{c}\ne \mathbf{a}-(\mathbf{b}-\mathbf{c})\]

(iii) Since any one side of a triangle is less than the sum and greater than the difference of the other two sides, for any two vectors \[\mathbf{a}\] and \[\mathbf{b}\] we have

(a) \[|\mathbf{a}+\mathbf{b}|\,\le \,|\mathbf{a}|+|\mathbf{b}|\]

(b) \[|\mathbf{a}+\mathbf{b}|\,\ge \,|\mathbf{a}|-|\mathbf{b}|\]

(c) \[|\mathbf{a}-\mathbf{b}|\,\le \,|\mathbf{a}|+|\mathbf{b}|\]

(d) \[|\mathbf{a}-\mathbf{b}|\,\ge \,|\mathbf{a}|\,-\,|\mathbf{b}|\]

(3) Multiplication of a vector by a scalar : If \[\mathbf{a}\] is a vector and \[m\] is a scalar (i.e., a real number), then \[m\mathbf{a}\] is a vector whose magnitude is \[|m|\] times that of \[\mathbf{a}\] and whose direction is the same as that of \[\mathbf{a}\] if \[m\] is positive, and opposite to that of \[\mathbf{a}\] if \[m\] is negative.

Properties of multiplication of vectors by a scalar : The following properties hold for vectors \[\mathbf{a},\,\mathbf{b}\] and scalars \[m,\,\,n\].
(i) \[m(-\mathbf{a})=(-m)\,\mathbf{a}=-(m\mathbf{a})\]

(ii) \[(-m)\,(-\mathbf{a})=m\mathbf{a}\]

(iii) \[m\,(n\mathbf{a})=(mn)\,\mathbf{a}=n(m\mathbf{a})\]

(iv) \[(m+n)\,\mathbf{a}=m\mathbf{a}+n\mathbf{a}\]

(v) \[m\,(\mathbf{a}+\mathbf{b})=m\mathbf{a}+m\mathbf{b}\]

(4) Resultant of two forces

Let \[\overrightarrow{P}\] and \[\overrightarrow{Q}\] be two forces and \[\overrightarrow{R}\] be the resultant of these two forces. Then \[\overrightarrow{R}\,=\overrightarrow{P}+\overrightarrow{Q}\] and \[|\overrightarrow{R}|=R=\sqrt{{{P}^{2}}+{{Q}^{2}}+2PQ\,\cos \theta }\], where \[|\overrightarrow{P}|=P,\,|\overrightarrow{Q}|=Q\] and \[\theta \] is the angle between \[\overrightarrow{P}\] and \[\overrightarrow{Q}\].

Also, \[\tan \alpha =\frac{Q\sin \theta }{P+Q\cos \theta }\], where \[\alpha \] is the angle which \[\overrightarrow{R}\] makes with \[\overrightarrow{P}\].

Deduction : When \[|\overrightarrow{P}|=|\overrightarrow{Q}|\], i.e., \[P=Q\], we get \[R=\sqrt{2{{P}^{2}}(1+\cos \theta )}=2P\cos \frac{\theta }{2}\] and \[\tan \alpha =\frac{P\sin \theta }{P(1+\cos \theta )}=\tan \frac{\theta }{2}\], so \[\alpha =\frac{\theta }{2}\], i.e., the resultant of two equal forces bisects the angle between them.
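The resultant formula can be cross-checked against plain component addition: place P along the x-axis and Q at angle θ to it. A minimal sketch; `resultant` is an illustrative name.

```python
import math

def resultant(P, Q, theta):
    """Magnitude of P + Q when the angle between the forces is theta (radians)."""
    return math.sqrt(P * P + Q * Q + 2 * P * Q * math.cos(theta))

P, Q, theta = 3.0, 5.0, math.pi / 3
# Componentwise check: P along x, Q at angle theta to it.
rx, ry = P + Q * math.cos(theta), Q * math.sin(theta)
assert abs(resultant(P, Q, theta) - math.hypot(rx, ry)) < 1e-12
# Direction of R relative to P: tan(alpha) = Q sin(theta) / (P + Q cos(theta)).
alpha = math.atan2(Q * math.sin(theta), P + Q * math.cos(theta))
assert abs(math.tan(alpha) - ry / rx) < 1e-12
# Equal forces: R = 2P cos(theta/2), the deduction above.
assert abs(resultant(P, P, theta) - 2 * P * math.cos(theta / 2)) < 1e-12
print(resultant(P, Q, theta))   # 7.0
```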

(1) Zero or null vector : A vector whose magnitude is zero is called a zero or null vector and is represented by \[\mathbf{0}\].

(2) Unit vector : A vector whose modulus is unity is called a unit vector. The unit vector in the direction of a vector \[\mathbf{a}\] is denoted by \[\mathbf{\hat{a}}\], read as "a cap". Thus, \[|\mathbf{\hat{a}}|\ =1\] and \[\mathbf{\hat{a}}=\frac{\mathbf{a}}{|\mathbf{a}|}=\frac{\text{Vector }\mathbf{a}}{\text{Magnitude of }\mathbf{a}}\].

(3) Like and unlike vectors : Vectors are said to be like when they have the same sense of direction and unlike when they have opposite directions.

(4) Collinear or parallel vectors : Vectors having the same or parallel supports are called collinear or parallel vectors.

(5) Co-initial vectors : Vectors having the same initial point are called co-initial vectors.

(6) Coplanar vectors : A system of vectors is said to be coplanar if their supports are parallel to the same plane.

Two vectors having the same initial point are always coplanar, but three or more such vectors may or may not be coplanar.

(7) Coterminous vectors : Vectors having the same terminal point are called coterminous vectors.

(8) Negative of a vector : The vector which has the same magnitude as the vector \[\mathbf{a}\] but opposite direction is called the negative of \[\mathbf{a}\] and is denoted by \[-\mathbf{a}\]. Thus, if \[\overrightarrow{PQ}=\mathbf{a}\], then \[\overrightarrow{QP}=-\mathbf{a}\].

(9) Reciprocal of a vector : A vector having the same direction as that of a given vector \[\mathbf{a}\] but magnitude equal to the reciprocal of the magnitude of the given vector is known as the reciprocal of \[\mathbf{a}\] and is denoted by \[{{\mathbf{a}}^{-1}}\]. Thus, if \[|\mathbf{a}|\,=a\], then \[|{{\mathbf{a}}^{-1}}|\,=\frac{1}{a}\].
(10) Localized and free vectors : A vector which is drawn parallel to a given vector through a specified point in space is called a localized vector. For example, a force acting on a rigid body is a localized vector, as its effect depends on the line of action of the force. If the value of a vector depends only on its length and direction and is independent of its position in space, it is called a free vector.

(11) Position vectors : The vector \[\overrightarrow{OA}\] which represents the position of the point A with respect to a fixed point O (called the origin) is called the position vector of the point A. If \[(x,y,z)\] are the co-ordinates of the point A, then \[\overrightarrow{OA}=x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\].

(12) Equality of vectors : Two vectors \[\mathbf{a}\] and \[\mathbf{b}\] are said to be equal if (i) \[|\mathbf{a}|=|\mathbf{b}|\], (ii) they have the same or parallel supports, and (iii) they have the same sense.

Geometrically, a vector is represented by a directed line segment. For example, \[\mathbf{a}=\overrightarrow{AB}\]. Here A is called the initial point and B the terminal point or tip. The magnitude or modulus of \[\mathbf{a}\] is expressed as \[|\mathbf{a}|=|\overrightarrow{AB}|=AB\].

Vectors constitute one of the most important mathematical systems, used to handle certain types of problems in geometry, mechanics and other branches of applied mathematics, physics and engineering.

Scalar and vector quantities : Quantities which have only magnitude and are not related to any fixed direction in space are called scalar quantities, or briefly scalars. Examples: mass, volume, density, work, temperature etc. Quantities which have both magnitude and direction are called vectors. Displacement, velocity, acceleration, momentum, weight and force are examples of vector quantities.

An asymptote to a curve is a straight line, at a finite distance from the origin, to which the tangent to the curve tends as the point of contact goes to infinity.

The equations of the two asymptotes of the hyperbola \[\frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}=1\] are \[\frac{x}{a}\pm \frac{y}{b}=0\], i.e., \[y=\pm \frac{b}{a}x\].

Some important points about asymptotes

(i) The combined equation of the asymptotes of the hyperbola \[\frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}=1\] is \[\frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}=0\].

(ii) When \[b=a\], the asymptotes of the rectangular hyperbola \[{{x}^{2}}-{{y}^{2}}={{a}^{2}}\] are \[y=\pm x\], which are at right angles.

(iii) A hyperbola and its conjugate hyperbola have the same asymptotes.

(iv) The equation of the hyperbola differs from the equation of the pair of asymptotes by the same constant by which the pair of asymptotes differs from the conjugate hyperbola, i.e., Hyperbola – Asymptotes = Asymptotes – Conjugate hyperbola, or \[\left( \frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}-1 \right)-\left( \frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}} \right)=\left( \frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}} \right)-\left( \frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}+1 \right)\].

(v) The asymptotes pass through the centre of the hyperbola.

(vi) The bisectors of the angles between the asymptotes are the coordinate axes.

(vii) The angle between the asymptotes of the hyperbola \[S=0\], i.e., \[\frac{{{x}^{2}}}{{{a}^{2}}}-\frac{{{y}^{2}}}{{{b}^{2}}}=1\], is \[2{{\tan }^{-1}}\frac{b}{a}\] or \[2{{\sec }^{-1}}e\].

(viii) Asymptotes are equally inclined to the axes of the hyperbola.
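The two expressions in (vii) for the angle between the asymptotes agree because \[\tan ({{\sec }^{-1}}e)=\sqrt{{{e}^{2}}-1}=b/a\]. A hedged numerical check (sample values of a and b chosen for illustration):

```python
import math

a, b = 3.0, 2.0
e = math.sqrt(1 + (b * b) / (a * a))        # eccentricity of x^2/a^2 - y^2/b^2 = 1
# Angle between the asymptotes y = (b/a)x and y = -(b/a)x:
angle = 2 * math.atan(b / a)
# sec^{-1}(e) = arccos(1/e), so the two formulas should coincide:
assert abs(angle - 2 * math.acos(1 / e)) < 1e-12
# Rectangular case b = a: asymptotes y = +/- x are perpendicular.
assert abs(2 * math.atan(1.0) - math.pi / 2) < 1e-12
print(math.degrees(angle))
```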

(1) Definition : A hyperbola whose asymptotes are at right angles to each other is called a rectangular hyperbola. The eccentricity of a rectangular hyperbola is always \[\sqrt{2}\].

The general equation of second degree represents a rectangular hyperbola if \[\Delta \ne 0,\,\,{{h}^{2}}>ab\] and coefficient of \[{{x}^{2}}+\] coefficient of \[{{y}^{2}}=0\].

(2) Parametric co-ordinates of a point on the hyperbola \[\mathbf{xy=}{{\mathbf{c}}^{\mathbf{2}}}\] : If \[t\] is a non-zero variable, the coordinates of any point on the rectangular hyperbola \[xy={{c}^{2}}\] can be written as \[(ct,c/t)\]. The point \[(ct,c/t)\] on the hyperbola \[xy={{c}^{2}}\] is generally referred to as the point \['t'\].

For the rectangular hyperbola \[{{x}^{2}}-{{y}^{2}}={{a}^{2}}\], the coordinates of the foci are \[(\pm a\sqrt{2},\,0)\] and the directrices are \[x=\pm \frac{a}{\sqrt{2}}\].

For the rectangular hyperbola \[xy={{c}^{2}}\], the coordinates of the foci are \[(c\sqrt{2},\,c\sqrt{2})\] and \[(-c\sqrt{2},\,-c\sqrt{2})\], and the directrices are \[x+y=\pm c\sqrt{2}\].

(3) Equation of the chord joining points \[{{\mathbf{t}}_{\mathbf{1}}}\] and \[{{\mathbf{t}}_{\mathbf{2}}}\] : The equation of the chord joining two points \[\left( c{{t}_{1}},\frac{c}{{{t}_{1}}} \right)\,\text{and}\,\left( c{{t}_{2}},\frac{c}{{{t}_{2}}} \right)\] on the hyperbola \[xy={{c}^{2}}\] is \[y-\frac{c}{{{t}_{1}}}=\frac{\frac{c}{{{t}_{2}}}-\frac{c}{{{t}_{1}}}}{c{{t}_{2}}-c{{t}_{1}}}(x-c{{t}_{1}})\] \[\Rightarrow x+y\,{{t}_{1}}{{t}_{2}}=c\,({{t}_{1}}+{{t}_{2}})\].

(4) Equation of tangent in different forms

(i) Point form : The equation of the tangent at \[({{x}_{1}},{{y}_{1}})\] to the hyperbola \[xy={{c}^{2}}\] is \[x{{y}_{1}}+y{{x}_{1}}=2{{c}^{2}}\] or \[\frac{x}{{{x}_{1}}}+\frac{y}{{{y}_{_{1}}}}=2\].
(ii) Parametric form : The equation of the tangent at \[\left( ct,\frac{c}{t} \right)\] to the hyperbola \[xy={{c}^{2}}\] is \[\frac{x}{t}+yt=2c\]. On replacing \[{{x}_{1}}\] by \[ct\] and \[{{y}_{1}}\] by \[\frac{c}{t}\] in the equation of the tangent at \[({{x}_{1}},{{y}_{1}})\], i.e., \[x{{y}_{1}}+y{{x}_{1}}=2{{c}^{2}}\], we get \[\frac{x}{t}+yt=2c\].

The point of intersection of the tangents at \['{{t}_{1}}'\] and \['{{t}_{2}}'\] is \[\left( \frac{2c{{t}_{1}}{{t}_{2}}}{{{t}_{1}}+{{t}_{2}}},\,\frac{2c}{{{t}_{1}}+{{t}_{2}}} \right)\].

(5) Equation of the normal in different forms

(i) Point form : The equation of the normal at \[({{x}_{1}},{{y}_{1}})\] to the hyperbola \[xy={{c}^{2}}\] is \[x{{x}_{1}}-y{{y}_{1}}=x_{1}^{2}-y_{1}^{2}\].

(ii) Parametric form : The equation of the normal at \[\left( ct,\frac{c}{t} \right)\] to the hyperbola \[xy={{c}^{2}}\] is \[x{{t}^{3}}-yt-c{{t}^{4}}+c=0\]. On replacing \[{{x}_{1}}\] by \[ct\] and \[{{y}_{1}}\] by \[c/t\] in the equation \[x{{x}_{1}}-y{{y}_{1}}=x_{1}^{2}-y_{1}^{2}\], we obtain \[xct-\frac{yc}{t}={{c}^{2}}{{t}^{2}}-\frac{{{c}^{2}}}{{{t}^{2}}}\Rightarrow x{{t}^{3}}-yt-c{{t}^{4}}+c=0\].

This equation is of fourth degree in \[t\]. So, in general, four normals can be drawn from a point to the hyperbola \[xy={{c}^{2}}\]. The point of intersection of the normals at \[{{t}_{1}}\] and \[{{t}_{2}}\] is

\[\left( \frac{c\,\{{{t}_{1}}{{t}_{2}}(t_{1}^{2}+{{t}_{1}}{{t}_{2}}+t_{2}^{2})+1\}}{{{t}_{1}}{{t}_{2}}({{t}_{1}}+{{t}_{2}})},\,\,\frac{c\,\{t_{1}^{3}t_{2}^{3}+(t_{1}^{2}+{{t}_{1}}{{t}_{2}}+t_{2}^{2})\}}{{{t}_{1}}{{t}_{2}}({{t}_{1}}+{{t}_{2}})} \right)\].
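Printed sources vary in the signs of this closed form for the intersection of two normals, so a direct computation is worth doing: intersect the normals at \[{{t}_{1}}\] and \[{{t}_{2}}\] as lines \[y={{t}^{2}}x-c{{t}^{3}}+c/t\] and compare. The chord equation is checked the same way. A hedged sketch with illustrative sample values:

```python
c, t1, t2 = 1.0, 1.0, 2.0

# Chord joining t1 and t2: x + y*t1*t2 = c*(t1 + t2) passes through both points.
for t in (t1, t2):
    assert abs(c * t + (c / t) * t1 * t2 - c * (t1 + t2)) < 1e-12

# Normal at parameter t, written as y = f(x); its slope is t^2.
def normal(t):
    return lambda x: t * t * x - c * t ** 3 + c / t

# Solve normal(t1)(x) = normal(t2)(x) for the intersection directly...
x = (c * (t1 ** 3 - t2 ** 3) + c * (t1 - t2) / (t1 * t2)) / (t1 * t1 - t2 * t2)
y = normal(t1)(x)

# ...and compare against the closed form (note the +1 and the t2^2 term).
S = t1 * t1 + t1 * t2 + t2 * t2
xf = c * (t1 * t2 * S + 1) / (t1 * t2 * (t1 + t2))
yf = c * (t1 ** 3 * t2 ** 3 + S) / (t1 * t2 * (t1 + t2))
assert abs(x - xf) < 1e-12 and abs(y - yf) < 1e-12
print(x, y)   # 2.5 2.5
```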

Let \[\varphi (x)\] be the primitive or anti-derivative of a function \[f(x)\] defined on \[[a,\,\,b]\], i.e., \[\frac{d}{dx}[\varphi (x)]=f(x)\]. Then the definite integral of \[f(x)\] over \[[a,\,\,b]\] is denoted by \[\int_{a}^{b}{f(x)dx}\] and is defined as \[[\varphi (b)-\varphi (a)]\], i.e., \[\int_{a}^{b}{f(x)dx=\varphi (b)-\varphi (a)}\]. This is also called the Newton-Leibniz formula.

The numbers \[a\] and \[b\] are called the limits of integration: \['a'\] is called the lower limit and \['b'\] the upper limit. The interval \[[a,\,\,b]\] is called the interval of integration, also known as the range of integration. Every definite integral has a unique value.
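The Newton-Leibniz formula can be illustrated by evaluating \[\int_{0}^{1}{{{x}^{2}}dx}\] two ways: via the anti-derivative \[\varphi (x)={{x}^{3}}/3\], and via a crude Riemann sum. The helper name `definite_integral` is illustrative.

```python
def definite_integral(antiderivative, a, b):
    """Newton-Leibniz: integral of f over [a, b] equals phi(b) - phi(a)."""
    return antiderivative(b) - antiderivative(a)

# f(x) = x^2 has anti-derivative phi(x) = x^3 / 3:
val = definite_integral(lambda x: x ** 3 / 3, 0.0, 1.0)
assert abs(val - 1 / 3) < 1e-12

# Cross-check against a left-endpoint Riemann sum with 100000 strips:
n = 100000
riemann = sum((i / n) ** 2 for i in range(n)) / n
assert abs(val - riemann) < 1e-4
print(val)   # 0.3333333333333333
```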

