**Chapter 1: Relations and Functions**

❖

**Relations**

A relation *R* from a set *A* to a set *B* is a subset of *A* × *B* obtained by describing a relationship between the first element *a* and the second element *b* of the ordered pairs in *A* × *B*. That is, *R* ⊆ {(*a*, *b*) ∈ *A* × *B*: *a* ∈ *A*, *b* ∈ *B*}

• **Domain of a relation**

The domain of a relation *R* from set *A* to set *B* is the set of all first elements of the ordered pairs in *R*.

• **Range of a relation**

The range of a relation *R* from set *A* to set *B* is the set of all second elements of the ordered pairs in *R*. The whole set *B* is called the co-domain of *R*. Note that Range ⊆ Co-domain.

❖

**Types of relations**

• **Empty relations**

A relation *R* in a set *A* is called an empty relation, if no element of *A* is related to any element of *A*. In this case, *R* = *ϕ* ⊂ *A* × *A*.

**Example:** Consider the relation *R* in the set *A* = {3, 4, 5} given by *R* = {(*a*, *b*): *a*^{b} < 25, where *a*, *b* ∈ *A*}. No pair (*a*, *b*) satisfies this condition, since the smallest possible power is 3^{3} = 27. Therefore, *R* is an empty relation.

• **Universal relations**

A relation *R* in a set *A* is called a universal relation, if each element of *A* is related to every element of *A*. In this case, *R* = *A* × *A*.

**Example:** Consider the relation *R* in the set *A* = {1, 3, 5, 7, 9} given by *R* = {(*a*, *b*): *a* + *b* is an even number}. Since every element of *A* is odd, the sum *a* + *b* is always even, so every pair (*a*, *b*) satisfies the defining condition. Therefore, *R* is a universal relation.

• **Note:** Both the empty relation and the universal relation are called trivial relations.

• **Reflexive relations**

A relation *R* in a set *A* is called reflexive, if (*a*, *a*) ∈ *R* for every *a* ∈ *A*.

**Example:** Consider the relation *R* in the set *A* = {2, 3, 4} given by *R* = {(*a*, *b*): *a*^{b} = 4, 27 or 256}. Here, *R* = {(2, 2), (3, 3), (4, 4)}. Since each element of *A* is related to itself (2 is related to 2, 3 to 3, and 4 to 4), *R* is a reflexive relation.

• **Symmetric relations**

A relation *R* in a set *A* is called symmetric, if (*a*_{1}, *a*_{2}) ∈ *R* ⇒ (*a*_{2}, *a*_{1}) ∈ *R*, ∀ *a*_{1}, *a*_{2} ∈ *A*.

**Example:** Consider the relation *R* in the set *A* of natural numbers given by *R* = {(*a*, *b*): 2 ≤ *ab* < 20}. If (*a*, *b*) ∈ *R*, then (*b*, *a*) ∈ *R* as well, since 2 ≤ *ba* < 20 [for natural numbers *a* and *b*, *ab* = *ba*]. Therefore, the relation *R* is symmetric.

• **Transitive relations**

A relation *R* in a set *A* is called transitive, if (*a*_{1}, *a*_{2}) ∈ *R* and (*a*_{2}, *a*_{3}) ∈ *R* ⇒ (*a*_{1}, *a*_{3}) ∈ *R* for all *a*_{1}, *a*_{2}, *a*_{3} ∈ *A*.

**Example:** Consider the relation *R* in the set of all subsets of a universal set *U* given by *R* = {(*A*, *B*): *A* is a subset of *B*}. Now, if *A*, *B*, and *C* are three sets such that *A* ⊂ *B* and *B* ⊂ *C*, then we also have *A* ⊂ *C*. Therefore, the relation *R* is a transitive relation.

• **Equivalence relations**

A relation *R* in a set *A* is said to be an equivalence relation, if *R* is reflexive, symmetric, and transitive.
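For a relation on a small finite set, the three defining properties can be checked by direct enumeration. A minimal sketch, using a hypothetical relation (congruence modulo 2 on A = {1, 2, 3, 4}):

```python
# Check the defining properties of a relation on a finite set.
# Hypothetical example: a R b iff a and b have the same parity.
A = {1, 2, 3, 4}
R = {(a, b) for a in A for b in A if (a - b) % 2 == 0}

reflexive = all((a, a) in R for a in A)
symmetric = all((b, a) in R for (a, b) in R)
transitive = all((a, d) in R
                 for (a, b) in R for (c, d) in R if b == c)

print(reflexive, symmetric, transitive)  # True True True
```

All three checks pass, so this relation is an equivalence relation; its equivalence classes are {1, 3} and {2, 4}.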

❖

**Equivalence classes**

Given an arbitrary equivalence relation *R* in an arbitrary set *X*, *R* divides *X* into mutually disjoint subsets *A*_{i}, called partitions or subdivisions of *X*, satisfying:

• All elements of *A*_{i} are related to each other, for all *i*.

• No element of *A*_{i} is related to any element of *A*_{j}, *i* ≠ *j*.

• ⋃ *A*_{i} = *X* and *A*_{i} ∩ *A*_{j} = *ϕ*, *i* ≠ *j*.

The subsets *A*_{i} are called equivalence classes.

❖

**Functions**

A function *f* from a set *X* to a set *Y* is a specific type of relation in which every element *x* of *X* has one and only one image *y* in *Y*. We write the function *f* as *f*: *X* → *Y*, where *f*(*x*) = *y*.

❖

**Types of functions**

• **One-one or injective and many-one functions**

A function *f*: *X* → *Y* is said to be one-one or injective, if the images of distinct elements of *X* under *f* are distinct. In other words, if *x*_{1}, *x*_{2} ∈ *X* and *f*(*x*_{1}) = *f*(*x*_{2}), then *x*_{1} = *x*_{2}.

If the function *f* is not one-one, then *f* is called a many-one function.

The one-one and many-one functions can be illustrated by the following figures:

• **Onto (surjective) function**

A function *f*: *X* → *Y* is said to be an onto (surjective) function, if for every *y* ∈ *Y*, there exists *x* ∈ *X* such that *f*(*x*) = *y*.

The onto and many-one functions can be illustrated by the following figures:

• **One-one and onto (bijective) functions**

A function *f*: *X* → *Y* is said to be bijective, if it is both one-one and onto. A bijective function can be illustrated by the following figure:

❖

**Composite function**

Let *f*: *A* → *B* and *g*: *B* → *C* be two functions. The composition of *f* and *g*, i.e. *gof*, is defined as a function from *A* to *C* given by *gof*(*x*) = *g*(*f*(*x*)), ∀ *x* ∈ *A*.

❖

**Inverse of function**

• A function *f*: *X* → *Y* is said to be **invertible**, if there exists a function *g*: *Y* → *X* such that *gof* = I_{X} and *fog* = I_{Y}. In this case, *g* is called the inverse of *f* and is written as *g* = *f*^{–1}.

• A function *f* is invertible, if and only if *f* is bijective.
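On finite sets, invertibility can be checked directly from the definitions *gof* = I_{X} and *fog* = I_{Y}. A minimal sketch with a hypothetical bijection *f*:

```python
# Hypothetical bijection f: X -> Y with X = {1, 2, 3}, Y = {'a', 'b', 'c'},
# stored as a dict; its inverse g is obtained by swapping keys and values.
f = {1: 'a', 2: 'b', 3: 'c'}
g = {v: k for k, v in f.items()}   # candidate inverse g = f^(-1)

gof_is_identity = all(g[f[x]] == x for x in f)   # gof = identity on X
fog_is_identity = all(f[g[y]] == y for y in g)   # fog = identity on Y
print(gof_is_identity and fog_is_identity)  # True
```

Had *f* not been one-one, the dict inversion would have merged keys and one of the two identity checks would fail.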

❖

**Binary operations**

A binary operation * on a set *A* is a function *: *A* × *A* → *A*.

• An operation * on a set *A* is commutative, if *a* * *b* = *b* * *a* ∀ *a*, *b* ∈ *A*.

• An operation * on a set *A* is associative, if (*a* * *b*) * *c* = *a* * (*b* * *c*) ∀ *a*, *b*, *c* ∈ *A*.

• **Identity element**

An element *e* ∈ *A* is the identity element for the binary operation *: *A* × *A* → *A*, if *a* * *e* = *a* = *e* * *a* ∀ *a* ∈ *A*.

• **Inverse of an element**

An element *a* ∈ *A* is invertible for the binary operation *: *A* × *A* → *A*, if there exists *b* ∈ *A* such that *a* * *b* = *e* = *b* * *a*, where *e* is the identity for *. The element *b* is called the inverse of *a* and is denoted by *a*^{–1}.
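All four notions above can be verified by enumeration for a hypothetical binary operation, addition modulo 6 on A = {0, 1, 2, 3, 4, 5}:

```python
# Addition modulo 6 on A = {0, ..., 5}: commutative, associative,
# with identity e = 0 and inverse of a equal to (6 - a) % 6.
A = range(6)
op = lambda a, b: (a + b) % 6

commutative = all(op(a, b) == op(b, a) for a in A for b in A)
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in A for b in A for c in A)
e = 0
has_identity = all(op(a, e) == a == op(e, a) for a in A)
inverses = {a: (6 - a) % 6 for a in A}
has_inverses = all(op(a, inverses[a]) == e == op(inverses[a], a) for a in A)
print(commutative, associative, has_identity, has_inverses)
```

All four flags come out true, so (A, *) has the full structure described above.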

**Chapter 2: Inverse Trigonometric Functions**

❖ If sin *y* = *x*, then *y* = sin^{–1} *x* (we read it as sine inverse *x*).

Here, sin^{–1} *x* is an inverse trigonometric function. Similarly, the other inverse trigonometric functions are as follows:

• If cos *y* = *x*, then *y* = cos^{–1} *x*

• If tan *y* = *x*, then *y* = tan^{–1} *x*

• If cot *y* = *x*, then *y* = cot^{–1} *x*

• If sec *y* = *x*, then *y* = sec^{–1} *x*

• If cosec *y* = *x*, then *y* = cosec^{–1} *x*

❖ The domains and ranges (principal value branches) of the inverse trigonometric functions are listed in the following table:

| Function | Domain | Range (principal value branch) |
| --- | --- | --- |
| y = sin^{–1}x | [–1, 1] | $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ |
| y = cos^{–1}x | [–1, 1] | [0, π] |
| y = tan^{–1}x | R | $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ |
| y = cot^{–1}x | R | (0, π) |
| y = sec^{–1}x | R – (–1, 1) | $\left[0,\pi \right]-\left\{\frac{\pi}{2}\right\}$ |
| y = cosec^{–1}x | R – (–1, 1) | $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]-\left\{0\right\}$ |

Note that *y* = tan^{–1} *x* does not mean *y* = (tan *x*)^{–1}. The same holds for the other inverse trigonometric functions.

❖ The principal value of an inverse trigonometric function is the value that lies in its principal value branch.

❖

**Graphs of the six inverse trigonometric functions**

❖

**Properties of inverse trigonometric functions**

• The relation sin *y* = *x* ⇒ *y* = sin^{–1} *x* gives sin (sin^{–1} *x*) = *x*, where –1 ≤ *x* ≤ 1; and sin^{–1} (sin *x*) = *x*, where *x* ∈ $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$

Similarly,

• cos (cos^{–1} *x*) = *x*, –1 ≤ *x* ≤ 1 and cos^{–1} (cos *x*) = *x*, *x* ∈ [0, π]

• tan (tan^{–1} *x*) = *x*, *x* ∈ **R** and tan^{–1} (tan *x*) = *x*, *x* ∈ $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$

• cosec (cosec^{–1} *x*) = *x*, *x* ∈ **R** – (–1, 1) and cosec^{–1} (cosec *x*) = *x*, *x* ∈ $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]-\left\{0\right\}$

• sec (sec^{–1} *x*) = *x*, *x* ∈ **R** – (–1, 1) and sec^{–1} (sec *x*) = *x*, *x* ∈ [0, π] – $\left\{\frac{\pi}{2}\right\}$

• cot (cot^{–1} *x*) = *x*, *x* ∈ **R** and cot^{–1} (cot *x*) = *x*, *x* ∈ (0, π)

❖ For suitable values of the domains, we have

• sin^{–1} $\left(\frac{1}{x}\right)$ = cosec^{–1} *x*, *x* ≥ 1 or *x* ≤ –1

• cos^{–1} $\left(\frac{1}{x}\right)$ = sec^{–1} *x*, *x* ≥ 1 or *x* ≤ –1

• tan^{–1} $\left(\frac{1}{x}\right)$ = cot^{–1} *x*, *x* > 0

• cosec^{–1} $\left(\frac{1}{x}\right)$ = sin^{–1} *x*, *x* ∈ [–1, 1] – {0}

• sec^{–1} $\left(\frac{1}{x}\right)$ = cos^{–1} *x*, *x* ∈ [–1, 1] – {0}

• cot^{–1} $\left(\frac{1}{x}\right)$ = tan^{–1} *x*, *x* > 0

❖ For suitable values of the domains, we have

• sin^{–1} (–*x*) = –sin^{–1} *x*, *x* ∈ [–1, 1]

• cos^{–1} (–*x*) = π – cos^{–1} *x*, *x* ∈ [–1, 1]

• tan^{–1} (–*x*) = –tan^{–1} *x*, *x* ∈ **R**

• cosec^{–1} (–*x*) = –cosec^{–1} *x*, $\left|x\right|$ ≥ 1

• sec^{–1} (–*x*) = π – sec^{–1} *x*, $\left|x\right|$ ≥ 1

• cot^{–1} (–*x*) = π – cot^{–1} *x*, *x* ∈ **R**

❖ For suitable values of the domains, we have

• sin^{–1} *x* + cos^{–1} *x* = $\frac{\pi}{2}$, *x* ∈ [–1, 1]

• tan^{–1} *x* + cot^{–1} *x* = $\frac{\pi}{2}$, *x* ∈ **R**

• sec^{–1} *x* + cosec^{–1} *x* = $\frac{\pi}{2}$, $\left|x\right|$ ≥ 1
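These complementary-angle identities can be spot-checked numerically with Python's `math.asin` and `math.acos` (the sample points below are arbitrary):

```python
import math

# sin^-1 x + cos^-1 x = pi/2 on [-1, 1]
err1 = max(abs(math.asin(x) + math.acos(x) - math.pi / 2)
           for x in (-1.0, -0.5, 0.0, 0.5, 1.0))

# sec^-1 x = acos(1/x) and cosec^-1 x = asin(1/x) for |x| >= 1,
# so sec^-1 x + cosec^-1 x = pi/2 on that domain as well.
err2 = max(abs(math.acos(1 / x) + math.asin(1 / x) - math.pi / 2)
           for x in (1.0, 2.0, -3.0))

print(err1, err2)  # both errors are at floating-point noise level
```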

❖ For suitable values of the domains, we have

• tan^{–1} *x* + tan^{–1} *y* = tan^{–1} $\frac{x+y}{1-xy}$, *xy* < 1

• tan^{–1} *x* – tan^{–1} *y* = tan^{–1} $\frac{x-y}{1+xy}$, *xy* > –1

❖ For *x* ∈ [–1, 1], we have 2 tan^{–1} *x* = sin^{–1} $\frac{2x}{1+{x}^{2}}$

❖ For *x* ∈ (–1, 1), we have 2 tan^{–1} *x* = tan^{–1} $\frac{2x}{1-{x}^{2}}$

❖ For *x* ≥ 0, we have 2 tan^{–1} *x* = cos^{–1} $\frac{1-{x}^{2}}{1+{x}^{2}}$

**Chapter 3: Matrices**

❖ A matrix is an ordered rectangular array of numbers or functions. The numbers or functions are called the elements or the entries of the matrix.

For example: $\left[\begin{array}{ccc}-10& \mathrm{sin}x& \mathrm{log}x\\ {e}^{x}& 2& -9\end{array}\right]$is a matrix having 6 elements. In this matrix, number of rows = 2 and number of columns = 3

❖

**Order of a matrix**

A matrix having *m* rows and *n* columns is called a matrix of order *m* × *n*. Such a matrix has *mn* elements. A matrix *A* of order *m* × *n* can be written as:

$A={\left[\begin{array}{ccccccc}{a}_{11}& {a}_{12}& {a}_{13}& ...& {a}_{1j}& ...& {a}_{1n}\\ {a}_{21}& {a}_{22}& {a}_{23}& ...& {a}_{2j}& ...& {a}_{2n}\\ .& & & & & & \\ .& & & & & & \\ .& & & & & & \\ {a}_{i1}& {a}_{i2}& {a}_{i3}& ...& {a}_{ij}& ...& {a}_{in}\\ .& & & & & & \\ .& & & & & & \\ .& & & & & & \\ {a}_{m1}& {a}_{m2}& {a}_{m3}& ...& {a}_{mj}& ...& {a}_{mn}\end{array}\right]}_{m\times n}$

The above matrix *A* can be written as ${\left[{a}_{ij}\right]}_{m\times n}$, where 1 ≤ *i* ≤ *m*, 1 ≤ *j* ≤ *n*, and *i*, *j* ∈ **N**.

For example: The order of the matrix $\left[\begin{array}{cc}\mathrm{sin}x& \mathrm{cos}x\\ -1& 1+\mathrm{sin}x\\ 0& \mathrm{cos}x\end{array}\right]$ is 3 × 2.

❖

**Types of matrices**

•

**Row matrix**

A matrix *A* is said to be a row matrix, if it has only one row. In general, $A={\left[{a}_{ij}\right]}_{1\times n}$ is a row matrix of order 1 × *n*.

**Example:**$\left[\begin{array}{ccccc}-9& 6& 5& e& \mathrm{sin}x\end{array}\right]$ is a row matrix of order 1 × 5.

•

**Column matrix**

A matrix *B* is said to be a column matrix, if it has only one column. In general, $B={\left[{b}_{ij}\right]}_{m\times 1}$ is a column matrix of order *m* × 1.

**Example:**$B=\left[\begin{array}{c}-6\\ 19\\ 13\end{array}\right]$ is a column matrix of order 3 × 1.

•

**Square matrix**

A matrix *C* is said to be a square matrix, if its numbers of rows and columns are equal. In general, $C={\left[{b}_{ij}\right]}_{m\times n}$ is a square matrix, if *m* = *n*.

**Example:**$C=\left[\begin{array}{cc}-1& 9\\ 5& 1\end{array}\right]$ is a square matrix.

•

**Diagonal matrix**

A square matrix *A* is said to be a diagonal matrix, if all its non-diagonal elements are zero. In general, $A={\left[{a}_{ij}\right]}_{n\times n}$ is a diagonal matrix, if ${a}_{ij}=0$ for *i* ≠ *j*.

**Example:**$\left[\begin{array}{cccc}1& 0& 0& 0\\ 0& 2& 0& 0\\ 0& 0& -1& 0\\ 0& 0& 0& 3\end{array}\right]$ is a diagonal matrix.

•

**Scalar matrix**

A diagonal matrix is said to be a scalar matrix, if its diagonal elements are equal. In general, $A={\left[{a}_{ij}\right]}_{n\times n}$ is a scalar matrix, if ${a}_{ij}=0$ for *i* ≠ *j* and ${a}_{ij}=k$ for *i* = *j*, where *k* is some constant.

**Example:**$\left[\begin{array}{ccc}\mathrm{sin}2x& 0& 0\\ 0& \mathrm{sin}2x& 0\\ 0& 0& \mathrm{sin}2x\end{array}\right]$ is a scalar matrix.

•

**Identity matrix**

A square matrix in which all the diagonal elements are equal to 1 and the rest are all zero is called an identity matrix. It is denoted by *I*. In general, $I={\left[{a}_{ij}\right]}_{n\times n}$ is an identity matrix, if ${a}_{ij}=0$ for *i* ≠ *j* and ${a}_{ij}=1$ for *i* = *j*.

**Example:**$I=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]$ is an identity matrix.

•

**Zero matrix**

If all the elements of a matrix are zero, then it is called a zero matrix. It is denoted by *O*.

**Example:**$O=\left[\begin{array}{cc}0& 0\\ 0& 0\\ 0& 0\end{array}\right]$ is a zero matrix.

❖

**Equality of matrices**

Two matrices $A=\left[{a}_{ij}\right]$ and $B=\left[{b}_{ij}\right]$ are said to be equal, if they are of the same order and ${a}_{ij}={b}_{ij}$ for all possible values of *i* and *j*.

❖

**Addition of matrices**

• Two matrices $A=\left[{a}_{ij}\right]$ and $B=\left[{b}_{ij}\right]$ can be added, if they are of the same order.

• The sum of two matrices *A* and *B* of the same order *m* × *n* is defined as the matrix $C={\left[{c}_{ij}\right]}_{m\times n},$ where ${c}_{ij}={a}_{ij}+{b}_{ij}$ for all possible values of *i* and *j*.

❖

**Multiplication of a matrix by a scalar**

The multiplication of a matrix *A* of order *m* × *n* by a scalar *k* is defined as

$kA=k{\left[{a}_{ij}\right]}_{m\times n}={\left[k\left({a}_{ij}\right)\right]}_{m\times n}$

❖

**Difference of matrices**

• The **negative of a matrix** *B* is denoted by –*B* and is defined as (–1)*B*.

• The difference of two matrices *A* and *B* is defined, if and only if they are of the same order, as *A* – *B* = *A* + (–1)*B*.

❖

**Properties of matrix addition**

If *A*, *B*, and *C* are three matrices of the same order, then matrix addition satisfies the following properties:

• Commutative law: *A* + *B* = *B* + *A*

• Associative law: *A* + (*B* + *C*) = (*A* + *B*) + *C*

• Existence of additive identity: For every matrix *A*, there exists a matrix *O* such that *A* + *O* = *O* + *A* = *A*. Here, *O* is called the additive identity for matrix addition.

• Existence of additive inverse: For every matrix *A*, there exists a matrix (–*A*) such that *A* + (–*A*) = (–*A*) + *A* = *O*. Here, (–*A*) is called the additive inverse or the negative of *A*.

❖

**Properties of scalar multiplication of a matrix**

If *A* and *B* are matrices of the same order and *k* and *l* are scalars, then

• *k*(*A* + *B*) = *kA* + *kB*

• (*k* + *l*)*A* = *kA* + *lA*

❖

**Multiplication of matrices**

The product of two matrices *A* and *B* is defined, if the number of columns of *A* is equal to the number of rows of *B*.

If $A={\left[{a}_{ij}\right]}_{m\times n}$ and $B={\left[{b}_{ij}\right]}_{n\times p}$ are two matrices, then their product is defined as $AB=C={\left[{c}_{ik}\right]}_{m\times p}$, where ${c}_{ik}=\sum _{j=1}^{n}{a}_{ij}{b}_{jk}$

**Example:** If $A=\left[\begin{array}{ccc}2& -3& 7\\ 0& 1& -9\end{array}\right]$ and $B=\left[\begin{array}{cc}-5& 9\\ 7& 2\\ 0& 1\end{array}\right]$, then find *AB*.

**Solution:**

$AB=\left[\begin{array}{ccc}2& -3& 7\\ 0& 1& -9\end{array}\right]\times \left[\begin{array}{cc}-5& 9\\ 7& 2\\ 0& 1\end{array}\right]\phantom{\rule{0ex}{0ex}}=\left[\begin{array}{cc}2\times \left(-5\right)+\left(-3\right)\times 7+7\times 0& 2\times 9+\left(-3\right)\times 2+7\times 1\\ 0\times \left(-5\right)+1\times 7+\left(-9\right)\times 0& 0\times 9+1\times 2+\left(-9\right)\times 1\end{array}\right]\phantom{\rule{0ex}{0ex}}=\left[\begin{array}{cc}-31& 19\\ 7& -7\end{array}\right]$
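The entry-by-entry rule ${c}_{ik}=\sum _{j}{a}_{ij}{b}_{jk}$ used in this example can be reproduced with plain nested lists:

```python
# Recompute the product AB from the worked example:
# c_ik = sum over j of a_ij * b_jk.
A = [[2, -3, 7],
     [0, 1, -9]]
B = [[-5, 9],
     [7, 2],
     [0, 1]]

AB = [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(2)]
      for i in range(2)]
print(AB)  # [[-31, 19], [7, -7]]
```

Note the shapes: a 2 × 3 matrix times a 3 × 2 matrix yields a 2 × 2 matrix, matching the definition above.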

❖

**Properties of multiplication of matrices**

If *A*, *B*, and *C* are any three matrices, then matrix multiplication satisfies the following properties:

• Associative law: (*AB*)*C* = *A*(*BC*)

• Distributive law: *A*(*B* + *C*) = *AB* + *AC* and (*A* + *B*)*C* = *AC* + *BC*, whenever both sides of the equality are defined.

• Existence of multiplicative identity: For every square matrix *A*, there exists an identity matrix *I* of the same order such that *IA* = *AI* = *A*. Here, *I* is called the multiplicative identity.

• Multiplication of matrices is not commutative in general. There are many cases where the product *AB* of two matrices *A* and *B* is defined, but the product *BA* need not be defined.

❖

**Transpose of a matrix**

If *A* is an *m* × *n* matrix, then the matrix obtained by interchanging its rows and columns is called the transpose of *A*, denoted by *A*′ or *A*^{T}. In other words, if $A={\left[{a}_{ij}\right]}_{m\times n}$, then $A\prime ={\left[{a}_{ji}\right]}_{n\times m}$

**Example:** The transpose of the matrix $\left[\begin{array}{ccc}2& 8& -3\\ 1& 11& 9\end{array}\right]$ is $\left[\begin{array}{cc}2& 1\\ 8& 11\\ -3& 9\end{array}\right].$

❖

**Properties of transpose of matrices**

• (*A*′)′ = *A*

• (*kA*)′ = *kA*′, where *k* is a constant

• (*A* + *B*)′ = *A*′ + *B*′

• (*AB*)′ = *B*′*A*′

❖

**Symmetric and skew symmetric matrices**

• If *A* is a square matrix such that *A*′ = *A*, then *A* is called a symmetric matrix.

• If *A* is a square matrix such that *A*′ = –*A*, then *A* is called a skew symmetric matrix.

• For any square matrix *A* with real entries, *A* + *A*′ is a symmetric matrix and *A* – *A*′ is a skew symmetric matrix.

• Every square matrix can be expressed as the sum of a symmetric matrix and a skew symmetric matrix: any square matrix *A* can be written as *P* + *Q*, where $P=\frac{1}{2}\left(A+A\text{'}\right)$ and $Q=\frac{1}{2}\left(A-A\text{'}\right)$
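The decomposition A = P + Q can be verified on a small hypothetical matrix; `Fraction` keeps the halves exact:

```python
from fractions import Fraction

# Express A as P + Q, with P = (A + A')/2 symmetric and
# Q = (A - A')/2 skew symmetric.
A = [[1, 2], [4, 3]]
n = len(A)
T = [[A[j][i] for j in range(n)] for i in range(n)]          # transpose A'
P = [[Fraction(A[i][j] + T[i][j], 2) for j in range(n)] for i in range(n)]
Q = [[Fraction(A[i][j] - T[i][j], 2) for j in range(n)] for i in range(n)]

assert all(P[i][j] == P[j][i] for i in range(n) for j in range(n))   # P' = P
assert all(Q[i][j] == -Q[j][i] for i in range(n) for j in range(n))  # Q' = -Q
assert all(P[i][j] + Q[i][j] == A[i][j] for i in range(n) for j in range(n))
print(P, Q)  # P = [[1, 3], [3, 3]], Q = [[0, -1], [1, 0]]
```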

❖

**Elementary operations or transformations on a matrix**

The elementary operations or transformations on a matrix are as follows:

• *R*_{i} ↔ *R*_{j} or *C*_{i} ↔ *C*_{j}

• *R*_{i} → *kR*_{i} or *C*_{i} → *kC*_{i}

• *R*_{i} → *R*_{i} + *kR*_{j} or *C*_{i} → *C*_{i} + *kC*_{j}

**Example:** By applying *R*_{1} → *R*_{1} – 7*R*_{3} to the matrix $\left[\begin{array}{ccc}-9& 5& 8\\ 5& 6& 11\\ 2& -1& 0\end{array}\right]$, we obtain $\left[\begin{array}{ccc}-23& 12& 8\\ 5& 6& 11\\ 2& -1& 0\end{array}\right].$

❖

**Inverse of a matrix**

• If *A* and *B* are square matrices of the same order such that *AB* = *BA* = *I*, then *B* is called the inverse of *A* and *A* is called the inverse of *B*, i.e., *A*^{–1} = *B* and *B*^{–1} = *A*.

• If *A* and *B* are invertible matrices of the same order, then (*AB*)^{–1} = *B*^{–1}*A*^{–1}.

• Inverse of a square matrix, if it exists, is unique.

• If the inverse of a matrix exists, then it can be calculated either by using elementary row operations or by using elementary column operations.

**Chapter 4: Determinants**

❖

**Determinants of Matrices:**

- Determinant of a square matrix A is denoted by |A| or det (A).
- Determinant of a matrix $A=\left[a\right]$ of order one is |A| = *a*.
- Determinant of a matrix $A=\left[\begin{array}{cc}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]$ is given by $\left|A\right|={a}_{11}{a}_{22}-{a}_{12}{a}_{21}$.
- Determinant of a matrix $A=\left[\begin{array}{ccc}{a}_{11}& {a}_{12}& {a}_{13}\\ {a}_{21}& {a}_{22}& {a}_{23}\\ {a}_{31}& {a}_{32}& {a}_{33}\end{array}\right]$ is given by (expanding along R1):

$\left|A\right|={a}_{11}\left|\begin{array}{cc}{a}_{22}& {a}_{23}\\ {a}_{32}& {a}_{33}\end{array}\right|-{a}_{12}\left|\begin{array}{cc}{a}_{21}& {a}_{23}\\ {a}_{31}& {a}_{33}\end{array}\right|+{a}_{13}\left|\begin{array}{cc}{a}_{21}& {a}_{22}\\ {a}_{31}& {a}_{32}\end{array}\right|$

Similarly, we can find the determinant of A by expanding along any other row or along any column.

❖ **Properties of Determinants:**

- If the rows and the columns of a square matrix are interchanged, then the value of the determinant remains unchanged. This property is the same as saying that if A is a square matrix, then $\left|A\prime \right|=\left|A\right|$.

- If we interchange any two rows (or columns), then the sign of the determinant changes.

- If any two rows or any two columns of a determinant are identical or proportional, then the value of the determinant is zero.

- If each element of a row or a column of a determinant is multiplied by a constant α, then the value of the determinant gets multiplied by α.

**Example:**

$\left|\begin{array}{ccc}{a}_{1}& {b}_{1}& {c}_{1}\\ \alpha {a}_{2}& \alpha {b}_{2}& \alpha {c}_{2}\\ {a}_{3}& {b}_{3}& {c}_{3}\end{array}\right|=\alpha \left|\begin{array}{ccc}{a}_{1}& {b}_{1}& {c}_{1}\\ {a}_{2}& {b}_{2}& {c}_{2}\\ {a}_{3}& {b}_{3}& {c}_{3}\end{array}\right|$

- If some or all elements of a row or a column of a determinant can be expressed as the sum of two (or more) terms, then the determinant can be expressed as the sum of two (or more) determinants.

- If, to each element of any row or column of a determinant, the equimultiples of the corresponding elements of another row (or column) are added, then the value of the determinant remains unchanged.

❖ **Area of Triangle:**

- Area of a triangle with vertices (*x*_{1}, *y*_{1}), (*x*_{2}, *y*_{2}), and (*x*_{3}, *y*_{3}) is given by

$\Delta =\frac{1}{2}\left|\begin{array}{ccc}{x}_{1}& {y}_{1}& 1\\ {x}_{2}& {y}_{2}& 1\\ {x}_{3}& {y}_{3}& 1\end{array}\right|$

Since area is always positive, we take the absolute value of the above determinant.
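Expanding the 3 × 3 determinant gives ½ |x₁(y₂ − y₃) − y₁(x₂ − x₃) + (x₂y₃ − x₃y₂)|, which the following sketch implements for a hypothetical right triangle with legs 4 and 3:

```python
from fractions import Fraction

# Half the absolute value of the 3x3 determinant, expanded along row 1.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return Fraction(abs(det), 2)

print(triangle_area((0, 0), (4, 0), (0, 3)))  # 6
```

Collinear points give a zero determinant, so the formula also doubles as a collinearity test.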

❖ **Minors and Cofactors:**

**Minors**

- The minor of an element *a*_{ij} of a determinant is the determinant obtained by deleting the *i*^{th} row and *j*^{th} column in which the element *a*_{ij} lies. The minor of *a*_{ij} is denoted by *M*_{ij}.
- The minor of an element of a determinant of order *n* (*n* ≥ 2) is a determinant of order (*n* − 1).

**Example:** The minor of the element *a*_{22} in the determinant $\left|\begin{array}{ccc}10& 2& -8\\ 11& 21& 6\\ 3& 9& 5\end{array}\right|$ is given by ${M}_{22}=\left|\begin{array}{cc}10& -8\\ 3& 5\end{array}\right|=\left(10\times 5\right)-\left(-8\times 3\right)=50+24=74$.

**Cofactors**

- The cofactor of an element *a*_{ij}, denoted by *A*_{ij}, is defined by *A*_{ij} = (− 1)^{i+j} *M*_{ij}, where *M*_{ij} is the minor of *a*_{ij}.

**Example:** The cofactor of the element *a*_{23} in the determinant $\left|\begin{array}{ccc}-1& 2& -1\\ 3& 5& 4\\ 5& -2& -3\end{array}\right|$ is given by ${A}_{23}={\left(-1\right)}^{2+3}{M}_{23}=-1\left(2-10\right)=-1\left(-8\right)=8$.

- The value of a determinant is equal to the sum of the products of the elements of any row (or column) with their corresponding cofactors. That is, $\left|A\right|={a}_{11}{A}_{11}+{a}_{12}{A}_{12}+{a}_{13}{A}_{13}$.

- If the elements of a row (or column) are multiplied with the cofactors of any other row (or column), then their sum is zero. That is, *a*_{12}*A*_{13} + *a*_{22}*A*_{23} + *a*_{32}*A*_{33} = 0.

❖ **Adjoint and inverse of a Matrix:**

- The adjoint of a square matrix $A=\left[{a}_{ij}\right]$, denoted by adj A, is the transpose of the matrix of cofactors $\left[{A}_{ij}\right]$.
- If A is a square matrix, then A (adj A) = (adj A) A = |A| I.
- A square matrix A is said to be singular, if |A| = 0.
- A square matrix A is said to be non-singular, if |A| ≠ 0.
- If A and B are square matrices of the same order, then |AB| = |A| |B|. Therefore, if A and B are non-singular matrices of the same order, then AB and BA are also non-singular matrices of the same order.
- If A is a non-singular matrix of order *n*, then $\left|\mathrm{adj}A\right|={\left|A\right|}^{n-1}$.
- A square matrix A is invertible, if and only if A is non-singular, and the inverse of A is given by the formula ${A}^{-1}=\frac{1}{\left|A\right|}\left(\mathrm{adj}A\right)$.
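For a 2 × 2 matrix the adjoint can be written down explicitly (swap the diagonal entries, negate the off-diagonal ones), which gives a direct sketch of the formula A⁻¹ = (1/|A|) adj A; the sample matrix is hypothetical:

```python
from fractions import Fraction

# A^-1 = (1/|A|) adj A for a 2x2 matrix.
def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:                       # singular: no inverse exists
        return None
    adj = [[d, -b], [-c, a]]           # adjoint of [[a, b], [c, d]]
    return [[Fraction(x, det) for x in row] for row in adj]

A = [[2, 1], [5, 3]]
inv = inverse_2x2(A)                   # |A| = 1, so A^-1 = [[3, -1], [-5, 2]]
print(inv)
```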

❖ **Consistency and Solution of System of Linear Equations:**

- A system of linear equations

${a}_{1}x+{b}_{1}y+{c}_{1}z={d}_{1}\phantom{\rule{0ex}{0ex}}{a}_{2}x+{b}_{2}y+{c}_{2}z={d}_{2}\phantom{\rule{0ex}{0ex}}{a}_{3}x+{b}_{3}y+{c}_{3}z={d}_{3}$

can be written as AX = B, where $A=\left[\begin{array}{ccc}{a}_{1}& {b}_{1}& {c}_{1}\\ {a}_{2}& {b}_{2}& {c}_{2}\\ {a}_{3}& {b}_{3}& {c}_{3}\end{array}\right],X=\left[\begin{array}{c}x\\ y\\ z\end{array}\right],B=\left[\begin{array}{c}{d}_{1}\\ {d}_{2}\\ {d}_{3}\end{array}\right]$.

- A system of linear equations is said to be consistent, if a solution (one or more) exists.
- A system of linear equations is said to be inconsistent, if no solution exists.
- If |A| ≠ 0, the unique solution of the equation AX = B is given by X = A^{–1}B.
- For a square matrix A in the equation AX = B,
  - if |A| ≠ 0, then there exists a unique solution;
  - if |A| = 0 and (adj A) B ≠ O, then no solution exists;
  - if |A| = 0 and (adj A) B = O, then the system may be either consistent or inconsistent.

**Example :**

Solve the following system of linear equations:

**Solution:**

The given system of equations can be written in the form AX = B, where

Now,

Therefore, A is a non-singular matrix and hence, the given system of linear equations has only one solution.

Now,

*x* = 1, *y* = 3, and *z* = 5.
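The equations of the example above did not survive in this copy, so as an illustration here is a hypothetical system with the same solution x = 1, y = 3, z = 5, solved by Cramer's rule (each unknown is a ratio of determinants, which is valid because |A| ≠ 0):

```python
from fractions import Fraction

# Hypothetical system (not the original one):
#   x + y + z = 9,   x - y + z = 3,   2x + y - z = 0
A = [[1, 1, 1], [1, -1, 1], [2, 1, -1]]
B = [9, 3, 0]

def det3(m):
    # 3x3 determinant, expanded along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(A)                            # 6, so A is non-singular
cols = []
for j in range(3):                     # Cramer's rule: replace column j by B
    M = [row[:] for row in A]
    for i in range(3):
        M[i][j] = B[i]
    cols.append(Fraction(det3(M), d))
print(cols)                            # the solution x = 1, y = 3, z = 5
```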

**Chapter 5: Continuity and Differentiability**

❖

**Continuity**

• Suppose *f* is a real function on a subset of the real numbers and *c* is a point in the domain of *f*. Then, *f* is continuous at *c*, if $\underset{x\to c}{\mathrm{lim}}f\left(x\right)=f\left(c\right)$

More elaborately, *f* is continuous at *c*, if

$\underset{x\to {c}^{-}}{\mathrm{lim}}f\left(x\right)=\underset{x\to {c}^{+}}{\mathrm{lim}}f\left(x\right)=f\left(c\right)$

• If *f* is not continuous at *c*, then *f* is said to be discontinuous at *c*, and *c* is called a point of discontinuity of *f*.

• A real function *f* is said to be continuous, if it is continuous at every point in the domain of *f*.

❖

**Algebra of continuous functions**

• If *f* and *g* are two continuous real functions, then

• (*f* + *g*)(*x*), (*f* – *g*)(*x*), and *f*(*x*).*g*(*x*) are continuous

• $\frac{f\left(x\right)}{g\left(x\right)}$ is continuous, if *g*(*x*) ≠ 0

• If *f* and *g* are two continuous functions, then *fog* is also continuous.

❖

**Differentiability**

• Suppose *f* is a real function and *c* is a point in its domain. Then, the derivative of *f* at *c* is defined by

$f\text{'}\left(c\right)=\underset{h\to 0}{\mathrm{lim}}\frac{f\left(c+h\right)-f\left(c\right)}{h}$

• The derivative of a function *f*(*x*), denoted by $\frac{d}{dx}\left(f\left(x\right)\right)\mathrm{or}f\text{'}\left(x\right),$ is defined by $f\text{'}\left(x\right)=\underset{h\to 0}{\mathrm{lim}}\frac{f\left(x+h\right)-f\left(x\right)}{h}$

**Example:** Find the derivative of sin 2*x*.

**Solution:**

Let *f*(*x*) = sin 2*x*

$\therefore f\text{'}\left(x\right)=\underset{h\to 0}{\mathrm{lim}}\frac{\mathrm{sin}2\left(x+h\right)-\mathrm{sin}2x}{h}\phantom{\rule{0ex}{0ex}}=\underset{h\to 0}{\mathrm{lim}}\frac{2\mathrm{cos}\left(2x+h\right)\xb7\mathrm{sin}h}{h}\phantom{\rule{0ex}{0ex}}=2\underset{h\to 0}{\mathrm{lim}}\mathrm{cos}\left(2x+h\right)\xb7\underset{h\to 0}{\mathrm{lim}}\frac{\mathrm{sin}h}{h}\phantom{\rule{0ex}{0ex}}=2\times \mathrm{cos}2x\times 1\phantom{\rule{0ex}{0ex}}=2\mathrm{cos}2x$
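The limit definition can be approximated numerically with a small h; for f(x) = sin 2x the difference quotient should approach the closed form 2 cos 2x derived above (x = 0.7 is an arbitrary test point):

```python
import math

# Difference-quotient approximation of the derivative at a point.
def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = 0.7
approx = numeric_derivative(lambda t: math.sin(2 * t), x)
exact = 2 * math.cos(2 * x)
print(approx, exact)   # the two values agree closely
```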

❖

**Algebra of derivatives**

• $\left(f+g\right)\text{'}=f\text{'}+g\text{'}$

• $\left(f-g\right)\text{'}=f\text{'}-g\text{'}$

• $\left(fg\right)\text{'}=f\text{'}g+fg\text{'}$ (product rule)

• ${\left(\frac{f}{g}\right)}^{\text{'}}=\frac{f\text{'}g-fg\text{'}}{{g}^{2}}$, where *g* ≠ 0

❖ Every differentiable function is continuous, but the converse is not true.

❖

**Derivative of a composite function**

**Chain rule:** This rule is used to find the derivative of a composite function. If *f* = *v* o *u* and *t* = *u*(*x*), and if both $\frac{dt}{dx}$ and $\frac{dv}{dt}$ exist, then $\frac{df}{dx}=\frac{dv}{dt}\cdot \frac{dt}{dx}$

Similarly, if *f* = (*w* o *u*) o *v*, *t* = *v*(*x*), and *s* = *u*(*t*), then $\frac{df}{dx}=\frac{d\left(wou\right)}{dt}\cdot \frac{dt}{dx}=\frac{dw}{ds}\cdot \frac{ds}{dt}\cdot \frac{dt}{dx}$

❖

**Derivatives of some useful functions**

• $\frac{d}{dx}\left({\mathrm{sin}}^{-1}x\right)=\frac{1}{\sqrt{1-{x}^{2}}}$

• $\frac{d}{dx}\left({\mathrm{cos}}^{-1}x\right)=\frac{-1}{\sqrt{1-{x}^{2}}}$

• $\frac{d}{dx}\left({\mathrm{tan}}^{-1}x\right)=\frac{1}{1+{x}^{2}}$

• $\frac{d}{dx}\left({\mathrm{cot}}^{-1}x\right)=\frac{-1}{1+{x}^{2}}$

• $\frac{d}{dx}\left({\mathrm{sec}}^{-1}x\right)=\frac{1}{x\sqrt{{x}^{2}-1}}$

• $\frac{d}{dx}\left({\mathrm{cosec}}^{-1}x\right)=\frac{-1}{x\sqrt{{x}^{2}-1}}$

• $\frac{d}{dx}\left(\mathrm{log}x\right)=\frac{1}{x}$

• $\frac{d}{dx}\left({e}^{x}\right)={e}^{x}$

• $\frac{d}{dx}\left({e}^{ax}\right)=a{e}^{ax}$

❖

**Properties of logarithmic functions**

• ${\mathrm{log}}_{a}xy={\mathrm{log}}_{a}x+{\mathrm{log}}_{a}y$

• ${\mathrm{log}}_{a}\left(\frac{x}{y}\right)={\mathrm{log}}_{a}x-{\mathrm{log}}_{a}y$

• ${\mathrm{log}}_{a}{x}^{n}=n{\mathrm{log}}_{a}x$

❖

**Logarithmic differentiation**

The derivative of a function $f\left(x\right)={\left[u\left(x\right)\right]}^{v\left(x\right)}$ can be calculated by taking logarithms on both sides, i.e., $\mathrm{log}f\left(x\right)=v\left(x\right)\mathrm{log}\left[u\left(x\right)\right],$ and then differentiating both sides with respect to *x*.

❖

**Derivatives of functions in parametric forms**

If the variables *x* and *y* are expressed in the form *x* = *f*(*t*) and *y* = *g*(*t*), then they are said to be in parametric form. In this case, $\frac{dy}{dx}=\frac{dy}{dt}\times \frac{dt}{dx}=\frac{g\text{'}\left(t\right)}{f\text{'}\left(t\right)},$ provided $f\text{'}\left(t\right)\ne 0$

❖

**Second order derivative**

If *y* = *f*(*x*), then $\frac{dy}{dx}=f\text{'}\left(x\right)\mathrm{and}\frac{{d}^{2}y}{d{x}^{2}}\mathrm{or}f\text{'}\text{'}\left(x\right)=\frac{d}{dx}\left({\displaystyle \frac{dy}{dx}}\right)$

Here, $f\text{'}\text{'}\left(x\right)$ or $\frac{{d}^{2}y}{d{x}^{2}}$ is called the second order derivative of *y* with respect to *x*.

❖

**Rolle’s theorem**

If *f*: [*a*, *b*] → **R** is continuous on [*a*, *b*] and differentiable on (*a*, *b*) such that *f*(*a*) = *f*(*b*), where *a* and *b* are some real numbers, then there exists some *c* ∈ (*a*, *b*) such that $f\text{'}\left(c\right)=0$

❖

**Mean value theorem**

If *f*: [*a*, *b*] → **R** is continuous on [*a*, *b*] and differentiable on (*a*, *b*), then there exists some *c* ∈ (*a*, *b*) such that $f\text{'}\left(c\right)=\frac{f\left(b\right)-f\left(a\right)}{b-a}$
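A quick numeric sketch of the mean value theorem for the hypothetical choice f(x) = x² on [1, 3]: the theorem guarantees some c with f′(c) equal to the average slope, and here c can be solved for explicitly:

```python
# f(x) = x^2 on [1, 3]: average slope is (f(3) - f(1)) / (3 - 1) = 4,
# and f'(c) = 2c = 4 gives c = 2, which indeed lies in (1, 3).
f = lambda x: x ** 2
fprime = lambda x: 2 * x

a, b = 1, 3
slope = (f(b) - f(a)) / (b - a)       # 4.0
c = slope / 2                         # solve 2c = slope
assert a < c < b and fprime(c) == slope
print(c)  # 2.0
```

With the extra hypothesis f(a) = f(b), the average slope is 0 and the same computation recovers Rolle's theorem.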

**Chapter 6: Application of Derivatives**

❖

**Rate of change of quantities**

• For a quantity *y* varying with another quantity *x*, satisfying a rule *y* = *f*(*x*), the rate of change of *y* with respect to *x* is given by $\frac{dy}{dx}\mathrm{or}f\text{'}\left(x\right).$

• The rate of change of *y* with respect to *x* at the point *x* = *x*_{0} is given by ${\begin{array}{c}\frac{dy}{dx}]\end{array}}_{x={x}_{0}}\mathrm{or}f\text{'}\left({x}_{0}\right).$

• If the variables *x* and *y* are expressed in the form *x* = *f*(*t*) and *y* = *g*(*t*), then the rate of change of *y* with respect to *x* is given by $\frac{dy}{dx}=\frac{g\text{'}\left(t\right)}{f\text{'}\left(t\right)},$ provided $f\text{'}\left(t\right)\ne 0$

❖

**Increasing and decreasing functions**

• A function *f*: (*a*, *b*) → **R** is said to be

i. increasing on (*a*, *b*), if *x*_{1} < *x*_{2} in (*a*, *b*) $\Rightarrow f\left({x}_{1}\right)\le f\left({x}_{2}\right)\forall {x}_{1},{x}_{2}\in \left(a,b\right)$

ii. decreasing on (*a*, *b*), if *x*_{1} < *x*_{2} in (*a*, *b*) $\Rightarrow f\left({x}_{1}\right)\ge f\left({x}_{2}\right)\forall {x}_{1},{x}_{2}\in \left(a,b\right)$

**OR**

• If a function *f* is continuous on [*a*, *b*] and differentiable on (*a*, *b*), then

i. *f* is increasing on [*a*, *b*], if $f\text{'}\left(x\right)>0$ for each *x* ∈ (*a*, *b*)

ii. *f* is decreasing on [*a*, *b*], if $f\text{'}\left(x\right)<0$ for each *x* ∈ (*a*, *b*)

iii. *f* is a constant function on [*a*, *b*], if $f\text{'}\left(x\right)=0$ for each *x* ∈ (*a*, *b*)

• A function

*f*: (

*a*,

*b*) →

**R**is said to be

i. strictly increasing on (

*a*,

*b*), if

*x*

_{1}<

*x*

_{2}in (

*a*,

*b*) ⇒

*f*(

*x*

_{1}) <

*f*(

*x*

_{2})∀

*x*

_{1},

*x*

_{2}∈ (

*a*,

*b*)

ii. strictly decreasing on (

*a*,

*b*), if

*x*

_{1}<

*x*

_{2}in (

*a*,

*b*) ⇒

*f*(

*x*

_{1}) >

*f*(

*x*

_{2})∀

*x*

_{1},

*x*

_{2}∈ (

*a*,

*b*)

• The graphs of various types of functions can be shown as follows:

❖

**Tangents and normals**

• For the curve

*y*=

*f*(

*x*), the slope of tangent at the point (

*x*

_{0},

*y*

_{0}) is given by ${\left[\frac{dy}{dx}\right]}_{\left({x}_{0},{y}_{0}\right)}\mathrm{or}f\text{'}\left({x}_{0}\right).$

• For the curve

*y*=

*f*(

*x*), the slope of normal at the point (

*x*

_{0},

*y*

_{0}) is given by

**$\frac{-1}{{\left[\frac{dy}{dx}\right]}_{\left({x}_{0},{y}_{0}\right)}}\mathrm{or}\frac{-1}{f\text{'}\left({x}_{0}\right)}.$**

• The equation of tangent to the curve

*y*

*= f*(

*x*) at the point (

*x*

_{0},

*y*

_{0}) is given by, $y-{y}_{0}=f\text{'}\left({x}_{0}\right)\times \left(x-{x}_{0}\right)$

• If $f\text{'}\left({x}_{0}\right)$ does not exist, then the tangent to the curve

*y*

*= f*(

*x*) at the point (

*x*

_{0},

*y*

_{0}) is parallel to the

*y*-axis and its equation is given by

*x*=

*x*

_{0}

• The equation of normal to the curve

*y*

*= f*(

*x*) at the point (

*x*

_{0},

*y*

_{0}) is given by, $y-{y}_{0}=\frac{-1}{f\text{'}\left({x}_{0}\right)}\left(x-{x}_{0}\right)$

• If $f\text{'}\left({x}_{0}\right)$ does not exist, then the normal to the curve

*y*

*= f*(

*x*) at the point (

*x*

_{0},

*y*

_{0}) is parallel to the

*x*-axis and its equation is given by

*y*=

*y*

_{0}

• If $f\text{'}\left({x}_{0}\right)$ = 0, then the respective equations of the tangent and normal to the curve

*y*

*= f*(

*x*) at the point (

*x*

_{0},

*y*

_{0}) are

*y*=

*y*

_{0}and

*x*=

*x*

_{0}
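The tangent and normal formulas above can be sketched numerically; the curve *y* = *x*² and the point (1, 1) below are illustrative choices:

```python
# Tangent and normal to y = f(x) at (x0, y0), per the formulas above.
# Curve y = x**2 at the point (1, 1) is an illustrative choice.
def f_prime(x):
    return 2 * x  # derivative of f(x) = x**2

x0, y0 = 1.0, 1.0
tangent_slope = f_prime(x0)          # slope of tangent = f'(x0)
normal_slope = -1.0 / tangent_slope  # slope of normal = -1 / f'(x0)

def tangent(x):
    return y0 + tangent_slope * (x - x0)  # y - y0 = f'(x0)(x - x0)

def normal(x):
    return y0 + normal_slope * (x - x0)   # y - y0 = (-1/f'(x0))(x - x0)

assert tangent(2.0) == 3.0  # tangent line: y = 2x - 1
assert normal(3.0) == 0.0   # normal line:  y = -x/2 + 3/2
```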

❖

**Approximations**

Let

*y*=

*f*(

*x*) and let ∆

*x*be a small increment in

*x*and ∆

*y*be the increment in

*y*corresponding to the increment in

*x*i.e., ∆

*y*=

*f*(

*x*+ ∆

*x*) –

*f*(

*x*)

Then, $dy=f\text{'}\left(x\right)dx\mathrm{or}dy=\left({\displaystyle \frac{dy}{dx}}\right)\u2206x$ is a good approximation of ∆

*y*, when

*dx*= ∆

*x*is relatively small and we denote it by

*dy*≈ ∆

*y*
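A standard worked example of this approximation (the numbers √25.3, *x* = 25, ∆*x* = 0.3 are an illustrative choice):

```python
import math

# Approximating sqrt(25.3) via dy = f'(x) dx with f(x) = sqrt(x),
# x = 25 and dx = 0.3 (illustrative choice).
x, dx = 25.0, 0.3

def f_prime(t):
    return 1.0 / (2.0 * math.sqrt(t))  # derivative of sqrt(t)

approx = math.sqrt(x) + f_prime(x) * dx  # 5 + 0.3/10 = 5.03
exact = math.sqrt(x + dx)

assert abs(approx - 5.03) < 1e-12
assert abs(approx - exact) < 1e-3  # error is small because dx is small
```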

❖

**Maxima and minima**

Let a function

*f*be defined on an interval I. Then,

*f*is said to have

• maximum value in I, if there exists

*c*∈ I such that

*f*(

*c*) >

*f*(

*x*),∀

*x*∈ I [In this case,

*c*is called the point of maxima]

• minimum value in I, if there exists

*c*∈ I such that

*f*(

*c*) <

*f*(

*x*),∀

*x*∈ I [In this case,

*c*is called the point of minima]

• an extreme value in I, if there exists

*c*∈ I such that

*c*is either point of maxima or point of minima [In this case,

*c*is called an extreme point]

**Note:**Every continuous function on a closed interval has a maximum and a minimum value.

❖

**Local maxima and local minima**

Let

*f*be a real-valued function and

*c*be an interior point in the domain of

*f*. Then

*c*is called a point of

• local maxima, if there exists

*h*> 0 such that

*f*(

*c*) >

*f*(

*x*),∀

*x*∈ (

*c*–

*h, c*+

*h*) [In this case,

*f*(

*c*) is called the local maximum value of

*f*]

• local minima, if there exists

*h*> 0 such that

*f*(

*c*) <

*f*(

*x*),∀

*x*∈ (

*c*–

*h, c*+

*h*) [In this case,

*f*(

*c*) is called the local minimum value of

*f*]

❖

**Critical point:**A point

*c*in the domain of a function

*f*at which either $f\text{'}\left(c\right)=0$ or

*f*is not differentiable is called a critical point of

*f*.

❖

**First derivative test**

Let

*f*be a function defined on an open interval I. Let

*f*be continuous at a critical point

*c*in I. Then:

• If $f\text{'}\left(x\right)$ changes sign from positive to negative as

*x*increases through

*c*, i.e. if $f\text{'}\left(x\right)>0$ at every point sufficiently close to and to the left of

*c*, and $f\text{'}\left(x\right)<0$ at every point sufficiently close to and to the right of

*c*, then

*c*is a point of local maxima.

• If $f\text{'}\left(x\right)$ changes sign from negative to positive as

*x*increases through

*c*, i.e. if $f\text{'}\left(x\right)<0$ at every point sufficiently close to and to the left of

*c*, and $f\text{'}\left(x\right)>0$ at every point sufficiently close to and to the right of

*c*, then

*c*is a point of local minima.

• If $f\text{'}\left(x\right)$does not change sign as

*x*increases through

*c*, then

*c*is neither a point of local maxima nor a point of local minima. Such a point

*c*is called a point of inflection.

❖

**Second derivative test**

Let

*f*be a function defined on an open interval I and

*c*∈ I. Let

*f*be twice differentiable at

*c*and $f\text{'}\left(c\right)$= 0. Then:

• If $f\text{'}\text{'}\left(c\right)<0,$ then

*c*is a point of local maxima. In this situation,

*f*(

*c*) is local maximum value of

*f*.

• If $f\text{'}\text{'}\left(c\right)>0,$ then

*c*is a point of local minima. In this situation,

*f*(

*c*) is local minimum value of

*f*.

• If $f\text{'}\text{'}\left(c\right)=0,$ then the test fails. In this situation, we apply the first derivative test to find whether

*c*is a point of maxima or minima or a point of inflection.
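The second derivative test can be sketched as a small classifier; the function *f*(*x*) = *x*³ − 3*x* (with *f*′(*x*) = 3*x*² − 3 vanishing at *x* = ±1 and *f*″(*x*) = 6*x*) is an illustrative choice:

```python
# Second derivative test on f(x) = x**3 - 3x (illustrative choice).
# f'(x) = 3x**2 - 3 vanishes at x = -1 and x = 1; f''(x) = 6x.
def f_second(x):
    return 6 * x

def classify(c):
    s = f_second(c)
    if s < 0:
        return "local maximum"
    if s > 0:
        return "local minimum"
    return "test fails"  # fall back to the first derivative test

assert classify(-1.0) == "local maximum"
assert classify(1.0) == "local minimum"
```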

❖

**Absolute maximum value or absolute minimum value**

• Let

*f*be a continuous function on a closed interval I = [*a*, *b*]. Then *f* attains its maximum and minimum values in I, known respectively as the absolute maximum and absolute minimum values of *f*, and it attains each of these values at least once in [*a*, *b*].

• Let

*f*be a differentiable function on a closed interval I and

*c*be any interior point of I. Then $f\text{'}\left(c\right)=0$ if *f* attains its absolute maximum value or its absolute minimum value at *c*.

• To find the absolute maximum and/or absolute minimum value, we follow the steps listed below:

Step 1: Find all critical points of *f* in the interval.

Step 2: Take the end points of the interval.

Step 3: Calculate the values of

*f*at the points found in step 1 and step 2.

Step 4: Identify the maximum and minimum values of

*f*out of values calculated in step 3.

The maximum value will be the absolute maximum (greatest) value of

*f*and the minimum value will be the absolute minimum (least) value of

*f*.
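The four steps above can be sketched directly; the function *f*(*x*) = *x*³ − 3*x* on [0, 2] is an illustrative choice:

```python
# Absolute extrema of f(x) = x**3 - 3x on [0, 2] (illustrative choice).
def f(x):
    return x ** 3 - 3 * x

a, b = 0.0, 2.0
critical_points = [1.0]                 # Step 1: f'(x) = 3x**2 - 3 = 0 gives x = 1 in [0, 2]
candidates = critical_points + [a, b]   # Step 2: include the end points
values = {x: f(x) for x in candidates}  # Step 3: evaluate f at each candidate

abs_max = max(values.values())          # Step 4: pick out greatest and least values
abs_min = min(values.values())

assert abs_max == 2.0   # attained at x = 2
assert abs_min == -2.0  # attained at x = 1
```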

**Chapter 7: Integrals**

❖ Integration is the inverse process of differentiation. If $\frac{d}{dx}f\left(x\right)=g\left(x\right)$, then we can write $\int g\left(x\right)dx$ =

*f*(

*x*) + C. This is called the general or the indefinite integral and C is called the constant of integration.

❖

**Some standard indefinite integrals**

$\u2022\int {x}^{n}dx=\frac{{x}^{n+1}}{n+1}+\mathrm{C},n\ne -1\phantom{\rule{0ex}{0ex}}\u2022\int dx=x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\u2022\int \mathrm{sin}xdx=-\mathrm{cos}x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\u2022\int \mathrm{cos}\mathit{}xdx=\mathrm{sin}\mathit{}x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\u2022\int \mathrm{sec}{}^{2}xdx=\mathrm{tan}\mathit{}x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\u2022\int {\mathrm{cosec}}^{2}xdx=-\mathrm{cot}\mathit{}x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\u2022\int \mathrm{sec}\mathit{}x\mathrm{tan}xdx=\mathrm{sec}x+\mathrm{C}\phantom{\rule{0ex}{0ex}}\begin{array}{l}\u2022\int \mathrm{cosec}x\mathrm{cot}\mathit{}xdx=-\mathrm{cosec}x+\mathrm{C}\\ \u2022\int \frac{dx}{\sqrt{1-{x}^{2}}}=\mathrm{sin}{}^{-1}x+\mathrm{C}\mathrm{or}-\mathrm{cos}{}^{-1}x+\mathrm{C}\\ \u2022\int \frac{dx}{1+{x}^{2}}=\mathrm{tan}{}^{-1}x+\mathrm{C}\mathrm{or}-\mathrm{cot}{}^{-1}x+\mathrm{C}\\ \u2022\int \frac{dx}{x\sqrt{{x}^{2}-1}}=\mathrm{sec}{}^{-1}x+\mathrm{C}\mathrm{or}-{\mathrm{cosec}}^{-1}x+\mathrm{C}\\ \u2022\int {e}^{x}dx={e}^{x}+\mathrm{C}\\ \u2022\int {a}^{x}dx=\frac{{a}^{x}}{\mathrm{log}\mathit{}a}+\mathrm{C}\\ \u2022\int \frac{1}{x}dx=\mathrm{log}\left|x\right|+\mathrm{C}\end{array}$

$\int {e}^{ax}dx=\frac{{e}^{ax}}{a}+\mathrm{C}$

❖

**Properties of indefinite integrals**

• $\frac{d}{dx}\int f\left(x\right)dx=f\left(x\right)$ and $\int f\text{'}\left(x\right)dx=f\left(x\right)+C$

• If two indefinite integrals have the same derivative, then they belong to the same family of curves and hence they are equivalent.

• $\int \left[f\right(x)\pm g(x\left)\right]dx=\int f\left(x\right)dx\pm \int g\left(x\right)dx$

• $\int kf\left(x\right)dx=k\int f\left(x\right)dx$, where

*k*is any constant

❖

**Methods of integration**

There are three important methods of integration, namely,

**integration by substitution**,

**integration using partial fractions**, and

**integration by parts**.

❖

**Integration by substitution**

A change in the variable of integration often reduces an integral to one of the fundamental integrals, which can be easily found out. The method in which we change the variable to some other variable is called the method of substitution.

**Using substitution method of integration, we obtain the following standard integrals:**

$\begin{array}{l}\u2022\int \mathrm{tan}xdx=-\mathrm{log}|\mathrm{cos}x|+\mathrm{C}\mathrm{or}\mathrm{log}|\mathrm{sec}x|+\mathrm{C}\\ \u2022\int \mathrm{cot}xdx=\mathrm{log}|\mathrm{sin}x|+\mathrm{C}\\ \u2022\int \mathrm{sec}xdx=\mathrm{log}|\mathrm{sec}x+\mathrm{tan}x|+\mathrm{C}\\ \u2022\int \mathrm{cosec}xdx=\mathrm{log}|\mathrm{cosec}x-\mathrm{cot}x|+\mathrm{C}\end{array}$

❖

**Integration by partial fractions**

The following table shows how a function of the form $\frac{\mathrm{P}\left(x\right)}{\mathrm{Q}\left(x\right)},$ where Q(*x*) ≠ 0 and the degree of Q(*x*) is greater than the degree of P(*x*), is decomposed by the method of partial fractions. After doing this, we find the integral of the given function by integrating the right hand side (i.e., the partial fraction form).

Function |
Form of partial fraction |

$\frac{px+q}{(x-a)(x-b)},a\ne \mathit{b}$ | $\frac{\mathrm{A}}{x-a}+\frac{\mathrm{B}}{x-b}$ |

$\frac{px+q}{(x-a{)}^{2}}$ | $\frac{\mathrm{A}}{x-a}+\frac{\mathrm{B}}{(x-a{)}^{2}}$ |

$\frac{p{x}^{2}+qx+r}{(x-a)(x-b)(x-c)}$ | $\frac{\mathrm{A}}{x-a}+\frac{\mathrm{B}}{x-b}+\frac{\mathrm{C}}{x-c}$ |

$\frac{p{x}^{2}+qx+r}{(x-a{)}^{2}(x-b)}$ | $\frac{\mathrm{A}}{x-a}+\frac{\mathrm{B}}{(x-a{)}^{2}}+\frac{\mathrm{C}}{x-b}$ |

$\frac{p{x}^{2}+qx+r}{(x-a)({x}^{2}+bx+c)}$, where x^{2} + bx + c cannot be factorised |
$\frac{\mathrm{A}}{x-a}+\frac{\mathrm{B}x+\mathrm{C}}{{x}^{2}+bx+c}$ |

Here, A, B, C are constants that are to be determined.

❖

**Integrals of some special functions**

**$\begin{array}{l}\u2022\int \frac{1}{{x}^{2}-{a}^{2}}dx=\frac{1}{2a}\mathrm{log}\left|\frac{x-a}{x+a}\right|+\mathrm{C}\\ \u2022\int \frac{1}{{a}^{2}-{x}^{2}}dx=\frac{1}{2a}\mathrm{log}\left|\frac{a+x}{a-x}\right|+\mathrm{C}\\ \u2022\int \frac{1}{\sqrt{{x}^{2}-{a}^{2}}}dx=\mathrm{log}|x+\sqrt{{x}^{2}-{a}^{2}}|+\mathrm{C}\\ \u2022\int \frac{1}{\sqrt{{a}^{2}-{x}^{2}}}dx={\mathrm{sin}}^{-1}\frac{x}{a}+\mathrm{C}\\ \u2022\int \frac{1}{\sqrt{{x}^{2}+{a}^{2}}}dx=\mathrm{log}|x+\sqrt{{x}^{2}+{a}^{2}}|+\mathrm{C}\\ \u2022\int \frac{1}{{a}^{2}+{x}^{2}}dx=\frac{1}{a}{\mathrm{tan}}^{-1}\left(\frac{x}{a}\right)+\mathrm{C}\end{array}$**

❖

**Method of some special types of integrals**

• Integrals of the types $\int \frac{dx}{a{x}^{2}+bx+c}\mathrm{or}\int \frac{dx}{\sqrt{a{x}^{2}+bx+c}}$:

We can reduce these types of integrals into standard form by expressing

$a{x}^{2}+bx+c\mathrm{as}a\left[{x}^{2}+\frac{b}{a}x+\frac{c}{a}\right]=a\left[{\left(x+\frac{b}{2a}\right)}^{2}+\left(\frac{c}{a}-\frac{{b}^{2}}{4{a}^{2}}\right)\right]$ and then applying substitution method by putting $x+\frac{b}{2a}\mathrm{as}u\left(\mathrm{say}\right).$

• Integrals of the type $\int \frac{px+q}{a{x}^{2}+bx+c}dx\mathrm{or}\int \frac{px+q}{\sqrt{a{x}^{2}+bx+c}}dx$:

These types of integrals can be transformed into standard form by expressing $px+q\mathrm{as}\mathrm{A}\cdot \frac{d}{dx}(a{x}^{2}+bx+c)+\mathrm{B}=\mathrm{A}(2ax+b)+\mathrm{B}$, where A and B are determined by comparing coefficients on both sides.
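Comparing coefficients as described gives A and B by simple arithmetic; the numbers below (*px* + *q* = 3*x* + 5 against *x*² + 4*x* + 13) are an illustrative choice:

```python
# Determine A and B in px + q = A*(2a*x + b) + B by comparing coefficients.
# Illustrative numbers: px + q = 3x + 5, quadratic x**2 + 4x + 13.
p, q = 3.0, 5.0
a, b = 1.0, 4.0

A = p / (2 * a)  # coefficient of x:  p = 2aA
B = q - A * b    # constant term:     q = Ab + B

assert A == 1.5
assert B == -1.0
```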

❖

**Integration by parts**

For given functions

*f*(

*x*) and

*g*(

*x*), $\int f\left(x\right)\xb7g\left(x\right)dx=f\left(x\right)\int g\left(x\right)dx-\int \left[{f}^{\text{'}}\left(x\right)\cdot \int g\left(x\right)dx\right]dx\phantom{\rule{0ex}{0ex}}$

In other words, the integral of the product of two functions = (first function) × (integral of the second function) – integral of [(derivative of the first function) × (integral of the second function)].

Here, the functions

*f*and

*g*have to be taken in proper order with respect to the ILATE rule, where I, L, A, T, and E respectively represent inverse, logarithmic, algebraic, trigonometric, and exponential functions.

• We can find integrals of the type $\int {e}^{x}\left[f\left(x\right)+{f}^{\text{'}}\left(x\right)\right]dx$ by using integration by parts and obtain $\int {e}^{x}\left[f\left(x\right)+{f}^{\text{'}}\left(x\right)\right]dx={e}^{x}f\left(x\right)+\mathrm{C}$

• Using the method of integration by parts, we obtain the following standard integrals:

$\mathrm{i}.\int \sqrt{{x}^{2}-{a}^{2}}dx=\frac{x}{2}\sqrt{{x}^{2}-{a}^{2}}-\frac{{a}^{2}}{2}\mathrm{log}\left|x+\sqrt{{x}^{2}-{a}^{2}}\right|+\mathrm{C}$

$\mathrm{ii}.\int \sqrt{{x}^{2}+{a}^{2}}dx=\frac{x}{2}\sqrt{{x}^{2}+{a}^{2}}+\frac{{a}^{2}}{2}\mathrm{log}|x+\sqrt{{x}^{2}+{a}^{2}}|+\mathrm{C}$

$\mathrm{iii}.\int \sqrt{{a}^{2}-{x}^{2}}dx=\frac{x}{2}\sqrt{{a}^{2}-{x}^{2}}+\frac{{a}^{2}}{2}\mathrm{sin}{}^{-1}\frac{x}{a}+\mathrm{C}$

❖

**Definite integrals**

• A definite integral is denoted by $\underset{a}{\overset{b}{\int}}f\left(x\right)dx$, where

*a*is the lower limit and

*b*is the upper limit of the integral. If $\int f\left(x\right)dx=F\left(x\right)+\mathrm{C}$, then

$\underset{a}{\overset{b}{\int}}f\left(x\right)dx=F\left(b\right)-F\left(a\right)$

• The definite integral $\underset{a}{\overset{b}{\int}}f\left(x\right)dx$ represents the area function A(

*x*) since $\underset{a}{\overset{b}{\int}}f\left(x\right)dx$ is the area bounded by the curve

*y*=

*f*(

*x*),

*x*∈ [

*a*,

*b*], the

*x*-axis, and the ordinates

*x*=

*a*and

*x*=

*b*

• The definite integral $\underset{a}{\overset{b}{\int}}f\left(x\right)dx$ can be expressed as the sum of limits as

$\underset{a}{\overset{b}{\int}}f\left(x\right)dx=(b-a)\underset{n\to \infty}{\mathrm{lim}}\frac{1}{n}\left[f\left(a\right)+f(a+h)+\mathrm{...}+f(a+(n-1)h)\right]$, where $h=\frac{b-a}{n}\to 0$ as

*n*→ ∞
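Truncating the limit of a sum at a large *n* gives a numerical estimate of the integral; the illustrative choice below is *f*(*x*) = *x*² over [0, 1], whose exact value is 1/3:

```python
# Limit-of-a-sum formula, truncated at large n, for the illustrative
# integral of f(x) = x**2 over [0, 1] (exact value 1/3).
def f(x):
    return x ** 2

a, b, n = 0.0, 1.0, 100000
h = (b - a) / n
# (b - a) * (1/n) * [f(a) + f(a + h) + ... + f(a + (n - 1)h)]
riemann = (b - a) * sum(f(a + k * h) for k in range(n)) / n

assert abs(riemann - 1.0 / 3.0) < 1e-4  # error shrinks as n grows
```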

❖

**First fundamental theorem of integral calculus**

Let

*f*be a continuous function on the closed interval [

*a*,

*b*] and let A (

*x*) be the area function. Then, $\mathrm{A}\text{'}\left(x\right)=f\left(x\right)\forall x\in [a,b]$

❖

**Second fundamental theorem**

**of integral calculus**

Let

*f*be a continuous function on the closed interval [

*a*,

*b*] and let

*F*be an anti-derivative of

*f*. Then,

$\underset{a}{\overset{b}{\int}}f\left(x\right)dx=\left[F\right(x){]}_{a}^{b}=F(b)-F(a)$

❖

**Some useful properties of definite integrals**

$\u2022\underset{a}{\overset{b}{\int}}f\left(x\right)dx=\underset{a}{\overset{b}{\int}}f\left(t\right)dt$

$\u2022\underset{a}{\overset{b}{\int}}f\left(x\right)dx=-\underset{b}{\overset{a}{\int}}f\left(x\right)dx\cdot \mathrm{In}\mathrm{particular},\underset{a}{\overset{a}{\int}}f\left(x\right)dx=0$

$\u2022\underset{a}{\overset{b}{\int}}f\left(x\right)dx=\underset{a}{\overset{c}{\int}}f\left(x\right)dx+\underset{c}{\overset{b}{\int}}f\left(x\right)dx$

$\u2022\underset{a}{\overset{b}{\int}}f\left(x\right)dx=\underset{a}{\overset{b}{\int}}f(a+b-x)dx$

$\u2022\underset{0}{\overset{a}{\int}}f\left(x\right)dx=\underset{0}{\overset{a}{\int}}f(a-x)dx$

$\u2022\underset{0}{\overset{2a}{\int}}f\left(x\right)dx=\underset{0}{\overset{a}{\int}}f\left(x\right)dx+\underset{0}{\overset{a}{\int}}f(2a-x)dx$

$\u2022\underset{0}{\overset{2a}{\int}}f\left(x\right)dx=\left\{\begin{array}{c}2\underset{0}{\overset{a}{\int}}f\left(x\right)dx,\mathrm{if}f(2a-x)=f\left(x\right)\\ 0,\mathrm{if}f(2a-x)=-f\left(x\right)\end{array}\right.$

$\u2022\underset{-a}{\overset{a}{\int}}f\left(x\right)dx=\left\{\begin{array}{c}2\underset{0}{\overset{a}{\int}}f\left(x\right)dx,\mathrm{if}f\mathrm{is}\mathrm{an}\mathrm{even}\mathrm{function}\mathrm{i}.\mathrm{e}.,\mathrm{if}f(-x)=f\left(x\right)\\ 0,\mathrm{if}f\mathrm{is}\mathrm{an}\mathrm{odd}\mathrm{function}\mathrm{i}.\mathrm{e}.,\mathrm{if}f(-x)=-f\left(x\right)\end{array}\right.$
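The last (even/odd) property can be checked numerically with midpoint sums; the functions *x*² (even) and *x*³ (odd) on [−2, 2] are illustrative choices:

```python
# Numerical check of the even/odd property over [-a, a] using midpoint sums.
# f(x) = x**2 (even) and g(x) = x**3 (odd) are illustrative choices.
def midpoint_integral(func, lo, hi, n=20000):
    h = (hi - lo) / n
    return h * sum(func(lo + (k + 0.5) * h) for k in range(n))

a = 2.0
even = midpoint_integral(lambda x: x ** 2, -a, a)
odd = midpoint_integral(lambda x: x ** 3, -a, a)

# Even function: integral over [-a, a] equals twice the integral over [0, a].
assert abs(even - 2 * midpoint_integral(lambda x: x ** 2, 0.0, a)) < 1e-6
# Odd function: integral over [-a, a] is zero.
assert abs(odd) < 1e-9
```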

**Chapter 8: Application of Integrals**

❖

**Area under simple curves**

• Area of the region bounded by the curve

*y*=

*f*(

*x*),

*x*-axis, and the lines

*x = a*and

*x = b*(

*b*>

*a*) is given by

*A*= $\underset{a}{\overset{b}{\int}}ydx$ or

*A*= $\underset{a}{\overset{b}{\int}}f\left(x\right)dx$

• The area of the region bounded by the curve

*x*=

*g*(

*y*),

*y*-axis, and the lines

*y = c*and

*y = d*is given by

*A*= $\underset{c}{\overset{d}{\int}}xdy$ or

*A*=$\underset{c}{\overset{d}{\int}}g\left(y\right)dy$

❖

**Area of the region bounded by a curve and a line**

• If a line

*y = mx + p*intersects a curve

*y*=

*f*(

*x*) at

*a*and

*b*, then the area under the curve *y* = *f*(*x*) between the lines *x = a* and *x = b* is

$A=\underset{a}{\overset{b}{\int}}ydx\mathrm{or}A=\underset{a}{\overset{b}{\int}}f\left(x\right)dx$

• If a line

*y = mx + p*intersects a curve

*x*=

*g*(

*y*) at

*c*and

*d*, then the area under the curve *x* = *g*(*y*) between the lines *y = c* and *y = d* is given by,

$A=\underset{c}{\overset{d}{\int}}xdy=\underset{c}{\overset{d}{\int}}g\left(y\right)dy$

❖

**Area between two curves**

The area of the region enclosed between two curves

*y = f*(

*x*) and

*y = g*(

*x*) and the lines

*x = a*and

*x = b*is given by,

$A=\left\{\begin{array}{l}\underset{a}{\overset{b}{\int}}\left[f\left(x\right)-g\left(x\right)\right]dx,\mathrm{where}f\left(x\right)\ge g\left(x\right)\mathrm{in}\left[a,b\right]\\ \underset{a}{\overset{c}{\int}}\left[f\left(x\right)-g\left(x\right)\right]dx+\underset{c}{\overset{b}{\int}}\left[g\left(x\right)-f\left(x\right)\right]dx,\mathrm{where}a<c<b\mathrm{and}f\left(x\right)\ge g\left(x\right)\mathrm{in}\left[a,c\right]\mathrm{and}f\left(x\right)\le g\left(x\right)\mathrm{in}\left[c,b\right]\end{array}\right.$

**Chapter 9: Differential Equations**

❖An equation is called a differential equation, if it involves variables as well as derivatives of dependent variable with respect to independent variable.

For example:

$x\frac{{d}^{4}y}{d{x}^{4}}+y{\left(\frac{{d}^{2}y}{d{x}^{2}}\right)}^{3}-2{x}^{2}y\frac{dy}{dx}+3=0$ is a differential equation.

Sometimes, we may write $\frac{dy}{dx},\frac{{d}^{2}y}{d{x}^{2}},\frac{{d}^{3}y}{d{x}^{3}},\frac{{d}^{4}y}{d{x}^{4}}$ etc. as ${y}^{\text{'}},{y}^{\text{'}\text{'}},{y}^{\text{'}\text{'}\text{'}},{y}^{\text{'}\text{'}\text{'}\text{'}}$ etc. respectively. Also, note that the degree of the differential equation $\mathrm{tan}\left({y}^{\text{'}}\right)+x=0$ is not defined, since the equation is not a polynomial in ${y}^{\text{'}}$.

❖

**Order and degree of a differential equation**

•

**Order of a differential equation**is defined as the order of the highest order derivative of dependent variable with respect to independent variable involved in the given differential equation.

For example: The highest order derivative present in the differential equation ${x}^{3}{y}^{5}{y}^{\text{'}\text{'}\text{'}\text{'}}-3{x}^{2}{y}^{\text{'}\text{'}}+xy{y}^{\text{'}}-5=0$ is ${y}^{\text{'}\text{'}\text{'}\text{'}}$. Therefore, the order of this differential equation is 4.

•

**Degree of a differential equation**is the highest power of the highest order derivative in it.

For example: The degree of the differential equation $({y}^{\text{'}\text{'}\text{'}}{)}^{2}-2x({y}^{\text{'}\text{'}}{)}^{5}-xy({y}^{\text{'}\text{'}}{)}^{2}+{y}^{\text{'}}=0$ is 2, since the highest power of the highest order derivative, ${y}^{\text{'}\text{'}\text{'}}$, is 2.

• The order and degree (if defined) of a differential equation are always positive integers.

❖

**General and particular solutions of a differential equation**

• A function that satisfies the given differential equation is called a solution of a given differential equation.

• The solution of a differential equation, which contains arbitrary constants, is called general solution (primitive) of the differential equation.

• The solution of a differential equation, which is free from arbitrary constants i.e., the solution obtained from the general solution by giving particular values to arbitrary constants is called a particular solution of the differential equation.

❖

**Formation of differential equations**

To form a differential equation from a given function, we differentiate the function successively as many times as the number of constants in the given function and then eliminate the arbitrary constants.

❖

**Methods of solving first order, first degree differential equations**

•

**Variable separable method**

This method is used to solve an equation in which the variables can be separated completely, i.e., terms containing *y* should remain with *dy* and terms containing *x* should remain with *dx*.

•

**Homogeneous differential equation**

A differential equation which can be expressed as $\frac{dy}{dx}=f(x,y)\mathrm{or}\frac{dx}{dy}=g(x,y)$, where $f\left(x,y\right)$ and $g\left(x,y\right)$ are homogeneous functions of degree zero, is called a homogeneous differential equation. To solve such an equation, we substitute

*y*=

*vx*in the given differential equation and then solve it by variable separable method.

•

**Linear differential equation**

a) A differential equation which can be expressed in the form of $\frac{dy}{dx}+\mathrm{P}y=\mathrm{Q},$ where P and Q are constants or functions of

*x*only, is called a first order linear differential equation.

In this case, we find integrating factor (I.F.) by using the formula:

$\mathrm{I}.\mathrm{F}.={e}^{\int \mathrm{P}dx}$

Then, the solution of the differential equation is given by,

*y*(I.F) = $\int (\mathrm{Q}\times \mathrm{I}.\mathrm{F}.)dx+\mathrm{C}$

b) A linear differential equation can also be of the form $\frac{dx}{dy}+{\mathrm{P}}_{1}x={\mathrm{Q}}_{1},$

where P

_{1}and Q

_{1}are constants or functions of y only.

In this case, $\mathrm{I}.\mathrm{F}.={e}^{\int {\mathrm{P}}_{1}dy}$

And the solution of the differential equation is given by,

*x*(I.F.) = $\int ({\mathrm{Q}}_{1}\times \mathrm{I}.\mathrm{F}.)dy+\mathrm{C}$
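The integrating-factor method can be verified numerically on a concrete case; the equation *dy/dx* + *y* = *x* (with I.F. = *e*ˣ and general solution *y* = *x* − 1 + C*e*⁻ˣ) is an illustrative choice:

```python
import math

# Checking the integrating-factor solution of dy/dx + y = x (illustrative):
# here P = 1, Q = x, so I.F. = e**x and y = x - 1 + C*e**(-x).
C = 2.0  # any constant works; 2.0 is arbitrary

def y(x):
    return x - 1 + C * math.exp(-x)

def dydx(x):
    return 1 - C * math.exp(-x)  # derivative of y(x)

# The ODE dy/dx + y = x holds at every sample point.
for x in (0.0, 0.5, 1.7):
    assert math.isclose(dydx(x) + y(x), x)
```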

**Chapter 10: Vector Algebra**

❖

**Scalar**

The quantity which involves only one value, i.e. magnitude, is called a scalar quantity. For example: Time, mass, distance, energy, etc.

❖

**Vector**

The quantity which has both magnitude and a direction is called a vector quantity. For example: force, momentum, acceleration, etc.

❖

**Directed line**

A line with a direction is called a directed line. Let $\overrightarrow{\mathrm{AB}}$ be a line directed from A to B.

Here,

• The length of the line segment AB represents the magnitude of the above directed line. It is denoted by $\left|\overrightarrow{\mathrm{AB}}\right|$ or $\left|\overrightarrow{a}\right|$ or

*a*.

• $\overrightarrow{\mathrm{AB}}$ represents the vector in the direction towards point B. Therefore, the vector represented in the above figure is $\overrightarrow{\mathrm{AB}}$. It can also be denoted by $\overrightarrow{a}$.

• The point A from where the vector $\overrightarrow{\mathrm{AB}}$ starts is called its initial point and the point B where the vector $\overrightarrow{\mathrm{AB}}$ ends is called its terminal point.

❖

**Position vector**

The position vector of a point P(

*x*,

*y*,

*z*) with respect to the origin (0, 0, 0) is given by $\overrightarrow{\mathrm{OP}}=x\hat{i}+y\hat{j}+z\hat{k}$. This form of any vector is known as the component form.

Here,

• $\hat{i},\hat{j}$, and $\hat{k}$ are called the unit vectors along the

*x*-axis,

*y*-axis, and

*z*-axis respectively.

•

*x*,

*y*, and

*z*are the scalar components (or rectangular components) along

*x*-axis,

*y*-axis, and

*z*-axis respectively.

• $x\hat{i},y\hat{j},z\hat{k}$ are called vector components of $\overrightarrow{\mathrm{OP}}$ along the respective axes.

• The magnitude of $\overrightarrow{\mathrm{OP}}$ is given by $\left|\overrightarrow{\mathrm{OP}}\right|=\sqrt{{x}^{2}+{y}^{2}+{z}^{2}}$

❖

**Components and direction ratios**

• The scalar components of a vector are its direction ratios and represent its projections along the respective axes.

• The direction ratios of a vector $\overrightarrow{p}=a\hat{i}+b\hat{j}+c\hat{k}$ are

*a*,

*b*, and

*c*.

Here,

*a*,

*b*, and

*c*respectively represent projections of $\overrightarrow{p}$ along

*x*-axis,

*y*-axis, and

*z*-axis.

❖

**Direction cosines**

• The cosines of the angle made by the vector $\overrightarrow{r}=a\hat{i}+b\hat{j}+c\hat{k}$ with the positive directions of

*x*,

*y*, and

*z*axes are its direction cosines. These are usually denoted by

*l*,

*m*, and

*n*. Also, ${l}^{2}+{m}^{2}+{n}^{2}=1$

• The direction cosines (

*l*,

*m*,

*n*) of a vector $a\hat{i}+b\hat{j}+c\hat{k}$ are

$l=\frac{a}{r},m=\frac{b}{r},n=\frac{c}{r}$, where

*r*= magnitude of the vector $a\hat{i}+b\hat{j}+c\hat{k}$
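The direction-cosine formulas can be sketched numerically; the vector 2î + 3ĵ + 6k̂ (magnitude 7) is an illustrative choice:

```python
import math

# Direction cosines of the illustrative vector 2i + 3j + 6k.
a, b, c = 2.0, 3.0, 6.0
r = math.sqrt(a * a + b * b + c * c)  # magnitude: sqrt(4 + 9 + 36) = 7
l, m, n = a / r, b / r, c / r         # l = a/r, m = b/r, n = c/r

assert r == 7.0
assert math.isclose(l * l + m * m + n * n, 1.0)  # l^2 + m^2 + n^2 = 1
```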

❖

**Types of vectors**

•

**Zero vector:**A vector whose initial and terminal points coincide is called a zero vector (or null vector). It is denoted as $\overrightarrow{0}$. The vectors $\overrightarrow{\mathrm{AA}},\overrightarrow{\mathrm{BB}}$ represent zero vectors.

•

**Unit vector:**A vector whose magnitude is unity, i.e. 1 unit, is called a unit vector. The unit vector in the direction of any given vector $\overrightarrow{a}$ is denoted by $\hat{a}$ and it is calculated by $\hat{a}=\frac{1}{\left|\overrightarrow{a}\right|}\overrightarrow{a}$

**Note:**if

*l*,

*m*, and

*n*are direction cosines of a vector, then $l\hat{i}+m\hat{j}+n\hat{k}$ is the unit vector in the direction of that vector.

•

**Co-initial vectors:**Two or more vectors are said to be co-initial vectors, if they have the same initial point.

•

**Collinear vectors:**Two or more vectors are said to be collinear vectors, if they are parallel to the same line irrespective of their magnitudes and directions.

•

**Equal vectors:**Two vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ are said to be equal, if they have the same magnitude and direction regardless of the positions of their initial points. They are written as $\overrightarrow{a}=\overrightarrow{b}$

•

**Negative of a vector:**Two vectors are said to be negatives of one another, if they have the same magnitude but opposite directions.

For example, the negative of a vector $\overrightarrow{\mathrm{AB}}$ is written as $\overrightarrow{\mathrm{BA}}=-\overrightarrow{\mathrm{AB}}$

❖

**Addition of vectors**

•

**Triangle law of vector addition:**If two vectors are represented by two sides of a triangle taken in order, then the third side of the triangle, taken in the opposite order, represents the sum of the two vectors.

$\overrightarrow{\mathrm{AC}}=\overrightarrow{\mathrm{AB}}+\overrightarrow{\mathrm{BC}}$

**Note:**The vector sum of the three sides of a triangle taken in order is $\overrightarrow{0}$

•

**Parallelogram law of vector addition:**If two vectors are represented by two adjacent sides of a parallelogram, then the diagonal of the parallelogram passing through their common initial point represents the sum of the two vectors.

$\overrightarrow{c}=\overrightarrow{a}+\overrightarrow{b}$

❖

**Properties of vector addition**

• Commutative property: $\overrightarrow{a}+\overrightarrow{b}=\overrightarrow{b}+\overrightarrow{a}$

• Associative property: $\overrightarrow{a}+(\overrightarrow{b}+\overrightarrow{c})=(\overrightarrow{a}+\overrightarrow{b})+\overrightarrow{c}$

• Existence of additive identity: The vector $\overrightarrow{0}$ is additive identity of a vector $\overrightarrow{a}$, since $\overrightarrow{a}+\overrightarrow{0}=\overrightarrow{0}+\overrightarrow{a}=\overrightarrow{a}$

• Existence of additive inverse: The vector $-\overrightarrow{a}$ is called the additive inverse of $\overrightarrow{a}$, since $\overrightarrow{a}+(-\overrightarrow{a})=(-\overrightarrow{a})+\overrightarrow{a}=\overrightarrow{0}$

❖

**Operations on vectors**

• The multiplication of a vector $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ by any scalar λ is given by,

$\lambda \overrightarrow{a}=\left(\lambda {a}_{1}\right)\hat{i}+\left(\lambda {a}_{2}\right)\hat{j}+\left(\lambda {a}_{3}\right)\hat{k}$

• The magnitude of the vector $\lambda \overrightarrow{a}$ is given by $\left|\lambda \overrightarrow{a}\right|=\left|\lambda \right|\left|\overrightarrow{a}\right|$

• The sum of two vectors $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$ is given by,

$\overrightarrow{a}+\overrightarrow{b}=({a}_{1}+{b}_{1})\hat{i}+({a}_{2}+{b}_{2})\hat{j}+({a}_{3}+{b}_{3})\hat{k}$

• The difference of two vectors $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$is given by $\overrightarrow{a}-\overrightarrow{b}=({a}_{1}-{b}_{1})\hat{i}+({a}_{2}-{b}_{2})\hat{j}+({a}_{3}-{b}_{3})\hat{k}$

❖

**Equality of vectors**

The vectors $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$ are equal, if and only if

*a*

_{1}=

*b*

_{1},

*a*

_{2}=

*b*

_{2}, and

*a*

_{3}=

*b*

_{3}

❖

**Distributive law for vectors**

Let $\overrightarrow{{a}_{1}}$ and $\overrightarrow{{a}_{2}}$ be two vectors, and

*k*

_{1}and

*k*

_{2}be any scalars, then the following are the distributive laws of addition and multiplication of a vector by a scalar:

• ${k}_{1}\overrightarrow{{a}_{1}}+{k}_{2}\overrightarrow{{a}_{1}}=({k}_{1}+{k}_{2})\overrightarrow{{a}_{1}}$

• ${k}_{1}\left({k}_{2}\overrightarrow{{a}_{1}}\right)=\left({k}_{1}{k}_{2}\right)\overrightarrow{{a}_{1}}$

• ${k}_{1}(\overrightarrow{{a}_{1}}+\overrightarrow{{a}_{2}})={k}_{1}\overrightarrow{{a}_{1}}+{k}_{1}\overrightarrow{{a}_{2}}$

❖

**Collinear vectors**

• Two vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ are collinear, if and only if there exists a non-zero scalar λ such that $\overrightarrow{b}=\lambda \overrightarrow{a}$

• Two vectors $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$ are collinear, if and only if $\frac{{a}_{1}}{{b}_{1}}=\frac{{a}_{2}}{{b}_{2}}=\frac{{a}_{3}}{{b}_{3}}$

❖

**Vector joining two points**

The magnitude of the vector joining the two points P

_{1}(

*x*

_{1},

*y*

_{1},

*z*

_{1}) and P

_{2}(

*x*

_{2},

*y*

_{2},

*z*

_{2}) is given by $\left|\overrightarrow{{\mathrm{P}}_{1}{\mathrm{P}}_{2}}\right|=\sqrt{({x}_{2}-{x}_{1}{)}^{2}+({y}_{2}-{y}_{1}{)}^{2}+({z}_{2}-{z}_{1}{)}^{2}}$

❖

**Section formula**

The position vector of a point R dividing a line segment joining the points P and Q, whose position vectors are $\overrightarrow{a}$ and $\overrightarrow{b}$respectively, in the ratio

*m*:

*n*

• internally, is given by$\frac{n\overrightarrow{a}+m\overrightarrow{b}}{m+n}$

• externally, is given by $\frac{m\overrightarrow{b}-n\overrightarrow{a}}{m-n}$
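The internal section formula works componentwise; the points P(1, 2, 3), Q(4, 5, 6) and ratio 1 : 2 below are illustrative choices:

```python
# Section formula (internal division) in component form.
# P(1, 2, 3), Q(4, 5, 6) and the ratio m : n = 1 : 2 are illustrative choices.
def section_internal(p, q, m, n):
    # R = (n*P + m*Q) / (m + n), applied to each component
    return tuple((n * pi + m * qi) / (m + n) for pi, qi in zip(p, q))

P, Q = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
R = section_internal(P, Q, 1, 2)

assert R == (2.0, 3.0, 4.0)  # R divides PQ in the ratio 1 : 2 internally
```

With *m* = *n* the formula reduces to the midpoint of PQ.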

❖

**Scalar product of vectors**

The scalar product of two non-zero vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ is denoted by $\overrightarrow{a}\cdot \overrightarrow{b}$ and is given by the formula $\overrightarrow{a}\cdot \overrightarrow{b}=\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|\mathrm{cos}\theta $, where *θ* is the angle between $\overrightarrow{a}$ and $\overrightarrow{b}$ such that 0 ≤ *θ* ≤ π.

If either $\overrightarrow{a}=\overrightarrow{0}$ or $\overrightarrow{b}=\overrightarrow{0}$, then *θ* is not defined, and in this case, $\overrightarrow{a}\cdot \overrightarrow{b}=0$

The following are the observations related to the scalar product of two vectors:

• $\overrightarrow{a}\cdot \overrightarrow{b}$ is a real number.

• The angle *θ* between vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ is given by

$\mathrm{cos}\theta =\frac{\overrightarrow{a}\cdot \overrightarrow{b}}{\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|}$, i.e., $\theta ={\mathrm{cos}}^{-1}\left(\frac{\overrightarrow{a}\cdot \overrightarrow{b}}{\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|}\right)$

• $\overrightarrow{a}\cdot \overrightarrow{b}=0$, if and only if $\overrightarrow{a}\perp \overrightarrow{b}$

• If *θ* = 0, then $\overrightarrow{a}\cdot \overrightarrow{b}=\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|$

• If *θ* = π, then $\overrightarrow{a}\cdot \overrightarrow{b}=-\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|$

• $\hat{i}\cdot \hat{i}=\hat{j}\cdot \hat{j}=\hat{k}\cdot \hat{k}=1,\hat{i}\cdot \hat{j}=\hat{j}\cdot \hat{k}=\hat{k}\cdot \hat{i}=0$

• If $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$, then $\overrightarrow{a}\cdot \overrightarrow{b}={a}_{1}{b}_{1}+{a}_{2}{b}_{2}+{a}_{3}{b}_{3}$
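The component formula and the angle formula above can be sketched together in Python; the function names `dot` and `angle` are illustrative:

```python
import math

def dot(a, b):
    """Scalar product: a1*b1 + a2*b2 + a3*b3."""
    return sum(x * y for x, y in zip(a, b))

def angle(a, b):
    """Angle theta between non-zero vectors, from cos(theta) = a.b / (|a||b|)."""
    return math.acos(dot(a, b) / math.sqrt(dot(a, a) * dot(b, b)))

print(dot((1, 2, 3), (4, -5, 2)))                 # 4 - 10 + 6 = 0, so the vectors are perpendicular
print(math.degrees(angle((1, 0, 0), (1, 1, 0))))  # approximately 45 degrees
```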

❖

**Properties of scalar product**

• Commutative property: $\overrightarrow{a}\cdot \overrightarrow{b}=\overrightarrow{b}\cdot \overrightarrow{a}$

• Distributivity of scalar product over addition: $\overrightarrow{a}\cdot (\overrightarrow{b}+\overrightarrow{c})=\overrightarrow{a}\cdot \overrightarrow{b}+\overrightarrow{a}\cdot \overrightarrow{c}$

❖

**Projection of a vector**

• If $\hat{p}$ is the unit vector along a line *l*, then the projection of a vector $\overrightarrow{a}$ on the line *l* is given by $\overrightarrow{a}\cdot \hat{p}$.

• Projection of a vector $\overrightarrow{a}$ on another vector $\overrightarrow{b}$ is given by $\overrightarrow{a}\cdot \hat{b}$, i.e., $\frac{\overrightarrow{a}\cdot \overrightarrow{b}}{\left|\overrightarrow{b}\right|}$.

❖

**Vector product of vectors**

The vector product (or cross product) of two non-zero vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ is denoted by $\overrightarrow{a}\times \overrightarrow{b}$ and is defined by $\overrightarrow{a}\times \overrightarrow{b}=\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|\mathrm{sin}\theta \hat{n}$, where *θ* is the angle between $\overrightarrow{a}$ and $\overrightarrow{b}$, 0 ≤ *θ* ≤ π, and $\hat{n}$ is a unit vector perpendicular to both $\overrightarrow{a}$ and $\overrightarrow{b}$.

If $\overrightarrow{a}={a}_{1}\hat{i}+{a}_{2}\hat{j}+{a}_{3}\hat{k}$ and $\overrightarrow{b}={b}_{1}\hat{i}+{b}_{2}\hat{j}+{b}_{3}\hat{k}$ are two vectors, then their cross product $\overrightarrow{a}\times \overrightarrow{b}$ is given by the determinant $\overrightarrow{a}\times \overrightarrow{b}=\left|\begin{array}{ccc}\hat{i}& \hat{j}& \hat{k}\\ {a}_{1}& {a}_{2}& {a}_{3}\\ {b}_{1}& {b}_{2}& {b}_{3}\end{array}\right|$

The following are the observations related to the vector product of two vectors:

• $\overrightarrow{a}\times \overrightarrow{b}=\overrightarrow{0}$, if and only if $\overrightarrow{a}\parallel \overrightarrow{b}$

• $\hat{i}\times \hat{i}=\hat{j}\times \hat{j}=\hat{k}\times \hat{k}=\overrightarrow{0},\hat{i}\times \hat{j}=\hat{k},\hat{j}\times \hat{k}=\hat{i},\hat{k}\times \hat{i}=\hat{j}$

• In terms of vector product, the angle *θ* between two vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ is given by $\mathrm{sin}\theta =\frac{\left|\overrightarrow{a}\times \overrightarrow{b}\right|}{\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|}$ or $\theta ={\mathrm{sin}}^{-1}\left(\frac{\left|\overrightarrow{a}\times \overrightarrow{b}\right|}{\left|\overrightarrow{a}\right|\left|\overrightarrow{b}\right|}\right)$

• If $\overrightarrow{a}$ and $\overrightarrow{b}$ represent the adjacent sides of a triangle, then its area is given as $\frac{1}{2}\left|\overrightarrow{a}\times \overrightarrow{b}\right|$.

• If $\overrightarrow{a}$ and $\overrightarrow{b}$ represent the adjacent sides of a parallelogram, then its area is given as $\left|\overrightarrow{a}\times \overrightarrow{b}\right|$.
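The determinant expansion of the cross product and the area interpretation can be sketched in Python; the names `cross` and `triangle_area` are illustrative:

```python
import math

def cross(a, b):
    """a x b via the determinant expansion of |i j k; a1 a2 a3; b1 b2 b3|."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1)

def triangle_area(a, b):
    """Half the magnitude of a x b, for adjacent sides a and b."""
    c = cross(a, b)
    return 0.5 * math.sqrt(sum(x * x for x in c))

print(cross((1, 0, 0), (0, 1, 0)))          # (0, 0, 1), i.e. i x j = k
print(triangle_area((3, 0, 0), (0, 4, 0)))  # 6.0, a 3-4-5 right triangle
```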

❖

**Properties of vector product**

• Not commutative: $\overrightarrow{a}\times \overrightarrow{b}\ne \overrightarrow{b}\times \overrightarrow{a}$

However, $\overrightarrow{a}\times \overrightarrow{b}=-(\overrightarrow{b}\times \overrightarrow{a})$

• Distributivity of vector product over addition: $\overrightarrow{a}\times (\overrightarrow{b}+\overrightarrow{c})=\overrightarrow{a}\times \overrightarrow{b}+\overrightarrow{a}\times \overrightarrow{c}$

**Chapter 11: Three Dimensional Geometry**

❖ **Direction cosines (d.c.’s) of a line**

• D.c.’s of a line are the cosines of angles made by the line with the positive direction of the coordinate axes.

• If *l*, *m*, and *n* are the d.c.’s of a line, then *l*^{2} + *m*^{2} + *n*^{2} = 1

• D.c.’s of a line joining two points P (*x*_{1}, *y*_{1},* z*_{1}) and Q (*x*_{2}, *y*_{2},* z*_{2}) are $\frac{{x}_{2}-{x}_{1}}{\mathrm{PQ}},\frac{{y}_{2}-{y}_{1}}{\mathrm{PQ}},\frac{{z}_{2}-{z}_{1}}{\mathrm{PQ}}$, where PQ = $\sqrt{({x}_{2}-{x}_{1}{)}^{2}+({y}_{2}-{y}_{1}{)}^{2}+({z}_{2}-{z}_{1}{)}^{2}}$
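The d.c.'s of a line joining two points, and the identity *l*² + *m*² + *n*² = 1, can be checked numerically; the function name `direction_cosines` is illustrative:

```python
import math

def direction_cosines(p, q):
    """d.c.'s of the line PQ: (x2-x1)/PQ, (y2-y1)/PQ, (z2-z1)/PQ."""
    pq = math.dist(p, q)  # the distance PQ
    return tuple((b - a) / pq for a, b in zip(p, q))

l, m, n = direction_cosines((1, 2, 3), (3, 4, 5))
print(round(l * l + m * m + n * n, 10))   # 1.0, since l^2 + m^2 + n^2 = 1
```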

❖ **Direction ratios (d.r.’s) of a line**

• D.r.’s of a line are the numbers which are proportional to the d.c.’s of the line.

• D.r.’s of a line joining two points P (*x*_{1}, *y*_{1},* z*_{1}) and Q (*x*_{2}, *y*_{2},* z*_{2}) are given by *x*_{1} –*x*_{2}, *y*_{1} – *y*_{2}, *z*_{1} – *z*_{2} or *x*_{2} – *x*_{1}, *y*_{2} – *y*_{1}, *z*_{2} – *z*_{1}.

❖ If *a*, *b*, and *c* are the d.r.’s of a line and *l*, *m*, and *n* are its d.c.’s, then

• $\frac{l}{a}=\frac{m}{b}=\frac{n}{c}$

• $l=\pm \frac{a}{\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}},m=\pm \frac{b}{\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}},n=\pm \frac{c}{\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}}$

❖ **Equation of a line through a given point and parallel to a given vector**

• **Vector form**

Equation of a line that passes through the given point whose position vector is $\overrightarrow{a}$ and which is parallel to a given vector $\overrightarrow{b}$ is $\overrightarrow{r}=\overrightarrow{a}+\lambda \overrightarrow{b}$, where λ is a constant.

• **Cartesian form**

⚬ Equation of a line that passes through a point (*x*_{1}, *y*_{1}, *z*_{1}) having d.r.’s as *a*, *b*, *c* is given by $\frac{x-{x}_{1}}{a}=\frac{y-{y}_{1}}{b}=\frac{z-{z}_{1}}{c}$

⚬ Equation of a line that passes through a point (*x*_{1}, *y*_{1}, *z*_{1}) having d.c.’s as *l*, *m*, *n* is given by $\frac{x-{x}_{1}}{l}=\frac{y-{y}_{1}}{m}=\frac{z-{z}_{1}}{n}$

❖ **Equation of a line passing through two given points**

• **Vector form:** Equation of a line passing through two points whose position vectors are $\overrightarrow{a}$ and $\overrightarrow{b}$ is given by $\overrightarrow{r}=\overrightarrow{a}+\lambda (\overrightarrow{b}-\overrightarrow{a})$, where λ ∈ **R**

• **Cartesian form:** Equation of a line that passes through two given points (*x*_{1}, *y*_{1},* z*_{1}) and (*x*_{2}, *y*_{2},* z*_{2}) is given by,

$\frac{x-{x}_{1}}{{x}_{2}-{x}_{1}}=\frac{y-{y}_{1}}{{y}_{2}-{y}_{1}}=\frac{z-{z}_{1}}{{z}_{2}-{z}_{1}}$

❖ **Skew lines and angle between them**

• Two lines in space are said to be **skew lines**, if they are neither parallel nor intersecting. They lie in different planes.

• Angle between two skew lines is the angle between two intersecting lines drawn from any point (preferably from the origin) parallel to each of the skew lines.

❖ **Angle between two non-skew lines**

• **Cartesian form**

⚬ If *l*_{1}, *m*_{1}, *n*_{1} and *l*_{2}, *m*_{2}, *n*_{2} are the d.c.’s of two lines and *θ* is the acute angle between them, then $\mathrm{cos}\theta =\left|{l}_{1}{l}_{2}+{m}_{1}{m}_{2}+{n}_{1}{n}_{2}\right|$

⚬ If *a*_{1}, *b*_{1}, *c*_{1} and *a*_{2}, *b*_{2}, *c*_{2} are the d.r.’s of two lines and *θ* is the acute angle between them, then $\mathrm{cos}\theta =\left|\frac{{a}_{1}{a}_{2}+{b}_{1}{b}_{2}+{c}_{1}{c}_{2}}{\sqrt{{a}_{1}^{2}+{b}_{1}^{2}+{c}_{1}^{2}}\sqrt{{a}_{2}^{2}+{b}_{2}^{2}+{c}_{2}^{2}}}\right|$

• **Vector form**

If *θ* is the acute angle between the lines $\overrightarrow{r}={\overrightarrow{a}}_{1}+\lambda {\overrightarrow{b}}_{1}$ and $\overrightarrow{r}={\overrightarrow{a}}_{2}+\mu {\overrightarrow{b}}_{2}$, then $\mathrm{cos}\theta =\left|\frac{{\overrightarrow{b}}_{1}\cdot {\overrightarrow{b}}_{2}}{\left|{\overrightarrow{b}}_{1}\right|\left|{\overrightarrow{b}}_{2}\right|}\right|$

❖ Two lines with d.r.’s *a*_{1}, *b*_{1}, *c*_{1} and *a*_{2}, *b*_{2}, *c*_{2} are

• perpendicular, if ${a}_{1}{a}_{2}+{b}_{1}{b}_{2}+{c}_{1}{c}_{2}=0$

• parallel, if $\frac{{a}_{1}}{{a}_{2}}=\frac{{b}_{1}}{{b}_{2}}=\frac{{c}_{1}}{{c}_{2}}$

❖ **Shortest distance between two skew lines:** The shortest distance is the line segment perpendicular to both the lines.

• **Vector form: **Distance between two skew lines $\overrightarrow{r}={\overrightarrow{a}}_{1}+\lambda {\overrightarrow{b}}_{1}$ and $\overrightarrow{r}={\overrightarrow{a}}_{2}+\mu {\overrightarrow{b}}_{2}$ is given by,

$d=\left|\frac{({\overrightarrow{b}}_{1}\times {\overrightarrow{b}}_{2})\cdot ({\overrightarrow{a}}_{2}-{\overrightarrow{a}}_{1})}{|{\overrightarrow{b}}_{1}\times {\overrightarrow{b}}_{2}|}\right|$

• **Cartesian form: **The shortest distance between two lines $\frac{x-{x}_{1}}{{a}_{1}}=\frac{y-{y}_{1}}{{b}_{1}}=\frac{z-{z}_{1}}{{c}_{1}}$ and $\frac{x-{x}_{2}}{{a}_{2}}=\frac{y-{y}_{2}}{{b}_{2}}=\frac{z-{z}_{2}}{{c}_{2}}$ is given by,

$d=\left|\frac{\left|\begin{array}{ccc}{x}_{2}-{x}_{1}& {y}_{2}-{y}_{1}& {z}_{2}-{z}_{1}\\ {a}_{1}& {b}_{1}& {c}_{1}\\ {a}_{2}& {b}_{2}& {c}_{2}\end{array}\right|}{\sqrt{({b}_{1}{c}_{2}-{b}_{2}{c}_{1}{)}^{2}+({c}_{1}{a}_{2}-{c}_{2}{a}_{1}{)}^{2}+({a}_{1}{b}_{2}-{a}_{2}{b}_{1}{)}^{2}}}\right|$
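The shortest-distance formula can be sketched numerically from a point and a direction vector for each line; the function name `shortest_distance` is illustrative:

```python
import math

def shortest_distance(p1, d1, p2, d2):
    """Shortest distance between skew lines through p1 along d1 and p2 along d2:
    |(d1 x d2) . (p2 - p1)| / |d1 x d2|."""
    # Cross product d1 x d2
    cx = (d1[1] * d2[2] - d1[2] * d2[1],
          d1[2] * d2[0] - d1[0] * d2[2],
          d1[0] * d2[1] - d1[1] * d2[0])
    diff = tuple(b - a for a, b in zip(p1, p2))
    num = abs(sum(c * d for c, d in zip(cx, diff)))
    return num / math.sqrt(sum(c * c for c in cx))

# x-axis and a line through (0, 0, 1) along the y-axis are skew at distance 1.
print(shortest_distance((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)))   # 1.0
```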

❖ The shortest distance between two parallel lines $\overrightarrow{r}={\overrightarrow{a}}_{1}+\lambda \overrightarrow{b}$ and $\overrightarrow{r}={\overrightarrow{a}}_{2}+\mu \overrightarrow{b}$ is given by, $d=\left|\frac{\overrightarrow{b}\times ({\overrightarrow{a}}_{2}-{\overrightarrow{a}}_{1})}{\left|\overrightarrow{b}\right|}\right|$

❖ **Equation of a plane in normal form**

• **Vector form: **Equation of a plane which is at a distance of *d* from the origin, with $\hat{n}$ as the unit vector normal to the plane through the origin, is $\overrightarrow{r}\cdot \hat{n}=d$, where $\overrightarrow{r}$ is the position vector of a point in the plane

• **Cartesian form: **Equation of a plane which is at a distance *d* from the origin and the d.c.’s of the normal to the plane as *l*, *m*, *n* is *lx* + *my* + *nz* = *d*

❖ **Equation of a plane perpendicular to a given vector and passing through a given point**

• **Vector form: **Equation of a plane through a point whose position vector is $\overrightarrow{a}$ and perpendicular to the vector $\overrightarrow{\mathrm{N}}$ is $(\overrightarrow{r}-\overrightarrow{a})\cdot \overrightarrow{\mathrm{N}}=0$, where $\overrightarrow{r}$ is the position vector of a point in the plane

• **Cartesian form: **Equation of plane passing through the point (*x*_{1}, *y*_{1}, *z*_{1}) and perpendicular to a given line whose d.r.’s are A, B, C is $\mathrm{A}(x-{x}_{1})+\mathrm{B}(y-{y}_{1})+\mathrm{C}(z-{z}_{1})=0$

❖ **Equation of a plane passing through three non-collinear points**

• **Cartesian form: **Equation of a plane passing through three non-collinear points (*x*_{1}, *y*_{1}, *z*_{1}), (*x*_{2}, *y*_{2}, *z*_{2}), and (*x*_{3}, *y*_{3}, *z*_{3}) is

$\left|\begin{array}{ccc}x-{x}_{1}& y-{y}_{1}& z-{z}_{1}\\ {x}_{2}-{x}_{1}& {y}_{2}-{y}_{1}& {z}_{2}-{z}_{1}\\ {x}_{3}-{x}_{1}& {y}_{3}-{y}_{1}& {z}_{3}-{z}_{1}\end{array}\right|=0$

• **Vector form: **Equation of a plane that contains three non-collinear points having position vectors $\overrightarrow{a}$, $\overrightarrow{b}$, and $\overrightarrow{c}$ is $(\overrightarrow{r}-\overrightarrow{a})\cdot \left[(\overrightarrow{b}-\overrightarrow{a})\times (\overrightarrow{c}-\overrightarrow{a})\right]=0$, where $\overrightarrow{r}$ is the position vector of a point in the plane.

❖ **Intercept form of the equation of a plane**

Equation of a plane having *x*, *y*, and *z* intercepts as *a*, *b*, and *c* respectively i.e., the equation of the plane that cuts the coordinate axes at (*a*, 0, 0), (0, *b*, 0), and (0, 0, *c*) is given by,

$\frac{x}{a}+\frac{y}{b}+\frac{z}{c}=1$

❖ **Planes passing through the intersection of two planes**

• **Vector form: **Equation of the plane passing through the intersection of two planes $\overrightarrow{r}\cdot {\overrightarrow{n}}_{1}={d}_{1}$ and $\overrightarrow{r}\cdot {\overrightarrow{n}}_{2}={d}_{2}$ is given by,

$\overrightarrow{r}\cdot ({\overrightarrow{n}}_{1}+\lambda {\overrightarrow{n}}_{2})={d}_{1}+\lambda {d}_{2}$, where *λ* is a non-zero constant

• **Cartesian form: **Equation of a plane passing through the intersection of two planes ${\mathrm{A}}_{1}x+{\mathrm{B}}_{1}y+{\mathrm{C}}_{1}z-{d}_{1}=0$ and ${\mathrm{A}}_{2}x+{\mathrm{B}}_{2}y+{\mathrm{C}}_{2}z-{d}_{2}=0$ is given by,

$({\mathrm{A}}_{1}x+{\mathrm{B}}_{1}y+{\mathrm{C}}_{1}z-{d}_{1})+\lambda ({\mathrm{A}}_{2}x+{\mathrm{B}}_{2}y+{\mathrm{C}}_{2}z-{d}_{2})=0$, where *λ* is a non-zero constant

❖ **Co-planarity of two lines**

• **Vector form: **Two lines $\overrightarrow{r}={\overrightarrow{a}}_{1}+\lambda {\overrightarrow{b}}_{1}$ and $\overrightarrow{r}={\overrightarrow{a}}_{2}+\mu {\overrightarrow{b}}_{2}$ are co-planar, if

$({\overrightarrow{a}}_{2}-{\overrightarrow{a}}_{1})\cdot ({\overrightarrow{b}}_{1}\times {\overrightarrow{b}}_{2})=0$

• **Cartesian form: **Two lines $\frac{x-{x}_{1}}{{a}_{1}}=\frac{y-{y}_{1}}{{b}_{1}}=\frac{z-{z}_{1}}{{c}_{1}}$ and $\frac{x-{x}_{2}}{{a}_{2}}=\frac{y-{y}_{2}}{{b}_{2}}=\frac{z-{z}_{2}}{{c}_{2}}$ are co-planar, if

$\left|\begin{array}{ccc}{x}_{2}-{x}_{1}& {y}_{2}-{y}_{1}& {z}_{2}-{z}_{1}\\ {a}_{1}& {b}_{1}& {c}_{1}\\ {a}_{2}& {b}_{2}& {c}_{2}\end{array}\right|=0$

❖ **Angle between two planes: **The angle between two planes is defined as the angle between their normals.

• **Vector form: **If *θ* is the angle between the two planes $\overrightarrow{r}\cdot {\overrightarrow{n}}_{1}={d}_{1}$ and $\overrightarrow{r}\cdot {\overrightarrow{n}}_{2}={d}_{2}$, then $\mathrm{cos}\theta =\left|\frac{{\overrightarrow{n}}_{1}\cdot {\overrightarrow{n}}_{2}}{\left|{\overrightarrow{n}}_{1}\right|\left|{\overrightarrow{n}}_{2}\right|}\right|$

Note that if two planes are perpendicular to each other, then ${\overrightarrow{n}}_{1}\cdot {\overrightarrow{n}}_{2}=0$; and if they are parallel to each other, then ${\overrightarrow{n}}_{1}$ is parallel to ${\overrightarrow{n}}_{2}$.

• **Cartesian form: **If *θ* is the angle between the two planes ${\mathrm{A}}_{1}x+{\mathrm{B}}_{1}y+{\mathrm{C}}_{1}z+{\mathrm{D}}_{1}=0$ and ${\mathrm{A}}_{2}x+{\mathrm{B}}_{2}y+{\mathrm{C}}_{2}z+{\mathrm{D}}_{2}=0$, then

$\mathrm{cos}\theta =\left|\frac{{\mathrm{A}}_{1}{\mathrm{A}}_{2}+{\mathrm{B}}_{1}{\mathrm{B}}_{2}+{\mathrm{C}}_{1}{\mathrm{C}}_{2}}{\sqrt{{\mathrm{A}}_{1}^{2}+{\mathrm{B}}_{1}^{2}+{\mathrm{C}}_{1}^{2}}\sqrt{{\mathrm{A}}_{2}^{2}+{\mathrm{B}}_{2}^{2}+{\mathrm{C}}_{2}^{2}}}\right|$

Note that if two planes are perpendicular to each other, then ${\mathrm{A}}_{1}{\mathrm{A}}_{2}+{\mathrm{B}}_{1}{\mathrm{B}}_{2}+{\mathrm{C}}_{1}{\mathrm{C}}_{2}=0$; and if they are parallel to each other, then $\frac{{\mathrm{A}}_{1}}{{\mathrm{A}}_{2}}=\frac{{\mathrm{B}}_{1}}{{\mathrm{B}}_{2}}=\frac{{\mathrm{C}}_{1}}{{\mathrm{C}}_{2}}$

❖ **Distance of a point from a plane**

• **Vector form: **The distance of a point, whose position vector is $\overrightarrow{a}$, from the plane $\overrightarrow{r}\cdot \hat{n}=d$ is $\left|\overrightarrow{a}\cdot \hat{n}-d\right|$.

**Note:**

⚬ If the equation of the plane is in the form $\overrightarrow{r}\cdot \overrightarrow{\mathrm{N}}=d$, where $\overrightarrow{\mathrm{N}}$ is the normal to the plane, then the perpendicular distance is $\frac{\left|\overrightarrow{a}\cdot \overrightarrow{\mathrm{N}}-d\right|}{\left|\overrightarrow{\mathrm{N}}\right|}$

⚬ Length of the perpendicular from the origin to the plane $\overrightarrow{r}\cdot \overrightarrow{\mathrm{N}}=d$ is $\frac{\left|d\right|}{\left|\overrightarrow{\mathrm{N}}\right|}$

• **Cartesian form: **The distance from a point (*x*_{1}, *y*_{1}, *z*_{1}) to the plane A*x* + B*y* + C*z* + D = 0 is $\frac{\left|\mathrm{A}{x}_{1}+\mathrm{B}{y}_{1}+\mathrm{C}{z}_{1}+\mathrm{D}\right|}{\sqrt{{\mathrm{A}}^{2}+{\mathrm{B}}^{2}+{\mathrm{C}}^{2}}}$
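The Cartesian distance formula translates directly into code; the function name `point_plane_distance` is illustrative:

```python
import math

def point_plane_distance(point, A, B, C, D):
    """Distance from (x1, y1, z1) to the plane Ax + By + Cz + D = 0:
    |A*x1 + B*y1 + C*z1 + D| / sqrt(A^2 + B^2 + C^2)."""
    x1, y1, z1 = point
    return abs(A * x1 + B * y1 + C * z1 + D) / math.sqrt(A * A + B * B + C * C)

# Distance from (1, 1, 1) to the plane z = 4, i.e. 0x + 0y + 1z - 4 = 0.
print(point_plane_distance((1, 1, 1), 0, 0, 1, -4))   # 3.0
```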

❖ **Angle between a line and a plane: **The angle *ϕ* between a line $\overrightarrow{r}=\overrightarrow{a}+\lambda \overrightarrow{b}$ and the plane $\overrightarrow{r}\cdot \overrightarrow{n}=d$ is the complement of the angle between the line and the normal to the plane and is given by $\mathrm{sin}\varphi =\left|\frac{\overrightarrow{b}\cdot \overrightarrow{n}}{\left|\overrightarrow{b}\right|\left|\overrightarrow{n}\right|}\right|$

**Chapter 12: Linear Programming**

❖ Problems which seek to maximise (or minimise) a linear function (say, of two variables *x* and *y*) subject to certain constraints determined by a set of linear inequalities are called optimisation problems.

❖ A Linear Programming Problem (L.P.P.) is the one that is concerned with finding the optimal value (maximum or minimum value) of a linear function of several variables (called objective function), subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities (called constraints). The variables are sometimes called the decision variables.

**For example: **The following is an L.P.P.

Maximise *Z* = 10*x* + 12*y*

Subject to the following constraints:

5*x* + 3*y* ≤ 30 ... (1)

*x* + 2*y* ≥ 2 ... (2)

*x* ≥ 0, *y* ≥ 0 ... (3)

In this L.P.P., the objective function is *Z* = 10*x* + 12*y*. The inequalities (1), (2), and (3) are called constraints.

❖ The common region determined by all the constraints including the non-negative constraints *x* ≥ 0, *y* ≥ 0 of a linear programming problem is called the **feasible region** (or solution region) for the problem. The region outside this feasible region is called the **infeasible region**.

❖ Points within and on the boundary of the feasible region represent **feasible solutions** of the constraints. Any point outside the feasible region is an **infeasible solution**.

❖ Any point in the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an **optimal solution**.

❖ **Fundamental theorems for solving linear programming problems**

**Theorem 1: **Let R be the feasible region for a linear programming problem and let *Z* = *ax* + *by* be the objective function. When *Z* has an optimal value, where the variables *x* and *y* are subject to constraints described by linear inequalities, this optimal value must occur at a corner point of the feasible region.

**Theorem 2: **Let R be the feasible region for a linear programming problem, and let *Z* = *ax* + *by* be the objective function. If R is bounded, then the objective function *Z* has both a maximum and a minimum value on R and each of these occurs at a corner point of R.

❖ If the feasible region is unbounded, then a maximum or a minimum may not exist. However, if it exists, then it must occur at a corner point of R.

❖ **Corner point method: **This method is used for solving a linear programming problem and comprises the following steps:

Step 1) Find the feasible region of the L.P.P. and determine its corner points.

Step 2) Evaluate the objective function *Z* = *ax* + *by* at each corner point. Let *M* and *m* respectively be the largest and smallest values at these points.

Step 3) If the feasible region is bounded, then *M* and *m* respectively are the maximum and minimum values of the objective function.

**If the feasible region is unbounded**

• If the open half plane determined by *ax* + *by* > *M* has no point in common with the feasible region, then *M* is the maximum value of the objective function. Otherwise, the objective function has no maximum value.

• If the open half plane determined by *ax* + *by* < *m* has no point in common with the feasible region, then *m* is the minimum value of the objective function. Otherwise, the objective function has no minimum value.

❖ If two corner points of the feasible region are both optimal solutions of the same type, i.e. both produce the same maximum or minimum, then any point on the line segment joining these two points is also an optimal solution of the same type.

❖ A few important linear programming problems are: diet problems, manufacturing problems, transportation problems, and allocation problems.

**Example 1:**

A firm is engaged in breeding goats. The goats are fed on various products grown in the farm. They require certain nutrients, named A, B, and C. The goats are fed on two products P and Q. One unit of product P contains 12 units of A, 18 units of B, and 25 units of C, while one unit of product Q contains 24 units of A, 9 units of B, and 25 units of C. The minimum requirements of A and B are 144 units and 108 units respectively, whereas the maximum requirement of C is 250 units. Product P costs Rs 35 per unit, whereas product Q costs Rs 45 per unit. Formulate this as a linear programming problem. How many units of each product may be taken to minimise the cost? Also find the minimum cost.

**Solution:**

Let *x* and *y* be the number of units taken from products P and Q respectively to minimise the cost. The mathematical formulation of the given L.P.P. is as follows:

Minimise *Z* = 35*x* + 45*y*

Subject to the constraints

12*x* + 24*y* ≥ 144 (constraint on A) ⇒ *x* + 2*y* ≥ 12 ... (1)

18*x* + 9*y* ≥ 108 (constraint on B) ⇒ 2*x* + *y* ≥ 12 ... (2)

25*x* + 25*y* ≤ 250 (constraint on C) ⇒ *x* + *y* ≤ 10 ... (3)

*x* ≥ 0, *y* ≥ 0 ... (4)

The feasible region determined by the system of constraints is as follows:

The shaded region is the feasible region.

The corner points are L (4, 4), M (2, 8), and N (8, 2). The values of *Z* at these corner points are as follows:

| Corner point | *Z* = 35*x* + 45*y* |
| --- | --- |
| L (4, 4) | 320 |
| M (2, 8) | 430 |
| N (8, 2) | 370 |

It can be observed that the value of *Z* is minimum at the corner point L (4, 4) and the minimum value is 320.

Therefore, 4 units of each of the products P and Q are taken to minimise the cost and the minimum cost is Rs 320.
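The corner-point evaluation in this example is easy to verify by computing *Z* = 35*x* + 45*y* at each corner, a minimal sketch:

```python
# Objective Z = 35x + 45y evaluated at the corner points of the feasible region.
corners = {"L": (4, 4), "M": (2, 8), "N": (8, 2)}
values = {name: 35 * x + 45 * y for name, (x, y) in corners.items()}
print(values)                        # {'L': 320, 'M': 430, 'N': 370}
print(min(values, key=values.get))   # 'L', the corner giving the minimum cost of Rs 320
```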

**Chapter 13: Probability**

❖

**Conditional probability**

If *E* and *F* are two events associated with the sample space of a random experiment, then the conditional probability of event *E*, given that *F* has already occurred, is denoted by P(*E*/*F*) and is given by the formula:

P(*E*/*F*) = $\frac{\mathrm{P}(E\cap F)}{\mathrm{P}\left(F\right)}$, where P(*F*) ≠ 0
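The defining formula can be checked on a small sample space; the die events below are an illustrative choice:

```python
from fractions import Fraction

# Rolling a fair die: E = "number is even", F = "number is greater than 3".
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}
F = {4, 5, 6}

def P(event):
    """Probability of an event as favourable outcomes over total outcomes."""
    return Fraction(len(event), len(S))

p_e_given_f = P(E & F) / P(F)        # P(E/F) = P(E ∩ F) / P(F)
print(p_e_given_f)                   # 2/3
```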

❖

**Properties of conditional probability**

If *E* and *F* are two events of a sample space *S* of an experiment, then the following are the properties of conditional probability:

• 0 ≤ P(*E*/*F*) ≤ 1

• P(*F*/*F*) = 1

• P(*S*/*F*) = 1

• P(*E*’/*F*) = 1 – P(*E*/*F*)

• If *A* and *B* are two events of a sample space *S* and *F* is an event of *S* such that P(*F*) ≠ 0, then

⚬ P((*A* ∪ *B*)/*F*) = P(*A*/*F*) + P(*B*/*F*) – P((*A* ⋂ *B*)/*F*)

⚬ P((*A* ∪ *B*)/*F*) = P(*A*/*F*) + P(*B*/*F*), if the events *A* and *B* are disjoint.

❖

**Multiplication theorem of probability**

If *E*, *F*, and *G* are events of a sample space *S* of an experiment, then

• P(*E* ⋂ *F*) = P(*E*). P(*F*/*E*), if P(*E*) ≠ 0

• P(*E* ⋂ *F*) = P(*F*). P(*E*/*F*), if P(*F*) ≠ 0

• P(*E* ⋂ *F* ⋂ *G*) = P(*E*). P(*F*/*E*). P(*G*/(*E* ⋂ *F*)) = P(*E*). P(*F*/*E*). P(*G*/*EF*)

❖

**Independent events**

Two events *E* and *F* are said to be independent events, if the probability of occurrence of one of them is not affected by the occurrence of the other, i.e., if P(*E* ⋂ *F*) = P(*E*). P(*F*).

• If *E* and *F* are two independent events, then

⚬ P(*F*/*E*) = P(*F*), provided P(*E*) ≠ 0

⚬ P(*E*/*F*) = P(*E*), provided P(*F*) ≠ 0

• If three events *A*, *B*, and *C* are independent events, then

P(*A* ⋂ *B* ⋂ *C*) = P(*A*). P(*B*). P(*C*)

• If the events *E* and *F* are independent events, then

⚬ *E*’ and *F* are independent

⚬ *E*’ and *F*’ are independent

❖

**Partition of a sample space**

A set of events *E*_{1}, *E*_{2}, … *E*_{n} is said to represent a partition of the sample space *S*, if

• *E*_{i} ⋂ *E*_{j} = *ϕ*, *i* ≠ *j*, *i*, *j* = 1, 2, 3, … *n*

• *E*_{1} ∪ *E*_{2} ∪ … ∪ *E*_{n} = *S*

• P(*E*_{i}) > 0, ∀ *i* = 1, 2, 3, … *n*

❖

**Theorem of total probability**

Let {*E*_{1}, *E*_{2}, … *E*_{n}} be a partition of the sample space *S*, and suppose P(*E*_{i}) > 0, ∀ *i* = 1, 2, … *n*. Let *A* be any event associated with *S*, then

P(*A*) = P(*E*_{1}). P(*A*/*E*_{1}) + P(*E*_{2}). P(*A*/*E*_{2}) + … + P(*E*_{n}). P(*A*/*E*_{n})

❖

**Bayes’ theorem**

If *E*_{1}, *E*_{2}, … *E*_{n} are *n* non-empty events which constitute a partition of the sample space *S*, and *A* is any event of non-zero probability, then

$\mathrm{P}({E}_{i}/A)=\frac{\mathrm{P}\left({E}_{i}\right)\mathrm{P}(A/{E}_{i})}{{\displaystyle \sum _{j=1}^{n}\mathrm{P}\left({E}_{j}\right)\mathrm{P}(A/{E}_{j})}},i=1,2,\mathrm{3...},n$
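Both the theorem of total probability and Bayes' theorem can be sketched on a two-event partition; the numbers below (two equally likely boxes with different chances of a red draw) are illustrative, not from the source:

```python
from fractions import Fraction

# Partition E1, E2 of S, and an event A (e.g. "a red ball is drawn").
prior = {"E1": Fraction(1, 2), "E2": Fraction(1, 2)}        # P(Ei)
likelihood = {"E1": Fraction(3, 4), "E2": Fraction(1, 4)}   # P(A/Ei)

# Theorem of total probability: P(A) = sum over j of P(Ej) P(A/Ej)
p_a = sum(prior[e] * likelihood[e] for e in prior)

# Bayes' theorem: P(E1/A) = P(E1) P(A/E1) / P(A)
posterior = prior["E1"] * likelihood["E1"] / p_a
print(p_a, posterior)    # 1/2 3/4
```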

❖

**Random variables and their probability distribution**

• A random variable is a real-valued function whose domain is the sample space of a random experiment.

• The probability distribution of a random variable *X* is the system of numbers:

| X: | x_{1} | x_{2} | … | x_{n} |
| --- | --- | --- | --- | --- |
| P(X): | p_{1} | p_{2} | … | p_{n} |

Here, the real numbers *x*_{1}, *x*_{2}, …, *x*_{n} are the possible values of the random variable *X* and *p*_{i} (*i* = 1, 2, …, *n*) is the probability of the random variable *X* taking the value *x*_{i} i.e., P(*X* = *x*_{i}) = *p*_{i}

❖

**Mean/expectation of a random variable**

Let *X* be a random variable whose possible values *x*_{1}, *x*_{2}, *x*_{3}, … *x*_{n} occur with probabilities *p*_{1}, *p*_{2}, *p*_{3}, … *p*_{n} respectively. The mean of *X* (denoted by *µ*) or the expectation of *X* (denoted by E(*X*)) is the number $\sum _{i=1}^{n}{x}_{i}{p}_{i}$.

That is, $E\left(X\right)=\mu =\sum _{i=1}^{n}{x}_{i}{p}_{i}={x}_{1}{p}_{1}+{x}_{2}{p}_{2}+\mathrm{...}+{x}_{n}{p}_{n}$

❖

**Variance of a random variable**

Let *X* be a random variable whose possible values *x*_{1}, *x*_{2}, … *x*_{n} occur with probabilities *p*(*x*_{1}), *p*(*x*_{2}), … *p*(*x*_{n}) respectively. Let *µ* = E(*X*) be the mean of *X*. The variance of *X*, denoted by Var(*X*) or ${\sigma}_{x}^{2}$, is calculated by any of the following formulae:

• ${\sigma}_{x}^{2}=\sum _{i=1}^{n}({x}_{i}-\mu {)}^{2}p({x}_{i})$

• ${\sigma}_{x}^{2}=E(X-\mu {)}^{2}$

• ${\sigma}_{x}^{2}=\sum _{i=1}^{n}{x}_{i}^{2}p\left({x}_{i}\right)-{\left[\sum _{i=1}^{n}{x}_{i}p\left({x}_{i}\right)\right]}^{2}$

• ${\sigma}_{x}^{2}=E\left({X}^{2}\right)-{\left[E\left(X\right)\right]}^{2},$ where $E\left({X}^{2}\right)=\sum _{i=1}^{n}{x}_{i}^{2}p\left({x}_{i}\right)$

Students are advised to use the fourth formula.
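The fourth formula, Var(*X*) = E(*X*²) − [E(*X*)]², can be sketched on a small distribution; the coin-toss random variable below is an illustrative choice:

```python
from fractions import Fraction

# X = number of heads in two tosses of a fair coin.
xs = [0, 1, 2]
ps = [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]

mean = sum(x * p for x, p in zip(xs, ps))        # E(X)
e_x2 = sum(x * x * p for x, p in zip(xs, ps))    # E(X^2)
variance = e_x2 - mean ** 2                      # E(X^2) - [E(X)]^2
print(mean, variance)                            # 1 1/2
```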

❖ **Standard deviation: **The non-negative number ${\sigma}_{x}=\sqrt{\mathrm{Var}\left(X\right)}$ is called the standard deviation of the random variable *X*.

${\sigma}_{x}=\sqrt{\mathrm{E}\left({X}^{2}\right)-{\left[\mathrm{E}\left(X\right)\right]}^{2}}$

❖ **Bernoulli trials: **Trials of a random experiment are called Bernoulli trials, if they satisfy the following conditions:

• There should be a finite number of trials.

• The trials should be independent.

• Each trial has exactly two outcomes: success or failure.

• The probability of success remains the same in each trial.

❖ A binomial distribution with *n* Bernoulli trials and probability of success in each trial as *p* is denoted by B(*n*, *p*).

❖ **Binomial distribution: **For the binomial distribution B(*n*, *p*), the probability of *x* successes is denoted by P(*X* = *x*) or P(*X*) and is given by $\mathrm{P}(X=x)={}^{n}\mathrm{C}_{\mathit{x}}{q}^{n-x}{p}^{x},x=0,1,2,...n,q=1-p$

Here, P(*X*) is called the probability function of the binomial distribution.
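The binomial probability function translates directly into code using `math.comb` for ⁿCₓ; the function name `binomial_pmf` is illustrative:

```python
from fractions import Fraction
from math import comb

def binomial_pmf(n, p, x):
    """P(X = x) = nCx * q^(n-x) * p^x for X ~ B(n, p), where q = 1 - p."""
    q = 1 - p
    return comb(n, x) * q ** (n - x) * p ** x

# B(4, 1/2): probability of exactly 2 successes in 4 trials.
print(binomial_pmf(4, Fraction(1, 2), 2))   # 3/8
```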