
Constrained and Unconstrained Optimization

Prof. Adrijit Goswami


Department of Mathematics
Indian Institute of Technology, Kharagpur

Lecture - 01
Introduction to Optimization

Today, we will start constrained and unconstrained optimization. In this first lecture, we
will discuss a little bit of the preliminaries of optimization.

(Refer Slide Time: 00:35)

What is optimization? Optimization is usually described as a mathematical process or
discipline concerned with finding the minimum or maximum value of an objective function,
where the objective function consists of one or more variables, subject to certain
constraints or restrictions.

So, basically, optimization means: we have a function of several variables, under certain
constraints, which we want to optimize; that is, we want to minimize or maximize the
objective function. This optimization technique, or operations research, originated in the
1940s during the Second World War. So, let us see one particular problem. In this problem
we have three machines, say $M_1$, $M_2$ and $M_3$. These machines can produce four products,
$P_1$, $P_2$, $P_3$ and $P_4$. For each machine, the table gives how much time it needs per
unit of each product. In the last column, the total time available per week is given, say in
minutes: 3000 minutes, 9500 minutes and 6300 minutes. The per-unit profit is also given in
the last row.

So, I want to develop an optimization problem from this, where I have some machines, the
machines produce some products, the time needed per unit of each product is given to us, the
available time is given to us and the unit profit is also given to us. To optimize this
particular problem, I can assume that $x_j$ is the number of units of product $P_j$ produced
per week (say). So, our point will be that we can develop an objective function $z$. Our aim
is: I am producing $x_1$ units of product $P_1$, $x_2$ units of product $P_2$ and so on, and
the per-unit profits are given. So, since 7.5 is the unit profit for product $P_1$, $7.5x_1$
is the profit coming from product $P_1$.

If I am producing $x_1$ units of product $P_1$, $x_2$ units of product $P_2$, $x_3$ units of
product $P_3$ and so on, and their unit profits are as given, then I can form an objective
function $z = 7.5x_1 + \cdots$, the sum of unit profit times quantity over the four products.
So, this $z$ is a function of four variables. But this is not the only thing; there are
certain constraints, because each machine has only a limited amount of time for these
products, and the time data is given. That means, if you see the first row, for machine
$M_1$: it needs 2.7 minutes per unit of product $P_1$, which gives $2.7x_1$; it needs 3 per
unit of product $P_2$, so it is $3x_2$; it is 4.6 for product $P_3$, giving $4.6x_3$; and for
product $P_4$ it is 3, so it is $3x_4$.

So, machine $M_1$ has to produce these four products within its available time, and the
available time limit is 3000 minutes; that means the total time spent on the four products
must stay within this limit. So, it should be $2.7x_1 + 3x_2 + 4.6x_3 + 3x_4 \le 3000$; that
is, the total time cannot exceed 3000 units of time. Similarly, I can create another
constraint for the second row, that is, for machine $M_2$, with its per-unit times on the
left-hand side and 9500 on the right, and for the third machine, $M_3$, the limit is 6300.

So, if you see, this is the objective function and these are the constraints or restrictions,
where obviously I will add that $x_j \ge 0$ for $j = 1, 2, 3, 4$, since these are quantities
of products and they cannot be negative. So, in this way I have formulated one optimization
problem for this particular table. If you see here, this is the objective function and these
are the constraints. So, optimization is a technique by which we can find the values of the
decision variables, here $x_1, x_2, x_3, x_4$, which satisfy these constraints. And since the
objective function here is a profit function, obviously we will try to maximize it. So,
maximize $z$ subject to these constraints.
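For reference, the complete model can be written out as follows. The per-unit times for machines $M_2$ and $M_3$ and the unit profits for products $P_2$, $P_3$, $P_4$ are not spelled out in the transcript, so they appear below as generic symbols $c_j$ and $a_{ij}$ rather than actual data:

$$\begin{aligned}
\text{maximize}\quad & z = 7.5x_1 + c_2x_2 + c_3x_3 + c_4x_4 \\
\text{subject to}\quad & 2.7x_1 + 3x_2 + 4.6x_3 + 3x_4 \le 3000 \\
& a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 \le 9500 \\
& a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 \le 6300 \\
& x_1, x_2, x_3, x_4 \ge 0.
\end{aligned}$$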

So, this is the basic idea of optimization. Now you will see that there are various types of
optimization problems.

(Refer Slide Time: 06:58)

One type I can call constrained optimization, and the other unconstrained optimization.
Unconstrained means you have only the objective function and you want to find the optimum
value of this objective function without any constraints; that is, the constraints are not
there and I have only max $z$. That is an unconstrained optimization problem, whereas if I
have the constraints as well, then it is a constrained optimization problem. The problem we
have just formulated, with its constraints, is a constrained optimization. So, this is one
part.
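To make the distinction concrete, here is a small sketch (not the lecture's example) using SciPy: the same quadratic objective is minimized first with no constraints and then with one added inequality constraint. The function, starting point and constraint are made up for illustration only.

```python
# Illustrative sketch: unconstrained vs. constrained minimization with SciPy.
# The objective and the constraint are made-up examples, not the lecture's problem.
from scipy.optimize import minimize

def f(x):
    # A simple quadratic objective with unconstrained minimum at (3, 2).
    return (x[0] - 3) ** 2 + (x[1] - 2) ** 2

x0 = [0.0, 0.0]  # starting point

# Unconstrained: only the objective function is optimized.
res_uncon = minimize(f, x0)

# Constrained: add the restriction x1 + x2 <= 2 (written as g(x) >= 0 for SciPy).
cons = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]
res_con = minimize(f, x0, constraints=cons)

print("unconstrained minimum at", res_uncon.x)  # close to (3, 2)
print("constrained minimum at", res_con.x)      # pushed onto the line x1 + x2 = 2
```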

The other part is that a problem can be linear, and it can be non-linear also. Linear means
that whatever objective function you are considering should be linear in nature, just like
$z = c_1x_1 + c_2x_2 + \cdots + c_nx_n$, and the constraints are also linear. Another type of
objective function I can write contains, say, squared or product terms such as $x_1^2$ or
$x_1x_2$; then it becomes non-linear. So, in optimization we try to solve both linear and
non-linear problems; in either case, the problem may be linear or it may be non-linear.

The other classification we can talk about concerns the variables: they can be discrete,
continuous or probabilistic. So, whenever you are talking about the decision variables, for
this particular problem $x_1, x_2, x_3, x_4$ are the decision variables. What type of
variables are they? They may take discrete values, they may take continuous values, that is,
any value on the real line, or the values may be probabilistic. If they are probabilistic,
then we have to convert the problem into the corresponding continuous case. So, I can
describe optimization like this: it can be unconstrained or constrained; the problem may be
linear or non-linear; and the decision variables we are associating may be discrete,
continuous or probabilistic.

Now, let us go through a little bit of the mathematics which is required for the solution of
these problems, because after this we will start with the linear programming problem first.
So, before going through the linear programming problem, let us see some basic definitions
which you may already know; we will just brush up on these things. One is the vector space.
Whenever you write a matrix, sometimes you write a matrix consisting of a single row, like
$(a_1, a_2, \ldots, a_n)$.

Or sometimes you write a matrix consisting of a single column. The first one we call a row
vector and the second one a column vector. So, vectors can be given as row vectors or as
column vectors.

These vectors can also be given geometric representations. If you think of a vector with two
components, say $(a_1, a_2)$, it represents a point in 2-dimensional space; please note this.
Similarly, if I take a vector like $(a_1, a_2, a_3)$, it will represent a point in
3-dimensional space. And if I take a vector of $n$ elements, $(a_1, a_2, \ldots, a_n)$, then
it will be one point in $n$-dimensional space. You have to remember that the row vector and
the corresponding column vector are equivalent.

The next one is the null vector. The null vector is denoted by $O$, where all the elements of
the vector are 0; that is, it looks like $(0, 0, \ldots, 0)$. So, this is the null vector.
Similarly, you have the unit vectors.
(Refer Slide Time: 12:30).

The unit vectors we define like this: $e_1 = (1, 0, \ldots, 0)$, $e_2 = (0, 1, 0, \ldots, 0)$,
and so on. So, basically, the unit vector is denoted by $e_i$: the vector with unity as the
value of the $i$th component; in other words, for $e_i$, the value of the $i$th component is
one and the value of the other elements is 0. I think it is clear that the unit vector $e_i$
is the vector whose $i$th component is one and all other elements are zeros.
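In code, these special vectors are easy to build; here is a small numpy sketch, with the dimension $n = 4$ chosen arbitrarily:

```python
# Small sketch: the null vector and the unit vectors in dimension n = 4 (arbitrary choice).
import numpy as np

n = 4
null_vector = np.zeros(n)      # O = (0, 0, 0, 0)
unit_vectors = np.eye(n)       # row i is e_{i+1}: one in the i-th place, zeros elsewhere

print(null_vector)             # [0. 0. 0. 0.]
print(unit_vectors[1])         # e_2 = [0. 1. 0. 0.]
# A row vector and its column form carry the same information:
print(unit_vectors[1].reshape(-1, 1))  # e_2 written as a column vector
```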

Now, suppose you have a set of vectors, say $V$. If this set of vectors is closed under
addition and scalar multiplication, then we say that it forms a vector space. So, basically,
what is a vector space? If I have a set of vectors which is closed under addition and scalar
multiplication, then this set of vectors forms a vector space. Quite naturally, the question
will come: what do you mean by closed under addition?

Closed under addition means that if we take the sum of any two vectors of the set, then the
resultant vector will also be a member of the set. So, if I take any two vectors of $V$ and
add them, the sum is again a member of $V$; then we say that $V$ is closed under addition.

In the same way, closed under scalar multiplication means that whenever I take any vector
from the set and multiply it by a scalar, the resultant vector should also be a member of the
set. So, basically, closed under addition and scalar multiplication means that whenever you
perform these two operations, the resultant vector also belongs to the set itself. As an
example, consider the set of all real numbers or the set of all complex numbers: if you take
either of these, it will form a vector space.
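Stated compactly, the two closure conditions for a set of vectors $V$ are the following (the full definition of a vector space also includes the usual arithmetic axioms, which clearly hold for $\mathbb{R}$ and $\mathbb{C}$):

$$u, v \in V \;\Longrightarrow\; u + v \in V \qquad \text{(closed under addition)},$$
$$v \in V,\ \alpha \text{ a scalar} \;\Longrightarrow\; \alpha v \in V \qquad \text{(closed under scalar multiplication)}.$$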

Suppose I take the set of polynomials of degree $n$. Will it form a vector space or not? Take
any two polynomials of degree $n$, say $p(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_0$ and
another one $q(x) = -a_nx^n + b_{n-1}x^{n-1} + \cdots + b_0$, and form $p(x) + q(x)$. You see
that in $p(x) + q(x)$ the $x^n$ term vanishes; therefore the sum is still a polynomial, that
is true, but a polynomial of what degree? The degree will not be $n$, because the term with
$x^n$ vanishes. So, the degree is at most $n - 1$.
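As a concrete instance of this argument (the numbers are mine, not the lecture's), take $n = 2$:

$$(x^2 + x) + (-x^2 + 1) = x + 1,$$

which is a polynomial of degree 1, so the sum has dropped out of the set of degree-2 polynomials.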

So, from here we can conclude that after addition the result we are getting does not belong
to the set of polynomials of degree $n$. In other words, the set of polynomials of degree $n$
will not form a vector space. I hope this is clear. Let us go to the next topic, linear
combination, which is very important.

(Refer Slide Time: 17:54)

This part we will be using afterwards. You have a vector $b$, which belongs to $\mathbb{R}^n$
(say), and it will be a linear combination of the vectors $a_1, a_2, \ldots, a_k$ if we can
write $b = \lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_k a_k$ for some scalars
$\lambda_1, \lambda_2, \ldots, \lambda_k$. So, if I can represent $b$ in terms of some vectors
in this way, where $\lambda_1, \ldots, \lambda_k$ are scalars, then we say $b$ is a linear
combination of $a_1, a_2, \ldots, a_k$.

In addition to this, if the scalars are non-negative and
$\lambda_1 + \lambda_2 + \cdots + \lambda_k = 1$, that is, the summation of all these scalars
equals 1, then this linear combination we call a convex combination. So, if I can represent a
vector in terms of the vectors $a_1, a_2, \ldots, a_k$ as above, and the $\lambda_i$'s are
non-negative and sum to 1, then the combination is called a convex combination.
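A quick way to test this numerically (an illustrative sketch with made-up vectors, not the lecture's data) is to solve for the scalars and then check the convexity conditions:

```python
# Sketch: is b a linear combination of a1 and a2, and is it a convex combination?
# The vectors here are made-up examples.
import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])
b  = np.array([0.3, 0.7])

# Solve [a1 a2] * lam = b for the scalars lam.
A = np.column_stack([a1, a2])
lam, *_ = np.linalg.lstsq(A, b, rcond=None)

is_linear_combination = np.allclose(A @ lam, b)
is_convex_combination = (is_linear_combination
                         and np.all(lam >= 0)
                         and np.isclose(lam.sum(), 1.0))

print(lam)                    # [0.3 0.7]
print(is_linear_combination)  # True
print(is_convex_combination)  # True: non-negative scalars summing to 1
```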

As an example, consider three vectors: two vectors $a_1$ and $a_2$, and another vector $a$
(the specific numbers are on the slide). Can I write $a = \lambda_1 a_1 + \lambda_2 a_2$? So,
basically, I have three vectors, and I am writing $a$ as a linear combination of $a_1$ and
$a_2$ for particular values of $\lambda_1$ and $\lambda_2$. So, this is a linear combination
of $a_1$ and $a_2$. From this linear combination itself we come to the next concept, linear
dependence, which you may have studied in matrix notation.

So, you have a set of vectors $a_1, a_2, \ldots, a_n$, and I will say that this set of vectors
is linearly dependent if there exist $\lambda_i$'s, not all zero, such that
$\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n = 0$; that is, I can find scalars,
not all of which vanish, for which this combination is the null vector. Then we say that
these vectors are linearly dependent. So, please note that here the $\lambda_i$'s are not all
0. In the earlier example, we had $\lambda_1 a_1 + \lambda_2 a_2 - a = 0$ with coefficients
that are not all zero; therefore, we can say that those vectors are linearly dependent.

On the other hand, if for the vectors $a_1, a_2, \ldots, a_n$ the equation
$\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n = 0$ holds only when all the
$\lambda_i$'s vanish, that is, $\lambda_1 = \lambda_2 = \cdots = \lambda_n = 0$, then we say
that these vectors are linearly independent. This we will use frequently.
(Refer Slide Time: 23:04).

So, please note the definition: a set of vectors $a_1, a_2, \ldots, a_n$ will be linearly
independent if the only scalars $\lambda_1, \lambda_2, \ldots, \lambda_n$ for which
$\lambda_1 a_1 + \lambda_2 a_2 + \cdots + \lambda_n a_n = 0$ holds are
$\lambda_1 = \lambda_2 = \cdots = \lambda_n = 0$. For example, take the three vectors shown
on the slide: can you find some nonzero values of $\lambda_1, \lambda_2, \lambda_3$ for which
the combination vanishes? If you calculate, you will find that
$\lambda_1 a_1 + \lambda_2 a_2 + \lambda_3 a_3 = 0$ holds only if
$\lambda_1 = \lambda_2 = \lambda_3 = 0$. So, we say that $a_1, a_2, a_3$ are linearly
independent.

Geometrically, two linearly dependent vectors lie on the same line through the origin, and
three linearly dependent vectors lie in the same plane; more generally, whenever some vectors
are linearly dependent, one of them can be represented as a linear combination of the others.
So, that is one thing.
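A simple numerical check (an illustrative sketch, not the lecture's vectors): stack the vectors as columns of a matrix and compare its rank with the number of vectors; full rank means the vectors are linearly independent.

```python
# Sketch: test linear independence by comparing matrix rank to the number of vectors.
# The example vectors are arbitrary.
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2          # deliberately dependent on v1 and v2

A_indep = np.column_stack([v1, v2])
A_dep   = np.column_stack([v1, v2, v3])

print(np.linalg.matrix_rank(A_indep) == 2)  # True: v1, v2 are linearly independent
print(np.linalg.matrix_rank(A_dep) == 3)    # False: v3 is a linear combination of v1 and v2
```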

Next is the spanning set. You have the set of all vectors in $\mathbb{R}^n$; from here, if I
can get a set of vectors $a_1, a_2, \ldots, a_k$ (say), we say that this set of vectors spans
or generates $\mathbb{R}^n$ if any linear combination of these vectors is a vector of
$\mathbb{R}^n$ and, seen the other way, if you take any vector of $\mathbb{R}^n$, it can be
written as a linear combination of these $a_1, a_2, \ldots, a_k$.

So, therefore, if in $\mathbb{R}^n$ I can find a set of vectors which can generate or span all
the vectors of $\mathbb{R}^n$, then we call this set a spanning set. Consider the set of
vectors shown on the slide: this set spans $\mathbb{R}^3$, I think, meaning that a linear
combination of this set will give us any vector of $\mathbb{R}^3$, because for certain values
of the scalars I can obtain any given vector.
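As a small numerical illustration (again with arbitrary vectors, not necessarily those on the slide), three linearly independent vectors in $\mathbb{R}^3$ span the space: any target vector can be recovered by solving for the scalars.

```python
# Sketch: three linearly independent vectors span R^3, so any vector can be
# written as a linear combination of them. Example vectors are arbitrary.
import numpy as np

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([1.0, 1.0, 0.0])
a3 = np.array([1.0, 1.0, 1.0])
A = np.column_stack([a1, a2, a3])    # columns are the candidate spanning set

target = np.array([4.0, -2.0, 7.0])  # an arbitrary vector of R^3
coeffs = np.linalg.solve(A, target)  # scalars lam with A @ lam = target

print(coeffs)                           # the required scalars
print(np.allclose(A @ coeffs, target))  # True: the set spans R^3 (it is in fact a basis)
print(np.linalg.matrix_rank(A))         # 3 = dimension of the space
```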

So, such a set spans all the vectors of $\mathbb{R}^3$. Similarly, there is another concept
which we call a basis. A basis is a linearly independent subset of vectors which spans the
entire space. So, basically, a basis is nothing but a linearly independent set of vectors,
please note this, which spans the entire vector space. Similarly, the dimension: the number
of linearly independent vectors in such a spanning set is known as the dimension of the
space. So, in one word, a basis is nothing but a set of linearly independent vectors which
spans the entire space, and the dimension is the number of elements, or the number of
linearly independent columns, of that set. We will continue from here in the next class.
