Section 4.3 Intro to Optimization
Probably the most far-reaching application of calculus is optimization. When we say *optimization*, we are referring to the common applied task of finding the highs and lows of a given circumstance. For example, business people are always looking for maximum profits or minimum costs, and engineers often need to know the maximum stress a structure will experience so they can design it to hold up. Optimization is an important issue in almost every sort of endeavor.
To move forward, let’s set down some language often used in optimization. For us, it’s easiest to talk about this in the context of a function, since we generally use functions to describe circumstances to which mathematics applies. When we say *local* or, alternatively, *relative* maximum, we mean a value the function obtains at some location in its domain that is larger than or equal to any other value it obtains in some interval (possibly a very small interval) containing the location. In other words, the function is maximal at that point in that little neighborhood but might get bigger outside of the neighborhood. Local / relative minimum is defined analogously. When we say *global* or *absolute* maximum, we mean a value the function obtains at some location in its domain that is larger than or equal to any other value the function obtains anywhere in its domain. Global / absolute minimum is defined analogously. When we say the function obtains an *optimum* or *extreme* value, we are just indicating that it obtains a max or min but aren’t specifying which one. Let’s look at an example to practice the language. Consider the graph of \(f(x)=2x^3-3x^2-12x+18\) given below.
This function has domain \([-3,3]\text{.}\) You’ll notice that by the definitions given above it has a *local* min at 2. Notice the language here. There is a local min *at* 2. That local min *is* \(-2=f(2)\text{.}\) Let’s explore further. Can you find a global min? What is it and where does it happen? Well, yes ... there is one. It happens *at* \(x=-3\) and it *is* \(-27\) (i.e. \(f(-3)\)). Okay, finish it out on your own. Do you find any other extremes? There are two more. Can you say what they are and also where they occur?
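If you’d like to verify the two values identified so far, just substitute into the formula:
\begin{equation*}
f(2)=2(2)^3-3(2)^2-12(2)+18=16-12-24+18=-2
\end{equation*}
\begin{equation*}
f(-3)=2(-3)^3-3(-3)^2-12(-3)+18=-54-27+36+18=-27\text{.}
\end{equation*}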
So, how does calculus help find these optimums? Take a look at the graph above. Consider the open interval from -1 to 2 (i.e. \((-1,2)\)). You’ll notice there are no optimums on that interval. Indeed, if you look at any point on the graph in that interval you’ll see that there are larger values just to the left of it and smaller values just to the right of it. If you haven’t looked ... look. Look, for example, at \((1,5)=(1,f(1))\) on the graph. Look at the values of the function to the left and right of this point. Like any other value of the function in this interval (from \(x=-1\) to \(x=2\)), the value of the function is *not* optimal in any way (i.e. it has larger values to its left and smaller ones to its right). So, if we are looking for optimums, this interval can be ignored. There are no optimums there. Now, how could calculus help us spot this useless interval? Do you see any calculus connection in this interval? Here, let’s plot the tangent line at \((1,5)\) in case the connection isn’t evident.
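For reference, that tangent line can also be computed directly. Using the derivative \(f'(x)=6x^2-6x-12\) (which we will compute later in this section), the slope at \(x=1\) is
\begin{equation*}
f'(1)=6(1)^2-6(1)-12=-12\text{,}
\end{equation*}
so the tangent line at \((1,5)\) is \(y=5-12(x-1)=-12x+17\text{.}\)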
What do you notice about the slope of that tangent line? Yes, it’s negative. In other words, \(f'(1)\) is negative. In fact, you can see that the derivative (i.e. the slopes of the tangent lines) is negative on that entire subinterval from -1 to 2. The derivative being negative there ensures that the function is strictly decreasing, so the function can’t obtain an optimum there. Similarly, you can see that the derivative is always positive from -3 to -1, so the function doesn’t have an optimum in that open interval, and the derivative is always positive from 2 to 3, so there is no optimum in that open interval. That quickly leaves only \(x=-3, -1, 2\) and \(x=3\) as places where optimums can occur. This leads to what we call a first derivative analysis theorem.
Theorem 4.3.1.
If \(f'(x)\lt 0\) for all \(x\) in an open interval OR \(f'(x)\gt 0\) for all \(x\) in an open interval, then \(f\) does not have an optimum on that open interval.
Suppose, for example, that we hadn’t yet studied the function above and someone just hands us \(f(x)=2x^3-3x^2-12x+18\text{.}\) Let’s do a first derivative analysis on it without even creating the graph. So,
\begin{equation*}
f'(x)=6x^2-6x-12=6(x-2)(x+1)\text{.}
\end{equation*}
We see that this derivative is \(0\) at \(x=-1\) and \(x=2\text{.}\) And on the open intervals \((-3,-1)\text{,}\) \((-1,2)\text{,}\) and \((2,3)\) we see that \(f'\) is either always positive or always negative, so the theorem above tells us that there are no optimums there. Thus, the only *possible* places for optimums are \(x=-3, -1, 2\) and \(x=3\text{.}\) It’s really that simple. Having narrowed down to just that handful of options, we can study the behavior of the function on either side of those locations to determine whether we have an optimum there and, if so, what kind.
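One standard way to see that \(f'\) keeps a single sign on each of those open intervals is to test one conveniently chosen point per interval (the test points below are just one possible choice):
\begin{align*}
f'(-2)&=6(-2-2)(-2+1)=6(-4)(-1)=24\gt 0 \quad\text{on } (-3,-1),\\
f'(1)&=6(1-2)(1+1)=6(-1)(2)=-12\lt 0 \quad\text{on } (-1,2),\\
f'(2.5)&=6(2.5-2)(2.5+1)=6(0.5)(3.5)=10.5\gt 0 \quad\text{on } (2,3).
\end{align*}
So, for instance, \(f\) increases into \(x=-1\) and decreases out of it, giving a local max at \(-1\text{;}\) similarly, \(f\) decreases into \(x=2\) and increases out of it, giving the local min at \(2\text{.}\) The endpoints \(x=-3\) and \(x=3\) can be handled the same way by looking at the one side that lies in the domain.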