Commit d4c37cc8 authored by Kevin Höllring

Add clarifying comments to second problem

parent 405e63f9
@@ -204,13 +204,14 @@ in your command line/shell before executing your program.
(\emph{Note:} you can test your implementation using \code{make test\_gradient\_descent})
\end{homeworkProblem}
\begin{homeworkProblem}
A theoretical alternative to following the path of steepest descent towards a local minimum is to use the fact that at a local optimum the gradient of the function one wants to optimize vanishes (becomes zero).
But we have already implemented a method for iteratively determining the roots of functions $g:\mathbb{R}^n\to\mathbb{R}^n$ using the Newton method, and the gradient of a function $f:\mathbb{R}^n\to \mathbb{R}$ is exactly such a function.\\
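Concretely, applying the Newton iteration to $g = \nabla f$ yields
\[
    x_{k+1} = x_k - \bigl(H_f(x_k)\bigr)^{-1}\,\nabla f(x_k),
\]
where the Jacobian of $g = \nabla f$ is the Hessian $H_f$ of $f$, so every fixed point of this iteration is a point with vanishing gradient, i.e.\ a candidate for a local extremum.\\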
Implement the method \code{GradientRootfinder::optimize}, where you determine a numerical approximation of a local extremum by applying the Newton method to the gradient of the function \code{func}, starting at the initial position \code{start}, until the iteration reaches the precision \code{precision} according to the previous definition of precision.\par
Note: You will need to provide the \code{findRoot} method in \code{NewtonRootFinder} with a function of appropriate format in order for the algorithm to work.
If you have not already done so for the previous exercise, take a look at \href{https://de.cppreference.com/w/cpp/language/lambda}{\code{C++} lambda functions} and the \class{LambdaWrapper} as provided in \path{include/lambda_wrapper.h}.
If you want to skip this part of the exercise, or if you want to check that your implementation works, you can use the function \code{nabla} defined in \path{include/differential.h}, which, given a \class{Function} object and a \class{Differentiator} object, generates a function that computes the gradient of the supplied function.\par
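To illustrate the overall idea independently of the exercise code, the following self-contained sketch applies the Newton iteration to the gradient of a simple quadratic function; all names in it are purely illustrative and do \emph{not} match the actual interfaces of \class{NewtonRootFinder}, \code{GradientRootfinder::optimize} or \code{nabla}:
\begin{verbatim}
// Self-contained toy example (illustrative only, not the exercise interfaces):
// Newton's method applied to g = grad f for f(x, y) = (x - 1)^2 + 2 (y + 3)^2.
#include <array>
#include <cmath>
#include <iostream>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;

int main() {
    // The gradient g = grad f is a function R^2 -> R^2 ...
    auto grad = [](Vec2 p) -> Vec2 {
        return {2.0 * (p[0] - 1.0), 4.0 * (p[1] + 3.0)};
    };
    // ... and its Jacobian is the (here constant) Hessian of f.
    auto hessian = [](Vec2) -> Mat2 {
        return Mat2{{{2.0, 0.0}, {0.0, 4.0}}};
    };

    Vec2 x = {10.0, 10.0};          // starting position
    const double precision = 1e-10; // stop once the Newton step is this small
    for (int k = 0; k < 100; ++k) {
        Vec2 g = grad(x);
        Mat2 H = hessian(x);
        // Solve H * step = g explicitly for the 2x2 case.
        double det = H[0][0] * H[1][1] - H[0][1] * H[1][0];
        Vec2 step = {(H[1][1] * g[0] - H[0][1] * g[1]) / det,
                     (H[0][0] * g[1] - H[1][0] * g[0]) / det};
        x = Vec2{x[0] - step[0], x[1] - step[1]};
        if (std::hypot(step[0], step[1]) < precision) break;
    }
    std::cout << "approximate extremum near (" << x[0] << ", " << x[1] << ")\n";
}
\end{verbatim}
In the actual exercise the analogous wiring happens through a lambda (or the \class{LambdaWrapper}) that exposes the gradient of \code{func} in the format expected by \code{findRoot}.\par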
You can test your implementation using \code{make test\_gradient\_root}.\\
What do you observe? Why do the test cases behave this way despite the conditions (timeout, desired precision,\ldots) being the same as for the first problem?
\end{homeworkProblem}
\begin{homeworkProblem}
Last but not least, we will have a look at the \emph{CG} (conjugate gradient) method of optimization, which is related to the general idea of gradient descent.
......