From d4c37cc8b11d2b4e8cdc971888cffada9be5b5d5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Kevin=20H=C3=B6llring?= <kevin.hoellring@fau.de>
Date: Tue, 26 Nov 2019 08:24:51 +0100
Subject: [PATCH] Add clarifying comments to second problem

---
 instructions/exercise_4.tex | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/instructions/exercise_4.tex b/instructions/exercise_4.tex
index 5929b24..e44ba90 100644
--- a/instructions/exercise_4.tex
+++ b/instructions/exercise_4.tex
@@ -204,13 +204,14 @@ in your command line/shell before executing your program.
 	(\emph{Note:} you can test your implementation using \code{make test\_gradient\_descent})
 	\end{homeworkProblem}
 	\begin{homeworkProblem}
-	An alternative to following the path of steepest descent towards a local minimum would be to use the fact that in a local optimum the gradient of the function one wants to optimize will vanish/become zero. 
+	A theoretical alternative to following the path of steepest descent towards a local minimum would be to use the fact that at a local optimum the gradient of the function one wants to optimize vanishes, i.e. becomes zero.
 	But we have already implemented a method for iteratively determining the roots of functions $g:\mathbb{R}^n\to\mathbb{R}^n$ using the Newton method, and the gradient of a function $f:\mathbb{R}^n\to \mathbb{R}$ is such a function.\\
 	Implement the method \code{GradientRootfinder::optimize}, where you determine a numerical approximation for a local extremum using the Newton method on the gradient of the function \code{func}, starting at the initial position \code{start}, until the iteration yields a precision of \code{precision} according to the previous definition of precision.\par
 	Note: You will need to provide the \code{findRoot} method in \code{NewtonRootFinder} with a function of appropriate format in order for the algorithm to work. 
 	If you have not already done so for the previous exercise, take a look at \href{https://de.cppreference.com/w/cpp/language/lambda}{\code{C++} lambda functions} and the \class{LambdaWrapper} provided in \path{include/lambda_wrapper.h}.
 	If you want to skip this part of the exercise, or to check that your implementation works, you can use the function \code{nabla} defined in \path{include/differential.h} to generate a function calculating the gradient of a function by providing both a \class{Function} object and a \class{Differentiator} object.\par
-	(\emph{Note:} you can test your implementation using \code{make test\_gradient\_root})
+	You can test your implementation using \code{make test\_gradient\_root}.\\
+	What do you observe? Why do the test cases behave this way despite the conditions (timeout, desired precision,\ldots) being the same as for the first problem?
 	\end{homeworkProblem}
 	\begin{homeworkProblem}
 	Last but not least, we will have a look at the \emph{CG} method of optimization, which is related to the general idea of gradient descent.
-- 
GitLab
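
Editor's note: the following is a minimal, self-contained C++ sketch of the idea described in the second problem above, namely locating a local extremum of a function f by running Newton's method on its derivative g = f'. It does not use the exercise's actual GradientRootfinder, NewtonRootFinder, LambdaWrapper, or nabla interfaces (whose signatures are not shown in the patch); the helper names central_diff and newton_root, the finite-difference step size, and the iteration budget are illustrative assumptions only.

    // Sketch: find a minimum of f by applying Newton's method to its derivative.
    // All names below are illustrative and not part of the course repository.
    #include <cmath>
    #include <functional>
    #include <iostream>

    // Central finite difference approximating g'(x); h is an assumed step size.
    static double central_diff(const std::function<double(double)>& g, double x,
                               double h = 1e-5) {
        return (g(x + h) - g(x - h)) / (2.0 * h);
    }

    // Basic Newton iteration on g, stopping once |g(x)| falls below `precision`
    // or a fixed iteration budget is exhausted.
    static double newton_root(const std::function<double(double)>& g, double start,
                              double precision, int max_iter = 100) {
        double x = start;
        for (int i = 0; i < max_iter && std::abs(g(x)) > precision; ++i) {
            x -= g(x) / central_diff(g, x);
        }
        return x;
    }

    int main() {
        // Objective f(x) = (x - 2)^2 + 1 has its minimum at x = 2.
        auto f = [](double x) { return (x - 2.0) * (x - 2.0) + 1.0; };
        // Wrap the (numerically differentiated) gradient of f in a lambda,
        // in the spirit of handing a gradient function to a root finder.
        auto grad_f = [&](double x) { return central_diff(f, x); };
        std::cout << "approx. minimum at x = " << newton_root(grad_f, 0.0, 1e-8)
                  << "\n";
        return 0;
    }

In the n-dimensional setting described in the exercise, the same pattern applies, except that the Newton step on the gradient requires the Jacobian of the gradient, i.e. the Hessian of f, rather than a single scalar derivative.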