diff --git a/instructions/exercise_4.tex b/instructions/exercise_4.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c1a296202e6532abeb888227635793a79e50718b
--- /dev/null
+++ b/instructions/exercise_4.tex
@@ -0,0 +1,220 @@
+\documentclass{article}
+
+\usepackage{fancyhdr}
+\usepackage{extramarks}
+\usepackage{amsmath}
+\usepackage{amsthm}
+\usepackage{enumitem}
+\usepackage{amsfonts}
+\usepackage{tikz}
+\usepackage[plain]{algorithm}
+\usepackage{algpseudocode}
+\usepackage[obeyspaces]{url}
+\usepackage{listings}
+\usepackage{todonotes}
+\usepackage{hyperref}
+
+\usetikzlibrary{automata,positioning}
+
+%
+% Basic Document Settings
+%
+
+\topmargin=-0.45in
+\evensidemargin=0in
+\oddsidemargin=0in
+\textwidth=6.5in
+\textheight=9.0in
+\headsep=0.25in
+
+\linespread{1.1}
+
+\pagestyle{fancy}
+\lhead{\hmwkClass}
+\chead{}
+\rhead{\hmwkTitle}
+\lfoot{}
+\cfoot{\thepage}
+
+\renewcommand\headrulewidth{0.4pt}
+\renewcommand\footrulewidth{0.4pt}
+
+\setlength\parindent{0pt}
+
+%
+% Create Problem Sections
+%
+
+\newcommand{\enterProblemHeader}[1]{
+	\nobreak\extramarks{}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
+	\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
+}
+
+\newcommand{\exitProblemHeader}[1]{
+	\nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{}
+	\stepcounter{#1}
+	\nobreak\extramarks{Problem \arabic{#1}}{}\nobreak{}
+}
+
+\setcounter{secnumdepth}{0}
+\newcounter{partCounter}
+\newcounter{homeworkProblemCounter}
+\setcounter{homeworkProblemCounter}{1}
+\nobreak\extramarks{Problem \arabic{homeworkProblemCounter}}{}\nobreak{}
+
+%
+% Homework Problem Environment
+%
+% This environment takes an optional argument. When given, it will adjust the
+% problem counter. This is useful for when the problems given for your
+% assignment aren't sequential. See the last 3 problems of this template for an
+% example.
+%
+\newenvironment{homeworkProblem}[1][-1]{
+	\ifnum#1>0
+	\setcounter{homeworkProblemCounter}{#1}
+	\fi
+	\section{Problem \arabic{homeworkProblemCounter}}
+	\setcounter{partCounter}{1}
+	\enterProblemHeader{homeworkProblemCounter}
+}{
+	\exitProblemHeader{homeworkProblemCounter}
+}
+
+%
+% Homework Details
+%   - Title
+%   - Due date
+%   - Class
+%   - Section/Time
+%   - Instructor
+%   - Author
+%
+
+\newcommand{\hmwkTitle}{Exercise\ \#4}
+\newcommand{\hmwkDueDate}{November 25, 2019}
+\newcommand{\hmwkClass}{Computational physics and numerical methods 1}
+\newcommand{\hmwkClassTime}{}
+\newcommand{\hmwkClassInstructor}{Prof. Smith}
+\newcommand{\hmwkAuthorName}{\textbf{H\"ollring Kevin}}
+
+%
+% Title Page
+%
+
+\title{
+	\vspace{2in}
+	\textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\
+	\normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate\ at 3:10pm}\\
+	\vspace{0.1in}\large{\textit{\hmwkClassInstructor\ \hmwkClassTime}}
+	\vspace{3in}
+}
+
+\author{\hmwkAuthorName}
+\date{}
+
+\renewcommand{\part}[1]{\textbf{\large Part \Alph{partCounter}}\stepcounter{partCounter}\\}
+
+%
+% Various Helper Commands
+%
+
+% Useful for algorithms
+\newcommand{\alg}[1]{\textsc{\bfseries \footnotesize #1}}
+
+% For derivatives
+\newcommand{\deriv}[1]{\frac{\mathrm{d}}{\mathrm{d}x} (#1)}
+
+% For partial derivatives
+\newcommand{\pderiv}[2]{\frac{\partial}{\partial #1} (#2)}
+
+% Integral dx
+\newcommand{\dx}{\mathrm{d}x}
+
+% Alias for the Solution section header
+\newcommand{\solution}{\textbf{\large Solution}}
+
+% Probability commands: Expectation, Variance, Covariance, Bias
+\newcommand{\E}{\mathrm{E}}
+\newcommand{\Var}{\mathrm{Var}}
+\newcommand{\Cov}{\mathrm{Cov}}
+\newcommand{\Bias}{\mathrm{Bias}}
+\newcommand{\bigO}{\mathcal{O}}
+
+\DeclareUrlCommand\class{%
+	\renewcommand{\UrlBigBreaks}{\do\.}%
+	\renewcommand{\UrlBreaks}{\do\.}%
+}
+
+\newcommand{\code}[1]{\texttt{#1}}
+
+\begin{document}
+In this exercise we will focus on different algorithms that can be used to optimize functions numerically.
+You will implement the simplest iterative optimization technique there is: \emph{gradient descent}.
+Since a (local) optimum is characterized by a vanishing gradient, we will also implement a numerically unstable optimization method based on finding roots of the gradient.
+Last but not least, we will look at a more sophisticated variant of gradient-based optimization, the so-called \emph{conjugate gradient method}.\\
+Certain functions that you implemented in previous exercises are provided as a pre-compiled library for this exercise.
+The library can be found in the folder \path{libs/} and is called \path{liblibrary.a}.
+It contains the compiled code of several source files grouped together into one binary file.
+Unlike the binary executables that we generated in previous exercises, this file usually does not contain an entry point like \code{main()}.\\
+There are two different types of program libraries: \emph{static} libraries (usually ending with \path{.a}) and \emph{dynamic} libraries (usually ending with \path{.so}).
+The difference between them lies in when the provided code is combined with the code that you write. \\
+A static library is just an archive of compiled code.
+When the compiler builds your executable, it copies the required code out of that archive, so the library becomes a permanent part of your program.
+No further action is needed when the program is executed, and you only need to provide someone with your executable in order to run it.
+If the library needs to be updated, however, your own program needs to be recompiled.\\
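+To make this concrete: a static library is created from ordinary object files with the \code{ar} archiver. A hypothetical \path{libfoo.a}, for example, would be built via
+\begin{center}
+\code{g++ -c foo.cpp -o foo.o}\\
+\code{ar rcs libfoo.a foo.o}
+\end{center}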
+A dynamic library is also an archive of compiled code, but it does not get integrated into your program by the compiler.
+Instead, the compiler adds markers to your program wherever functions from that library are used.
+When the program is executed, it triggers the loading of the missing functions, so the code in the dynamic library only becomes part of your program once it is actually being run.
+This makes it necessary for you to deliver your own program together with the dynamic libraries it requires.
+On the other hand, dynamic libraries can be shared between different programs: they only need to be stored once, as long as the program loader knows where to find them.
+In addition, you can update a dynamic library without recompiling the programs that depend on it, as long as its programming interface stays the same.
+This helps when fixing security issues or programming errors.
+Dynamic libraries are nevertheless not always the best choice, because loading them at runtime makes program startup slower than it would be with a statically linked executable.
+With dynamic libraries you trade faster build times and more flexibility for somewhat worse runtime efficiency.
+In the end you need to decide for yourself whether a dynamic or a static library is the better choice.\par
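+The dynamic counterpart of the hypothetical library above would instead be produced by the compiler itself, from position-independent code:
+\begin{center}
+\code{g++ -shared -fPIC foo.cpp -o libfoo.so}
+\end{center}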
+If you wish to compile your program, then in addition to telling the compiler where to find the header files (see previous exercises) you need to tell it where to find the libraries that you are using. In the case of a static library you can simply add the path to the library file as an additional input (like your source code files); the compiler will detect the file contents and use them appropriately.\\
+The more sophisticated approach (which also works for dynamic libraries) is to tell your compiler (or rather: the linker) where to find the libraries, as we did with the header files previously. Here the flag to add to your command is \code{-L<path\_to\_libraries>}, where \code{<path\_to\_libraries>} is the path to the folder in which the libraries are stored and there is no space between the \code{L} and the path.
+Additionally you need to tell your compiler/linker which libraries to use.
+You can request a certain library by adding the flag \code{-l<libname>}, where \code{<libname>} is the name of your static/dynamic library without the conventional \code{lib} prefix and without its file extension (\path{.a} or \path{.so}), again without whitespace after the \code{l}; the linker then looks for a file named \path{lib<libname>.a} or \path{lib<libname>.so}. So in our case we would add
+\begin{center}
+\code{-L./libs/ -llibrary}
+\end{center}
+to our compilation commands. In the case of a dynamic library the program loader must be able to locate the library when your program is executed: it either needs to reside in one of the standard locations (e.g. \path{/lib/}, \path{/usr/lib/}) or you need to tell the loader where to find it by defining
+\begin{center}
+\code{LD\_LIBRARY\_PATH=<path\_to\_libraries>}
+\end{center}
+in your command line/shell before executing your program.
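+As a concrete illustration (the commands below are a sketch, not part of the exercise: they assume \code{g++}, the directory layout of the previous exercises and a hypothetical main file \path{src/main.cpp}), the whole build and run could look like
+\begin{center}
+\code{g++ -I./include/ -L./libs/ src/main.cpp src/optimizer.cpp -llibrary -o exercise4}\\
+\code{LD\_LIBRARY\_PATH=./libs/ ./exercise4}
+\end{center}
+where the second line is only needed for a dynamic library; prefixing the variable assignment to the command sets it for that single run.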
+
+	\begin{homeworkProblem}
+	In this problem we will implement a simple gradient descent algorithm. The code should be implemented in the class \class{GradientDescent}, with its interface declaration in \path{include/optimizer.h} and its implementation in \path{src/optimizer.cpp}.
+	The basic idea underlying gradient descent is that the gradient $\nabla f(x)$ of a function points in the direction of its steepest ascent and away from its steepest descent. Following the direction of steepest descent, one therefore expects to eventually arrive at a local minimum or to diverge.\\
+	The algorithm expects a function $f$ (\code{func}), an initial position $x_0$ (\code{start}) and a target precision $p$ (\code{precision}). 
+	It then proceeds to iteratively calculate points $x_k$ using the formula
+	$$x_{k+1} = x_k -\alpha_k \nabla f(x_k).$$
+	There are different options for choosing $\alpha_k$. Ideally one would determine the largest value for which the function still strictly decreases from $x_k$ to $x_{k+1}$, but this is usually hard to compute.
+	Instead we choose $\alpha_k$ initially as some fixed value $a$ in each step, which we will refer to as the \emph{step size}.
+	If $f(x_{k+1}) > f(x_k)$, halve $\alpha_k$ and recalculate $x_{k+1}$, repeating until $f(x_{k+1}) < f(x_k)$; once that is the case, continue with the next step of the iteration. \\
+	In case you encounter a zero gradient, or the length of the gradient falls below $p$, the algorithm terminates and returns the current position $x_k$ as an estimate for the location of the local minimum.
+	One usually also imposes a limit on the number of iterations to avoid being trapped in an infinite loop should the iteration diverge.\\
+	The class provides you with a member \code{stepsize} to be used as $a$, and a \class{Differentiator} (\code{diff}) which you can use to calculate derivative values of functions where required. Neither of these is passed as an argument to the function \code{GradientDescent::optimize}.\par
+	Implement the iterated gradient descent as described in the method \code{GradientDescent::optimize}. 
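+	The following is a minimal, self-contained sketch of the iteration using standard-library types; the real implementation must of course use the \class{Function} and \class{Differentiator} interfaces from the header files instead of the \code{std::function} stand-ins used here:
+\begin{lstlisting}[language=C++]
+#include <cmath>
+#include <cstddef>
+#include <functional>
+#include <vector>
+
+using Vec = std::vector<double>;
+
+// Sketch of gradient descent with step halving, as described above.
+Vec descend(const std::function<double(const Vec&)>& f,
+            const std::function<Vec(const Vec&)>& grad,
+            Vec x, double stepsize, double precision,
+            std::size_t max_iter = 10000) {
+    for (std::size_t k = 0; k < max_iter; ++k) {
+        const Vec g = grad(x);
+        double norm = 0.0;
+        for (double gi : g) norm += gi * gi;
+        if (std::sqrt(norm) < precision) break; // gradient (nearly) vanished
+        double alpha = stepsize;                // reset alpha_k to a
+        Vec next(x.size());
+        for (;;) {
+            for (std::size_t i = 0; i < x.size(); ++i)
+                next[i] = x[i] - alpha * g[i];
+            if (f(next) < f(x)) break;          // strictly decreasing: accept
+            alpha *= 0.5;                       // otherwise halve alpha_k
+            if (alpha == 0.0) return x;         // step underflowed: give up
+        }
+        x = next;
+    }
+    return x; // estimate for the local minimum argument
+}
+\end{lstlisting}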
+	\end{homeworkProblem}
+	\begin{homeworkProblem}
+	An alternative to following the path of steepest descent towards a local minimum is to use the fact that at a local optimum the gradient of the function one wants to optimize vanishes.
+	We have already implemented a method for iteratively determining the roots of functions $g:\mathbb{R}^n\to\mathbb{R}^n$ using the Newton method, and the gradient of a function $f:\mathbb{R}^n\to \mathbb{R}$ is exactly such a function.\\
+	Implement the method \code{GradientRootfinder::optimize}, where you determine a numerical approximation of a local extremum by applying the Newton method to the gradient of the function \code{func}, starting at the initial position \code{start}, until the iteration reaches a precision of \code{precision} according to the previous definition of precision.\par
+	Note: You will need to provide the \code{findRoot} method of \code{NewtonRootFinder} with a function of the appropriate signature in order for the algorithm to work.
+	If you have not already done so for the previous exercise, take a look at \href{https://de.cppreference.com/w/cpp/language/lambda}{\code{C++} lambda functions} and the \class{LambdaWrapper} provided in \path{include/lambda_wrapper.h}.
+	If you want to skip this part of the exercise, or to check that your implementation works, you can use the function \code{nabla} defined in \path{include/differential.h} to generate a function that calculates the gradient, by providing it with both a \class{Function} object and a \class{Differentiator} object.
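+	As a self-contained illustration of the idea (independent of the exercise's interfaces), here is the one-dimensional case, where the Newton method applied to $f'$ locates a point with vanishing derivative; the central differences merely stand in for the \class{Differentiator}:
+\begin{lstlisting}[language=C++]
+#include <cmath>
+#include <cstdio>
+
+int main() {
+    // f has its minimum at x = 2.
+    auto f = [](double x) { return (x - 2.0) * (x - 2.0) + 1.0; };
+    const double h = 1e-5, precision = 1e-8;
+    // Central-difference approximations of f' and f''.
+    auto d1 = [&](double x) { return (f(x + h) - f(x - h)) / (2.0 * h); };
+    auto d2 = [&](double x) { return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h); };
+    double x = 0.0; // initial position "start"
+    for (int k = 0; k < 100 && std::fabs(d1(x)) >= precision; ++k)
+        x -= d1(x) / d2(x); // Newton step applied to the gradient
+    std::printf("extremum near x = %g, f(x) = %g\n", x, f(x));
+    return 0;
+}
+\end{lstlisting}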
+	\end{homeworkProblem}
+	\begin{homeworkProblem}
+		\begin{enumerate}
+		\item 
+			\begin{enumerate}[label=\alph*)]
+			\item 
+			\end{enumerate}
+		\end{enumerate}
+	\end{homeworkProblem}
+\end{document}