diff --git a/README.md b/README.md
index 32a53e77b285ca8bcd3dcc486fa69392cb188037..04334e83b4798a333c4c15efe1892656e3026bcd 100644
--- a/README.md
+++ b/README.md
@@ -137,7 +137,7 @@ python tools/process.py \
   --output_dir photos/resized
 ```
 
-No other processing is required, the colorzation mode (see Training section below) uses single images instead of image pairs.
+No other processing is required; the colorization mode (see the Training section below) uses single images instead of image pairs.
 
 ## Training
 
@@ -197,6 +197,17 @@ The test run will output an HTML file at `facades_test/index.html` that shows in
 
 <img src="docs/test-html.png" width="300px"/>
 
+## Exporting
+
+You can export the model for serving or upload with `--mode export`. As with testing, specify the checkpoint to use with `--checkpoint`.
+
+```sh
+python pix2pix.py \
+  --mode export \
+  --output_dir facades_export \
+  --checkpoint facades_train
+```
+
 ## Code Validation
 
 Validation of the code was performed on a Linux machine with a ~1.3 TFLOPS Nvidia GTX 750 Ti GPU and an Azure NC6 instance with a K80 GPU.