diff --git a/README.md b/README.md
index 8cf9eb768565bd6e80389237193cd4d67e18bc2b..04334e83b4798a333c4c15efe1892656e3026bcd 100644
--- a/README.md
+++ b/README.md
@@ -168,17 +168,6 @@ python pix2pix.py \
 
 In this mode, image A is the black and white image (lightness only), and image B contains the color channels of that image (no lightness information).
 
-### Exporting the model
-
-You can export the model to be served or uploaded:
-
-```sh
-python pix2pix.py \
-  --mode export \
-  --output_dir facades_export \
-  --checkpoint facades_train
-```
-
 ### Tips
 
 You can look at the loss and computation graph using tensorboard:
@@ -208,6 +197,17 @@ The test run will output an HTML file at `facades_test/index.html` that shows in
 
 <img src="docs/test-html.png" width="300px"/>
 
+## Exporting
+
+You can export the model to be served or uploaded with `--mode export`. As with testing, you should specify the checkpoint to use with `--checkpoint`.
+
+```sh
+python pix2pix.py \
+  --mode export \
+  --output_dir facades_export \
+  --checkpoint facades_train
+```
+
 ## Code Validation
 
 Validation of the code was performed on a Linux machine with a ~1.3 TFLOPS Nvidia GTX 750 Ti GPU and an Azure NC6 instance with a K80 GPU.
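Since this change ships as a patch, a minimal self-contained sketch of applying a saved unified diff with `git apply` may be useful (the throwaway repository, file contents, and `demo.patch` name below are illustrative, not taken from the patch above):

```shell
set -e
# Work in a throwaway repository so the demo is self-contained
dir=$(mktemp -d)
cd "$dir"
git init -q .
printf 'hello\n' > README.md
git add README.md
git -c user.email=demo@example.com -c user.name=demo commit -qm init

# A tiny unified diff that appends one line to README.md
cat > demo.patch <<'EOF'
--- a/README.md
+++ b/README.md
@@ -1 +1,2 @@
 hello
+world
EOF

# Dry-run first, then apply for real
git apply --check demo.patch && git apply demo.patch
cat README.md
```

`git apply --check` exits non-zero without touching the working tree if the patch would not apply cleanly, which is why it is worth running before the real apply.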