We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both jointly so that the discriminator learns to discern whether a line drawing is real training data or the output of the simplification network, which in turn tries to fool it. This approach not only encourages the output sketches to be more similar in appearance to the training sketches, but also allows training with additional unsupervised data: by training with rough sketches and line drawings that do not correspond to each other, we can further improve the quality of the simplification. Our models significantly outperform the state of the art in sketch simplification, and we show that we can also optimize for a single image, which improves accuracy at the cost of additional computation time. Using the same framework, it is possible to train the network to perform pencil drawing generation, which is not possible using the standard mean squared error loss. We validate our framework with two user tests, in which our approach is preferred to the state of the art in sketch simplification 88.9% of the time.
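The joint objective described above can be sketched as a supervised reconstruction term plus an adversarial term that rewards fooling the discriminator. The toy linear-sigmoid `discriminator`, the flat pixel lists, and the weight `alpha` below are illustrative assumptions for exposition only, not the paper's actual network architecture or hyperparameters.

```python
import math

def discriminator(x, w):
    """Toy stand-in for the discriminator network: a linear score over
    flattened pixels passed through a sigmoid, read as the probability
    that x is a real line drawing (assumption, not the paper's model)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def joint_loss(output, target, w, alpha=1.0):
    """Simplification-network loss: MSE to the ground-truth line drawing
    plus an adversarial term that is small when the discriminator is
    fooled (i.e., scores the output as real)."""
    mse = sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)
    adv = -math.log(discriminator(output, w) + 1e-8)
    return mse + alpha * adv

# Tiny usage example with made-up pixel values and discriminator weights.
output = [0.2, 0.8, 0.5]   # simplification-network output pixels
target = [0.1, 0.9, 0.4]   # corresponding clean line-drawing pixels
w = [0.5, -0.3, 0.2]       # toy discriminator weights
print(joint_loss(output, target, w))
```

Note that only the adversarial term needs a ground-truth pairing-free signal, which is what lets unpaired rough sketches and line drawings contribute to training: for unpaired data one would drop the MSE term and keep only the adversarial terms for both networks.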