Generative adversarial networks (GANs) aim to generate realistic data from some prior distribution (i.e., the input of the generator). However, such a prior distribution is often independent of the real data and may thus lose semantic information. In practice, a latent distribution can be learned to capture this semantic information, but it is difficult to sample from when generating data with GANs. In this paper, we exploit Local Coordinate Coding (LCC) to improve GANs. Specifically, instead of sampling from a pre-defined prior distribution, we introduce an LCC-based sampling method that draws samples from a local coordinate system. More importantly, relying on LCC, we theoretically prove that the generalization ability of GANs depends on the intrinsic dimension of the latent manifold. Finally, we conduct extensive experiments on real-world datasets to demonstrate the effectiveness of the proposed method.