
Mehdi
November 08, 2017

Out of class novelty generation, an experimental foundation

Recent advances in machine learning have brought the field closer to computational creativity research. From a creativity research point of view, this offers the potential to study creativity in relationship with knowledge acquisition. From a machine learning perspective, however, several aspects of creativity need to be better defined to allow the machine learning community to develop and test hypotheses in a systematic way. We propose an actionable definition of creativity as the generation of out-of-distribution novelty. We assess several metrics designed for evaluating the quality of generative models on this new task. We also propose a new experimental setup. Inspired by the usual held-out validation, we hold out entire classes for evaluating the generative potential of models. The goal of the novelty generator is then to use the training classes to build a model that can generate objects from future (held-out) classes that are unknown at training time, and thus novel with respect to the knowledge the model incorporates. Through extensive experiments on various types of generative models, we are able to find architectures and hyperparameter combinations which lead to out-of-distribution novelty.


Transcript

  1. Mehdi Cherti, Balázs Kégl (CNRS & Université Paris-Saclay, Center for Data Science); Akin Kazakci (Mines ParisTech, Centre de gestion scientifique). ICTAI 2017.
  2. Motivation • A notable characteristic of human intelligence is novelty generation: the capacity to build, invent, and think of new objects from available knowledge. • Examples: • Design of new products (e.g., in the engineering industry) • Artistic works (e.g., paintings, music) • Intellectual works (e.g., scientific theories)
  3. Motivation Can we build programs that autonomously create new product designs, new paintings, new music styles, new scientific theories?
  4. Questions we asked • What is meant by generation of novelty? • Can we generate novelty? • How can a program generating novelty be evaluated?
  5. The importance of the representation • The designer chooses a representation • The representation reflects/encodes the designer's knowledge about the domain • The designer uses the representation to generate new objects • Different designers will choose different representations (because of different knowledge), leading to completely different objects (Reich, 1995)
  6. The fitness function barrier • In most computational creativity systems, the value (fitness) function is fixed and predetermined. • The representation (genotype) is also fixed and predetermined. • The objects generated by the system therefore reflect the designer's preferences, not the machine's.
  7. Can we learn representations? • An important subfield of machine learning: representation learning • In ML, we know how to learn good representations for prediction (supervised learning) and how to evaluate them • Q: What is a good representation for the generation of new objects?
  8. Can we learn representations? • Can we use the generative models of ML to do that? • Problem: current generative models in ML are mostly trained on maximum likelihood or some proxy of it, and are therefore unlikely to generate "novelty". [Diagram, "What ML wants": learn a generative model from the train data; generated samples should resemble the test data]
  9. Questions we asked • What is meant by generation of novelty? • Can we generate novelty? • How can a program generating novelty be evaluated?
  10. What is meant by generation of novelty? Our definition attempt: generating novelty = generating new types/classes/categories
  11. Questions we asked • What is meant by generation of novelty? • Can we generate novelty, that is, new types? • How can a program generating novelty be evaluated?
  12. Can we generate new types? (Kazakci, 2016) [Diagram: train data → learn → generative model → generate; clusters found semi-manually]
  13. Can we generate new types? In Kazakçı et al. (2016): • We show that symbols of new types can be generated by carefully tuned autoencoders • We make a first step toward defining a conceptual and experimental framework for novelty generation • However, we make no attempt to design evaluation metrics [Figure: a set of types (clusters) discovered by the model]
  14. Questions we asked • What is meant by generation of novelty? • Can we generate novelty, that is, new types? • This paper: How can a program generating novelty be evaluated?
  15. How can a program generating novelty be evaluated? Idea: simulate the unknown. Train on known classes; test on classes known to the experimenter but unknown to the model. Examples: train on all fashion styles up to 2000, test on fashion styles from 2000 onward; train on baroque and classical music, test on romantic music; train on drug-like molecules, test on malaria drugs. Our setup: train on digits, test on letters.
  16. How can a program generating novelty be evaluated? [Diagram: learn a generative model, then generate samples] Q: How many of those are letters?
  17. How can a program generating novelty be evaluated? Learn a discriminator over 36 classes = 10 digit classes + 26 letter classes.
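Given that 36-way discriminator, the letter-count metric reduces to counting generated samples whose predicted class is a letter. A minimal sketch, assuming (hypothetically) that classes 0-9 are digits and 10-35 are letters; the slides do not specify the class ordering.

```python
import numpy as np

def count_letters(posteriors, n_digits=10):
    """Count generated samples whose argmax class is a letter.

    posteriors: array of shape (n_samples, 36), the discriminator's
    class probabilities for each generated image.
    """
    preds = np.asarray(posteriors).argmax(axis=-1)
    return int((preds >= n_digits).sum())
```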
  18. How can a program generating novelty be evaluated? [Figure: generated samples with a low vs. high number of letters] Problem: "noise" can get misclassified as a letter.
  19. How can a program generating novelty be evaluated? Problem: "noise" gets misclassified as letters. Solution: we use objectness = posterior entropy. [Figure: samples with low vs. high objectness]
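One plausible reading of "objectness = posterior entropy" is sketched below: a sample is object-like when the discriminator's posterior is confident (low entropy), and noise-like when the posterior is near uniform. The normalization by log(K) is an illustrative choice; the paper's exact formula may differ.

```python
import numpy as np

def posterior_entropy(probs, eps=1e-12):
    """Shannon entropy of each sample's classifier posterior."""
    probs = np.asarray(probs)
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def objectness(probs):
    """High when the discriminator is confident (low entropy), i.e. the
    sample looks like a well-formed object rather than noise.
    Scaled to roughly [0, 1] by the maximum entropy log(K)."""
    k = np.asarray(probs).shape[-1]
    return 1.0 - posterior_entropy(probs) / np.log(k)
```

A uniform posterior (pure noise) scores near 0; a one-hot posterior (a confidently recognized symbol) scores near 1.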
  20. How can a program generating novelty be evaluated? [Figure: samples arranged by the two metrics: high vs. low number of letters, high vs. low objectness]
  21. Experiments • We run a large-scale experiment, training ~1000 models (autoencoders, GANs) while varying their hyperparameters. • From each model, we generate 1000 images, then evaluate the model using our proposed metrics (letter count + objectness). • Question we tried to answer: can we find models that can generate novelty?
  22. Results • Selecting models by letter count + objectness leads to models that can generate novelty. • Selecting models by digit count + objectness leads to models that memorize the training classes.
  23. Summary • We propose a workable definition of novelty • We propose a set of scores to evaluate the capacity of models to generate novelty • We find models that can generate novelty
  24. Perspectives • The immediate next goal is to analyze the models in a systematic way • Next step: how can we build programs that can build their own value function?
  25. Backup: Generating new types of objects: generating new symbols • We use an iterative method to build symbols the net has never seen (inspired by Bengio et al. (2013), but without trying to avoid spurious samples): • Start with a random image • Repeatedly force the network to construct (i.e., interpret) it by applying f(x) = decode(encode(x)) until convergence