Transfer learning is a scenario where "what has been learned in one setting (i.e., distribution P1) is exploited to improve generalization in another setting (say, distribution P2)." — Deep Learning, p. 536
In other words, a model that has already been trained on one problem is reused in some way on a second, related problem. The advantages are numerous, including:

• Learning a new task builds on previously learned tasks
• Less training data is needed, so the learning process is faster
• The resulting models are often more accurate
In this post, we'll implement Transfer Learning for Image Classification using several powerful pre-trained deep learning models built by researchers. We'll use the following pre-trained models:

• MobileNet
• ResNet
• Inception (also known as GoogLeNet)
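Before diving into the individual models, here is a minimal sketch of the transfer-learning recipe in Keras: load a pre-trained backbone, freeze it, and attach a new classification head for the target task. This is an illustrative sketch, not the post's exact code; `NUM_CLASSES` is a hypothetical value, and in practice you would pass `weights="imagenet"` (we use `weights=None` here only so the snippet runs without downloading the pre-trained weights).

```python
# Minimal transfer-learning sketch with Keras (assumes TensorFlow is installed).
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical number of classes in the new task

# In real use: weights="imagenet" to reuse features learned on ImageNet.
base = MobileNetV2(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,                                             # reused backbone
    layers.GlobalAveragePooling2D(),                  # pool features to a vector
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The same pattern applies to ResNet and Inception: only the imported constructor (e.g. `ResNet50`, `InceptionV3`) and the expected input size change.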