
Hacking Facial Recognition With Beards

I gave a streaming session on the IBM Developer Twitch channel about how to perform facial recognition and the human processes involved.

David Okun

August 25, 2018

Transcript

  1. @dokun24 Agenda • Ethics In Machine Learning • Vernacular • Doing The Facial Recognition • Demo • Existing Challenges • Q & A
  2. @dokun24 The Highest Level Process • Face Detection • Image Normalization • Feature Extraction • Feature Matching
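
    As one concrete illustration of those four stages (not necessarily the stack used in the talk), the open-source face_recognition package, which wraps dlib, maps almost one-to-one onto them; the image paths below are placeholders:

        import face_recognition

        # Face Detection + Image Normalization + Feature Extraction in one call each:
        known_image = face_recognition.load_image_file("known_person.jpg")      # placeholder path
        unknown_image = face_recognition.load_image_file("unknown_person.jpg")  # placeholder path
        known_encoding = face_recognition.face_encodings(known_image)[0]
        unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

        # Feature Matching: compare the two embeddings.
        matches = face_recognition.compare_faces([known_encoding], unknown_encoding)
        print(matches)  # [True] if the two faces are judged to be the same person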
  3. @dokun24 What is OpenCV? • Open (source) Computer Vision • Normalizes computer vision applications & infrastructure • Target detection, texture mapping, etc.
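
    For the Face Detection step, a minimal sketch using OpenCV's bundled Haar cascade (the image path is a placeholder):

        import cv2

        # Load the frontal-face Haar cascade that ships with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        image = cv2.imread("face.jpg")  # placeholder path
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Returns one (x, y, w, h) bounding box per detected face.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print(faces)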
  4. @dokun24 What is dlib? • C++ library for machine learning algorithms • Here, mostly for facial detection • 68 landmark points
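
    A rough sketch of extracting dlib's 68 landmark points; it assumes the pretrained shape_predictor_68_face_landmarks.dat model has been downloaded separately, and the image path is a placeholder:

        import dlib

        detector = dlib.get_frontal_face_detector()
        predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

        img = dlib.load_rgb_image("face.jpg")
        for rect in detector(img, 1):      # detect faces (1 = upsample the image once)
            shape = predictor(img, rect)   # fit the 68 landmark points to this face
            points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            print(points[:5])              # e.g. the first few jawline points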
  5. @dokun24 What is TensorFlow? • High-performance computation library for machine learning • Open source, heavily adopted • The lowest level of code needed for training CNNs
  6. @dokun24
     import tensorflow as tf

     c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
     print(c.shape)
     # ==> TensorShape([Dimension(2), Dimension(3)])

     d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
     print(d.shape)
     # ==> TensorShape([Dimension(4), Dimension(2)])

     # Raises a ValueError, because `c` and `d` do not have compatible
     # inner dimensions.
     e = tf.matmul(c, d)

     # Transposing both operands makes the shapes line up: (3, 2) x (2, 4).
     f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
     print(f.shape)
     # ==> TensorShape([Dimension(3), Dimension(4)])
  7. @dokun24 What is Keras? • A neural network library written in Python • Can run on top of TensorFlow • Creates the layers that help create a feature vector
  8. @dokun24
     from keras.layers import Input, Dense
     from keras.models import Model

     # This returns a tensor
     inputs = Input(shape=(784,))

     # a layer instance is callable on a tensor, and returns a tensor
     x = Dense(64, activation='relu')(inputs)
     x = Dense(64, activation='relu')(x)
     predictions = Dense(10, activation='softmax')(x)

     # This creates a model that includes
     # the Input layer and three Dense layers
     model = Model(inputs=inputs, outputs=predictions)
     model.compile(optimizer='rmsprop',
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])
     model.fit(data, labels)  # starts training
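
    The data and labels in that slide's snippet are not defined on the slide; they would be arrays shaped to match the 784-wide input layer and the 10-way softmax output, for example:

        import numpy as np
        from keras.utils import to_categorical

        # Dummy training data: 1000 random samples of 784 features each.
        data = np.random.random((1000, 784))
        # One-hot labels for the 10-class softmax used with categorical_crossentropy.
        labels = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)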
  9. @dokun24 An example feature vector (embedding) computed for a single face, with most of its values elided here: [0.0109382765, 0.0727260783, -0.0886565521, 0.106995322, -0.0263014287, -0.0352396965, -0.0471194535, 0.0224863011, 0.00886561163, -0.136294395, ..., 0.0348685384, -0.0724523813, 0.00524123944]
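
    Once every face is reduced to a vector like that one, the Feature Matching step can be as simple as measuring the distance between vectors. A minimal sketch with NumPy; the 0.6 threshold is a commonly cited cutoff for dlib-style embeddings, not a value from the talk:

        import numpy as np

        def embedding_distance(a, b):
            """Euclidean distance between two face embeddings (smaller = more similar)."""
            return np.linalg.norm(np.asarray(a) - np.asarray(b))

        def is_same_person(embedding_a, embedding_b, threshold=0.6):
            # Treat two faces as the same person if their embeddings are close enough.
            return embedding_distance(embedding_a, embedding_b) < threshold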
  10. @dokun24 Existing Challenges • Landmark detection with enough light • Different poses / insufficient training data • Occlusion / facial expressions