your model smaller
• Quantization
  ◦ Instead of using Float32, use uint8
  ◦ Needs a fake quantization node in the graph
  ◦ Search for "Fixed Point Quantization" for more details
• Mixing samples (feeding input whose size or type doesn't match the model) fails with errors like:
  ◦ java.lang.IllegalArgumentException: Failed to get input dimensions. 0-th input should have 602112 bytes, but found 150528 bytes.
  ◦ java.lang.IllegalArgumentException: Cannot convert an TensorFlowLite tensor with type FLOAT32 to a Java object of type [[B (which is compatible with the TensorFlowLite type UINT8)
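
The byte counts in the first exception come straight from the input shape: a 224×224×3 image takes 224 × 224 × 3 × 4 = 602112 bytes as Float32, but only 224 × 224 × 3 × 1 = 150528 bytes as uint8. Below is a minimal sketch (assuming that input shape, plus a hypothetical model file name and label count) of how the input ByteBuffer and the output array have to match the model's tensor type; crossing them is what produces the two exceptions quoted above.

```java
import org.tensorflow.lite.Interpreter;

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: size and type the input/output buffers to match the model's tensor type.
// The model file name and label count are placeholders; the 224x224x3 input shape
// matches the byte counts in the exceptions above.
public class BufferSizingSketch {

    static final int WIDTH = 224, HEIGHT = 224, CHANNELS = 3;
    static final int NUM_LABELS = 1001; // hypothetical label count

    // Float32 model: 4 bytes per value -> 224 * 224 * 3 * 4 = 602112 bytes.
    static ByteBuffer floatInput() {
        return ByteBuffer.allocateDirect(WIDTH * HEIGHT * CHANNELS * 4)
                .order(ByteOrder.nativeOrder());
    }

    // Quantized uint8 model: 1 byte per value -> 224 * 224 * 3 = 150528 bytes.
    static ByteBuffer quantizedInput() {
        return ByteBuffer.allocateDirect(WIDTH * HEIGHT * CHANNELS)
                .order(ByteOrder.nativeOrder());
    }

    public static void main(String[] args) {
        // Quantized model: uint8 input buffer and a byte[][] output container.
        Interpreter tflite = new Interpreter(new File("mobilenet_quant_v1_224.tflite"));
        byte[][] labelProb = new byte[1][NUM_LABELS];
        tflite.run(quantizedInput(), labelProb);
        tflite.close();
        // A float model would instead take floatInput() and a float[1][NUM_LABELS] output;
        // passing a byte[][] to a FLOAT32 model raises the second exception above.
    }
}
```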