I’m trying to run a TFLite model on Android with Flutter, but I’m getting this error:
```
E/AndroidRuntime(18461): Caused by: java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_sequential_5_input:0) with 150528 bytes from a Java Buffer with 602112 bytes.
E/AndroidRuntime(18461):     at org.tensorflow.lite.TensorImpl.throwIfSrcShapeIsIncompatible(TensorImpl.java:418)
E/AndroidRuntime(18461):     at org.tensorflow.lite.TensorImpl.setTo(TensorImpl.java:139)
E/AndroidRuntime(18461):     at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:237)
E/AndroidRuntime(18461):     at org.tensorflow.lite.InterpreterImpl.runForMultipleInputsOutputs(InterpreterImpl.java:135)
E/AndroidRuntime(18461):     at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:80)
E/AndroidRuntime(18461):     at org.tensorflow.lite.InterpreterImpl.run(InterpreterImpl.java:128)
E/AndroidRuntime(18461):     at org.tensorflow.lite.Interpreter.run(Interpreter.java:80)
E/AndroidRuntime(18461):     at sq.flutter.tflite.TflitePlugin$RunModelOnBinary.runTflite(TflitePlugin.java:530)
E/AndroidRuntime(18461):     at sq.flutter.tflite.TflitePlugin$TfliteTask.doInBackground(TflitePlugin.java:471)
E/AndroidRuntime(18461):     at sq.flutter.tflite.TflitePlugin$TfliteTask.doInBackground(TflitePlugin.java:445)
E/AndroidRuntime(18461):     at android.os.AsyncTask$3.call(AsyncTask.java:378)
E/AndroidRuntime(18461):     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
E/AndroidRuntime(18461):     … 4 more
I/Process (18461): Sending signal. PID: 18461 SIG: 9
```
This line stood out to me:

```
Caused by: java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_sequential_5_input:0) with 150528 bytes from a Java Buffer with 602112 bytes.
```

What am I doing wrong? Here’s my code:
```dart
import 'dart:typed_data';

import 'package:image/image.dart' as img;
import 'package:tflite/tflite.dart';

Future<List<dynamic>> runModel(Uint8List image) async {
  print("Loading model");
  String? res = await Tflite.loadModel(
    model: "assets/model.tflite",
    labels: "assets/labels.txt",
  );
  print("Model loaded, running prediction");
  img.Image? decoded = img.decodeJpg(image);
  var recognitions = await Tflite.runModelOnBinary(
    binary: imageToByteListFloat32(decoded!, 224),
    numResults: 2,   // set this to the number of classes you have
    threshold: 0.05, // defaults to 0.1
    asynch: true,    // defaults to true
  );
  print(recognitions);
  await Tflite.close();
  return [];
}

Uint8List imageToByteListFloat32(img.Image image, int inputSize) {
  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
  var buffer = Float32List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = img.getRed(pixel) / 255.0;
      buffer[pixelIndex++] = img.getGreen(pixel) / 255.0;
      buffer[pixelIndex++] = img.getBlue(pixel) / 255.0;
    }
  }
  return convertedBytes.buffer.asUint8List();
}
```
Answer
The model seems to be requesting a UINT8 (unsigned 8-bit integer) input tensor: a 224 × 224 × 3 image at one byte per channel is exactly the 150,528 bytes in the error, while your Float32 buffer uses four bytes per value, which gives the 602,112 bytes the interpreter complains about.
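If you want to confirm the input type and shape yourself, you can inspect the model. The tflite plugin you are using doesn’t expose tensor metadata, but the separate tflite_flutter package can; the sketch below assumes that package and its Interpreter/Tensor API, so treat it as a rough illustration rather than a drop-in for your current setup.

```dart
// Sketch: inspect the model's input tensor using the tflite_flutter package
// (a different package from the tflite plugin used in the question).
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> inspectModel() async {
  // Asset path handling differs between tflite_flutter versions.
  final interpreter = await Interpreter.fromAsset('assets/model.tflite');
  final input = interpreter.getInputTensor(0);
  print('input shape: ${input.shape}'); // e.g. [1, 224, 224, 3]
  print('input type:  ${input.type}');  // expect a uint8 type here
  interpreter.close();
}
```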
I think you can simplify your code a bit:
- Prepare a UInt8 buffer instead of Float32
- Don’t divide the values by 255.0; keep the raw 0–255 channel integers
Then it should work.
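For illustration, a UINT8 version of your conversion helper could look roughly like this. It mirrors your existing imageToByteListFloat32 (same loops, same image package calls); the name imageToByteListUint8 is just my placeholder, and it still assumes the image has already been resized to inputSize × inputSize.

```dart
// Rough sketch: same traversal as imageToByteListFloat32, but one unsigned
// byte per channel, so the buffer is 224 * 224 * 3 = 150528 bytes.
Uint8List imageToByteListUint8(img.Image image, int inputSize) {
  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      convertedBytes[pixelIndex++] = img.getRed(pixel);   // already 0-255, no /255.0
      convertedBytes[pixelIndex++] = img.getGreen(pixel);
      convertedBytes[pixelIndex++] = img.getBlue(pixel);
    }
  }
  return convertedBytes;
}
```

You would then pass it the same way you pass the Float32 helper, e.g. binary: imageToByteListUint8(decoded!, 224). Also note that your current code never resizes the decoded JPEG to 224 × 224; img.copyResize from the image package can do that before the conversion.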
(As a side note, using a ByteBuffer will be much more efficient than an array/list.)