With platform channels, Dart calls into host code written in the platform's native language: Kotlin or Java on Android • Swift or Objective-C on iOS • C++ on Windows • Objective-C on macOS • C on Linux
https://docs.flutter.dev/development/platform-integration/platform-channels?tab=type-mappings-c-plus-plus-tab
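A minimal Dart-side sketch of such a channel, modelled on the battery-level example from the Flutter docs (the channel and method names follow that example; the matching handler must be implemented on the host side in one of the languages listed above):

```dart
import 'package:flutter/services.dart';

class BatteryService {
  // The channel name must match the one registered in the host-platform code.
  static const MethodChannel _channel =
      MethodChannel('samples.flutter.dev/battery');

  Future<int?> getBatteryLevel() async {
    try {
      // Invokes the 'getBatteryLevel' handler written in Kotlin/Java,
      // Swift/Objective-C, C++ or C, depending on the platform.
      return await _channel.invokeMethod<int>('getBatteryLevel');
    } on PlatformException {
      // The host side did not implement the method or failed.
      return null;
    }
  }
}
```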
The firebase_ml_vision package is now discontinued, since these APIs are no longer available in the latest Firebase SDKs. As an alternative, you can switch to Google's standalone ML Kit library via google_ml_kit for on-device vision APIs. For calling the Cloud Vision API from your app, the recommended approach is to use Firebase Authentication and Cloud Functions, which gives you a managed, serverless gateway to the Google Cloud Vision APIs. For an example Functions project, see the vision-annotate-images sample.
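As a quick illustration of the on-device path, here is a sketch using the split ML Kit text-recognition plugin that the google_ml_kit umbrella package re-exports; class names can vary slightly between plugin versions:

```dart
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';

Future<String> recognizeTextFromFile(String imagePath) async {
  // Build an InputImage from a file path (e.g. a photo picked from the gallery).
  final inputImage = InputImage.fromFilePath(imagePath);
  final textRecognizer = TextRecognizer(script: TextRecognitionScript.latin);
  try {
    // Runs fully on-device; no Firebase project or network call is required.
    final RecognizedText result = await textRecognizer.processImage(inputImage);
    return result.text;
  } finally {
    // Release the native resources held by the recognizer.
    await textRecognizer.close();
  }
}
```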
[Flowchart: decision tree for running ML in a Flutter app — Start → targeting Web? → use TensorFlow.js (via a package) → End; a ready-made package available (e.g. google_ml_kit)? → use it → End; otherwise obtain a tflite model (assets folder, Firebase ML Custom, or TensorFlow Hub via firebase_ml_model_downloader) and run it with tflite / tflite_flutter → End]
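One branch of that flowchart — downloading a custom model from Firebase ML and running it with tflite_flutter — could look roughly like the sketch below; the model name 'digit_classifier' and the tensor shapes are placeholders for whatever model you actually host:

```dart
import 'package:firebase_ml_model_downloader/firebase_ml_model_downloader.dart';
import 'package:tflite_flutter/tflite_flutter.dart';

Future<List<double>> runCustomModel(List<List<double>> input) async {
  // Download (or reuse a cached copy of) a custom model hosted in Firebase ML.
  // 'digit_classifier' is a placeholder name for your own hosted model.
  final model = await FirebaseModelDownloader.instance.getModel(
    'digit_classifier',
    FirebaseModelDownloadType.localModelUpdateInBackground,
    FirebaseModelDownloadConditions(),
  );

  // Load the downloaded .tflite file into a tflite_flutter interpreter.
  final interpreter = Interpreter.fromFile(model.file);

  // The [1, 10] output shape is a placeholder; match your model's signature.
  final output = [List<double>.filled(10, 0.0)];
  interpreter.run(input, output);
  interpreter.close();
  return output[0];
}
```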
1. Flutter
2. Firebase
3. TensorFlow.js detects the user's face within the camera frame
4. MediaPipe estimates 468 3D face landmarks in real time
https://medium.com/flutter/how-its-made-holobooth-6473f3d018dd
https://github.com/flutter/holobooth