Vision Intelligence · IoT · AI

amiSense
Vision Intelligence for the Modern Pet Home
amiSense is an AI-powered IoT platform that uses computer vision to identify individual animals without tags, collars, or microchips. A personalized EfficientNetV2B0 model is trained on your pets, deployed to a cloud-hosted edge processor, and continuously monitors your space — triggering precise, per-animal automations in real time.
One feeder. Three pets. Three very different needs.
Managing multiple pets individually is hard. Scheduled feeders don't know which pet showed up. Microchip feeders require expensive hardware on every animal. Cameras record everything but tell you nothing.
amiSense was born from a real problem: three cats with completely different dietary needs, all sharing the same space. The solution wasn't more collars or more chips — it was smarter software that sees the difference.
Activity Feed: Today's Detections

| Pet    | Event            | Confidence | When    |
|--------|------------------|------------|---------|
| Luna   | Feeder opened    | 97%        | 2s ago  |
| Mochi  | Feeder opened    | 93%        | 4m ago  |
| Shadow | Logged detection | 89%        | 12m ago |
| Luna   | Feeder opened    | 96%        | 1h ago  |

47 detections · 93% avg. confidence · 3 pets
Train it once. Let it work forever.
Train your model
Record short videos of each pet using the amiSense mobile app. Upload them to the cloud. Our AI training pipeline (EfficientNetV2B0 on Google Cloud) builds a personalized recognition model trained on your lighting, your environment, your animals.
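The training step can be pictured roughly as follows. This is a minimal Keras sketch under stated assumptions — `build_model`, the 224×224 input size, and the head layout are illustrative, not the actual amiSense pipeline:

```python
# Minimal sketch of the two-phase transfer-learning step, assuming a
# Keras/TensorFlow pipeline; build_model and the 224x224 input size are
# illustrative assumptions, not the actual amiSense implementation.
import tensorflow as tf

def build_model(num_classes: int, weights="imagenet") -> tf.keras.Model:
    # Phase 1: EfficientNetV2B0 base frozen, only the new head is trained.
    base = tf.keras.applications.EfficientNetV2B0(
        include_top=False, weights=weights, input_shape=(224, 224, 3))
    base.trainable = False
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = base(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    # One output per pet, plus a "background" class to suppress false positives.
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

# Phase 2 would set base.trainable = True and continue training at a much
# lower learning rate to fine-tune the base on the customer's pet images.
```

After training, the finished model can be converted with `tf.lite.TFLiteConverter` for deployment to the edge processor.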
Install the smart camera
The amiSense Smart Camera (ESP32-CAM) connects to your WiFi via Bluetooth provisioning from the mobile app. No complicated setup — point it at the feeding area and it starts streaming to our edge AI processor.
Automate any action
Define what happens when each pet is detected. Open the feeder for exactly 3 seconds. Log a recognition event. Trigger a GPIO output. Schedule actions by day and time. The system decides, the camera executes — you just watch the activity feed.
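As one way to picture the per-pet rule engine described above, here is a hypothetical sketch; `AutomationRule`, the action names, and the confidence thresholds are assumptions for illustration, not the real amiSense schema:

```python
# Hypothetical per-pet automation rules; field names and action strings are
# illustrative assumptions, not the actual amiSense configuration schema.
from dataclasses import dataclass

@dataclass
class AutomationRule:
    pet: str                      # recognized pet label
    action: str                   # e.g. "open_feeder", "log_event", "gpio_high"
    duration_s: float = 0.0       # how long the action stays active
    min_confidence: float = 0.90  # ignore detections below this confidence

RULES = [
    AutomationRule("Luna", "open_feeder", duration_s=3.0),
    AutomationRule("Shadow", "log_event", min_confidence=0.85),
]

def dispatch(pet: str, confidence: float) -> list[AutomationRule]:
    """Return the rules to execute for a single detection event."""
    return [r for r in RULES if r.pet == pet and confidence >= r.min_confidence]
```

Under these example rules, a detection of Luna at 97% confidence matches the first rule and opens the feeder for 3 seconds, while a 50% detection is ignored as below threshold.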
Everything you need. Nothing you don't.
Edge-to-Cloud Recognition
EfficientNetV2B0 trained on your animals, deployed to the edge
amiSense builds a high-precision, per-customer recognition model using EfficientNetV2B0 transfer learning — trained on your pets in your specific lighting and environment. The trained TFLite model is deployed to a cloud-hosted edge processor that delivers low-latency inference at 1 frame per second, continuously.
- EfficientNetV2B0 — proven lightweight deep learning architecture
- Per-customer isolated model training pipeline on Google Cloud
- Background class suppresses false-positive detections
- "Wrong pet?" correction triggers automatic model retraining
- 2-phase transfer learning: frozen base → fine-tune
- >90% validation accuracy per trained model
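Putting these pieces together, the continuous 1 fps inference loop on the edge processor might look roughly like this. The label list, the `classify`/`run_loop` names, and the duck-typed `interpreter` argument (e.g. a `tf.lite.Interpreter`) are assumptions, not the actual amiSense code:

```python
# Sketch of a 1 fps TFLite inference loop; LABELS, classify, and run_loop are
# illustrative names, not the actual amiSense edge processor implementation.
import time
import numpy as np

LABELS = ["background", "Luna", "Mochi", "Shadow"]  # index 0 absorbs false positives

def classify(interpreter, frame):
    """Run one frame through a TFLite interpreter; return (label, confidence)."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(inp["dtype"]))
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    i = int(np.argmax(probs))
    return LABELS[i], float(probs[i])

def run_loop(interpreter, get_frame, on_detection, fps=1.0, max_frames=None):
    """Poll the camera at `fps`, firing on_detection for non-background hits."""
    n = 0
    while max_frames is None or n < max_frames:
        label, conf = classify(interpreter, get_frame())
        if label != "background":
            on_detection(label, conf)  # e.g. look up and execute automation rules
        n += 1
        time.sleep(1.0 / fps)
```

The dedicated background class means an empty feeding area classifies to index 0 and never triggers an action, which is how false positives are suppressed without a separate detector.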
Built, tested, and heading to alpha.
All five components — backend, mobile, training, edge processor, and device firmware — have completed Phase 2 development.
- Phase 1 (complete): Auth, devices, pets, activity feed, live streaming, BLE provisioning
- Phase 2 (complete): AI training pipeline, catalog actions, scheduling, corrections feedback loop, push notifications
- Phase 3 (next): Alpha release, onboarding improvements, multi-device enhancements, public availability
- Beyond: Multi-species recognition (dogs, birds, small mammals) and veterinary API integrations for health event logging
Built on proven infrastructure
Ready to bring vision intelligence into your home?
amiSense is entering alpha. Register now and be among the first to deploy a personalized, tag-free AI recognition system for your pets. Expanding to multi-species recognition and veterinary API integrations in Q4.
No spam. We'll only reach out when early access opens.