
amiSense

Vision Intelligence for the Modern Pet Home

amiSense is an AI-powered IoT platform that uses computer vision to identify individual animals without tags, collars, or microchips. A personalized EfficientNetV2B0 model is trained on your pets, deployed to a Cloud-hosted edge processor, and continuously monitors your space — triggering precise, per-animal automations in real time.

EfficientNetV2B0 · Google Cloud Run · ESP32-CAM · Firebase Auth · MQTT / TLS
The Problem

One feeder. Three pets. Three very different needs.

Managing multiple pets individually is hard. Scheduled feeders don't know which pet showed up. Microchip feeders require expensive hardware on every animal. Cameras record everything but tell you nothing.

amiSense was born from a real problem: three cats with completely different dietary needs, all sharing the same space. The solution wasn't more collars or more chips — it was smarter software that sees the difference.

Activity Feed

Today's detections (Live)

  • Luna · Feeder opened · 97% confidence · 2s ago
  • Mochi · Feeder opened · 93% confidence · 4m ago
  • Shadow · Logged detection · 89% confidence · 12m ago
  • Luna · Feeder opened · 96% confidence · 1h ago

Today: 47 detections · 93% avg. confidence · 3 pets

How It Works

Train it once. Let it work forever.

01

Train your model

Record short videos of each pet using the amiSense mobile app. Upload them to the cloud. Our AI training pipeline (EfficientNetV2B0 on Google Cloud) builds a personalized recognition model trained on your lighting, your environment, your animals.
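The two-phase transfer-learning schedule behind this step can be sketched in tf.keras. This is a minimal illustration, not the shipped pipeline: class count, hyperparameters, and layer choices are assumptions, and `weights=None` is used only to keep the sketch offline (a real pipeline would start from pretrained weights such as `weights="imagenet"`).

```python
import tensorflow as tf

NUM_PETS = 3                 # e.g. Luna, Mochi, Shadow
NUM_CLASSES = NUM_PETS + 1   # +1 background class to suppress false positives

# Phase 1: frozen EfficientNetV2B0 base, train only the new classification head.
base = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights=None, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Phase 2: unfreeze the base and fine-tune end to end at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# For deployment, the trained model is exported to TFLite, roughly:
# tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```

The low fine-tuning rate in phase 2 is what keeps the pretrained features from being destroyed while the model adapts to your specific lighting and animals.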

02

Install the smart camera

The amiSense Smart Camera (ESP32-CAM) connects to your WiFi via Bluetooth provisioning from the mobile app. No complicated setup — point it at the feeding area and it starts streaming to our edge AI processor.

03

Automate any action

Define what happens when each pet is detected. Open the feeder for exactly 3 seconds. Log a recognition event. Trigger a GPIO output. Schedule actions by day and time. The system decides, the camera executes — you just watch the activity feed.

Features

Everything you need. Nothing you don't.

Edge-to-Cloud Recognition

EfficientNetV2B0 trained on your animals, deployed to the edge

amiSense builds a high-precision, per-customer recognition model using EfficientNetV2B0 transfer learning — trained on your pets in your specific lighting and environment. The trained TFLite model is deployed to a Cloud-hosted edge processor that delivers low-latency inference at 1 frame per second, continuously.

  • EfficientNetV2B0 — proven lightweight deep learning architecture
  • Per-customer isolated model training pipeline on Google Cloud
  • Background class suppresses false-positive detections
  • "Wrong pet?" correction triggers automatic model retraining
  • 2-phase transfer learning: frozen base → fine-tune
  • >90% validation accuracy per trained model
Current Status

Built, tested, and heading to alpha.

All five components — backend, mobile, training, edge processor, and device firmware — have completed Phase 2 development.

Phase 1: Complete

Auth, devices, pets, activity feed, live streaming, BLE provisioning

Phase 2: Complete

AI training pipeline, catalog actions, scheduling, corrections feedback loop, push notifications

Phase 3: Coming Soon

Alpha release, onboarding improvements, multi-device enhancements, public availability

Phase 4: Q4 Roadmap

Multi-species recognition (dogs, birds, small mammals) and veterinary API integrations for health event logging

Technology Stack

Built on proven infrastructure

React Native / Expo · Python / Flask · TensorFlow / EfficientNetV2B0 · TFLite Edge Inference · Google Cloud Run · Cloud Firestore · MQTT / HiveMQ Cloud · Firebase Auth · ESP32-CAM (Arduino C++)
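These pieces meet at the MQTT boundary: the edge side publishes detection events that the backend and app consume. A minimal sketch of what one event might look like; the topic layout and payload schema are assumptions for illustration, and the TLS publish with paho-mqtt is shown commented out so the sketch stays self-contained.

```python
import json
import time

def detection_event(device_id: str, pet: str, confidence: float) -> tuple[str, str]:
    """Build a (topic, payload) pair for one detection. Schema is hypothetical."""
    topic = f"amisense/{device_id}/detections"
    payload = json.dumps({
        "pet": pet,
        "confidence": round(confidence, 3),
        "ts": int(time.time()),
    })
    return topic, payload

# Publishing over TLS with paho-mqtt would look roughly like:
# import paho.mqtt.client as mqtt
# client = mqtt.Client()
# client.tls_set()                       # TLS, e.g. to HiveMQ Cloud
# client.username_pw_set("<user>", "<password>")
# client.connect("<broker-host>", 8883)
# topic, payload = detection_event("cam-01", "Luna", 0.97)
# client.publish(topic, payload, qos=1)
```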
Early Access

Ready to bring vision intelligence into your home?

amiSense is entering alpha. Register now and be among the first to deploy a personalized, tag-free AI recognition system for your pets. Expanding to multi-species recognition and veterinary API integrations in Q4.

No spam. We'll only reach out when early access opens.