Development
Ember AI uses computer vision and machine learning models deployed on NVIDIA Jetson Orin devices to analyze real-time visual and thermal data captured by the onboard camera, as well as photos uploaded directly to the AI. The system leverages deep learning–based object detection and image classification techniques developed with PyTorch and NVIDIA's CUDA-accelerated AI stack. These models are optimized for edge inference so they can run directly on the drones without relying on cloud connectivity.
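As a rough illustration of the inference side, the sketch below shows a minimal PyTorch image classifier run in inference mode, the way a model would typically execute on an edge device. The architecture, class count, and input size here are assumptions for the example only; the source does not specify Ember AI's actual model.

```python
import torch
import torch.nn as nn

class BurnClassifier(nn.Module):
    """Hypothetical stand-in for an edge-deployed classifier (not Ember AI's real model)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Tiny convolutional backbone; a real edge model would use a
        # larger pretrained network optimized for the Jetson (e.g. via TensorRT).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

model = BurnClassifier().eval()

# Edge inference runs with gradients disabled to cut memory and latency.
with torch.inference_mode():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy RGB frame
pred = logits.argmax(dim=1).item()
print(f"predicted class index: {pred}")
```

On an actual Jetson deployment the model would additionally be moved to the GPU (`model.cuda()`) and often compiled with TensorRT, but the inference-mode pattern above is the same.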
Testing and Verifying
To test Ember AI, we compiled and labeled a dataset of over 3,500 burn images covering various types and severities, teaching the model to associate features like color, texture, and blistering with burn classifications. We then evaluated the AI on images excluded from training to measure its ability to generalize to new cases. Comparing predictions against the known classifications yielded an average accuracy of about 40 percent. While this is not sufficient for real-world deployment, it demonstrates that the model can identify relevant visual patterns. Testing also highlighted limitations, including dataset size, image variability, and class imbalance. These are guiding improvements such as expanding the dataset, refining labels, and adjusting the model to increase reliability.
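The evaluation step described above, comparing predictions on held-out images against known labels and computing average accuracy, can be sketched as follows. The three class labels and the random stand-in predictions are assumptions for the example; the real pipeline would feed held-out images through the trained model instead.

```python
import random

# Hypothetical label scheme for illustration:
# 0 = superficial, 1 = partial-thickness, 2 = full-thickness.
random.seed(0)
held_out_labels = [random.randrange(3) for _ in range(100)]   # ground truth for held-out images
predictions = [random.randrange(3) for _ in range(100)]       # stand-in for model outputs

# Average accuracy: fraction of held-out images classified correctly.
correct = sum(p == y for p, y in zip(predictions, held_out_labels))
accuracy = correct / len(held_out_labels)
print(f"held-out accuracy: {accuracy:.2%}")
```

Tracking per-class accuracy alongside this overall number is also useful here, since the class imbalance noted above can hide poor performance on rare burn types behind a decent average.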
