Development
BlazeWatch uses computer vision and machine learning models deployed on NVIDIA Jetson Orin devices to analyze real-time visual and thermal data collected by autonomous drones. The system applies deep learning–based object detection and image classification, developed with PyTorch and NVIDIA’s CUDA-accelerated AI stack. The models are optimized for edge inference so they run directly on the drones without relying on cloud connectivity.
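As a minimal sketch of the on-device detection flow described above (all class names and thresholds here are hypothetical, not the actual BlazeWatch code), each frame's raw model detections can be filtered by class and confidence before raising an alert:

```python
# Sketch of per-frame detection filtering on the edge device.
# Hypothetical labels and threshold; the real system runs a PyTorch
# detector under NVIDIA's CUDA stack on the Jetson Orin.

ALERT_CLASSES = {"fire", "smoke", "person"}
CONFIDENCE_THRESHOLD = 0.6  # assumed value for illustration

def filter_detections(raw_detections):
    """Keep only high-confidence detections of alert-worthy classes.

    raw_detections: list of (class_label, confidence, bounding_box) tuples,
    as an object-detection model might emit for a single frame.
    """
    return [
        (label, conf, box)
        for label, conf, box in raw_detections
        if label in ALERT_CLASSES and conf >= CONFIDENCE_THRESHOLD
    ]

# Example output from a detector on one frame:
frame = [
    ("fire", 0.91, (120, 40, 200, 110)),
    ("smoke", 0.42, (0, 0, 640, 480)),     # below threshold -> dropped
    ("tree", 0.88, (300, 200, 360, 400)),  # not an alert class -> dropped
]
alerts = filter_detections(frame)
print(alerts)  # [('fire', 0.91, (120, 40, 200, 110))]
```

Running the filter on-device keeps bandwidth low: only frames that pass it need to be forwarded off the drone.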


Testing and Verifying
To test and verify the accuracy of BlazeWatch, we evaluated the system using a combination of controlled testing and real-world visual data. The AI models were trained to detect fire, smoke, and human presence on labeled image and video datasets spanning a variety of environmental conditions, camera angles, and fire scenarios. This training allowed the system to learn the visual patterns associated with early fire behavior and potential human risk.

After training, we tested BlazeWatch on video footage and images held out from the training dataset. These test inputs included wildfire clips, controlled fire simulations, and aerial footage with visually similar elements such as fog, shadows, and sun glare to evaluate false positives. We compared the AI’s detections against known ground-truth labels to assess whether fires, smoke, or human presence were correctly identified.

The results showed that BlazeWatch consistently detected clear fire and smoke events, with strong performance in open environments. Accuracy decreased in visually complex scenes, such as heavy foliage or low-visibility conditions, which helped us identify areas for improvement. While BlazeWatch is still in the prototype stage, testing confirmed that the AI can reliably identify high-risk fire indicators and provide meaningful situational awareness. These results validated the core concept and are guiding continued model refinement, dataset expansion, and environmental robustness testing.
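The comparison against ground-truth labels described above can be sketched as a frame-level precision/recall computation (the labels below are illustrative placeholders, not our actual test data):

```python
# Sketch of frame-level evaluation against ground truth.
# Illustrative data only; the actual evaluation used held-out wildfire
# footage, controlled fire simulations, and confounder-heavy aerial clips.

def precision_recall(predictions, ground_truth):
    """Compare per-frame binary fire/smoke predictions with ground truth.

    predictions / ground_truth: parallel lists of booleans, one per frame,
    where True means fire or smoke is (predicted/actually) present.
    """
    tp = sum(1 for p, g in zip(predictions, ground_truth) if p and g)
    fp = sum(1 for p, g in zip(predictions, ground_truth) if p and not g)
    fn = sum(1 for p, g in zip(predictions, ground_truth) if not p and g)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # fp drives false alarms
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # fn drives missed fires
    return precision, recall

# Hypothetical per-frame labels for six test frames:
preds = [True, True, False, True, False, False]
truth = [True, False, False, True, True, False]
print(precision_recall(preds, truth))  # (2/3, 2/3): one false alarm, one miss
```

Tracking precision and recall separately matters here: false positives (e.g. sun glare read as fire) waste responder attention, while false negatives mean a real fire goes unreported.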
