Hi all,
I’m trying to use TFOD with the pretrained zoo models for a defect-detection application. In essence, I’m using high-resolution satellite-type imagery to detect certain known and common defects. A couple of tricky things I’m seeing are:
1. The images are high resolution, so when they’re resized only a small fraction of the information is kept, meaning some of the smaller things I’m looking for are lost in the downscaling. To get around this, I divide each image into N evenly spaced rectangles and pass each one through individually (N isn’t fixed; I’m playing around to optimize it). This way I keep almost all of the original information and an aspect ratio that is close to the original images (a rough sketch of the tiling is below, after this list). Intuitively I would think this would do the trick, but I get an enormous regularization loss (using a resized SSD MobileNet V1 FPN model). When I do the image division described in 1, my results are worse by a long shot. I think this is a result of overfitting, which is why the regularization loss becomes so large.
2. My “defects” are not always the same. Say I’m looking for defect 1: it could be 1/4 of the total picture, 1/2 of the total picture, sometimes the whole picture, etc. Furthermore, the defects may be different colors and don’t always look exactly the same, although there are always patterns that the machine is usually able to detect; I would just like to make it better. An example of this is trees that overhang roadways. The detector should be able to find the tree regardless of the type of tree, the type of road (city, urban, paved, parking lot, etc.), or the color of the leaves.
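
For reference, here’s roughly what my tiling step looks like (a simplified sketch; the grid sizes and function names are just illustrative, not my exact code):

```python
import numpy as np

def tile_image(image, rows, cols):
    """Split an HxWxC image array into rows * cols roughly equal tiles.

    Each tile is returned with its (y, x) offset in the full image so that
    detections on a tile can be mapped back to full-image coordinates.
    """
    h, w = image.shape[:2]
    y_edges = np.linspace(0, h, rows + 1, dtype=int)
    x_edges = np.linspace(0, w, cols + 1, dtype=int)
    tiles = []
    for i in range(rows):
        for j in range(cols):
            tile = image[y_edges[i]:y_edges[i + 1], x_edges[j]:x_edges[j + 1]]
            tiles.append((tile, (y_edges[i], x_edges[j])))
    return tiles

# e.g. a 4000x6000 image split into a 2x3 grid -> six ~2000x2000 tiles,
# each much closer to the detector's input size than the full frame.
```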
I know this is a general question, but I thought I’d post it here in case anyone has had experience with something similar and is able to share some insight or make any suggestions. Any tips/tricks are appreciated!!
Thank you,
Derek