Perhaps I didn't think this through as well as I could have, and I'm looking for suggestions on how best to handle it. I stumbled upon this post, which sounds like a similar problem:
I’m doing large imports in batches using quiet mode with duplicate_action: skip. So I have a lot of whole album directories full of duplicate files I don’t want. (Don’t worry, I have multiple backups elsewhere.)
Short of scouring the log file for every line that starts with “duplicate-skip /PathForThatBatch/SomeArtist/SomeAlbum” and manually deleting each directory, is there an elegant solution? I’ve got about 3+ days of batches and multiple directories with many duplicates to deal with. However, plain “skip” entries (files that simply didn’t have a good enough match) are mixed in, so I can’t just delete everything.
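Not sure if this counts as elegant, but a minimal sketch of the log-scouring step, assuming your import log is at ~/beets-import.log (wherever you pointed beets’ `log` option) and that duplicate skips are recorded as lines beginning with `duplicate-skip `, as in the post above:

```shell
# List the unique album paths that were logged as duplicate skips,
# leaving the plain "skip" (bad-match) entries out.
# Review this list against your backups BEFORE deleting anything!
grep '^duplicate-skip ' ~/beets-import.log \
  | sed 's/^duplicate-skip //' \
  | sort -u
```

Once you’ve eyeballed the output, you could feed it into an interactive delete loop (`while read -r dir; do rm -rI "$dir"; done`), but I’d treat that as a last resort.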
My thought was to get the “good” stuff in first and come back to the lesser matches “later”. Well, I’m at “later” now and scratching my head.
What flags/config did you import with? I think with a combination of incremental and incremental_skip_later you should be able to import just the “good” tracks (i.e. non-duplicates and matched tracks) non-interactively, and then do a second, interactive import (still with duplicate_action: skip) which should only try to import the previously skipped non-duplicate tracks (i.e. the ones that didn’t match).
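Roughly, the first (quiet) pass I have in mind would look something like this — `incremental` and `incremental_skip_later` are real beets import options, but treat the exact combination as a sketch to check against the docs for your setup:

```yaml
import:
  duplicate_action: skip
  incremental: yes              # remember which directories were already imported
  incremental_skip_later: yes   # do NOT record "skipped" items as done,
                                # so a later run revisits them
  quiet: yes                    # first pass: non-interactive
```

Then for the second pass, re-run `beet import` on the same directories with quiet mode off: incremental mode should pass over everything already imported, leaving only the previously skipped, non-duplicate items for interactive review.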
This was my import section for the last few days. Looks like I need to read up more on incremental.
```yaml
quiet: yes    # Testing this, come back to it!!!
write: yes    # Writes tags to files/tracks
delete: yes   # Be careful!!!
```