Summary:
During incremental analysis using the Buck distributed cache, a cache hit for a given module does not guarantee that the output jars of its intermediate targets have been downloaded. We therefore need to filter for the jar files that are actually present on disk before loading the analysis artifacts from them (see the sketch below).
This will also be necessary when combined with Buck's --keep-going option.
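
A minimal sketch of the filtering step, assuming a hypothetical filter_existing_jars helper (names are illustrative, not the actual Infer code) and only the OCaml standard library:

    (* Keep only the jar paths that actually exist on disk, so that we never
       try to load analysis artifacts from a jar that was not downloaded. *)
    let filter_existing_jars (jar_paths : string list) : string list =
      List.filter (fun jar -> Sys.file_exists jar) jar_paths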
Reviewed By: sblackshear
Differential Revision: D3941853
fbshipit-source-id: befda63