Abstract

The transferability of adversarial examples in the black-box attack setting has attracted extensive attention from the community. Among recently proposed methods, advanced optimization algorithms are one of the most successful ways to improve transferability. However, existing advanced optimization algorithms either only slightly enhance the transferability of adversarial examples or incur a large computation overhead. We propose the future momentum and future transformation (FMFT) method to balance transferability and computation overhead. FMFT comprises two parts: future momentum (FM) and future transformation (FT). FM is inspired by the looking-ahead property and updates adversarial examples with the momentum of the N-th future step at each iteration. FT applies input transformations during the computation of the future momentum to obtain a more robust gradient while reducing computation overhead. Additionally, we propose a new input transformation, random block scaling, which divides an image into multiple blocks and scales each block differently. Empirical evaluations on the standard ImageNet dataset demonstrate the superiority of FMFT.
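As a rough illustration of the random block scaling transformation described above, the following sketch partitions an image into a grid of blocks and multiplies each block's pixel values by an independent random factor. The function name, the grid layout, and the uniform scaling range are illustrative assumptions; the abstract does not specify these details.

```python
import numpy as np

def random_block_scaling(image, num_blocks=4, scale_range=(0.8, 1.2), rng=None):
    """Divide an image into a num_blocks x num_blocks grid and scale each
    block's pixel values by an independent random factor.

    Assumptions (not specified in the abstract): a square grid of blocks,
    uniformly sampled per-block scale factors, and pixel values in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    out = image.astype(np.float64).copy()
    # Block boundaries along each axis (roughly equal-sized blocks).
    ys = np.linspace(0, h, num_blocks + 1, dtype=int)
    xs = np.linspace(0, w, num_blocks + 1, dtype=int)
    for i in range(num_blocks):
        for j in range(num_blocks):
            factor = rng.uniform(*scale_range)  # independent factor per block
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] *= factor
    # Keep the transformed image in the valid pixel range.
    return np.clip(out, 0.0, 1.0)
```

In transfer-attack pipelines, such a transformation would typically be applied to the input before each gradient computation so that the resulting gradient averages over multiple scaled views of the image.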