Black-box attacks play a crucial role in exposing vulnerabilities of deep neural networks by generating adversarial examples that remain imperceptible to human observers. Transfer-based attacks, which craft adversarial examples on a surrogate model and apply them to a victim model, have become the dominant technique in black-box settings owing to their practicality and effectiveness. However, as deep learning models grow in complexity, the widening architectural disparities between models make it increasingly difficult to generate adversarial examples with strong transferability. To address this challenge, we propose a novel hybrid query-transfer method called Dual Generalization Attack (DGA). Specifically, DGA enhances model generalization by incorporating a Model Filter (MF) that injects posterior information from the victim model through a limited number of queries, narrowing the gap between the surrogate and victim models. Additionally, DGA introduces a Multi-granularity Local Shuffle (MLS) technique to improve input generalization, increasing input diversity while preserving consistency with the original images. This dual design reduces the dependence of adversarial examples on any particular surrogate model or input configuration, thereby improving transferability and attack performance on unknown victim models. Extensive experiments demonstrate that DGA significantly outperforms existing black-box attacks, yielding a substantial improvement in adversarial transferability.
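The abstract does not spell out how MLS is computed; a plausible reading, consistent with block-shuffle-style input transformations, is to partition the image into non-overlapping blocks at several granularities and permute the blocks, so each copy is diverse yet still composed of the original pixels. The sketch below illustrates that idea only; the function names (`block_shuffle`, `multi_granularity_local_shuffle`) and the choice of granularities are our own assumptions, not the paper's specification.

```python
import numpy as np

def block_shuffle(image, n_blocks, rng):
    # Split each spatial axis into n_blocks strips and permute their order.
    # This rearranges local regions while keeping every original pixel.
    h_strips = np.array_split(image, n_blocks, axis=0)
    rng.shuffle(h_strips)
    rows = np.concatenate(h_strips, axis=0)
    w_strips = np.array_split(rows, n_blocks, axis=1)
    rng.shuffle(w_strips)
    return np.concatenate(w_strips, axis=1)

def multi_granularity_local_shuffle(image, granularities=(2, 4, 8), seed=0):
    # One shuffled copy per granularity: coarse grids preserve more global
    # structure, fine grids inject more diversity. Gradients averaged over
    # such copies would depend less on any single input configuration.
    rng = np.random.default_rng(seed)
    return [block_shuffle(image, g, rng) for g in granularities]
```

Each returned copy has the same shape and the same multiset of pixel values as the input, which is one way to "enhance input diversity while preserving consistency with the original images."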