ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features

Georgia Tech¹, Virginia Tech², IBM Research³

ConceptAttention produces saliency maps that precisely localize the presence of textual concepts in images.

Abstract

Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts within images. Without requiring additional training, ConceptAttention repurposes the parameters of DiT attention layers to produce highly contextualized concept embeddings, contributing the major discovery that performing linear projections in the output space of DiT attention layers yields significantly sharper saliency maps compared to commonly used cross-attention mechanisms. Remarkably, ConceptAttention even achieves state-of-the-art performance on zero-shot image segmentation benchmarks, outperforming 11 other zero-shot interpretability methods on the ImageNet-Segmentation dataset and on a single-class subset of PascalVOC. Our work contributes the first evidence that the representations of multi-modal DiT models like Flux are highly transferable to vision tasks like segmentation, even outperforming multi-modal foundation models like CLIP.
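The core mechanism described above — projecting image patch embeddings onto concept embeddings in the output space of an attention layer — can be sketched as follows. This is a minimal illustration with randomly initialized tensors, not the actual ConceptAttention implementation; the shapes, the `concept_saliency` helper, and the concept names are hypothetical, and the real method obtains these embeddings from the attention layers of a pretrained DiT such as Flux.

```python
import torch

def concept_saliency(image_outputs: torch.Tensor,
                     concept_outputs: torch.Tensor) -> torch.Tensor:
    """Project image patch embeddings onto concept embeddings and
    softmax over concepts, yielding per-patch concept scores.

    image_outputs:   (num_patches, d)  attention output-space patch embeddings
    concept_outputs: (num_concepts, d) attention output-space concept embeddings
    """
    scores = image_outputs @ concept_outputs.T   # (num_patches, num_concepts)
    return torch.softmax(scores, dim=-1)         # normalize across concepts

# Toy example (hypothetical shapes and concepts):
patches = torch.randn(16 * 16, 64)   # a 16x16 patch grid with d = 64
concepts = torch.randn(3, 64)        # e.g. "cat", "sky", "grass"
saliency = concept_saliency(patches, concepts)
maps = saliency.T.reshape(3, 16, 16)  # one spatial saliency map per concept
```

Thresholding each map then gives the binarized segmentation masks used in the zero-shot benchmarks.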


ConceptAttention can generate high-quality saliency maps for multiple concepts simultaneously. Additionally, our approach is not restricted to concepts in the prompt vocabulary.


ConceptAttention produces higher-fidelity raw scores and saliency maps than alternative methods, sometimes even surpassing the quality of the ground-truth annotations provided by the ImageNet-Segmentation task. The top row shows the soft predictions of each method; the bottom row shows the binarized predictions.

BibTeX


@misc{helbling2025conceptattentiondiffusiontransformerslearn,
  title={ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features},
  author={Alec Helbling and Tuna Han Salih Meral and Ben Hoover and Pinar Yanardag and Duen Horng Chau},
  year={2025},
  eprint={2502.04320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2502.04320},
}