Face spoofing attacks have become an increasingly critical concern as face recognition is widely deployed. Attack materials have been made visually similar to real human faces, making spoofing clues hard to detect reliably. Previous methods have shown that auxiliary information extracted from the raw RGB data, including depth maps, rPPG signals, and the HSV color space, is a promising way to highlight hidden spoofing details. In this paper, we extract novel auxiliary information to expose hidden spoofing clues and remove scenario-specific information, thereby improving the generalization ability of the neural network and the interpretability of the model's decisions. Since presenting faces through spoof media introduces differences in 3D geometry and texture, we propose a spoof-guided face decomposition network that disentangles a face image into normal, albedo, light, and shading components. In addition, we design a multi-stream fusion network that effectively extracts features from these inherent imaging components and captures the complementarity and discrepancy between them. We evaluate the proposed method on several databases, i.e., CASIA-MFSD, Replay-Attack, MSU-MFSD, and OULU-NPU. The results show that our method achieves competitive performance under both intra-dataset and inter-dataset evaluation protocols.
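The decomposition above rests on a standard intrinsic imaging model. As a hedged sketch (this is the classical Lambertian formulation commonly assumed in such work, not the paper's network), an image can be factored as albedo times shading, where shading is the clamped dot product of the surface normal and the light direction:

```python
import numpy as np

# Illustrative Lambertian imaging model (assumed background, not the paper's method):
#   shading = max(0, normal . light),  image = albedo * shading
rng = np.random.default_rng(0)

H, W = 4, 4
normals = rng.normal(size=(H, W, 3))
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)  # unit surface normals
albedo = rng.uniform(0.2, 0.9, size=(H, W, 3))              # per-pixel RGB reflectance
light = np.array([0.0, 0.0, 1.0])                           # directional light toward camera

shading = np.clip(normals @ light, 0.0, None)[..., None]    # (H, W, 1), non-negative
image = albedo * shading                                    # rendered Lambertian image

print(image.shape)  # (4, 4, 3)
```

A decomposition network inverts this forward model: given only `image`, it estimates the normal, albedo, light, and shading factors, in which spoof media (prints, screens) leave geometry and texture traces.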