Among the various sources of information signalled by faces, two major types of cue can be distinguished: invariant and variant facial cues. These cues are believed to be processed by units separate from those used in face recognition, and their processing is thought to occur in parallel (Bruce & Young, 1986). Moreover, the processing of invariant and variant cues is said to tap independent systems in terms of neural architecture (Haxby et al., 2000, 2002). Yet there is evidence suggesting interactions between invariant and variant cues, although this literature contains some inconsistencies. There is, for instance, evidence that gender information, an invariant cue, interferes asymmetrically with emotion categorisation: emotion categorisations were slowed when sex cues varied, whereas gender classifications were unaffected by variation in emotional cues (Atkinson et al., 2005). This pattern can be interpreted as suggesting that the processing of facial gender cues precedes the processing of emotional cues (Atkinson et al., 2005). Others, however, have found a symmetrical pattern (Aguado et al., 2009) or have suggested strictly independent processing of gender and emotion (Le Gal & Bruce, 2002). The main aim of the current thesis is to investigate the nature of the interactions between invariant (race, age, and sex) and variant (emotional expression) facial cues. Using the Garner paradigm (Garner, 1974), a paradigm typically used to assess the interaction between multiple, independent dimensions of an object, a series of studies in the current project revealed, firstly, that race, age, and sex cues (invariant features) interfered asymmetrically with emotion processing, implying that the processing of these invariant cues is obligatory during emotion categorisations.
However, when holistic processing of these cues was disrupted by inversion (Yin, 1969), the asymmetrical interaction between sex cues and emotion categorisations was no longer evident (Karnadewi & Lipp, 2011). Further investigation in Study 2 revealed that the interaction of sex and emotion is sensitive to other factors, such as the experimental design, the facial databases used (chiefly reflecting whether teeth are displayed in smiling faces), and the familiarity of individual posers; the symmetrical interaction between sex and emotional cues diminished with increased familiarity. Although distinct invariant cues had different effects on emotion categorisations (i.e., asymmetric interference from race and age cues was observed for both upright and inverted faces, whereas the impact of sex cues was eliminated by inversion), there was no clear evidence for interactions among the invariant features themselves. Finally, the last series of studies investigated whether the gender and expression dimensions postulated in the face space model of face processing (Valentine, 1991) are independent of one another. Study 4 adopted the face after-effects paradigm to assess whether the interaction between gender and expression occurs at the level of face encoding. The current results suggest that gender and expression are not represented independently: an emotion categorisation bias was evident when androgynous-expressionless faces were preceded by gendered faces, and, conversely, a gender categorisation bias for androgynous-expressionless faces ensued following exposure to expressive faces. This may point to an effect of higher-order processes, such as gender stereotypes, on the early perceptual processes that mediate structural face encoding.