The ability to recognize our own physical appearance is known as visual self-recognition (VSR). Despite extensive research, the neural processes underlying this ability remain unknown. This thesis considered three approaches to address this issue. The first was to investigate whether recognizing images of the self and of other people involves different neural processes associated with face recognition. Prior studies have been unable to do this effectively because they failed to ensure that participants had been similarly exposed to the other faces used as comparisons. This was addressed here by recruiting non-identical twins. Event-related potentials (ERPs) reflecting various stages of face processing were recorded as a proxy for neural activity. No differences were found between photographs of self and twin for the ERPs reflecting the encoding of facial features (the P100), the configuration of those features (the N170), or the matching of these features with previously stored representations of the same person (the P250). However, differences emerged for the ERP reflecting the mnemonic retrieval of semantic information (the N400). Recognizing images of self and others therefore does appear to involve different neural processes associated with face recognition, but only those responsible for recalling semantic information.

The second approach was to test whether prior findings, based upon photographs taken at one moment in time, generalize to other situations in which we see our own image. More specifically, we consistently recognize ourselves in photographs originating from different time periods (e.g., last week, 10 years ago, or when we were children) and in different media (photographs and mirrors). The influence of time period was addressed by comparing ERPs to photographs of participants when they were 5-15, 16-25, and 26-45 years old.
Differences occurred between these time periods for all ERP components except the one reflecting featural encoding. The influence of media was tested by comparing participants' ERPs when they viewed their own faces in a mirror and in a photograph. Galvanic skin responses (GSRs) were also recorded as a proxy for the neural processes associated with the affective appraisal of faces. Differences emerged between images seen in mirrors and photographs for all ERP components, but not for the GSR. These findings suggest that researchers should be cautious in generalizing results based upon photographs from one period of time to instances when VSR involves different time periods or mirrors.

The final approach was to compare the brains of the species that show VSR (the hominids: humans, bonobos, chimpanzees, gorillas, and orangutans) with those of their closest living relatives that do not (the hylobatids: gibbons and siamangs). Evolutionary parsimony suggests that hominids share the neural features for VSR through common descent, and hylobatids are the most closely related outgroup. Therefore, the search space for the neural features involved in VSR can be narrowed by identifying those that are shared by all hominids yet set hominids apart from hylobatids (Suddendorf & Collier-Baker, 2009). The available comparative neurological data were reviewed and features meeting these criteria were identified.

The findings outlined in this thesis, and the paradigms upon which they are based, offer important insights into VSR relative to its development, evolution, function, and associated psychological and neurological mechanisms.