

One study (2010), which adopted emotional facial pictures (from the Chinese Facial Affective Picture System, CFAPS) as stimuli, proposed a three-stage model of facial expression processing. In this model, the brain distinguishes negative emotional facial expressions from positive and neutral ones in the first stage, which explains why the posterior P1 and anterior N1 amplitudes elicited by fearful faces are larger than those elicited by neutral and happy faces. In the second stage, the brain distinguishes emotional from non-emotional facial expressions, which explains why the N170 and vertex positive potential (VPP) amplitudes elicited by fearful and happy faces are larger than those elicited by neutral faces. In the third stage, the brain classifies the different types of facial expressions, which explains why the P3 and N3 amplitudes elicited by fearful, happy, and neutral faces differ from one another. This three-stage model may help us understand the time course of emotional facial expression processing.

Below, we introduce several representative ERP components involved in emotion processing. In early emotional facial expression processing, two representative ERP components cannot be ignored: P1 and N1. Both are indicators of early processing (Brown et al., 2012) and reflect comparatively automatic mechanisms of selective attention (Dennis et al., 2009). P1 is a positive-going potential that peaks around 80–130 ms after stimulus onset and is presumed to index early visual processing (Jessen and Grossmann, 2014). Furthermore, it reaches its maximal amplitude over the occipital areas during emotional word and facial expression processing (Van Hooff et al., 2008; Cunningham et al., 2012).
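An ERP component such as P1 is obtained by averaging many EEG epochs time-locked to stimulus onset, then locating the peak within the component's time window (80–130 ms for P1, per the text above). The following minimal sketch uses entirely synthetic data; the sampling rate, trial count, and noise level are illustrative assumptions, not parameters from the cited studies.

```python
import numpy as np

# Synthetic-data sketch: an ERP is the average of noisy EEG epochs
# time-locked to stimulus onset. All numbers here are illustrative.
rng = np.random.default_rng(0)
fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch: -100 to 500 ms

# Simulate 100 trials: a positive deflection peaking near 100 ms
# (a P1-like component) buried in single-trial noise.
p1 = 5.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))   # microvolts
trials = p1 + rng.normal(0, 10, size=(100, t.size))        # noisy epochs

erp = trials.mean(axis=0)                  # averaging attenuates the noise

# Peak latency within the P1 window (80-130 ms, as stated in the text)
win = (t >= 0.08) & (t <= 0.13)
peak_latency_ms = t[win][np.argmax(erp[win])] * 1000
print(f"P1 peak latency: {peak_latency_ms:.0f} ms")
```

Averaging across trials is what makes the component visible: single-trial noise here is twice the component's amplitude, but it shrinks by roughly the square root of the trial count in the average.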
Moreover, P1 is related to selective attention to emotional stimuli: the P1 amplitudes elicited by attended stimuli are higher than those elicited by unattended stimuli (Dennis et al., 2009). Additionally, the P1 amplitudes elicited by negative emotional pictures and words are larger than those elicited by positive ones (Bernat et al., 2001; Smith et al., 2003; Delplanque et al., 2004). N1, a negative-going potential, appears shortly after P1 and is sensitive to the characteristics of facial expressions; for example, Eimer and Holmes (2002) found that fearful faces induced a shorter N1 latency than did neutral faces.

Another important component is P2, an attention-related component with a typical peak latency of about 200–250 ms (Ferreira-Santos et al., 2012) that reflects the detection of visual features during the perceptual stage of processing (Luck and Hillyard, 1994; Carretié and Iglesias, 1995). Moreover, P2 is regarded as indexing some aspects of the stimulus categorization process (García-Larrea et al., 1992).
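Amplitude differences like those described above (e.g., larger P2 for one condition than another) are commonly quantified as the mean amplitude within the component's window, here 200–250 ms for P2 as given in the text. The sketch below again uses synthetic data; the condition labels, effect sizes, and noise levels are hypothetical.

```python
import numpy as np

# Illustrative sketch (not from the cited studies): quantify a component
# by its mean amplitude in a fixed window and compare two conditions.
rng = np.random.default_rng(1)
fs = 500
t = np.arange(-0.1, 0.5, 1 / fs)

def simulate_erp(p2_gain, n_trials=80):
    """Average of noisy epochs containing a P2-like peak near 220 ms."""
    p2 = p2_gain * np.exp(-((t - 0.22) ** 2) / (2 * 0.02 ** 2))
    trials = p2 + rng.normal(0, 8, size=(n_trials, t.size))
    return trials.mean(axis=0)

win = (t >= 0.20) & (t <= 0.25)        # P2 window (200-250 ms) from the text
erp_a = simulate_erp(p2_gain=6.0)      # hypothetical condition A
erp_b = simulate_erp(p2_gain=3.0)      # hypothetical condition B
mean_a = erp_a[win].mean()
mean_b = erp_b[win].mean()
print(f"mean P2 amplitude: A={mean_a:.1f} uV, B={mean_b:.1f} uV")
```

Mean amplitude over a predefined window is less sensitive to residual noise than a single peak sample, which is why it is a common dependent measure for between-condition comparisons.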