A CAL-101 Each of Your Buddies Is Speaking Of

(…, 2002): when the social interaction partner (the virtual human) looked at the participants, the latter maintained greater interpersonal distance than when the social interaction partner was not looking at them. In the same vein, Hoyt et al. (2003) used IVET to replicate classic social psychology findings on social inhibition. They trained a group of participants in a specific task and subsequently asked them to perform it either in the presence of virtual humans or alone. In accordance with the classic social inhibition finding (Buck et al., 1992), participants performed worse in the presence of virtual humans. Relatedly, the presence of a social interaction partner often increases arousal in real social interactions (Patterson, 1976), and the same was true in an IVE: Slater et al. (2006b) found that participants showed higher arousal, measured through physiological responses such as heart rate and galvanic skin response, when they were in a virtual environment populated by virtual humans (i.e., a bar) than during a lone training session in the IVE. Moreover, the closer a virtual human approached participants, the higher their physiological arousal (Llobera et al., 2010).

Giannopoulos et al. (2010) investigated handshakes by asking participants to take part in a virtual cocktail party. Participants shook virtual humans' hands via a haptic device controlled either by an algorithm designed to produce realistic movements or by a real human. Results showed that virtual handshakes operated by the robot were rated similarly to handshakes operated by humans.

Dyck et al. (2008) used the Facial Action Coding System (Ekman and Friesen, 1978) to artificially create facial expressions of six basic emotions on virtual humans that closely matched those displayed by real actors: the specific facial action units used in natural expressions were implemented in the virtual humans. Results showed that the virtual facial expressions of emotions were, overall, recognized as accurately as, and for some emotions (i.e., sadness and fear) even more accurately than, the natural expressions displayed by real human actors. This study suggests that virtual humans can be used reliably to communicate emotions, although further technical advances are needed to improve the perceived quality of some specific emotions (e.g., disgust). In the same vein, Qu et al. (2014) asked participants to hold a conversation with a virtual woman who displayed either positive or negative facial expressions both while speaking and while listening to the participants. Results showed that the emotions (positive or negative) displayed by the virtual woman during the interaction, and especially during the speaking phase, evoked a congruent emotional state in the participants, an effect also observed in real social interactions (Hess and Blairy, 2001; Hess and Fischer, 2013). Santos-Ruiz et al.