Abstract

There is a growing body of literature to show that color can convey information, owing to its emotionally meaningful associations. Most research so far has focused on negative hue–meaning associations (e.g., red), with the exception of the positive aspects associated with green. We therefore set out to investigate the positive associations of two colors (i.e., green and pink), using an emotional facial expression recognition task in which colors provided the emotional contextual information for the face processing. In two experiments, green and pink backgrounds enhanced happy face recognition and impaired sad face recognition, compared with a control color (gray). Our findings therefore suggest that because green and pink both convey positive information, they facilitate the processing of emotionally congruent facial expressions (i.e., faces expressing happiness) and interfere with that of incongruent facial expressions (i.e., faces expressing sadness). Data also revealed a positive association for white. Results are discussed within the theoretical framework of emotional cue processing and color meaning.

Citation: Gil S, Le Bigot L (2014) Seeing Life through Positive-Tinted Glasses: Color–Meaning Associations. PLoS ONE 9(8): e104291. https://doi.org/10.1371/journal.pone.0104291

Editor: Matthew Longo, Birkbeck, University of London, United Kingdom

Received: April 2, 2014; Accepted: July 9, 2014; Published: August 6, 2014

Copyright: © 2014 Gil, Le Bigot. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper and its Supporting Information files (see Table S1 and Table S2).

Funding: This research was supported by a grant from the French National Research Agency (ANR), within the framework of the Emotion(s), Cognition and Behavior call for projects (EMCO-01-2011). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Colors carry information that goes far beyond esthetics, owing to their emotionally meaningful associations. They can therefore transcend their physical nature and take on a psychological meaning [1]. As such, colors can be regarded as a relevant informational context in the processing of all types of stimuli. The present study was designed to investigate the positive meaning of two colors (i.e., green and pink), and its influence on the everyday mechanism of emotional face processing.

Emotional facial expressions provide critical cues in human interactions, as they convey information both on other people’s states and on the environment (e.g., [2]–[4]). According to many emotion theorists, each discrete emotion is expressed via a specific pattern of facial muscles, and can be accurately understood by humans [5]–[7]. Undoubtedly, the ability to understand these nonverbal communication cues allows humans to engage in harmonious interactions and implement adaptive behaviors. The automaticity perspective on emotion recognition is driven by the basic emotion approach (e.g., [8]–[10]).
Psychology researchers have therefore focused mainly on the human ability to understand these nonverbal messages, highlighting extremely efficient processes in adults that seem to be both automatic and effortless (e.g., [11]–[14]). A relatively recent criticism of these classic studies concerns their nonecological design: previous investigations have mainly focused on emotional faces perceived in isolation, whereas in everyday life, humans see these faces in context. Here, context refers to contextual information, in other words, stimuli that are present alongside a target stimulus, and which can modulate (i.e., constrain or facilitate) the processing of that target [15].

Research findings are increasingly showing that emotional contextual information influences affective face processing in different types of context [16], [17]. As such, the influence of external features (i.e., features that are not intrinsically linked to the expresser’s body and face) has been a recent focus of interest, and has so far been tackled in three different ways. The first involves the manipulation of a context described verbally (e.g., [18], [19]). For example, in an fMRI investigation, participants who looked passively at faces expressing surprise (i.e., an ambiguous emotion per se that can be either positively or negatively interpreted) showed a difference in amygdala activation reflecting the negative or positive interpretation of the faces as a function of the valence of a priming sentence [20]. The second involves examining the influence of faces forming the context for the interpretation of a target face. This has revealed a decrease in emotional face recognition performance when a conflicting emotional expression is in its periphery [21], and an increase when a congruent one is present [22]. The third focuses on the influence of emotional scenes forming the background in emotional face recognition tasks [23]–[27]. For instance, ambiguous fearful facial expressions are more efficiently categorized as displaying fear when they are displayed with an emotional scene conveying the same (i.e., negative) emotion, rather than a neutral or positive one [24], thus demonstrating the so-called congruency effect: (1) facilitated face recognition when the face and contextual features convey the same emotion, leading participants to be faster and/or more accurate in recognizing the emotional face; or, conversely, (2) impaired recognition performance when the facial and contextual features convey contradictory emotions, creating interference.

Color is one of the most ubiquitous features of the environment. It can be intrinsically linked to the processing of any object that is perceptible to humans, and can thus be assumed to influence information processing. Therefore, it makes sense for researchers to investigate the impact of color on behavior and psychological functioning [28]. Studies in different fields have revealed how color can influence our perception, affect and cognition, demonstrating, for instance, the existence of perceptual confusion between color and odor (e.g., [29], [30]), or the impact of color on internet use (e.g., [31], [32]). In the present study, we reasoned that color (or more precisely hue, when controlling for lightness and saturation) has an impact on psychological functioning because it carries information arising from its emotionally meaningful associations.
While research on the link between color and psychological functioning is nothing new (e.g., [33]), Elliot and colleagues recently developed a theoretical model of this link [34], [35]. This model hypothesizes that color conveys a specific message that can be explained either phylogenetically (i.e., colors convey biologically-based messages), or ontogenetically (i.e., experiences can endow colors with emotionally meaningful associations). Moreover, color involves raw evaluative processes that influence psychological functioning in an automatic way and take place below the individual’s level of consciousness. There are a great many findings to support this theoretical perspective, with the color red being particularly well documented. Studies adopting a range of different procedures have revealed that even the very subtle presence of a red feature in the environment can be detrimental in an achievement context [35]–[38]. The explanation for this negative effect is that red is negatively valenced, and linked to danger, failure (e.g., [39]) and anger (e.g., [40]).

Green is a color that has received less attention, but as it lies directly opposite red in the color spectrum [41], it has often been used in experimental studies as a control or to convey the opposite meaning to red (e.g., [32], [35]). All the findings suggest that green is a positively valenced color, signifying pleasantness, calmness and happiness. For example, red has been shown to enhance memory for negative words, whereas green increases it for positive ones [42]. Other findings also support the positive meaning of green, showing that it promotes creativity [43], or seems to evoke safety [44]. Similarly, it has been suggested that green is associated with growth and fertile natural environments, an association that Akers and collaborators illustrated with the green exercise effect [45]. These authors showed that participants engaged in exercise (i.e., cycling) were in a more positive mood, and felt they were making less effort, when they were exposed to a video presenting a green outdoor environment as opposed to a gray- or red-filtered one.

The originality of the present study lay in its focus on positive hue–meaning associations for green, consistent with previous studies, as well as for another color that has so far been neglected in empirical research, even though it has a strong symbolism in Western societies, namely pink. Pink has a strong symbolic association with femininity that is frequently exploited in the arts and marketing [46]. This femininity marker is thought to be related to sweetness, and as suggested in many languages and illustrated by the popular song La Vie en Rose (Piaf, 1947), pink also seems to be linked to hope, optimism, happiness and affiliation. Although it is not well documented, there are some findings to back these associations up. For instance, after being exposed to violent and tragic stories, participants tend to be less upset when they fill out a questionnaire on pink paper than when they fill one out on blue or white paper [47]. Along the same lines, pink is seen as referring to desire, happiness and wellbeing [30].

To our knowledge, only two studies have so far examined the influence of colored backgrounds on the perception of facial expressions. Young and colleagues showed that, compared with a green or an achromatic one, a red background facilitates the categorization of angry faces [48]. By contrast, they failed to show that green facilitates the categorization of happy ones.
Frühholz and colleagues (2011) examined the impact of color on the recognition of facial expressions of fear, happiness and neutrality, after participants had undergone a learning phase designed to artificially induce a specific association between color and emotion (Experiments 1 and 2) [49]. Importantly for our purpose, the authors stressed that the face–color associations they created were not random, but based on shared emotional properties (i.e., arousal and valence). Their findings revealed a general interference effect in the valenced face categorization task, with increased response times and decreased response accuracy in incongruent trials (i.e., where face and color had not been emotionally associated in the learning phase) compared with congruent ones. They therefore ran a third experiment, in which they switched the face–color combinations around. In other words, the face–color associations created in the learning phase no longer had shared emotional properties. In this condition, results failed to reveal any significant interference effect. Taken together, these three experiments yielded evidence that color, which has low-level perceptual properties [50], can interfere with emotional face processing, and that this interference stems not from those perceptual features but from the color’s emotional charge. As the authors commented, “if the interference effect has been solely elicited by a ‘non-emotional’ violation of an expected face-color pairing, we probably would have found comparable effects in all experiments irrespective of emotional expressions” (p. 22).

The aim of the present study was to investigate the positive meanings of two colors (green in Experiment 1, and pink in Experiment 2), using the contextual information effect on face perception. Participants performed a forced two-choice task, in which they had to indicate whether a morphed face shown against a colored background expressed neutrality or a specific emotion. Ambiguous facial expressions are more liable to be influenced by contextual features (i.e., [20], [24], [49]). According to the hue, saturation, lightness (HSL) color system, our two colors of interest differed only on hue. Because we were examining the positive hue–meaning associations that are assumed to be intrinsically present in green and pink, we used faces that expressed two contrasting discrete emotions formalized in several well-established models of emotion (e.g., [51]), namely happiness and sadness. As with the interference effect observed by Frühholz and colleagues, we argued that if green and pink are indeed positively emotionally charged, then compared with a control color, they would facilitate the identification of happiness (i.e., congruent condition) more than sadness (i.e., incongruent condition). Moreover, we chose two achromatic control backgrounds (i.e., white and gray), as they had been used as control colors in previous studies. We also collected subjective ratings for each color from both discrete (e.g., [51]) and dimensional perspectives (e.g., [52]), via five bipolar Osgood scales (Fear vs. Anger; Sadness vs. Happiness; Negative vs. Positive, i.e., valence; Calm vs. Arousing, i.e., arousal; Unattractiveness vs. Attractiveness). This subjective task was exactly the same for both experiments, and participants assessed all four colors of interest.

Experiment 1: Green, White, and Gray

Method

Participants.
Thirty-eight women students (mean age = 18.8 years, SD = 1.37) gave their written informed consent to take part in the study in exchange for course credits, as required by the “Ouest III” Statutory Ethics Committee (CPP), which approved this research. All participants were screened for normal color vision with the short form (i.e., 9 plates) of the Ishihara Color Vision Test [53]. Participants were randomly assigned to one of the two experimental emotion conditions: neutrality–happiness continuum or neutrality–sadness continuum.

Material.

A PC controlled the experimental events using E-Prime 1.2 software (Psychology Software Tools, Pittsburgh, PA). The screen was placed at a distance of approximately 60 cm, and the “D” and “K” keys on a keypad were used for the responses. Stimuli consisted of faces enclosed in an oval frame so as to exclude the perception of hair. They were displayed in the center of the screen against a color background. The faces were taken from the empirically valid and reliable Pictures of Facial Affect [54]: the same five female faces featuring a neutral affect (0% expression) gradually morphed into a prototypical emotional expression (i.e., 100% expression) of either happiness (neutrality–happiness continuum) or sadness (neutrality–sadness continuum). Extreme expression values (i.e., 0% and 100%) were used in the training phases, and faces gradually morphed from 20% to 80% in seven 10% increments were used in the test phases.

Color backgrounds were created according to the three HSL dimensions; we based our characterization of the colors on the HSL system and did not use a spectrophotometer. Green corresponded to pure green (i.e., 120° on the color wheel), with lightness and saturation strictly controlled (50% and 100%, respectively, i.e., the values defining the pure hue). For the achromatic backgrounds, gray corresponded to 50% lightness and 0% saturation, and white to 100% and 0%, respectively (see the illustrative sketch below).

Participants rated the emotional aspects of the four colors on five 9-point Osgood scales: Fear vs. Anger, Sadness vs. Happiness, Negative vs. Positive (i.e., valence), Calm vs. Arousing (i.e., arousal), and Attractiveness vs. Unattractiveness. These scales were randomly ordered across participants. For the purpose of these ratings, participants were provided with plates featuring the four colors of interest (green, pink, white, gray). The colors were labeled with letters (A–D) and their order was counterbalanced (i.e., a total of 24 plates were created).

Procedure.

Participants sat at a table facing the computer screen in an isolated room. They began by performing a forced-choice task in which they had to decide as quickly as possible whether the facial expression on the screen was more similar to an emotion (i.e., happiness or sadness, according to the experimental condition) or to neutrality, pressing the key that corresponded to their response. Key responses were counterbalanced across participants. There were five blocks of trials, each block featuring the facial expressions of one of the five women. Each block comprised a training phase, in which participants performed ten trials featuring extreme emotional expressions displayed against a white background: five neutral (0% expression) faces and five prototypical emotional (100% expression) faces, all from the same woman. This phase took place in the presence of the experimenter, and allowed participants to become familiar with both the task and the woman’s face.
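As a purely illustrative aside (not part of the original study), the following short Python sketch converts the HSL specifications of the three backgrounds described in the Material section into RGB triplets; the variable names and the use of the standard-library colorsys module are our own choices.

import colorsys

# Background specifications as (hue in degrees, lightness %, saturation %),
# following the HSL characterization given in the Material section.
backgrounds = {
    "green": (120, 50, 100),  # pure green: 120 deg hue, 50% lightness, 100% saturation
    "white": (0, 100, 0),     # achromatic: 100% lightness, 0% saturation
    "gray":  (0, 50, 0),      # achromatic: 50% lightness, 0% saturation
}

for name, (h, l, s) in backgrounds.items():
    # colorsys works in HLS order, with every component on a 0-1 scale.
    r, g, b = colorsys.hls_to_rgb(h / 360, l / 100, s / 100)
    print(f"{name:5s} -> RGB ({round(r * 255)}, {round(g * 255)}, {round(b * 255)})")

Running this prints (0, 255, 0) for green, (255, 255, 255) for white, and roughly (128, 128, 128) for gray, i.e., the values one would typically pass to display software to render such backgrounds.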
Each trial began with a fixation cross (1000 ms), followed by the onset of the stimulus, which disappeared when the participant gave a response. The intertrial interval was 500 ms. Only data from the test phase were used. The test phase took place in the same conditions as the training phase, except that the experimenter left the room and the participant was shown the same woman’s faces four times for each of the three color backgrounds (i.e., green, white and gray) and for each of the seven morphed expressions (i.e., from 20% to 80%). For each block, participants therefore completed 84 test trials (i.e., 4×3×7), leading to a total of 420 test trials. The order of the blocks was counterbalanced across participants, and trials in both phases were administered in a random order.

After this main task, the participants performed the emotional color ratings. The color plate appeared on the PC screen and they had to indicate how far they thought each color expressed the different emotional dimensions, by placing the corresponding letters on the five Osgood scales. In other words, they had to rate each of the four colors on five emotional dimension continuums. Last, they had to name the four colors from left to right. It should be noted that all the participants named the green, white and gray backgrounds as green, white and gray.

Results and Discussion

Using logistic mixed models [55], [56], we analyzed the emotion responses (i.e., happiness in the neutrality–happiness continuum condition, and sadness in the neutrality–sadness one) with SAS version 9.4 (GLIMMIX procedure). As analyses run on reaction times failed to reveal either main or interactive effects of color, we only report results corresponding to emotion responses; we return to this point in the discussion of Experiment 1.

To accommodate the dependence caused by repeated measures, the initial models were parameterized with random intercepts and slopes, along with the covariance between the variance components. In the initial model, emotion response curves were fitted by polynomials of morphing percentage, with emotion, color, and the interaction between these two factors as fixed effects, a random intercept for each participant, and random slopes for the women’s pictures, color, and morphing percentage (linear, quadratic, and cubic). Emotion could not be entered as a slope, as it was a between-participants term. To reduce the multicollinearity of the effects related to the morphing percentages, this variable was centered so that the median value (50%) corresponded to zero; it was not standardized (i.e., centered and reduced), so that the odds ratios (ORs) could be interpreted in units of morphing percentage. Random slopes are not included in the model when the variable is between-participants (and all its interactions are with a between-participants term): as a participant’s sensitivity cannot vary as a function of a between-subjects factor, each participant is exposed to only one level of this type of variable [57].

In addition, the covariance matrix must be specified in mixed models. The covariance matrix used by default is the variance components matrix (its structure is analogous to that of an ANOVA). The inclusion of random effects in this kind of model sometimes prevents the model from converging. If convergence problems result from the covariance matrix, then the variance associated with at least one random effect is either null or negative, which means that this effect does not contribute significantly to the best model.
The GLIMMIX procedure identifies the effect that is problematic for convergence, and it can thus be removed from the model without affecting the quality of the analysis [58]. Moreover, to probe the interactions, we ran multiple comparison tests for each significant result using the least-squares means (LSMEANS) option of the MIXED procedure with Bonferroni adjustment, and the error degrees of freedom were adjusted by row (ADJDFE = ROW option).

All other things being equal, the trend in the morphing percentage data was described by a third-order polynomial, or cubic trend (S-curve). The linear, F(1, 37) = 24.90, p<.001, quadratic, F(1, 15654) = −5.66, p<.001, and cubic, F(1, 15654) = −10.09, p<.001, trends significantly described the pattern of the data across the morphing percentages. The OR for a 10% change was 4.751, 95% CI [4.162, 5.423]. Figure 1 shows this typical identification curve for the emotional faces, with both curves rising along the continuum of expression. This pattern of data suggested first that participants correctly performed the task, and second that they gave emotion responses more frequently in the neutrality–happiness continuum condition than in the neutrality–sadness one.
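To make the model specification above more concrete, the fixed-effects part of the logistic mixed model can be sketched as follows (our schematic notation, not the authors’; random intercepts and slopes are omitted, Color stands for dummy-coded contrasts, and m_c denotes the morphing percentage centered at 50%):

\operatorname{logit} P(\text{emotion response}) = \beta_0 + \beta_1 m_c + \beta_2 m_c^{2} + \beta_3 m_c^{3} + \beta_4\,\text{Emotion} + \beta_5\,\text{Color} + \beta_6\,(\text{Emotion} \times \text{Color})

Read this way, and ignoring the higher-order trends, the reported odds ratio for a 10-point change in morphing percentage corresponds to \mathrm{OR}_{10} = e^{10\beta_1} = 4.751, that is, a linear slope of roughly \beta_1 = \ln(4.751)/10 \approx 0.156 on the logit scale.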
Figure 1. Mean proportion of emotion responses (i.e., happiness or sadness) as a function of the faces’ degree of emotional expression (Experiment 1). https://doi.org/10.1371/journal.pone.0104291.g001

More importantly for our purpose, the statistical model revealed that emotion, color, and the interaction between the two significantly predicted emotion responses, F(1, 15654) = 15.59, p<.001, F(2, 72) = 5.36, p = .007, and F(2, 15654) = 9.84, p<.001, respectively (see Table 1 for the parameters of the logistic mixed model analysis). As Figure 2 illustrates, even though happy faces prompted more emotion responses overall, this effect was modulated by color. More precisely, multiple comparison tests revealed that green and white elicited significantly more emotion responses for happy faces than for sad faces (all ps<.001), whereas this was not the case for gray (p = .60).
Acknowledgments

We thank Mélissa Godreau and Jérémy Birot for their assistance with the data collection.

Author Contributions

Conceived and designed the experiments: SG LL. Performed the experiments: SG. Analyzed the data: LL. Contributed reagents/materials/analysis tools: LL SG. Contributed to the writing of the manuscript: SG LL.