  • Research article
  • Open access

Psychometric properties of an English short version of the Trier Inventory for chronic Stress



Background

Although a variety of instruments capture the experience of stress, the assessment of chronic stress has been hindered by a lack of economical screening instruments. Recently, an English-language version of the Trier Inventory for Chronic Stress (TICS-EN) was introduced; its 57 items assess nine subdomains of the chronic stress experience derived from a systemic-requirement-resource model of health.


Methods

We constructed a new 9-item short version of the TICS covering all nine subdomains and evaluated it in two samples (total N = 685). We used confirmatory factor analysis to assess factorial validity.


Results

The TICS-9 showed a highly satisfactory model fit, was invariant across participant gender, correlated very highly with the original TICS (r = .94), and correlated moderately (r = .58) with a measure of perceived stress in the past month.


Conclusions

This theoretically driven instrument can therefore be recommended as an English-language short version of the TICS.



Background

According to the German Federal Institute for Occupational Safety and Health, levels of perceived stress have increased significantly since the early 2000s [1]. Long-lasting chronic stress increases the risk of impaired psychological wellbeing and of acute physical illness [2, 3]. Associations between psychosocial stress and depression, cardiovascular disease, sleep disorders, or chronic pain are well established [4,5,6,7,8,9]. Elevated stress levels are also a factor in susceptibility to upper respiratory tract infections, asthma, herpes viral infections, autoimmune diseases, and delayed wound healing [9]. Accordingly, a brief and efficient instrument for measuring chronic stress is valuable to multiple disciplines [10].

An in-depth overview of general and area-specific stress instruments is provided by Cohen and colleagues [3]. However, the focus on acute stress has often left chronic, long-term stress overlooked. For chronic stress, the Perceived Stress Questionnaire [11, 12] provides a general retrospective evaluation of the past year as well as of the past four, six, and eight weeks, without area specificity. The Trier Inventory for Chronic Stress (TICS) [13] explicitly targets area-specific chronic stress: Work Overload, Social Overload, Pressure to Perform, Work Discontent, Excessive Demands at Work, Lack of Social Recognition, Social Tensions, Social Isolation, and Chronic Worrying. These nine domains of chronic stress are measured by 57 items [13, 14], selected in accordance with the systemic-requirement-resource model of health [15]. Schulz, Schlotz, and Becker [13] developed the TICS scales on the basis of this model and, given this rational construction approach, assumed high content validity. In a representative German study, confirmatory factor analysis (CFA) provided evidence of good factorial validity [14].

A 12-item version of the TICS (Short Screening Scale for Chronic Stress, SSCS) was developed by the original authors [13] to meet the need for a brief chronic stress instrument for applied research and practitioners. Items were selected based on their loadings on the strong, unrotated first factor (explained variance 28.4%). However, this empirical item selection carried only five of the original nine stress domains of the full 57-item scale into the SSCS. For the domains represented in the SSCS (Chronic Worrying, Work Overload, Excessive Demands at Work, and Lack of Social Recognition), the SSCS correlated moderately to highly with the corresponding subscales of the 57-item version (r = .68–.87). Social Overload retained one item in the SSCS but correlated only weakly with the corresponding subscale of the long version (r = .45) [13]. The four scales not represented in the SSCS (Pressure to Perform, Work Discontent, Social Tensions, and Social Isolation) likewise showed low correlations with the 12-item SSCS (r = .40–.56) [13]. Therefore, four of the nine theoretically proposed areas of chronic stress are underrepresented in the SSCS. This item selection procedure narrowed the content domain and thus weakened the validity of the SSCS [16]. The strength and uniqueness of the original 57-item TICS lies in its area-specific assessment of chronic stress based on a theoretical model of health; unfortunately, this strength is no longer represented in the short 12-item version (TICS-12, SSCS).

In order to represent all nine theoretically and empirically supported areas of chronic stress, a new short version of the TICS was constructed by Petrowski et al. [17]. A representative German sample of N = 2473 was used to construct and test the new 9-item TICS. Nine items, chosen via the alphamax algorithm, represent the nine areas of chronic stress of the original TICS (TICS-57). The one-factor model of the TICS-9 provided a good fit for the latent construct and showed good internal consistency (α = .88). As the original 57-item and 9-item versions of the TICS were developed and tested in German, we sought to replicate these findings with the English TICS-EN [18] and to evaluate the psychometric properties of an English-language short form, the TICS-9.


Methods

Participants and procedure

The data of Sample 1 were collected at two college campuses in the Eastern and Southwestern regions of the USA. Participants were undergraduate introductory psychology students who took part in return for course credit. The final pooled sample consisted of n = 501 participants. They received a data protection declaration in agreement with the Declaration of Helsinki. The study was approved by the institutional review boards of the universities involved, and all participants provided written informed consent.

Participants of Sample 2 were recruited in Spring 2019 via Amazon’s Mechanical Turk (MTurk) [19], an international crowd-sourcing platform on which researchers post tasks or questionnaires that participants complete in return for payment. In the current study, participants signed up via MTurk and were then directed to an online survey. The survey was only available to participants who were located in the USA and had an MTurk approval rating greater than 95%. The questionnaire took approximately 5 min to complete, and participants were compensated $0.50 for their time. Sample 2 was collected in order to replicate the factorial structure in an independent sample. The final sample consisted of N = 184 participants.

The study was approved by the ethics review board of the Landesärztekammer Rheinland-Pfalz, Germany, and all participants provided informed consent online by agreeing to take part in the study (Table 1).

Table 1 Sample description


Measures

The Trier Inventory for Chronic Stress (TICS) is a standardized German questionnaire whose factorial structure and psychometric properties have been tested, showing good to very good reliability [14]. Internal consistency (Cronbach’s alpha) was good to very good, with values ranging from .84 to .91 (mean α = .87) [13]. Nine interrelated factors of chronic stress are assessed: Work Overload, Social Overload, Pressure to Perform, Work Discontent, Excessive Demands at Work, Lack of Social Recognition, Social Tensions, Social Isolation, and Chronic Worrying. The nine factors are derived from 57 items rated on a five-point scale (1–5, labeled “never”, “rarely”, “sometimes”, “frequently”, “always”). Participants rated the occurrence/frequency of specific situations over a recall period of the previous 3 months. The 12 items with the highest factor loadings constitute the short version by the original authors [13]. In addition, a new short version of the TICS was developed using the alphamax algorithm so as to represent all nine areas of chronic stress of the original TICS; its one-factor model provided a good fit for the latent construct and showed good reliability (α = .88) [17]. Following a state-of-the-art translation procedure (see Petrowski et al. [18]), the 57-item English version of the Trier Inventory for Chronic Stress (TICS-EN) was used in the present study [18].

The Perceived Stress Scale (PSS-10) is the most widely used psychological instrument for measuring perceived stress [20]. Respondents report the degree to which situations in their lives have been unpredictable, uncontrollable, and overloading in the past month on a 5-point scale (0 = never, 1 = almost never, 2 = sometimes, 3 = fairly often, 4 = very often).
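As an illustration, PSS-10 total scores are conventionally computed by reverse-scoring the four positively worded items (items 4, 5, 7, and 8) and summing all ten 0–4 responses. The sketch below assumes this standard scoring convention, which is not detailed in the text:

```python
# Hedged sketch of conventional PSS-10 scoring (assumed convention, not
# reported in this study): items 4, 5, 7 and 8 are positively worded and
# reverse-scored before all ten 0-4 responses are summed.
REVERSED = {3, 4, 6, 7}  # 0-based indices of the reverse-scored items

def pss10_score(responses):
    """responses: list of ten integers, each between 0 and 4."""
    return sum(4 - r if i in REVERSED else r
               for i, r in enumerate(responses))

pss10_score([0] * 10)  # all "never": the four reversed items contribute 4 each
```

Scores can therefore range from 0 to 40, with higher scores indicating greater perceived stress.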

Statistical analyses

We conducted the analyses in R, using the packages lavaan, lordif, MBESS, and semTools [21,22,23,24]. Participants with missing values on any of the TICS-9 items were excluded from the analysis (seven in Sample 1 and eleven in Sample 2). In addition, we excluded participants who failed the attention checks employed in Sample 2 (n = 28). Very few participants (less than 5% across all items in both samples) chose the highest response option, so the items were treated as essentially ordinal. Previous research suggests that conventional maximum likelihood estimation tends to be inaccurate with four or fewer response categories [25, 26]; therefore, we used the robust diagonally weighted least squares (DWLS) estimation method [27].

To evaluate model fit we considered the following measures and cutoff values [28,29,30]: the χ2-statistic should ideally be non-significant, and the ratio of χ2 to the degrees of freedom of the model should be < 2 to indicate good, or < 3 to indicate acceptable, fit. The Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI) should be > .95 for good, or > .90 for acceptable, fit. Finally, the Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR) should be < .08 to indicate acceptable, or < .05 to indicate good, fit. Additionally, we report the 90% confidence interval for the RMSEA. In line with Dunn et al. [31], we assessed reliability using McDonald’s ω [32], accompanied by a 95% confidence interval.
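The cutoff scheme above can be sketched as a small helper function (rate_fit is an illustrative name; only the thresholds come from the text):

```python
# Classify model fit according to the cutoff values cited above.
# rate_fit is an illustrative helper, not part of any R or Python package.
def rate_fit(chi2, df, cfi, tli, rmsea, srmr):
    def grade(value, good, acceptable, higher_is_better):
        if higher_is_better:
            return "good" if value > good else ("acceptable" if value > acceptable else "poor")
        return "good" if value < good else ("acceptable" if value < acceptable else "poor")
    return {
        "chi2/df": grade(chi2 / df, 2, 3, False),   # < 2 good, < 3 acceptable
        "CFI": grade(cfi, .95, .90, True),          # > .95 good, > .90 acceptable
        "TLI": grade(tli, .95, .90, True),
        "RMSEA": grade(rmsea, .05, .08, False),     # < .05 good, < .08 acceptable
        "SRMR": grade(srmr, .05, .08, False),
    }

# invented fit values for illustration, not the study's estimates
rate_fit(chi2=54.1, df=27, cfi=.97, tli=.96, rmsea=.045, srmr=.04)
```

In this invented example, the χ2/df ratio of about 2.0 would be rated acceptable while the remaining indices would be rated good.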

To test for measurement invariance across gender groups, we compared increasingly constrained models as described by Milfont et al. [33]. Since we were dealing with ordered categorical data, we modified the procedure as described by Wu et al. [34]: first, we compared the unconstrained (configural) model to a model with item thresholds fixed to be equal across groups. Second, the threshold-invariant model was compared to the metric model (item loadings constrained). Third, the metric model was compared to the scalar model (item intercepts constrained). To evaluate the model comparisons, we primarily used the differences in CFI and gamma hat (GH) between models, which should not exceed .01. Additionally, we tested for significant differences in χ2. To avoid selecting a non-invariant marker variable, we estimated all factor loadings freely and instead fixed the variance of the latent variable to 1. Finally, we analyzed differential item functioning (DIF) within a logistic ordinal regression framework in order to pinpoint the origin of any measurement non-invariance [35,36,37].
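The decision rule for the nested-model comparisons can be sketched as follows (invariance_holds and the example fit values are illustrative; only the .01 cutoff comes from the text):

```python
# Decision rule for comparing two nested invariance models, per the text:
# a constraint level is tenable if CFI and gamma hat (GH) each decrease by
# no more than .01 relative to the less constrained model.
def invariance_holds(fit_less_constrained, fit_more_constrained, cutoff=0.01):
    """Each argument is a dict with 'cfi' and 'gh' fit values (illustrative)."""
    delta_cfi = fit_less_constrained["cfi"] - fit_more_constrained["cfi"]
    delta_gh = fit_less_constrained["gh"] - fit_more_constrained["gh"]
    return delta_cfi <= cutoff and delta_gh <= cutoff

# e.g. configural model vs. threshold-invariant model (invented fit values)
configural = {"cfi": 0.985, "gh": 0.992}
threshold_invariant = {"cfi": 0.982, "gh": 0.990}
invariance_holds(configural, threshold_invariant)  # both drops stay below .01
```

The same check is then repeated for the threshold-vs-metric and metric-vs-scalar comparisons.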


Results

Item descriptive statistics

We report descriptive statistics for the TICS-9 items in Table 2 and Fig. 1 and mean scores by sociodemographic group membership in Table 1. All corrected item-total correlations were above the commonly used cutoff value of .50.
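A corrected item-total correlation relates each item to the sum of the remaining items, so the item does not inflate its own correlation. A minimal sketch with made-up data (not the study's responses):

```python
# Illustrative computation of corrected item-total correlations.
# The data below are invented; a real analysis would use TICS-9 responses.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def corrected_item_total(data):
    """data: list of respondents, each a list of item scores."""
    n_items = len(data[0])
    return [pearson([row[j] for row in data],             # the item itself
                    [sum(row) - row[j] for row in data])  # sum of the others
            for j in range(n_items)]

corrected_item_total([[1, 2, 1], [3, 4, 3], [5, 5, 4], [2, 3, 2]])
```

Each resulting coefficient would then be compared against the .50 cutoff mentioned above.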

Table 2 Descriptive statistics for the TICS-9 (Sample 1)
Fig. 1

Boxplots of the TICS-9 item distributions. Diamonds represent the item score means

CFA and reliability analysis

We report the results of the CFA in Table 3. Model fit was acceptable in both samples: only the χ2-test indicated a significant deviation from the theoretical model, while all descriptive fit indices indicated acceptable to good fit. Internal consistency was satisfactory in both samples: ωSample1 = .868 [.850; .887], ωSample2 = .872 [.816; .927]. In comparison, the 57-item long version, which was included in Sample 1, had a substantially higher reliability coefficient, ω = .969 [.965; .974].
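For a one-factor model, McDonald's ω can be computed from the standardized loadings; the sketch below uses invented loadings, not the study's estimates:

```python
# McDonald's omega for a one-factor model from standardized loadings:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual
# variances), where each residual variance is 1 - loading^2.
# The loadings are invented for illustration.
def mcdonald_omega(loadings):
    s = sum(loadings)
    residual = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + residual)

round(mcdonald_omega([0.7] * 9), 2)  # nine items loading .70 give about .90
```

This shows why nine moderately loading items can already yield an ω in the high .80s, close to the values reported for the TICS-9.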

Table 3 Model fit in both samples

Measurement invariance

Next, we tested for measurement invariance across gender using Sample 1. Since not all items had sufficient frequencies for all response options in all groups, we collapsed the two highest response categories, “4” and “5”, putting the items on a four-point scale for the invariance analysis. Using the procedure described above, we found evidence for invariance across gender: only the χ2 statistic showed a significant deviation, and only in the final comparison. The differences in CFI and GH never exceeded the cutoff of .01 between levels of constraint, indicating that women and men do not differ meaningfully in their response behavior on the TICS-9.
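The collapsing step can be illustrated in one line (simulated responses, not study data):

```python
# Merge the two highest response categories "4" and "5", putting each item
# on a four-point scale, as done before the invariance analysis.
responses = [1, 2, 5, 4, 3, 5]            # simulated item responses
collapsed = [min(r, 4) for r in responses]  # 5 -> 4, everything else unchanged
```

After this recoding, every category has enough observations in both gender groups for the ordinal invariance models to be estimable.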

The indices of non-compensatory differential item functioning (NCDIF) presented in Table 2 show that most of the gender-specific differences are attributable to Items 3 and 7, both of which exceed the cutoff of NCDIF ≤ .054 for four-point items [38]. Thus, item-specific comparisons are discouraged, specifically for these two items. Considering the entire scale, however, differential test functioning (DTF = 0.2614), the sum of the compensatory differential item functioning (CDIF) values, which accounts for the differing directions of DIF, was below the critical value of 9 × .054 = 0.486. Overall, there was a slight trend for women to choose higher response options than men, given the same trait level of stress (see Fig. 2). However, this difference was so small that it is unlikely to meaningfully influence the interpretation of test scores, as is also evident from Fig. 2, which shows response behavior at the scale level across trait values (Table 4).
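The scale-level decision reported here reduces to a simple comparison (only the DTF value of 0.2614 and the .054 per-item cutoff come from the text; no per-item CDIF values are assumed):

```python
# Scale-level check: differential test functioning (DTF) is the sum of the
# items' compensatory DIF (CDIF) values and is judged against nine times
# the per-item cutoff of .054 for a nine-item scale.
per_item_cutoff = 0.054
critical_dtf = 9 * per_item_cutoff   # 0.486 for nine items
reported_dtf = 0.2614                # sum of CDIF reported in the text

reported_dtf < critical_dtf          # no meaningful scale-level DIF
```

Because the signed CDIF values partially cancel, item-level DIF (as found for Items 3 and 7) need not translate into bias at the level of the total score.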

Fig. 2

Gender-specific test characteristic curves. The scale range differs from the empirical distribution because items were rescaled to minima of 0 and groups of insufficient size were collapsed

Table 4 Analysis of measurement invariance across gender (Sample 1)


Convergent validity

In Sample 1, we found a very high correlation between the TICS-9 and the TICS-57, r(499) = .944, p < .001. Additionally, in Sample 2, the TICS-9 showed a moderately high association with the PSS-10, r(182) = .583, p < .001.


Discussion

A reliable nine-item version of the English TICS-EN was constructed for the assessment of chronic stress. The TICS-9-EN captures all areas of the systemic-requirement-resource model of health. Furthermore, the approach used to develop the TICS-9-EN followed the recommendations of Smith et al. [39], thereby avoiding common methodological sins in developing short-form scales [13, 15, 39]. Scales built from only those items with the highest item-total correlations for a given factor [39,40,41,42] may maintain a high internal consistency estimate of reliability, but the content domain is unintentionally constricted and the validity of the short form is diminished [39, 42]. For that reason, the alphamax algorithm was used to maximize reliability while retaining all nine domains of chronic stress. To avoid impaired content coverage and the inadequate validity that can result from a reduced item pool, the TICS-9-EN covers all nine domains of the systemic-requirement-resource model of health [15] with one item each. The strength and uniqueness of the original TICS-57 rests on its theoretical model of health [13, 15]; by maintaining content coverage, this advantage carries over to the TICS-9-EN.

From a practical perspective, our short form combines the validity of the full-length version with an efficiency that makes it particularly suitable for large multivariable studies. It demonstrates a very good factorial structure and is highly correlated with the 57-item version. Because the scale is measurement invariant across gender, meaningful interpretations of gender differences can be made. Furthermore, the psychometric properties of the TICS-9-EN are similar to those of the original German long and short versions of the TICS [17, 18]: model fit is good, so is reliability, and the scale can be considered invariant across pertinent sociodemographic groups. Future studies could complement the existing analyses by investigating cultural invariance between the two language versions of the instrument.

A strength of the study is that separate samples were drawn in order to replicate the factorial structure. However, some limitations need to be acknowledged. While the item selection algorithm maximizes Cronbach’s alpha, it also increases model fit specifically for a one-factor solution. The improved psychometric properties of the new TICS-9 compared to the SSCS are, consequently, partly a result of the statistical method (see Petrowski et al. [17]). Another limitation is that the PSS was the only comparison measure. Associations with additional instruments for negative affect and chronic stress should be examined in future studies to fully establish convergent and divergent validity. Future research should also demonstrate criterion validity by exploring the relationship with external ratings of chronic stress. A longitudinal design or a study with repeated measurements would provide opportunities for assessing the factor structure over time, determining retest reliability, and testing for potential cohort effects.


Conclusions

In conclusion, our study presents a brief 9-item version of the TICS for measuring chronic stress that is theoretically grounded and shows strong reliability and emerging evidence of validity. Additionally, it demonstrates measurement invariance across participant gender, allowing for meaningful interpretations of gender differences. It can therefore be recommended as an economical screening instrument for multivariable studies in psychology, medicine, and epidemiology.

Availability of data and materials

Data and materials are available from the corresponding author upon reasonable request.



Abbreviations

TICS: Trier Inventory for Chronic Stress
CFA: Confirmatory Factor Analysis
PSS: Perceived Stress Scale
CFI: Comparative Fit Index
TLI: Tucker-Lewis Index
RMSEA: Root Mean Square Error of Approximation
SRMR: Standardized Root Mean Square Residual
GH: Gamma Hat
CDIF: Compensatory Differential Item Functioning
NCDIF: Non-compensatory Differential Item Functioning
DIF: Differential Item Functioning
DTF: Differential Test Functioning


References

  1. Lohmann-Haislah A. Stressreport Deutschland 2012 - Psychische Anforderungen, Ressourcen und Befinden. Dortmund: Bundesanstalt für Arbeitsschutz und Arbeitsmedizin; 2012.

  2. Becker P, Schulz P, Schlotz W. Persönlichkeit, chronischer Stress und körperliche Gesundheit. Zeitschrift Gesundheitspsychol. 2004;12(1):11–23.

  3. Cohen S, Kessler RC, Gordon LU. Measuring stress: a guide for health and social scientists. New York: Oxford University Press; 1995.

  4. Cohen S, Janicki-Deverts D, Miller GE. Psychological stress and disease. J Am Med Assoc. 2007;298(14):1685–7.

  5. Schulz P, Hellhammer J, Schlotz W. Arbeitsstress, sozialer Stress und Schlafqualität: Differentielle Effekte unter Berücksichtigung von Alter, Besorgnisneigung und Gesundheit. Zeitschrift Gesundheitspsychol. 2003;11(1):1–9.

  6. Chandola T, Brunner E, Marmot M. Chronic stress at work and the metabolic syndrome: prospective study. BMJ. 2006;332(7540):521–4.

  7. Roohafza H, Talaei M, Sadeghi M, Mackie M, Sarafzadegan N. Association between acute and chronic life events on acute coronary syndrome. J Cardiovasc Nurs. 2010;25(5):E1–7.

  8. Ehrström S, Kornfeld D, Rylander E, Bohm-Starke N. Chronic stress in women with localised provoked vulvodynia. J Psychosom Obstet Gynecol. 2009;30(1):73–9.

  9. Vedhara K, Irwin MR. Human psychoneuroimmunology. New York: Oxford University Press; 2005.

  10. Jackson JS, Knight KM, Rafferty JA. Race and unhealthy behaviors: chronic stress, the HPA axis, and physical and mental health disparities over the life course. Am J Public Health. 2010;100(5):933–9.

  11. Levenstein S, Prantera C, Varvo V, Scribano ML, Berto E, Luzi C, et al. Development of the perceived stress questionnaire: a new tool for psychosomatic research. J Psychosom Res. 1993;37(1):19–32.

  12. Fliege H, Rose M, Arck P, Walter OB, Kocalevent RD, Weber C, et al. The perceived stress questionnaire (PSQ) reconsidered: validation and reference values from different clinical and healthy adult samples. Psychosom Med. 2005;67(1):78–88.

  13. Schulz P, Schlotz W, Becker P. Trierer Inventar zum chronischen Stress (TICS). Göttingen: Hogrefe; 2004.

  14. Petrowski K, Paul S, Albani C, Brähler E. Factor structure and psychometric properties of the Trier Inventory for Chronic Stress (TICS) in a representative German sample. BMC Med Res Methodol. 2012;12:42.

  15. Becker P. Prävention und Gesundheitsförderung. In: Schwarzer R, editor. Gesundheitspsychologie: ein Lehrbuch. 2nd ed. Göttingen: Hogrefe-Verlag; 1997. p. 517–34.

  16. Beck AT, Steer RA, Brown GK. Manual for the Beck Depression Inventory-II. San Antonio: Psychological Corporation; 1996.

  17. Petrowski K, Kliem S, Albani C, Hinz A, Brähler E. Norm values and psychometric properties of the short version of the Trier Inventory for Chronic Stress (TICS) in a representative German sample. PLoS One. 2019;14(11):e0222277.

  18. Petrowski K, Kliem S, Sadler M, Meuret AE, Ritz T, Brähler E. Factor structure and psychometric properties of the English version of the Trier Inventory for Chronic Stress (TICS-E). BMC Med Res Methodol. 2018;18:18.

  19. Amazon Mechanical Turk [Internet]. [cited 2020 Sep 21].

  20. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav. 1983;24(4):385–96.

  21. Choi SW, Gibbons LE, Crane PK. lordif: an R package for detecting differential item functioning using iterative hybrid ordinal logistic regression/item response theory and Monte Carlo simulations. J Stat Softw. 2011;39(8):1–30.

  22. Jorgensen TD, Pornprasertmanit S, Schoemann AM, Rosseel Y. semTools: useful tools for structural equation modeling. 2019.

  23. Kelley K. MBESS: the MBESS R package. 2019.

  24. Rosseel Y. lavaan: an R package for structural equation modeling. J Stat Softw. 2012;48(2):1–36.

  25. Beauducel A, Herzberg PY. On the performance of maximum likelihood versus means and variance adjusted weighted least squares estimation in CFA. Struct Equ Model A Multidiscip J. 2006;13(2):186–203.

  26. Flora DB, Curran PJ. An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychol Methods. 2004;9(4):466–91.

  27. Li C-H. The performance of ML, DWLS, and ULS estimation with robust corrections in structural equation models with ordinal variables. Psychol Methods. 2016;21(3):369–87.

  28. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures. Methods Psychol Res Online. 2003;8(2):23–74.

  29. Hu L-T, Bentler PM. Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods. 1998;3(4):424–53.

  30. Hu L-T, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model A Multidiscip J. 1999;6(1):1–55.

  31. Dunn TJ, Baguley T, Brunsden V. From alpha to omega: a practical solution to the pervasive problem of internal consistency estimation. Br J Psychol. 2014;105(3):399–412.

  32. McDonald RP. Test theory: a unified treatment. Mahwah: Lawrence Erlbaum Associates; 1999.

  33. Milfont TL, Fischer R. Testing measurement invariance across groups: applications in cross-cultural research. Int J Psychol Res. 2010;3(1):111–30.

  34. Wu H, Estabrook R. Identification of confirmatory factor analysis models of different levels of invariance for ordered categorical outcomes. Psychometrika. 2016;81(4):1014–45.

  35. Kleinman M, Teresi JA. Differential item functioning magnitude and impact measures from item response theory models. Psychol Test Assess Model. 2016;58(1):79–98.

  36. Raju NS, Fortmann-Johnson KA, Kim W, Morris SB, Nering ML, Oshima T. The item parameter replication method for detecting differential functioning in the polytomous DFIT framework. Appl Psychol Meas. 2009;33(2):133–47.

  37. Raju NS, Van der Linden WJ, Fleer PF. IRT-based internal measures of differential functioning of items and tests. Appl Psychol Meas. 1995;19(4):353–68.

  38. Raju NS. DFITP5: a Fortran program for calculating dichotomous DIF/DTF. Chicago: Illinois Institute of Technology; 1999.

  39. Smith GT, McCarthy DM, Anderson KG. On the sins of short-form development. Psychol Assess. 2000;12(1):102–11.

  40. Recklitis CJ, Yap L, Noam GG. Development of a short form of the adolescent version of the Defense Mechanisms Inventory. J Pers Assess. 1995;64(2):360–70.

  41. Whitley RJ, Hromadka TV. Evaluating uncertainty in design storm runoff predictions. Water Resour Res. 1991;27(10):2779–84.

  42. Haynes SN, Richard DCS, Kubany ES. Content validity in psychological assessment: a functional approach to concepts and methods. Psychol Assess. 1995;7(3):238–47.



Acknowledgements

Not applicable.


Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations



Contributions

KP and TR supervised the process of creating this paper and, with CB, collected and provided data. EB and MS contributed substantially to conception and design. All authors made substantial contributions to the analysis and interpretation of data. BS and AH executed the statistical analyses. BS, KP, CB and TR drafted the manuscript and all authors revised it critically for important intellectual content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to K. Petrowski.

Ethics declarations

Ethics approval and consent to participate

All participants volunteered and received a data protection declaration in agreement with the Declaration of Helsinki. They gave both written and verbal informed consent. The study was approved according to the ethical guidelines by the Ethics Commission of the Medical Faculty of the Technische Universität Dresden (EK 79032011).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article


Cite this article

Petrowski, K., Braehler, E., Schmalbach, B. et al. Psychometric properties of an English short version of the Trier Inventory for chronic Stress. BMC Med Res Methodol 20, 306 (2020).
