Determinants of abstract acceptance for the Digestive Diseases Week – a cross sectional study

Abstract

Background

The Digestive Diseases Week (DDW) is the major meeting for presentation of research in gastroenterology. The acceptance of an abstract for presentation at this meeting is the most important determinant of subsequent full publication. We wished to examine the determinants of abstract acceptance for this meeting.

Methods

A cross-sectional study was performed, based on abstracts submitted to the DDW. All 17,205 abstracts submitted from 1992 to 1995 were reviewed for acceptance, country of origin and research type (controlled clinical trials (CCT), other clinical research (OCR), basic science (BSS)). A random sub-sample (n = 1,000) was further evaluated for formal abstract quality, statistical significance of study results and sample size.

Results

326 CCT, 455 OCR and 219 BSS abstracts were evaluated in detail. Abstracts from N/W Europe (OR 0.4, 95% CI 0.3–0.6), S/E Europe (OR 0.4, 95% CI 0.2–0.6) and non-Western countries (OR 0.3, 95% CI 0.2–0.5) were less likely to be accepted than North-American contributions when controlling for research type. In addition, the OR for acceptance of studies with negative results compared with those with positive results was 0.4 (95% CI 0.3–0.7). A high abstract quality score was also weakly associated with acceptance (OR 1.4, 95% CI 1.0–2.0).

Conclusions

North-American contributions and reports with statistically significant results have higher acceptance rates at the DDW. Formal abstract quality was also predictive of acceptance.

Background

The annual Digestive Disease Week (DDW) is an important opportunity for the presentation of research in gastroenterology, attracting investigators and clinicians from around the world. Every year, several thousand abstracts are submitted to this meeting, which is jointly organized by the American Gastroenterological Association (AGA), the American Association for the Study of Liver Diseases (AASLD), the American Society for Gastrointestinal Endoscopy (ASGE) and the Society for Surgery of the Alimentary Tract (SSAT). Only about one half of these submitted abstracts will eventually be published as full papers, a rate comparable to other medical meetings (A Timmer, RJ Hilsden, J Cole, D Hailey and LR Sutherland, unpublished data, 2001) [1–4].

From surveys of unpublished research there is evidence that the failure to publish is, in the majority of cases, due to non-submission rather than to manuscript rejection [4, 5]. Several studies have shown that abstract acceptance at a scientific meeting significantly influences the decision to write up and submit a complete manuscript for publication [4, 6–8]. However, given the importance of meeting reports in scientific communication, the determinants of abstract acceptance are understudied [9]. While expert review processes aim to select abstracts based on the potential scientific impact of a contribution, other factors that are not easily identified during the usual review process may also play a role.

A factor of particular concern is the potential for preferential acceptance of studies with statistically significant results over those with "negative" results. Positive outcome bias in the publication process ("publication bias") has been acknowledged as a problem primarily in the context of meta-analyses of published clinical trials [10, 11]. However, this phenomenon is suspected to apply to any type of research and may be even more prominent in observational or in less rigorously controlled interventional designs [5].

Potential determinants of abstract acceptance other than scientific impact include the origin of the abstract, the research type and the formal abstract quality, including the quality of the language. Given the influence of abstract acceptance on subsequent publication and the lack of research on this issue, the objective of this study was to identify the determinants of abstract acceptance for the DDW.

Materials and methods

All abstracts submitted to the DDW between 1992 and 1995 (inclusive), including those rejected for presentation at the meeting, were screened for research type, country of origin and acceptance for presentation. These abstracts are available in print in the annual abstract volume of Gastroenterology. Country of origin was based on the affiliation of the first author as reported in the abstract.

Research type was categorized as controlled clinical trial (CCT), other clinical research (OCR) or basic science study (BSS). CCT accounted for about 5% of all submissions and comprised all clinical trials that used a controlled parallel or cross-over design; in other words, besides Phase III clinical trials, controlled Phase II and Phase IV studies were included in this group [12]. OCR comprised therapeutic/diagnostic studies (i.e. studies other than CCT examining the effects of therapeutic or diagnostic procedures), epidemiologic research (i.e. studies assessing the frequency of disease states or risk factors for their occurrence), and physiologic studies in humans (i.e. studies devoted to the elucidation of disease processes in humans, whether healthy or diseased). BSS were defined as any study performed in a laboratory setting in which the unit of analysis was not the intact human (animal studies and studies in biological material). Abstracts that did not report empirical data or that could not be classified as clinical research or basic science were excluded.

A random sample of 1,000 abstracts was selected using computer-generated random numbers. Because we considered positive outcome bias and aspects of formal quality to be particularly relevant in CCT, we used a disproportionately stratified sampling procedure based on research type to increase the proportion of CCT in the sample (n = 400). The examination of abstract characteristics in BSS had a more exploratory character; therefore, a smaller proportion of BSS was sampled (n = 200; OCR: n = 400). Sampling was also stratified by year of submission.

All sample abstracts were evaluated in random order by a single observer (AT) who was blinded to abstract authors, affiliations and acceptance status. The research type (CCT, OCR or BSS) was reviewed and revised where necessary. To detect systematic errors during abstract assessment, 10% of every 100 consecutively evaluated abstracts were evaluated a second time by the same investigator after an interval of 4–6 weeks (test-retest reliability). An additional 10% were evaluated by a second reviewer (RJH), and the two ratings were tested for inter-rater agreement. Items recorded were study design, sample size, statistical significance of study results and formal abstract quality.

Formal abstract quality was defined as a combination of the completeness of reporting and the internal validity of the study. Because no measurement instrument was available that would enable a uniform approach across the large variety of research types, designs and topics, a rating system was developed based on previously validated quality scoring instruments for full papers [13, 14] with modifications based on recommendations for the reporting of structured abstracts [15, 16]. In this rating system, items such as the definition of a research objective, the reporting and appropriateness of the statistical methods, the appropriateness of the sample size, and (where applicable) subject selection criteria, use of control groups, use of randomization, blinding, and control of confounding are combined into a single summary score. The instrument is described in more detail in a separate manuscript which has been submitted for publication. Quality of language was assessed separately by native speakers of North American English currently working in the field of gastroenterology. It was graded as "none or minor errors" (such as typographical errors) or "major errors" (such as grammatical errors or inappropriate vocabulary).
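
Purely as an illustration of how such a checklist-based summary score can be computed, the sketch below uses item names taken from the description above; the actual instrument, its item set and any weighting are described in the separate manuscript and may differ.

```python
# Purely illustrative: a checklist-style summary score of the kind described above.
# Item names are taken from the text; the actual instrument, its item set and any
# weighting are described in the authors' separate manuscript and may differ.
QUALITY_ITEMS = [
    "research_objective_stated",
    "statistical_methods_reported_and_appropriate",
    "sample_size_appropriate",
    "subject_selection_criteria_reported",  # where applicable
    "control_group_used",                   # where applicable
    "randomization_used",                   # where applicable
    "blinding_reported",                    # where applicable
    "confounding_controlled",               # where applicable
]

def quality_score(ratings: dict) -> float:
    """Summary score = fraction of applicable items fulfilled.

    Items rated None (or not rated at all) are treated as not applicable here.
    """
    applicable = [v for v in (ratings.get(item) for item in QUALITY_ITEMS) if v is not None]
    return sum(applicable) / len(applicable) if applicable else float("nan")

example = {
    "research_objective_stated": True,
    "statistical_methods_reported_and_appropriate": False,
    "sample_size_appropriate": True,
    "control_group_used": None,  # not applicable for this (hypothetical) design
}
print(quality_score(example))    # two of three applicable items fulfilled, ≈ 0.67
```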

Sample size was recorded as the number of subjects per treatment arm or study group. The distributions of sample size and formal abstract quality score were assessed separately for the three different research types. Within each type, the variables were categorized into tertiles.

Large sample size was defined as a sample size within the highest tertile. Similarly, high formal abstract quality was defined by a formal quality score within the highest tertile of summary scores.
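
A minimal sketch of this within-type tertile categorization is given below; the data frame and its column names are hypothetical, not the actual study data.

```python
# Minimal sketch of the within-type tertile categorization; the data frame and its
# column names are hypothetical, not the actual study data.
import pandas as pd

abstracts = pd.DataFrame({
    "research_type": ["CCT", "CCT", "CCT", "OCR", "OCR", "OCR", "BSS", "BSS", "BSS"],
    "sample_size":   [12, 45, 160, 30, 80, 500, 6, 10, 24],
    "quality_score": [0.35, 0.55, 0.80, 0.40, 0.60, 0.75, 0.30, 0.50, 0.65],
})

def tertile_within_type(df: pd.DataFrame, col: str) -> pd.Series:
    """Tertile labels (1 = lowest, 3 = highest), computed separately per research type."""
    return df.groupby("research_type")[col].transform(
        lambda s: pd.qcut(s, q=3, labels=[1, 2, 3]).astype(int)
    )

abstracts["size_tertile"] = tertile_within_type(abstracts, "sample_size")
abstracts["quality_tertile"] = tertile_within_type(abstracts, "quality_score")

# "Large sample size" / "high formal quality" = highest tertile within the research type.
abstracts["large_sample"] = abstracts["size_tertile"] == 3
abstracts["high_quality"] = abstracts["quality_tertile"] == 3
print(abstracts)
```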

Statistical significance of the results was assumed if p < 0.05 with respect to the main outcome, if the 95% confidence interval (95% CI) excluded the reference value, or if the report stated that statistical significance was achieved. Results were considered negative when statistical significance was not achieved. Results were considered "equivocal" when statements on statistical significance were absent or when multiple results with mixed significance were reported.
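
The classification rule can be summarized as a small decision function; the sketch below is only an illustration of the rule as stated above, with hypothetical field names.

```python
# Illustration only: the classification rule stated above, with hypothetical field names.
from typing import Optional

def classify_result(p_value: Optional[float] = None,
                    ci_excludes_reference: Optional[bool] = None,
                    states_significance: bool = False,
                    mixed_significance: bool = False) -> str:
    """Return 'positive', 'negative' or 'equivocal' for an abstract's main result."""
    if mixed_significance:
        return "equivocal"   # multiple results with mixed significance
    if (p_value is not None and p_value < 0.05) or ci_excludes_reference or states_significance:
        return "positive"    # p < 0.05, 95% CI excludes the reference value, or stated significance
    if p_value is not None or ci_excludes_reference is not None:
        return "negative"    # significance addressed but not achieved
    return "equivocal"       # no statement on statistical significance

print(classify_result(p_value=0.03))   # positive
print(classify_result(p_value=0.30))   # negative
print(classify_result())               # equivocal
```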

Abstract productivity of a country was defined as the number of submitted abstracts per 1,000 physicians in the country, based on published statistical information on population size and physician density [17]. Countries were grouped after data exploration, based on similarities in the distribution of research types, acceptance rates and geographical region.
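
For illustration, the productivity measure reduces to simple arithmetic; the numbers in the example below are invented and are not taken from the country statistics used in the study [17].

```python
# Worked example of the productivity measure; the numbers are invented and are not
# taken from the country statistics used in the study [17].
def productivity_per_1000_physicians(submissions: int, physicians: int, years: int = 4) -> float:
    """Submitted abstracts per 1,000 physicians per year."""
    return submissions / (physicians / 1000) / years

# A country submitting 1,800 abstracts over the 4-year period with 50,000 physicians:
print(productivity_per_1000_physicians(1800, 50_000))  # 9.0 submissions per 10^3 physicians per year
```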

Statistical analysis

For the assessment of test-retest and inter-rater reliability of the rating system, intraclass correlation coefficients (ICCC) were calculated [18].
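
A minimal sketch of such a reliability check is shown below, assuming long-format duplicate ratings and using the pingouin package; the original analysis may well have used different software, and the appropriate ICC form depends on the design.

```python
# Sketch of the reliability check using intraclass correlation; the 'pingouin' package
# is one convenient option (the original analysis may have used different software, and
# the appropriate ICC form depends on the design). Ratings below are hypothetical.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "abstract": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "rater":    ["first", "second"] * 6,
    "score":    [0.60, 0.65, 0.40, 0.35, 0.80, 0.75, 0.55, 0.60, 0.30, 0.35, 0.70, 0.65],
})

icc = pg.intraclass_corr(data=ratings, targets="abstract", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```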

The sample size was calculated to allow for separate analyses by research type (controlled clinical trials, other clinical research, basic science research) and to detect a minimum relevant OR of 2.0 with a power of > 80%. Multiple logistic regression was used to calculate adjusted odds ratios for abstract acceptance. Chi-squared tests and Spearman's correlation coefficient were used in the exploratory analysis. Statistical significance was assumed at the 95% level of confidence. Acceptance rates are presented stratified by research type, statistical significance of the abstract results, and region of origin.

Determinants of abstract acceptance were estimated using multiple logistic regression, including only analytical studies (studies with a control group or before-after comparisons). To account for the disproportionate sampling, analyses were performed stratified by research type (separate models for each group). In a combined model, interaction effects were examined by the inclusion of interaction terms. All data were entered as categorical variables. Model fit was examined with the Hosmer-Lemeshow test [19].
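
A minimal sketch of an adjusted-odds-ratio analysis of this kind follows; the variable names and the simulated data are assumptions, and the published models included further covariates (e.g. formal quality and language).

```python
# Minimal sketch of an adjusted-odds-ratio analysis of this kind; variable names and the
# simulated data are assumptions, and the published models included further covariates
# (e.g. formal quality and language).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "accepted":     rng.binomial(1, 0.6, n),
    "region":       rng.choice(["NorthAmerica", "NW_Europe", "SE_Europe", "Other"], n),
    "significance": rng.choice(["positive", "negative", "equivocal"], n),
    "type":         rng.choice(["CCT", "OCR", "BSS"], n),
    "year":         rng.choice([1992, 1993, 1994, 1995], n),
})

# All predictors entered as categorical; North America and positive results as reference levels.
model = smf.logit(
    "accepted ~ C(region, Treatment('NorthAmerica')) "
    "+ C(significance, Treatment('positive')) + C(type) + C(year)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals.
or_table = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)

# An interaction can be examined by adding a product term, e.g.:
# smf.logit("accepted ~ C(region) + C(significance) * C(type) + C(year)", data=df).fit()
```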

Results

Description of abstracts and acceptance rates, all abstracts

During the four-year period, 17,205 abstracts were submitted. Contributions were divided approximately evenly between clinical and basic science research (Table 1). Only 5.4% of all abstracts were CCT. The percentage of unclassifiable studies was below 1%.

Table 1 Distribution and crude acceptance by country of origin, all abstracts

Abstracts were submitted from 65 different countries. However, the top 10 contributors accounted for 90% of all submitted abstracts. Thirty-nine percent of the abstracts originated in the US, followed by Japan (10%) and Germany (8%). In 503 abstracts (2.9%), authors from more than one country were reported. Taking into account population size and physician density, Irish investigators were the most productive, with an average of 9.0 submissions per 10³ physicians per year (2nd: Canada, 3.1/10³; 3rd: UK, 3.1/10³).

The overall acceptance rate was 62.2% and varied by research type (CCT 66.9%, OCR 56.3%, BSS 70.5%). International contributions were more often accepted than those involving only one country (73.8% vs. 61.9%). Crude acceptance rates for the top 10 contributor countries by country of first author are presented in Table 1. There were no substantial changes in acceptance rates by country when the analysis was based on the country of the last author as compared to the country of the first author.

Description of sample abstracts

After revision of the research type, 326 CCT, 455 OCR and 219 BSS abstracts were included in the random sample and evaluated in more detail. The evaluation procedure was consistent over time (test-retest reliability, ICCC 0.85) and between the two raters (inter-rater reliability, ICCC 0.69). Abstract characteristics are shown in Table 2. OCR studies were roughly evenly divided among therapeutic or diagnostic studies (T&D, n = 159), epidemiology (epid., n = 135) and human physiology (physiol., n = 161). Within these subgroups, there was no difference in acceptance rates or in the percentage of negative studies (overall 12%). Of note, only 4% of BSS reported negative results (CCT: 30%). Negative results were associated with lower sample size in CCT (p = 0.04) but not in OCR or BSS/animal studies (p = 0.6 and p = 0.8). There were no differences in the distribution of sample size categories by region when stratified by research type (p = 0.3). Major language errors occurred in 7% of the abstracts (US/Canada 2%, N/W Europe 9%, S/E Europe 12%, other 11%). Comprehensibility was compromised by language errors in < 1% of the abstracts.

Table 2 Distribution and baseline characteristics of sample abstracts

Predictors of abstract acceptance, sample abstracts

The region of origin was associated with acceptance rates for all research types. Statistical significance of the results and formal quality were significantly associated with abstract acceptance in CCT and OCR but not in BSS (Table 3). The strong association between abstract origin and acceptance rates was not explained by differences in formal quality (Table 4). In crude analysis, abstracts containing major language errors were less likely to be accepted than abstracts with no or minor errors (52.1% vs. 64.2%, p = 0.06). However, when stratified by region, this effect disappeared (p = 0.7). Furthermore, the association between origin and acceptance rates was sustained when abstracts with major language errors were excluded from the analysis.

Table 3 Acceptance rates by region, statistical significance and formal quality
Table 4 Acceptance rates by region, stratified by formal quality

Logistic regression analysis for the sample abstracts confirmed region of origin, statistical significance of the results and study type as significant predictors of abstract acceptance (Table 5). Contributions from North America were more likely to be accepted than submissions from Europe or other countries, and this effect was independent of the inclusion of other variables. Reports with negative results were less likely to be accepted than studies with positive results, and BSS were less likely to be accepted than clinical research. As expected from the stratified analysis, language errors were not associated with reduced chances of acceptance after controlling for other variables. No effect was found for one vs. two or more countries (national vs. international studies) or for sample size. Interaction between research type and statistical significance of the study results or origin could not be demonstrated by the inclusion of product terms.

Table 5 Predictors of abstract acceptance (n = 831a, adjusted for year of submission)

Estimates from a model restricted to CCT were similar in strength to those of the overall model, with statistical significance of the results and formal abstract quality significantly associated with abstract acceptance. In addition, the use of parallel controls as opposed to cross-over designs contributed significantly to the prediction of acceptance. Differences based on the country of origin were apparent; however, the confidence intervals included the reference value.

For OCR, country of origin, statistical significance of the study results and formal abstract quality were significantly associated with acceptance. In BSS, the gradient based on the origin of the abstract was particularly strong as compared to the other sub-models. Animal studies had lower acceptance rates than studies in biological material.

Including formal quality as a continuous score rather than a dichotomized variable increased the precision of the models. Because of the narrower confidence intervals in this approach, country of origin was a significant predictor of abstract acceptance in all sub-models. Otherwise, no substantially different results were found; the point estimates were similar to those from the models using only categorical variables.

Discussion

This study identified two possible areas of bias in the process of abstract selection for the annual DDW: a bias in favor of North-American contributions, and a bias based on the statistical significance of the results.

Bias based on national origin has been reported in various aspects of the dissemination and appreciation of research, e.g. with respect to citation-publication ratios, journals' peer review processes, inclusion of trials in meta-analyses and even the distribution of Nobel prizes [20–27]. In our study, the association between the origin of the abstract and the chance of presentation at the meeting was strong and consistent across research types and formal quality categories, although the association appeared more pronounced in BSS than in CCT. The lack of statistical significance for country of origin in the model restricted to CCT is most likely due to a type II error; the variable became significant when formal quality was entered as a continuous variable, increasing the precision of the estimates. The proportion of abstracts with major language errors was too low to have a significant impact on acceptance rates. The finding of country bias contrasts with the results of a similar study recently published in Gastrointestinal Endoscopy by Eloubeidi et al. [28], in which the determinants of publication and abstract acceptance were examined based on abstracts submitted to the American Society for Gastrointestinal Endoscopy. No effect was found for country of origin (US vs. non-US) with respect to abstract acceptance for the meeting. However, those abstracts represented a very limited area of gastroenterology and comprised almost exclusively clinical research. Also, almost all of the non-US submissions were of North-West European origin, whereas at the DDW, Japan was the second most frequent country of origin. Of note, in the study by Eloubeidi, non-US submissions were followed by full publication in a significantly higher proportion than abstracts from the United States; possibly, there are European centers which are particularly strong in endoscopy research. The low subsequent full publication rate in the study by Eloubeidi (25%) contrasts with publication rates reported for other medical specialties (most often around 50% [3]) and may indicate that the results are not representative of more general meetings.

A limitation of our study was the inability to take into account originality and relevance, which can only be assessed by expert reviewers. However, expert reviewers cannot be fully unaware of the origin of the abstracts, even if, unlike in the DDW review process, attempts at blinding are made. It has been shown that abstract reviewers were able to correctly guess the responsible research laboratory in about 40% of submissions in blinded review [29, 30]. Also, inter-rater reliability has been shown to be poor among expert reviewers [9, 31]. Bias based on the identity, the origin, or the affiliation of an author can thus never be completely excluded in expert peer review. On the other hand, the scientific impact of a contribution cannot be consistently gauged by formal assessment across a heterogeneous set of research. We suggest that the two approaches (expert review and formal assessment) are complementary.

We found evidence of publication bias, i.e. preferential acceptance of abstracts with positive results, both for clinical trials and for "other clinical research". In addition, the very low proportion of negative outcome reports in BSS (4%) may represent another form of positive outcome bias at the pre-submission stage.

Publication bias is particularly important and well demonstrated in meta-analyses of clinical trials or of studies on causality [32–34]. The effects of positive outcome bias in the selection procedures of scientific meetings, a phenomenon described before for other societies [4, 35], may be more insidious and possibly no less damaging, for example in view of the role of abstract acceptance as possibly the most important predictor of subsequent publication [4, 6–8]. Potential reasons for lack of statistical significance include inappropriate sample size and premature reporting of ongoing studies; both should rightly decrease the chances of acceptance. On the other hand, even under the generous assumption that every study examined a truly present effect with a power of 0.80, about 20% of the reports would be expected to show negative results. Given the very low medians for sample size (Table 2), which make such power unlikely, it is all the more surprising how low the proportion of submitted negative studies was, even in CCT. In addition to the bias shown for abstract acceptance, there appears to be a significant preference for statistically significant results influencing abstract submission.
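
To spell out the arithmetic behind this expectation (the 0.80 figure is the assumption made above, not an estimate from our data):

```latex
% Share of negative reports expected if every submitted study tested a real effect
% with power 0.80; lower power (smaller samples) would make this share larger.
\[
  \Pr(\text{negative result} \mid \text{true effect}) \;=\; \beta \;=\; 1 - \text{power}
  \;=\; 1 - 0.80 \;=\; 0.20 .
\]
```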

A more reassuring finding of our study was the importance of formal abstract quality for abstract acceptance at the DDW. This is in accordance with the findings of other investigators who concluded that how one writes is at least as important as what one writes, or that attention to form reflects attention to content [36]. Formal quality is possibly most important, as well as most easily assessed, in CCT. The rating system we used is less responsive in BSS because of the lower number of applicable items, and formal abstract quality could in fact not be shown to be associated with abstract acceptance in this group. On the other hand, frequent omissions included basic, essential items such as the definition of a research objective or the number of animals studied. The lack of association with abstract acceptance suggests that formal requirements are considered less important in this type of research. Therefore, insufficient control for confounding by formal quality is unlikely to explain the preferential acceptance by country of origin, which was particularly strong in BSS. Another potential limitation of our study is the inability to adequately control for sample size, as this cannot be compared directly across different designs and research questions. Lastly, we examined associations in an observational design, and causal inference must therefore be drawn with caution. An experimental design, such as presenting sets of otherwise identical abstracts in which only selected variables (e.g. the identity of the author or the statistical significance of the study results) are modified, would help to further substantiate the findings. Such an experiment has, for example, been performed to test the peer review process in the educational sciences, and clear evidence of confirmatory bias (i.e. the tendency to prefer results which confirm the views of the reviewer) was found [37].

In conclusion, in the DDW abstract review process there is a remarkable preference for abstracts of North-American origin, and for abstracts presenting statistically significant results. More research is needed to further elucidate the underlying mechanisms.

Abbreviations

BSS:

basic science studies

CCT:

controlled clinical trial

DDW:

Digestive Diseases Week

epid.:

epidemiology

N/W:

North and West

OCR:

other clinical research

physiol.:

physiology

S/E:

South and East

T&D:

therapy and diagnostics

References

  1. Duchini A, Genta RM: From abstract to peer-reviewed article: the fate of abstracts submitted to the DDW. Gastroenterology. 1997, 112: A12.

  2. Timmer A, Blum T, Lankisch PG: Publication bias in gastroenterological research. Pancreas. 2001, 23: 212-215. 10.1097/00006676-200108000-00012.

  3. Scherer RW, Dickersin K, Langenberg P: Full publication of results initially presented in abstracts. A meta-analysis. JAMA. 1994, 272: 158-162. 10.1001/jama.272.2.158.

  4. De Bellefeuille C, Morrison CA, Tannock IF: The fate of abstracts submitted to a cancer meeting: factors which influence presentation and subsequent publication. Ann Oncol. 1992, 3: 187-191.

  5. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith HJ: Publication bias and clinical trials. Control Clin Trials. 1987, 8: 343-353. 10.1016/0197-2456(87)90155-3.

  6. McCormick MC, Holmes JH: Publication of research presented at the pediatric meetings. Change in selection. Am J Dis Child. 1985, 139: 122-126.

  7. Goldman L, Loscalzo A: Fate of cardiology research originally published in abstract form. N Engl J Med. 1980, 303: 255-259.

  8. Callaham ML, Wears RL, Weber EJ, Barton C, Young G: Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998, 280: 254-257. 10.1001/jama.280.3.254.

  9. Conn HO: An experiment in blind program selection. Clin Res. 1974, 22: 128-134.

  10. Dickersin K: The existence of publication bias and risk factors for its occurrence. JAMA. 1990, 263: 1385-1389. 10.1001/jama.263.10.1385.

  11. Begg CB, Berlin JA: Publication bias: a problem in interpreting medical data. J R Stat Soc A. 1988, 151: 419-463.

  12. Friedman LM, Furberg CD, DeMets DL: Fundamentals of clinical trials. St. Louis: Mosby; 1996.

  13. Moher D, Cook DJ, Jadad AR, et al: Assessing the quality of reports of randomised trials: implications for the conduct of meta-analyses. Health Technol Assess. 1999, 3 (i-iv): 1-98.

  14. Cho MK, Bero LA: Instruments for assessing the quality of drug studies published in the medical literature. JAMA. 1994, 272: 101-104. 10.1001/jama.272.2.101.

  15. Ad Hoc Working Group for Critical Appraisal of the Medical Literature: A proposal for more informative abstracts of clinical studies. Ann Intern Med. 1987, 106: 598-604.

  16. Squires BP: Structured abstracts of original research and review articles. Can Med Assoc J. 1990, 143: 619-622.

  17. [Harenberg dictionary of countries, '94/95]. Dortmund: Harenberg Kommunikation Verlags- und Mediengesellschaft mbH & Co KG; 1994.

  18. Streiner DL, Norman GR: Health measurement scales. A practical guide to their development and use. Oxford, New York, Tokyo: Oxford University Press; 1989.

  19. Kleinbaum DG, Kupper LL, Muller KE: Applied regression analysis and other multivariate methods. Belmont, California: Duxbury Press; 1988.

  20. Campbell FM: National bias: a comparison of citation practices by health professionals. Bull Med Libr Assoc. 1990, 78: 376-382.

  21. Nylenna M, Riis P, Karlsson Y: Multiple blinded reviews of the same two manuscripts. Effects of referee characteristics and publication language. JAMA. 1994, 272: 149-151. 10.1001/jama.272.2.149.

  22. Egger M, Zellweger-Zaehner T, Schneider M, Junker C, Lengeler C, Antes G: Language bias in randomised controlled trials published in English and German. Lancet. 1997, 350: 326-329. 10.1016/S0140-6736(97)02419-7.

  23. Sanberg PR, Borlongan CV, Nishino H: Beyond the language barrier. Nature. 1996, 384: 608. 10.1038/384608a0.

  24. Frame JD, Francis N: The international distribution of biomedical publications. Fed Proc. 1977, 36: 1790-1795.

  25. Gregoire G, Derderian F, Le Lorier J: Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias?. J Clin Epidemiol. 1995, 48: 159-163. 10.1016/0895-4356(94)00098-B.

  26. Day M: The price of prejudice. New Scientist. 1997, 22-23.

  27. Carter-Sigglow J: Beyond the language barrier. Nature. 1997, 385: 764. 10.1038/385764d0.

  28. Eloubeidi MA, Wade SB, Provenzale D: Factors associated with acceptance and full publication of GI endoscopic research originally published in abstract form. Gastrointest Endosc. 2001, 53: 275-282. 10.1067/mge.2001.113383.

  29. Yankauer A: How blind is blind review?. Am J Public Health. 1991, 81: 843-845.

  30. Fisher M, Friedman SB, Strauss B: The effects of blinding on acceptance of research papers by peer review. JAMA. 1994, 272: 143-146. 10.1001/jama.272.2.143.

  31. Sacks JJ, Peterson DE: Improving conference abstract selection. Epidemiology. 1994, 5: 636-637.

  32. Begg CB, Berlin JA: Review: publication bias and dissemination of clinical research. J Natl Cancer Inst. 1989, 81: 107-115.

  33. Bero LA, Glantz SA, Rennie D: Publication bias and public health policy on environmental tobacco smoke. JAMA. 1994, 272: 133-136. 10.1001/jama.272.2.133.

  34. Vandenbroucke JP: Passive smoking and lung cancer: a publication bias?. Br Med J. 1988, 296: 319-320.

  35. Koren G, Klein N: Bias against negative studies in newspaper reports of medical research. JAMA. 1991, 266: 1824. 10.1001/jama.266.13.1824.

  36. Panush RS, Delafuente JC, Connelly CS, et al: Profile of a meeting: how abstracts are written and reviewed. J Rheumatol. 1989, 16: 145-147.

  37. Mahoney MJ: Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cog Ther Res. 1977, 1: 161-175.

Acknowledgments

Grant support

A. Timmer was a fellow of the German Academic Exchange Service (DAAD). The study was supported by a grant from the Calgary Regional Health Authority R&D and by Searle Canada.

Author information

Corresponding author

Correspondence to Antje Timmer.

Additional information

Competing interests

None declared.

Cite this article

Timmer, A., Hilsden, R.J. & Sutherland, L.R. Determinants of abstract acceptance for the Digestive Diseases Week – a cross sectional study. BMC Med Res Methodol 1, 13 (2001). https://doi.org/10.1186/1471-2288-1-13
