- A free and open platform for sharing MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data—OpenNeuro. (n.d.). OpenNeuro. Retrieved 9 July 2021, from https://openneuro.org/
- Abele-Brehm, A. E., Gollwitzer, M., Steinberg, U., & Schönbrodt, F. D. (2019). Attitudes Toward Open Science and Public Data Sharing: A Survey Among Members of the German Psychological Society. Social Psychology, 50(4), 252–260. https://doi.org/10.1027/1864-9335/a000384
- Aczel, B., Szaszi, B., Nilsonne, G., Van den Akker, O., Albers, C. J., van Assen, M. A. L. M., Bastiaansen, J. A., Benjamin, D. J., Boehm, U., Botvinik-Nezer, R., Bringmann, L. F., Busch, N., Caruyer, E., Cataldo, A. M., Cowan, N., Delios, A., van Dongen, N. N. N., Donkin, C., van Doorn, J., … Wagenmakers, E.-J. (2021). Guidance for conducting and reporting multi-analyst studies [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/5ecnh
- Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., Chambers, C. D., Fisher, A., Gelman, A., Gernsbacher, M. A., Ioannidis, J. P., Johnson, E., Jonas, K., Kousta, S., Lilienfeld, S. O., Lindsay, D. S., Morey, C. C., Munafò, M., Newell, B. R., … Wagenmakers, E.-J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4–6. https://doi.org/10.1038/s41562-019-0772-6
- Albayrak-Aydemir, N. (2018a, April 16). Diversity helps but decolonisation is the key to equality in higher education. Contemporary Issues in Teaching and Learning. https://lsepgcertcitl.wordpress.com/2018/04/16/diversity-helps-but-decolonisation-is-the-key-to-equality-in-higher-education/
- Albayrak-Aydemir, N. (2018b, November 29). Academics’ role on the future of higher education: Important but unrecognised. Contemporary Issues in Teaching and Learning. https://lsepgcertcitl.wordpress.com/2018/11/29/academics-role-on-the-future-of-higher-education-important-but-unrecognised/
- Albayrak-Aydemir, N. (2020, February 20). The hidden costs of being a scholar from the Global South. LSE Higher Education. https://blogs.lse.ac.uk/highereducation/2020/02/20/the-hidden-costs-of-being-a-scholar-from-the-global-south/
- Albayrak-Aydemir, N., & Okoroji, C. (n.d.). Facing the challenges of postgraduate study as a minority student (A Guide for Psychology Postgraduates: Surviving Postgraduate Study, pp. 63–66). The British Psychological Society.
- Ali, M. J. (2021). Understanding the Altmetrics. Seminars in Ophthalmology, 1–3. https://doi.org/10.1080/08820538.2021.1930806
- ALLEA - All European Academies. (2017). The European Code of Conduct for Research Integrity (Revised Edition). ALLEA. https://allea.org/code-of-conduct/
- American Psychological Association, Task Force on Socioeconomic Status. (2007). Report of the APA Task Force on Socioeconomic Status. American Psychological Association.
- Anderson, A. A., Scheufele, D. A., Brossard, D., & Corley, E. A. (2012). The Role of Media and Deference to Scientific Authority in Cultivating Trust in Sources of Information about Emerging Technologies. International Journal of Public Opinion Research, 24(2), 225–237. https://doi.org/10.1093/ijpor/edr032
- Angrist, J. D., & Pischke, J.-S. (2010). The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics. Journal of Economic Perspectives, 24(2), 3–30. https://doi.org/10.1257/jep.24.2.3
- Arslan, R. C. (2019). How to Automatically Document Data With the codebook Package to Facilitate Data Reuse. Advances in Methods and Practices in Psychological Science, 2(2), 169–187. https://doi.org/10.1177/2515245919838783
- Australian Reproducibility Network. (n.d.). Australian Reproducibility Network. Retrieved 10 July 2021, from http://www.aus-rn.org/
- Azevedo, F., & Jost, J. T. (2021). The ideological basis of antiscientific attitudes: Effects of authoritarianism, conservatism, religiosity, social dominance, and system justification. Group Processes & Intergroup Relations, 24(4), 518–549. https://doi.org/10.1177/1368430221990104
- Bak, H.-J. (2001). Education and Public Attitudes toward Science: Implications for the ‘Deficit Model’ of Education and Support for Science and Technology. Social Science Quarterly, 82(4), 779–795. https://www.jstor.org/stable/42955760
- Banks, G. C., Rogelberg, S. G., Woznyj, H. M., Landis, R. S., & Rupp, D. E. (2016). Editorial: Evidence on Questionable Research Practices: The Good, the Bad, and the Ugly. Journal of Business and Psychology, 31(3), 323–338. https://doi.org/10.1007/s10869-016-9456-7
- Barba, L. A. (2018). Terminologies for Reproducible Research. arXiv:1802.03311 [cs]. http://arxiv.org/abs/1802.03311
- Bardsley, N. (2018). What lessons does the “replication crisis” in psychology hold for experimental economics? In A. Lewis (Ed.), The Cambridge Handbook of Psychology and Economic Behavior (2nd ed.). Cambridge University Press.
- Barnes, R. M., Johnston, H. M., MacKenzie, N., Tobin, S. J., & Taglang, C. M. (2018). The effect of ad hominem attacks on the evaluation of claims promoted by scientists. PLOS ONE, 13(1), e0192025. https://doi.org/10.1371/journal.pone.0192025
- Bartoš, F., & Schimmack, U. (2020). Z-Curve.2.0: Estimating Replication Rates and Discovery Rates [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/urgtn
- Bateman, I., Kahneman, D., Munro, A., Starmer, C., & Sugden, R. (2005). Testing competing models of loss aversion: An adversarial collaboration. Journal of Public Economics, 89(8), 1561–1580. https://doi.org/10.1016/j.jpubeco.2004.06.013
- Baturay, M. H. (2015). An Overview of the World of MOOCs. Procedia - Social and Behavioral Sciences, 174, 427–433. https://doi.org/10.1016/j.sbspro.2015.01.685
- Bazeley, P. (2003). Defining ‘Early Career’ in Research. Higher Education, 45(3), 257–279. https://doi.org/10.1023/A:1022698529612
- Beffara Bret, B., Beffara Bret, A., & Nalborczyk, L. (2021). A fully automated, transparent, reproducible, and blind protocol for sequential analyses. Meta-Psychology, 5. https://doi.org/10.15626/MP.2018.869
- Behrens, J. T. (1997). Principles and procedures of exploratory data analysis. Psychological Methods, 2(2), 131–160. https://doi.org/10.1037/1082-989X.2.2.131
- Beller, S., & Bender, A. (2017). Theory, the Final Frontier? A Corpus-Based Analysis of the Role of Theory in Psychological Articles. Frontiers in Psychology, 8, 951. https://doi.org/10.3389/fpsyg.2017.00951
- Benoit, K., Conway, D., Lauderdale, B. E., Laver, M., & Mikhaylov, S. (2016). Crowd-sourced Text Analysis: Reproducible and Agile Production of Political Data. American Political Science Review, 110(2), 278–295. https://doi.org/10.1017/S0003055416000058
- Bhopal, R., Rankin, J., McColl, E., Thomas, L., Kaner, E., Stacy, R., Pearson, P., Vernon, B., & Rodgers, H. (1997). The vexed question of authorship: Views of researchers in a British medical faculty. BMJ, 314(7086), 1009. https://doi.org/10.1136/bmj.314.7086.1009
- BIAS | Definition of BIAS by Oxford Dictionary on Lexico.com. (n.d.). Lexico Dictionaries | English. Retrieved 9 July 2021, from https://www.lexico.com/definition/bias
- BIDS. (2020a). About BIDS. Brain Imaging Data Structure. https://bids.neuroimaging.io/
- BIDS. (2020b). Modality agnostic files—Brain Imaging Data Structure v1.6.0. Brain Imaging Data Structure. https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html
- Bik, E. M., Casadevall, A., & Fang, F. C. (2016). The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications. mBio, 7(3). https://doi.org/10.1128/mBio.00809-16
- Bilder, G. (2013, September 20). DOIs unambiguously and persistently identify published, trustworthy, citable online scholarly literature. Right? Crossref. https://www.crossref.org/blog/dois-unambiguously-and-persistently-identify-published-trustworthy-citable-online-scholarly-literature-right/
- Bishop, D. V. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research: The 47th Sir Frederic Bartlett Lecture. Quarterly Journal of Experimental Psychology, 73(1), 1–19. https://doi.org/10.1177/1747021819886519
- Björneborn, L., & Ingwersen, P. (2004). Toward a basic framework for webometrics. Journal of the American Society for Information Science and Technology, 55(14), 1216–1227. https://doi.org/10.1002/asi.20077
- Blohowiak, B. B., Cohoon, J., de-Wit, L., Eich, E., Farach, F. J., Hasselman, F., Holcombe, A. O., Humphreys, M., Lewis, M., & Nosek, B. A. (2013). Badges to Acknowledge Open Practices. https://osf.io/tvyxz/
- BMJ. (2015, September 22). Introducing ‘How to write and publish a Study Protocol’ using BMJ’s new eLearning programme: Research to Publication. BMJ Open. https://blogs.bmj.com/bmjopen/2015/09/22/introducing-how-to-write-and-publish-a-study-protocol-using-bmjs-new-elearning-programme-research-to-publication/
- Boivin, A., Richards, T., Forsythe, L., Grégoire, A., L’Espérance, A., Abelson, J., & Carman, K. L. (2018). Evaluating patient and public involvement in research. BMJ, k5147. https://doi.org/10.1136/bmj.k5147
- Bol, T., de Vaan, M., & van de Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115(19), 4887–4890. https://doi.org/10.1073/pnas.1719557115
- Bollen, K. A. (1989). Structural equations with latent variables. Wiley.
- Borenstein, M. (Ed.). (2009). Introduction to meta-analysis. John Wiley & Sons.
- Bornmann, L., Ganser, C., Tekles, A., & Leydesdorff, L. (2019). Does the $h_\alpha$ index reinforce the Matthew effect in science? Agent-based simulations using Stata and R. arXiv:1905.11052 [physics]. http://arxiv.org/abs/1905.11052
- Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The Concept of Validity. Psychological Review, 111(4), 1061–1071. https://doi.org/10.1037/0033-295X.111.4.1061
- Borsboom, D., van der Maas, H., Dalege, J., Kievit, R., & Haig, B. (2020). Theory Construction Methodology: A practical framework for theory formation in psychology [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/w5tp8
- Bortoli, S. (2021, April 1). NIHR Guidance on co-producing a research project. Learning For Involvement. https://www.learningforinvolvement.org.uk/?opportunity=nihr-guidance-on-co-producing-a-research-project
- Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission. PLOS Computational Biology, 13(5), e1005473. https://doi.org/10.1371/journal.pcbi.1005473
- Bouvy, J. C., & Mujoomdar, M. (2019). All-Male Panels and Gender Diversity of Issue Panels and Plenary Sessions at ISPOR Europe. PharmacoEconomics - Open, 3(3), 419–422. https://doi.org/10.1007/s41669-019-0153-0
- Box, G. E. P. (1976). Science and Statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949
- Bramoulle, Y., & Saint-Paul, G. (2007). Research Cycles. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.965816
- Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: Attribution, contribution, collaboration, and credit. Learned Publishing, 28(2), 151–155. https://doi.org/10.1087/20150211
- Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van ’t Veer, A. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224. https://doi.org/10.1016/j.jesp.2013.10.005
- Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00291
- Brewer, P. R., & Ley, B. L. (2013). Whose Science Do You Believe? Explaining Trust in Sources of Scientific Information About the Environment. Science Communication, 35(1), 115–137. https://doi.org/10.1177/1075547012441691
- Breznau, N., Rinke, E. M., Wuttke, A., Adem, M., Adriaans, J., Alvarez-Benjumea, A., Andersen, H. K., Auer, D., Azevedo, F., Bahnsen, O., Balzer, D., Bauer, G., Bauer, P., Baumann, M., Baute, S., Benoit, V., Bernauer, J., Berning, C., Berthold, A., … Nguyen, H. H. V. (2021). Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/cd5j9
- Breznau, N., Rinke, E. M., Wuttke, A., Nguyen, H. H. V., Adem, M., Adriaans, J., Akdeniz, E., Alvarez-Benjumea, A., Andersen, H. K., Auer, D., Azevedo, F., Bahnsen, O., Bai, L., Balzer, D., Bauer, G., Bauer, P., Baumann, M., Baute, S., Benoit, V., … Żółtak, T. (2021). How Many Replicators Does It Take to Achieve Reliability? Investigating Researcher Variability in a Crowdsourced Replication [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/j7qta
- Brod, M., Tesler, L. E., & Christensen, T. L. (2009). Qualitative research and content validity: Developing best practices based on science and experience. Quality of Life Research, 18(9), 1263–1278. https://doi.org/10.1007/s11136-009-9540-9
- Brooks, T. A. (1985). Private acts and public objects: An investigation of citer motivations. Journal of the American Society for Information Science, 36(4), 223–229. https://doi.org/10.1002/asi.4630360402
- Brown, J. (2010). An introduction to overlay journals (Repositories Support Project, pp. 1–6). University College London.
- Brown, N. J. L., & Heathers, J. A. J. (2017). The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology. Social Psychological and Personality Science, 8(4), 363–369. https://doi.org/10.1177/1948550616673876
- Brown, N., Thompson, P., & Leigh, J. S. (2018). Making Academia More Accessible. Journal of Perspectives in Applied Academic Practice, 6(2), 82–90. https://doi.org/10.14297/jpaap.v6i2.348
- Brulé, J. F., & Blount, A. (1989). Knowledge acquisition. McGraw-Hill.
- Brunner, J., & Schimmack, U. (2020). Estimating Population Mean Power Under Conditions of Heterogeneity and Selection for Significance. Meta-Psychology, 4. https://doi.org/10.15626/MP.2018.874
- Bruns, S. B., & Ioannidis, J. P. A. (2016). P-Curve and p-Hacking in Observational Research. PLOS ONE, 11(2), e0149144. https://doi.org/10.1371/journal.pone.0149144
- Budapest Open Access Initiative | Read the Budapest Open Access Initiative. (2002, February 14). https://www.budapestopenaccessinitiative.org/read
- Busse, C., Kach, A. P., & Wagner, S. M. (2017). Boundary Conditions: What They Are, How to Explore Them, Why We Need Them, and When to Consider Them. Organizational Research Methods, 20(4), 574–609. https://doi.org/10.1177/1094428116641191
- Button, K. S., Chambers, C. D., Lawrence, N., & Munafò, M. R. (2020). Grassroots Training for Reproducible Science: A Consortium-Based Approach to the Empirical Dissertation. Psychology Learning & Teaching, 19(1), 77–90. https://doi.org/10.1177/1475725719857659
- Button, K. S., Lawrence, N., Chambers, C. D., & Munafò, M. R. (2016). Instilling scientific rigour at the grassroots. The Psychologist, 29(16), 158–167.
- Byrne, J. A., & Christopher, J. (2020). Digital magic, or the dark arts of the 21st century—How can journals and peer reviewers detect manuscripts and publications from paper mills? FEBS Letters, 594(4), 583–589. https://doi.org/10.1002/1873-3468.13747
- Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54(4), 297–312. https://doi.org/10.1037/h0040950
- Campbell, D. T., & Stanley, J. C. (2011). Experimental and quasi-experimental designs for research. Wadsworth.
- Carp, J. (2012). On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments. Frontiers in Neuroscience, 6. https://doi.org/10.3389/fnins.2012.00149
- Carsey, T. M. (2014). Making DA-RT a Reality. PS: Political Science & Politics, 47(1), 72–77. https://doi.org/10.1017/S1049096513001753
- Carter, A., Tilling, K., & Munafo, M. R. (2021). Considerations of sample size and power calculations given a range of analytical scenarios [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/tcqrn
- Case, C. M. (1928). Scholarship in sociology. Sociology and Social Research, 12, 323–340.
- Cassidy, S. A., Dimova, R., Giguère, B., Spence, J. R., & Stanley, D. J. (2019). Failing Grade: 89% of Introduction-to-Psychology Textbooks That Define or Explain Statistical Significance Do So Incorrectly. Advances in Methods and Practices in Psychological Science, 2(3), 233–239. https://doi.org/10.1177/2515245919858072
- Center for Open Science. (n.d.). Registered Reports. Retrieved 10 July 2021, from https://www.cos.io/initiatives/registered-reports
- Centre for Open Science. (n.d.). Show Your Work. Share Your Work. Centre for Open Science. https://www.cos.io/
- Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
- Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered Reports: Realigning incentives in scientific publishing. Cortex, 66, A1–A2. https://doi.org/10.1016/j.cortex.2015.03.022
- Chambers, C. D., & Tzavella, L. (2020). The past, present, and future of Registered Reports [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/43298
- Chartier, C. R., Riegelman, A., & McCarthy, R. J. (2018). StudySwap: A Platform for Interlab Replication, Collaboration, and Resource Exchange. Advances in Methods and Practices in Psychological Science, 1(4), 574–579. https://doi.org/10.1177/2515245918808767
- Chuard, P. J. C., Vrtílek, M., Head, M. L., & Jennions, M. D. (2019). Evidence that nonsignificant results are sometimes preferred: Reverse P-hacking or selective reporting? PLOS Biology, 17(1), e3000127. https://doi.org/10.1371/journal.pbio.3000127
- CKAN - The open source data management system. (n.d.). Ckan. Retrieved 9 July 2021, from https://ckan.org/
- Claerbout, J. F., & Karrenbach, M. (1992). Electronic documents give reproducible research a new meaning. SEG Technical Program Expanded Abstracts 1992, 601–604. https://doi.org/10.1190/1.1822162
- Clark, H., Elsherif, M. M., & Leavens, D. A. (2019). Ontogeny vs. phylogeny in primate/canid comparisons: A meta-analysis of the object choice task. Neuroscience & Biobehavioral Reviews, 105, 178–189. https://doi.org/10.1016/j.neubiorev.2019.06.001
- Closed access. (n.d.). CASRAI. Retrieved 9 July 2021, from https://casrai.org/term/closed-access/
- Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. https://doi.org/10.1037/h0045186
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). L. Erlbaum Associates.
- Cohn, J. P. (2008). Citizen Science: Can Volunteers Do Real Research? BioScience, 58(3), 192–197. https://doi.org/10.1641/B580303
- Collaborative Assessment for Trustworthy Science | The repliCATS project. (n.d.). University of Melbourne. Retrieved 10 July 2021, from https://replicats.research.unimelb.edu.au/
- Committee on Reproducibility and Replicability in Science, Board on Behavioral, Cognitive, and Sensory Sciences, Committee on National Statistics, Division of Behavioral and Social Sciences and Education, Nuclear and Radiation Studies Board, Division on Earth and Life Studies, Board on Mathematical Sciences and Analytics, Committee on Applied and Theoretical Statistics, Division on Engineering and Physical Sciences, Board on Research Data and Information, Committee on Science, Engineering, Medicine, and Public Policy, Policy and Global Affairs, & National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and Replicability in Science (p. 25303). National Academies Press. https://doi.org/10.17226/25303
- Confederation Of Open Access Repositories. (2020). COAR Community Framework for Best Practices in Repositories (Version 1). Zenodo. https://doi.org/10.5281/ZENODO.4110829
- Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Rand McNally College Pub. Co.
- Corley, K. G., & Gioia, D. A. (2011). Building Theory about Theory Building: What Constitutes a Theoretical Contribution? Academy of Management Review, 36(1), 12–32. https://doi.org/10.5465/amr.2009.0486
- Cornwall, A., & Jewkes, R. (1995). What is participatory research? Social Science & Medicine, 41(12), 1667–1676. https://doi.org/10.1016/0277-9536(95)00127-S
- Correction or retraction? (2006). Nature, 444(7116), 123–124. https://doi.org/10.1038/444123b
- Corti, L. (2019). Managing and sharing research data: A guide to good practice (2nd edition). SAGE Publications.
- Cowan, N., Belletier, C., Doherty, J. M., Jaroslawska, A. J., Rhodes, S., Forsberg, A., Naveh-Benjamin, M., Barrouillet, P., Camos, V., & Logie, R. H. (2020). How Do Scientific Views Change? Notes From an Extended Adversarial Collaboration. Perspectives on Psychological Science, 15(4), 1011–1025. https://doi.org/10.1177/1745691620906415
- CRediT - Contributor Roles Taxonomy. (n.d.). Casrai. Retrieved 9 July 2021, from https://casrai.org/credit/
- Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 8. https://chicagounbound.uchicago.edu/uclf/vol1989/iss1/8
- Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957
- Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569. https://doi.org/10.1002/asi.1097
- Crosetto, P. (2021, April 12). Is MDPI a predatory publisher? Paolo Crosetto. https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/
- Crutzen, R., Ygram Peters, G.-J., & Mondschein, C. (2019). Why and how we should care about the General Data Protection Regulation. Psychology & Health, 34(11), 1347–1357. https://doi.org/10.1080/08870446.2019.1606222
- Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., Orben, A., Parsons, S., & Schulte-Mecklenbeck, M. (2019). Seven Easy Steps to Open Science: An Annotated Reading List. Zeitschrift Für Psychologie, 227(4), 237–248. https://doi.org/10.1027/2151-2604/a000387
- Curran, P. J. (2009). The seemingly quixotic pursuit of a cumulative psychological science: Introduction to the special issue. Psychological Methods, 14(2), 77–80. https://doi.org/10.1037/a0015972
- Curry, S. (2012, August 13). Sick of Impact Factors. Reciprocal Space. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
- d’Espagnat, B. (2008). Is Science Cumulative? A Physicist Viewpoint. In L. Soler, H. Sankey, & P. Hoyningen-Huene (Eds.), Rethinking Scientific Change and Theory Comparison (pp. 145–151). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6279-7_10
- Data Management Expert Guide—CESSDA TRAINING. (n.d.). CESSDA. Retrieved 10 July 2021, from https://www.cessda.eu/Training/Training-Resources/Library/Data-Management-Expert-Guide
- Data management plans | Stanford Libraries. (n.d.). Stanford Libraries. Retrieved 9 July 2021, from https://library.stanford.edu/research/data-management-services/data-management-plans
- Data protection. (n.d.). European Commission. Retrieved 9 July 2021, from https://ec.europa.eu/info/law/law-topic/data-protection_en
- Datacite Metadata Schema. (n.d.). DataCite Schema. Retrieved 9 July 2021, from https://schema.datacite.org/
- Davies, G. M., & Gray, A. (2015). Don’t let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecology and Evolution, 5(22), 5295–5304. https://doi.org/10.1002/ece3.1782
- Day, S., Rennie, S., Luo, D., & Tucker, J. D. (2020). Open to the public: Paywalls and the public rationale for open access medical research publishing. Research Involvement and Engagement, 6(1), 8. https://doi.org/10.1186/s40900-020-0182-y
- Del Giudice, M., & Gangestad, S. W. (2021). A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions. Advances in Methods and Practices in Psychological Science, 4(1), 251524592095492. https://doi.org/10.1177/2515245920954925
- Deutsche Forschungsgemeinschaft. (2019). Guidelines for Safeguarding Good Research Practice. Code of Conduct. https://doi.org/10.5281/ZENODO.3923602
- DeVellis, R. F. (2017). Scale development: Theory and applications (Fourth edition). SAGE.
- Devezer, B., Navarro, D. J., Vandekerckhove, J., & Ozge Buzbas, E. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8(3), 200805. https://doi.org/10.1098/rsos.200805
- Dickersin, K., & Min, Y.-I. (1993). Publication Bias: The Problem That Won’t Go Away. Annals of the New York Academy of Sciences, 703(1), 135–148. https://doi.org/10.1111/j.1749-6632.1993.tb26343.x
- Dienes, Z. (2008). Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference. Palgrave Macmillan. https://books.google.ca/books?id=qCQdBQAAQBAJ
- Dienes, Z. (2011). Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspectives on Psychological Science, 6(3), 274–290. https://doi.org/10.1177/1745691611406920
- Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00781
- Dienes, Z. (2016). How Bayes factors change scientific practice. Journal of Mathematical Psychology, 72, 78–89. https://doi.org/10.1016/j.jmp.2015.10.003
- Digital Object Identifier System Handbook. (n.d.). DOI. Retrieved 9 July 2021, from https://www.doi.org/hb.html
- Directory of Open Access Journals. (n.d.). Retrieved 11 July 2021, from https://doaj.org/apply/transparency/
- Doll, R., & Hill, A. B. (1954). The Mortality of Doctors in Relation to Their Smoking Habits. BMJ, 1(4877), 1451–1455. https://doi.org/10.1136/bmj.1.4877.1451
- Domov | SKRN (Slovak Reproducibility Network). (n.d.). SKRN. Retrieved 10 July 2021, from https://slovakrn.wixsite.com/skrn
- Download JASP. (n.d.). JASP - Free and User-Friendly Statistical Software. Retrieved 9 July 2021, from https://jasp-stats.org/download/
- Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38(1), 105–123.
- Du Bois, W. E. B. (2018). The souls of Black folk: Essays and sketches.
- Duval, S., & Tweedie, R. (2000a). A Nonparametric ‘Trim and Fill’ Method of Accounting for Publication Bias in Meta-Analysis. Journal of the American Statistical Association, 95(449), 89. https://doi.org/10.2307/2669529
- Duval, S., & Tweedie, R. (2000b). Trim and Fill: A Simple Funnel-Plot-Based Method of Testing and Adjusting for Publication Bias in Meta-Analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
- Duyx, B., Swaen, G. M. H., Urlings, M. J. E., Bouter, L. M., & Zeegers, M. P. (2019). The strong focus on positive results in abstracts may cause bias in systematic reviews: A case study on abstract reporting bias. Systematic Reviews, 8(1), 174. https://doi.org/10.1186/s13643-019-1082-9
- Eagly, A. H., & Riger, S. (2014). Feminism and psychology: Critiques of methods and epistemology. American Psychologist, 69(7), 685–702. https://doi.org/10.1037/a0037372
- Easterbrook, S. M. (2014). Open code for open science? Nature Geoscience, 7(11), 779–781. https://doi.org/10.1038/ngeo2283
- Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., Baranski, E., Bernstein, M. J., Bonfiglio, D. B. V., Boucher, L., Brown, E. R., Budiman, N. I., Cairo, A. H., Capaldi, C. A., Chartier, C. R., Chung, J. M., Cicero, D. C., Coleman, J. A., Conway, J. G., … Nosek, B. A. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/j.jesp.2015.10.012
- Eldermire, E. (n.d.). LibGuides: Measuring your research impact: i10-Index. Retrieved 9 July 2021, from https://guides.library.cornell.edu/impact/author-impact-10
- Eley, A. R. (Ed.). (2012). Becoming a successful early career researcher. Routledge.
- Ellemers, N. (2021). Science as collaborative knowledge generation. British Journal of Social Psychology, 60(1), 1–28. https://doi.org/10.1111/bjso.12430
- Elliott, K. C., & Resnik, D. B. (2019). Making Open Science Work for Science and Society. Environmental Health Perspectives, 127(7), 075002. https://doi.org/10.1289/EHP4808
- Elm, E. von, Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P. (2007). Strengthening the reporting of observational studies in epidemiology (STROBE) statement: Guidelines for reporting observational studies. BMJ, 335(7624), 806–808. https://doi.org/10.1136/bmj.39335.541782.AD
- Elman, C., Gerring, J., & Mahoney, J. (Eds.). (2020). The production of knowledge: Enhancing progress in social science. Cambridge University Press.
- Elmore, S. A. (2018). Preprints: What Role Do These Have in Communicating Scientific Results? Toxicologic Pathology, 46(4), 364–365. https://doi.org/10.1177/0192623318767322
- Epskamp, S., & Nuijten, M. B. (2018). statcheck: Extract Statistics from Articles and Recompute p Values (1.3.0) [Computer software]. https://CRAN.R-project.org/package=statcheck
- Esterling, K., Brady, D., & Schwitzgebel, E. (2021). The Necessity of Construct and External Validity for Generalized Causal Claims [Preprint]. Open Science Framework. https://doi.org/10.31219/osf.io/2s8w5
- Etz, A., Gronau, Q. F., Dablander, F., Edelsbrunner, P. A., & Baribault, B. (2018). How to become a Bayesian in eight easy steps: An annotated reading list. Psychonomic Bulletin & Review, 25(1), 219–234. https://doi.org/10.3758/s13423-017-1317-5
- European Commission. (2021). Responsible Research & Innovation | Horizon 2020. https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation
- Evans, G., & Durant, J. (1995). The relationship between knowledge and attitudes in the public understanding of science in Britain. Public Understanding of Science, 4(1), 57–74. https://doi.org/10.1088/0963-6625/4/1/004
- Evans, O., & Rubin, M. (2021). In a Class on Their Own: Investigating the Role of Social Integration in the Association Between Social Class and Mental Well-Being. Personality and Social Psychology Bulletin, 014616722110211. https://doi.org/10.1177/01461672211021190
- Fanelli, D. (2010). Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLoS ONE, 5(4), e10271. https://doi.org/10.1371/journal.pone.0010271
- Fanelli, D. (2018). Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11), 2628–2631. https://doi.org/10.1073/pnas.1708272114
- Farrow, R. (2017). Open education and critical pedagogy. Learning, Media and Technology, 42(2), 130–146. https://doi.org/10.1080/17439884.2016.1113991
- Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
- Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Ferson, S., Joslyn, C. A., Helton, J. C., Oberkampf, W. L., & Sentz, K. (2004). Summary from the epistemic uncertainty workshop: Consensus amid diversity.
Reliability Engineering & System Safety,
85(1–3), 355–369.
https://doi.org/10.1016/j.ress.2004.03.023
-
-
Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate.
Perspectives on Psychological Science,
7(6), 661–669.
https://doi.org/10.1177/1745691612462587
-
-
Fiedler, K., & Schwarz, N. (2016). Questionable Research Practices Revisited.
Social Psychological and Personality Science,
7(1), 45–52.
https://doi.org/10.1177/1948550615612150
-
-
Filipe, A., Renedo, A., & Marston, C. (2017). The co-production of what? Knowledge, values, and social relations in health care.
PLOS Biology,
15(5), e2001403.
https://doi.org/10.1371/journal.pbio.2001403
-
-
Findley, M. G., Jensen, N. M., Malesky, E. J., & Pepinsky, T. B. (2016). Can Results-Free Review Reduce Publication Bias? The Results and Implications of a Pilot Study.
Comparative Political Studies,
49(13), 1667–1703.
https://doi.org/10.1177/0010414016655539
-
-
Finlay, L., & Gough, B. (Eds.). (2003). Reflexivity: A practical guide for researchers in health and social sciences. Blackwell Science.
-
-
Flake, J. K., & Fried, E. I. (2020). Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them.
Advances in Methods and Practices in Psychological Science,
3(4), 456–465.
https://doi.org/10.1177/2515245920952393
-
-
Fletcher-Watson, S., Adams, J., Brook, K., Charman, T., Crane, L., Cusack, J., Leekam, S., Milton, D., Parr, J. R., & Pellicano, E. (2019). Making the future together: Shaping autism research through meaningful participation.
Autism,
23(4), 943–953.
https://doi.org/10.1177/1362361318786721
-
-
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. (2013). emcee: The MCMC Hammer.
Publications of the Astronomical Society of the Pacific,
125(925), 306–312.
https://doi.org/10.1086/670067
-
-
FORRT. (2019).
Introducing a Framework for Open and Reproducible Research Training (FORRT) [Preprint]. Open Science Framework.
https://doi.org/10.31219/osf.io/bnh7p
-
-
FORRT - Framework for Open and Reproducible Research Training. (n.d.). FORRT. Retrieved 9 July 2021, from
https://forrt.org/
-
-
Foster, E. D., & Deardorff, A. (2017). Open Science Framework (OSF).
Journal of the Medical Library Association,
105(2).
https://doi.org/10.5195/JMLA.2017.88
-
-
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer.
Science,
345(6203), 1502–1505.
https://doi.org/10.1126/science.1255484
-
-
Frank, M. C., Bergelson, E., Bergmann, C., Cristia, A., Floccia, C., Gervain, J., Hamlin, J. K., Hannon, E. E., Kline, M., Levelt, C., Lew-Williams, C., Nazzi, T., Panneton, R., Rabagliati, H., Soderstrom, M., Sullivan, J., Waxman, S., & Yurovsky, D. (2017). A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building.
Infancy,
22(4), 421–435.
https://doi.org/10.1111/infa.12182
-
-
Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects.
Research Policy,
43(1), 1–20.
https://doi.org/10.1016/j.respol.2013.07.005
-
-
Fraser, H., Bush, M., Wintle, B., Mody, F., Smith, E. T., Hanea, A., Gould, E., Hemming, V., Hamilton, D. G., Rumpff, L., Wilkinson, D. P., Pearson, R., Singleton Thorn, F., Ashton, R., Willcox, A., Gray, C. T., Head, A., Ross, M., Groenewegen, R., … Fidler, F. (2021).
Predicting reliability through structured expert elicitation with repliCATS (Collaborative Assessments for Trustworthy Science) [Preprint]. MetaArXiv.
https://doi.org/10.31222/osf.io/2pczv
-
-
Free Our Knowledge. (n.d.).
About. Free Our Knowledge. Retrieved 9 July 2021, from
https://freeourknowledge.org/about/
-
-
Frigg, R., & Hartmann, S. (2020). Models in Science. In E. N. Zalta (Ed.),
The Stanford Encyclopedia of Philosophy (Spring 2020). Metaphysics Research Lab, Stanford University.
https://plato.stanford.edu/archives/spr2020/entries/models-science/
-
-
Frith, U. (2020). Fast Lane to Slow Science.
Trends in Cognitive Sciences,
24(1), 1–2.
https://doi.org/10.1016/j.tics.2019.10.007
-
-
Galligan, F., & Dyas-Correia, S. (2013). Altmetrics: Rethinking the Way We Measure.
Serials Review,
39(1), 56–61.
https://doi.org/10.1080/00987913.2013.10765486
-
-
Garson, G. D. (2012). Testing Statistical Assumptions (2012 edition). North Carolina State University.
-
-
Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors.
Perspectives on Psychological Science,
9(6), 641–651.
https://doi.org/10.1177/1745691614551642
-
-
Gelman, A., & Loken, E. (2013).
The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time [Unpublished manuscript]. Department of Statistics, Columbia University.
http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
-
-
Gelman, A., & Stern, H. (2006). The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant.
The American Statistician,
60(4), 328–331.
https://doi.org/10.1198/000313006X152649
-
-
Generalizability. (2018). In B. B. Frey (Ed.),
The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. SAGE Publications, Inc.
https://doi.org/10.4135/9781506326139.n284
-
-
Gentleman, R. (2005). Reproducible Research: A Bioinformatics Case Study.
Statistical Applications in Genetics and Molecular Biology,
4(1).
https://doi.org/10.2202/1544-6115.1034
-
-
Get Involved—Creative Commons. (n.d.). Creative Commons. Retrieved 9 July 2021, from
https://creativecommons.org/about/get-involved/
-
-
Geyer, C. J. (2003). Maximum Likelihood in R (pp. 1–9) [Preprint]. Open Science Framework.
-
-
Geyer, C. J. (2007). Stat 5102 Notes: Maximum Likelihood (pp. 1–8) [Preprint]. Open Science Framework.
-
-
Gilroy, P. (2002). The black Atlantic: Modernity and double consciousness (3. impr., reprint). Verso.
-
-
Giner-Sorolla, R., Carpenter, T., Montoya, A., & Lewis, N. A., Jr. (2019).
SPSP Power Analysis Working Group 2019.
https://osf.io/9bt5s/
-
-
Ginsparg, P. (1997). Winners and Losers in the Global Research Village.
The Serials Librarian,
30(3–4), 83–95.
https://doi.org/10.1300/J123v30n03_13
-
-
Ginsparg, P. (2001, February 20).
Creating a global knowledge network. Cornell University.
http://www.cs.cornell.edu/~ginsparg/physics/blurb/pg01unesco.html
-
-
Gioia, D. A., & Pitre, E. (1990). Multiparadigm Perspectives on Theory Building.
Academy of Management Review,
15(4), 584–602.
https://doi.org/10.5465/amr.1990.4310758
-
-
Git—About Version Control. (n.d.). Git. Retrieved 9 July 2021, from
https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control
-
-
Glass, D. J., & Hall, N. (2008). A Brief History of the Hypothesis.
Cell,
134(3), 378–381.
https://doi.org/10.1016/j.cell.2008.07.033
-
-
Gollwitzer, M., Abele-Brehm, A., Fiebach, C., Ramthun, R., Scheel, A. M., Schönbrodt, F. D., & Steinberg, U. (2020).
Data Management and Data Sharing in Psychological Science: Revision of the DGPs Recommendations [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/24ncs
-
-
Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean?
Science Translational Medicine,
8(341), 341ps12.
https://doi.org/10.1126/scitranslmed.aaf5027
-
-
Goodman, S. W., & Pepinsky, T. B. (2019). Gender Representation and Strategies for Panel Diversity: Lessons from the APSA Annual Meeting.
PS: Political Science & Politics,
52(4), 669–676.
https://doi.org/10.1017/S1049096519000908
-
-
Gorgolewski, K. J., Auer, T., Calhoun, V. D., Craddock, R. C., Das, S., Duff, E. P., Flandin, G., Ghosh, S. S., Glatard, T., Halchenko, Y. O., Handwerker, D. A., Hanke, M., Keator, D., Li, X., Michael, Z., Maumet, C., Nichols, B. N., Nichols, T. E., Pellman, J., … Poldrack, R. A. (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.
Scientific Data,
3(1), 160044.
https://doi.org/10.1038/sdata.2016.44
-
-
Graham, I. D., McCutcheon, C., & Kothari, A. (2019). Exploring the frontiers of research co-production: The Integrated Knowledge Translation Research Network concept papers.
Health Research Policy and Systems,
17(1), 88.
https://doi.org/10.1186/s12961-019-0501-7
-
-
GRN · German Reproducibility Network. (n.d.). German Reproducibility Network. Retrieved 10 July 2021, from
https://reproducibilitynetwork.de/
-
-
Grossmann, A., & Brembs, B. (2021). Current market rates for scholarly publishing services.
F1000Research,
10, 20.
https://doi.org/10.12688/f1000research.27468.1
-
-
Grzanka, P. R., Flores, M. J., VanDaalen, R. A., & Velez, G. (2020). Intersectionality in psychology: Translational science for social justice.
Translational Issues in Psychological Science,
6(4), 304–313.
https://doi.org/10.1037/tps0000276
-
-
Guenther, E. A., & Rodriguez, J. K. (2020, October 14).
What’s wrong with ‘manels’ and what can we do about them. The Conversation.
http://theconversation.com/whats-wrong-with-manels-and-what-can-we-do-about-them-148068
-
-
Guest, O. (2017, June 5). @BrianNosek @ctitusbrown @StuartBuck1 @DaniRabaiotti @Julie_B92 @jeroenbosman @blahah404 @OSFramework Thanks! Hopefully this thread & many other similar discussions & blogs will help make it less Bropen Science and more Open Science. *hides* [Tweet].
@o_guest.
https://twitter.com/o_guest/status/871675631062458368
-
-
Guest, O., & Martin, A. E. (2021). How Computational Modeling Can Force Theory Building in Psychological Science.
Perspectives on Psychological Science, 174569162097058.
https://doi.org/10.1177/1745691620970585
-
-
Haak, L. L., Fenner, M., Paglione, L., Pentz, E., & Ratner, H. (2012). ORCID: A system to uniquely identify researchers.
Learned Publishing,
25(4), 259–264.
https://doi.org/10.1087/20120404
-
-
Hackett, R., & Kelly, S. (2020). Publishing ethics in the era of paper mills.
Biology Open,
9(10), bio056556.
https://doi.org/10.1242/bio.056556
-
-
Hahn, G. J., & Meeker, W. Q. (1993). Assumptions for Statistical Inference.
The American Statistician,
47(1), 1–11.
https://doi.org/10.1080/00031305.1993.10475924
-
-
Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., deMayo, B. E., Long, B., Yoon, E. J., & Frank, M. C. (2021). Analytic reproducibility in articles receiving open data badges at the journal
Psychological Science: An observational study.
Royal Society Open Science,
8(1), 201494.
https://doi.org/10.1098/rsos.201494
-
-
Hardwicke, T. E., Jameel, L., Jones, M., Walczak, E. J., & Magis-Weinberg, L. (2014). Only Human: Scientists, Systems, and Suspect Statistics.
Opticon1826,
16.
https://doi.org/10.5334/opt.ch
-
-
Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M. H., Brett, M., Haldane, A., del Río, J. F., Wiebe, M., Peterson, P., … Oliphant, T. E. (2020). Array programming with NumPy.
Nature,
585(7825), 357–362.
https://doi.org/10.1038/s41586-020-2649-2
-
-
Hart, D., & Silka, L. (2020). Rebuilding the Ivory Tower: A Bottom-Up Experiment in Aligning Research With Societal Needs.
Issues in Science and Technology,
36(3), 79–85.
https://issues.org/aligning-research-with-societal-needs/
-
-
Hartgerink, C. H. J., Wicherts, J. M., & van Assen, M. A. L. M. (2017). Too Good to be False: Nonsignificant Results Revisited.
Collabra: Psychology,
3(1), 9.
https://doi.org/10.1525/collabra.71
-
-
Hayes, B. C., & Tariq, V. N. (2000). Gender differences in scientific knowledge and attitudes toward science: A comparative study of four Anglo-American nations.
Public Understanding of Science,
9(4), 433–447.
https://doi.org/10.1088/0963-6625/9/4/306
-
-
Haynes, S. N., Richard, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods.
Psychological Assessment,
7(3), 238–247.
https://doi.org/10.1037/1040-3590.7.3.238
-
-
Healy, K. (2018). Data visualization: A practical introduction. Princeton University Press.
-
-
Heathers, J. A., Anaya, J., van der Zee, T., & Brown, N. J. (2018).
Recovering data from summary statistics: Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) [Preprint]. PeerJ Preprints.
https://doi.org/10.7287/peerj.preprints.26968v1
-
-
Hendriks, F., Kienhues, D., & Bromme, R. (2016). Trust in Science and the Science of Trust. In B. Blöbaum (Ed.),
Trust and Communication in a Digitized World (pp. 143–159). Springer International Publishing.
https://doi.org/10.1007/978-3-319-28059-2_8
-
-
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world?
Behavioral and Brain Sciences,
33(2–3), 61–83.
https://doi.org/10.1017/S0140525X0999152X
-
-
Henrich, J. P. (2020). The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous. Farrar, Straus and Giroux.
-
-
Herrmannova, D., & Knoth, P. (2016). Semantometrics: Towards Fulltext-based Research Evaluation.
Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, 235–236.
https://doi.org/10.1145/2910896.2925448
-
-
Heyman, T., Moors, P., & Rabagliati, H. (2020). The benefits of adversarial collaboration for commentaries.
Nature Human Behaviour,
4(12), 1217.
https://doi.org/10.1038/s41562-020-00978-6
-
-
Higgins, J. P. T., & Cochrane Collaboration (Eds.). (2020). Cochrane handbook for systematic reviews of interventions (Second edition). Wiley-Blackwell.
-
-
Himmelstein, D. S., Rubinetti, V., Slochower, D. R., Hu, D., Malladi, V. S., Greene, C. S., & Gitter, A. (2019). Open collaborative writing with Manubot.
PLOS Computational Biology,
15(6), e1007128.
https://doi.org/10.1371/journal.pcbi.1007128
-
-
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output.
Proceedings of the National Academy of Sciences,
102(46), 16569–16572.
https://doi.org/10.1073/pnas.0507655102
-
-
Hitchcock, C., Meyer, A., Rose, D., & Jackson, R. (2002). Providing New Access to the General Curriculum: Universal Design for Learning.
TEACHING Exceptional Children,
35(2), 8–17.
https://doi.org/10.1177/004005990203500201
-
-
Hoekstra, R., Kiers, H., & Johnson, A. (2012). Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?
Frontiers in Psychology,
3.
https://doi.org/10.3389/fpsyg.2012.00137
-
-
Hogg, D. W., Bovy, J., & Lang, D. (2010). Data analysis recipes: Fitting a model to data.
ArXiv:1008.4686 [Astro-Ph, Physics:Physics].
http://arxiv.org/abs/1008.4686
-
-
Hoijtink, H., Mulder, J., van Lissa, C., & Gu, X. (2019). A tutorial on testing hypotheses using the Bayes factor.
Psychological Methods,
24(5), 539–556.
https://doi.org/10.1037/met0000201
-
-
Holcombe, A. O. (2019). Contributorship, Not Authorship: Use CRediT to Indicate Who Did What.
Publications,
7(3), 48.
https://doi.org/10.3390/publications7030048
-
-
Holden, R. R. (2010). Face Validity. In I. B. Weiner & W. E. Craighead (Eds.),
The Corsini Encyclopedia of Psychology (p. corpsy0341). John Wiley & Sons, Inc.
https://doi.org/10.1002/9780470479216.corpsy0341
-
-
Home | re3data.org. (n.d.). re3data. Retrieved 10 July 2021, from
https://www.re3data.org/
-
-
Homepage. (n.d.). Open Science MOOC. Retrieved 9 July 2021, from
https://opensciencemooc.eu/
-
-
Houtkoop, B. L., Chambers, C., Macleod, M., Bishop, D. V. M., Nichols, T. E., & Wagenmakers, E.-J. (2018). Data Sharing in Psychology: A Survey on Barriers and Preconditions.
Advances in Methods and Practices in Psychological Science,
1(1), 70–85.
https://doi.org/10.1177/2515245917751886
-
-
How to Make Inclusivity More Than Just an Office Buzzword. (n.d.). Kellogg Insight. Retrieved 9 July 2021, from
https://insight.kellogg.northwestern.edu/article/how-to-make-inclusivity-more-than-just-an-office-buzzword
-
-
https://improvingpsych.org/. (n.d.). Retrieved 10 July 2021, from
https://improvingpsych.org/
-
-
Huber, B., Barnidge, M., Gil de Zúñiga, H., & Liu, J. (2019). Fostering public trust in science: The role of social media.
Public Understanding of Science,
28(7), 759–777.
https://doi.org/10.1177/0963662519869097
-
-
Huber, C. (2016a, November 1). Introduction to Bayesian statistics, part 1: The basic concepts.
The Stata Blog.
https://blog.stata.com/2016/11/01/introduction-to-bayesian-statistics-part-1-the-basic-concepts/
-
-
Huber, C. (2016b, November 15). Introduction to Bayesian statistics, part 2: MCMC and the Metropolis–Hastings algorithm.
The Stata Blog.
https://blog.stata.com/2016/11/15/introduction-to-bayesian-statistics-part-2-mcmc-and-the-metropolis-hastings-algorithm/
-
-
Huelin, R., Iheanacho, I., Payne, K., & Sandman, K. (2015).
What’s in a Name? Systematic and Non-Systematic Literature Reviews, and Why the Distinction Matters (The Evidence Forum, pp. 34–37).
https://www.evidera.com/resource/whats-in-a-name-systematic-and-non-systematic-literature-reviews-and-why-the-distinction-matters/
-
-
Hüffmeier, J., Mazei, J., & Schultze, T. (2016). Reconceptualizing replication as a sequence of different studies: A replication typology.
Journal of Experimental Social Psychology,
66, 81–92.
https://doi.org/10.1016/j.jesp.2015.09.009
-
-
Hunter, J. E., & Schmidt, F. L. (2015). Methods of meta-analysis: Correcting error and bias in research findings (Third edition). SAGE.
-
-
Hurlbert, S. H. (1984). Pseudoreplication and the Design of Ecological Field Experiments.
Ecological Monographs,
54(2), 187–211.
https://doi.org/10.2307/1942661
-
-
ICMJE | Home. (n.d.). International Committee of Medical Journal Editors. Retrieved 11 July 2021, from
http://www.icmje.org/
-
-
Ikeda, A., Xu, H., Fuji, N., Zhu, S., & Yamada, Y. (2019).
Questionable research practices following pre-registration. Japanese Psychological Review (心理学評論刊行会).
https://doi.org/10.24602/sjpr.62.3_281
-
-
Initial revision of ‘git’, the information manager from hell · git/git@e83c516. (n.d.). GitHub. Retrieved 9 July 2021, from
https://github.com/git/git/commit/e83c5163316f89bfbde7d9ab23ca2e25604af290
-
-
International Committee of Medical Journal Editors. (n.d.).
ICMJE | Recommendations | Author Responsibilities—Disclosure of Financial and Non-Financial Relationships and Activities, and Conflicts of Interest. ICJME.
http://www.icmje.org/recommendations/browse/roles-and-responsibilities/author-responsibilities--conflicts-of-interest.html
-
-
INVOLVE – INVOLVE Supporting public involvement in NHS, public health and social care research. (n.d.). Retrieved 9 July 2021, from
https://www.invo.org.uk/
-
-
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False.
PLoS Medicine,
2(8), e124.
https://doi.org/10.1371/journal.pmed.0020124
-
-
Ioannidis, J. P. A., Fanelli, D., Dunne, D. D., & Goodman, S. N. (2015). Meta-research: Evaluation and Improvement of Research Methods and Practices.
PLOS Biology,
13(10), e1002264.
https://doi.org/10.1371/journal.pbio.1002264
-
-
JabRef—Free Reference Manager—Stay on top of your Literature. (n.d.). JabRef. Retrieved 9 July 2021, from
https://www.jabref.org/
-
-
Jacobson, D., & Mustafa, N. (2019). Social Identity Map: A Reflexivity Tool for Practicing Explicit Positionality in Critical Qualitative Research.
International Journal of Qualitative Methods,
18, 160940691987007.
https://doi.org/10.1177/1609406919870075
-
-
Jafar, A. J. N. (2018). What is positionality and should it be expressed in quantitative studies?
Emergency Medicine Journal, emermed-2017-207158.
https://doi.org/10.1136/emermed-2017-207158
-
-
James, K. L., Randall, N. P., & Haddaway, N. R. (2016). A methodology for systematic mapping in environmental sciences.
Environmental Evidence,
5(1), 7.
https://doi.org/10.1186/s13750-016-0059-6
-
-
Jamovi—Stats. Open. Now. (n.d.). Jamovi. Retrieved 9 July 2021, from
https://www.jamovi.org/
-
-
Jannot, A.-S., Agoritsas, T., Gayet-Ageron, A., & Perneger, T. V. (2013). Citation bias favoring statistically significant studies was present in medical research.
Journal of Clinical Epidemiology,
66(3), 296–301.
https://doi.org/10.1016/j.jclinepi.2012.09.015
-
-
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling.
Psychological Science,
23(5), 524–532.
https://doi.org/10.1177/0956797611430953
-
-
Jones, A., Worrall, S., Rudin, L., Duckworth, J. J., & Christiansen, P. (2021). May I have your attention, please? Methodological and analytical flexibility in the addiction stroop.
Addiction Research & Theory, 1–14.
https://doi.org/10.1080/16066359.2021.1876847
-
-
Joseph, T. D., & Hirshfield, L. E. (2011). ‘Why don’t you get somebody new to do it?’ Race and cultural taxation in the academy.
Ethnic and Racial Studies,
34(1), 121–141.
https://doi.org/10.1080/01419870.2010.496489
-
-
Kalliamvakou, E., Gousios, G., Blincoe, K., Singer, L., German, D. M., & Damian, D. (2014). The promises and perils of mining GitHub.
Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014, 92–101.
https://doi.org/10.1145/2597073.2597074
-
-
kamraro. (2014, April 1).
Responsible research & innovation. Horizon 2020 - European Commission.
https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation
-
-
Kathawalla, U.-K., Silverstein, P., & Syed, M. (2021). Easing Into Open Science: A Guide for Graduate Students and Their Advisors.
Collabra: Psychology,
7(1), 18684.
https://doi.org/10.1525/collabra.18684
-
-
Kelley, T. (1927). Interpretation of educational measurements. World Book Co.
-
-
Kerr, J. R., & Wilson, M. S. (2021). Right-wing authoritarianism and social dominance orientation predict rejection of science and scientists.
Group Processes & Intergroup Relations,
24(4), 550–567.
https://doi.org/10.1177/1368430221992126
-
-
Kerr, N. L. (1998). HARKing: Hypothesizing After the Results are Known.
Personality and Social Psychology Review,
2(3), 196–217.
https://doi.org/10.1207/s15327957pspr0203_4
-
-
Kerr, N. L., Ao, X., Hogg, M. A., & Zhang, J. (2018). Addressing replicability concerns via adversarial collaboration: Discovering hidden moderators of the minimal intergroup discrimination effect.
Journal of Experimental Social Psychology,
78, 66–76.
https://doi.org/10.1016/j.jesp.2018.05.001
-
-
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency.
PLOS Biology,
14(5), e1002456.
https://doi.org/10.1371/journal.pbio.1002456
-
-
Kienzler, H., & Fontanesi, C. (2017). Learning through inquiry: A Global Health Hackathon.
Teaching in Higher Education,
22(2), 129–142.
https://doi.org/10.1080/13562517.2016.1221805
-
-
Kiernan, C. (1999). Participation in Research by People with Learning Disability: Origins and Issues.
British Journal of Learning Disabilities,
27(2), 43–47.
https://doi.org/10.1111/j.1468-3156.1999.tb00084.x
-
-
King, G. (1995). Replication, Replication.
PS: Political Science and Politics,
28(3), 444.
https://doi.org/10.2307/420301
-
-
Kitzes, J., Turek, D., & Deniz, F. (Eds.). (2018). The practice of reproducible research: Case studies and lessons from the data-intensive sciences. University of California Press.
-
-
Kiureghian, A. D., & Ditlevsen, O. (2009). Aleatory or epistemic? Does it matter?
Structural Safety,
31(2), 105–112.
https://doi.org/10.1016/j.strusafe.2008.06.020
-
-
Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., IJzerman, H., Nilsonne, G., Vanpaemel, W., & Frank, M. C. (2018). A Practical Guide for Transparency in Psychological Science.
Collabra: Psychology,
4(1), 20.
https://doi.org/10.1525/collabra.158
-
-
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., Bocian, K., Brandt, M. J., Brooks, B., Brumbaugh, C. C., Cemalcilar, Z., Chandler, J., Cheong, W., Davis, W. E., Devos, T., Eisner, M., Frankowska, N., Furrow, D., Galliani, E. M., … Nosek, B. A. (2014). Investigating Variation in Replicability: A “Many Labs” Replication Project.
Social Psychology,
45(3), 142–152.
https://doi.org/10.1027/1864-9335/a000178
-
-
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., Aveyard, M., Axt, J. R., Babalola, M. T., Bahník, Š., Batra, R., Berkics, M., Bernstein, M. J., Berry, D. R., Bialobrzeska, O., Binan, E. D., Bocian, K., Brandt, M. J., Busching, R., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings.
Advances in Methods and Practices in Psychological Science,
1(4), 443–490.
https://doi.org/10.1177/2515245918810225
-
-
Kleinberg, B., Mozes, M., van der Toolen, Y., & Verschuere, B. (2017).
NETANOS - Named entity-based Text Anonymization for Open Science [Preprint]. Open Science Framework.
https://doi.org/10.31219/osf.io/w9nhb
-
-
Knoth, P., & Herrmannova, D. (2014). Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication’s Contribution.
D-Lib Magazine,
20(11/12), 8.
https://doi.org/10.1045/november14-knoth
-
-
Koole, S. L., & Lakens, D. (2012). Rewarding Replications: A Sure and Simple Way to Improve Psychological Science.
Perspectives on Psychological Science,
7(6), 608–614.
https://doi.org/10.1177/1745691612462586
-
-
Kreuter, F. (Ed.). (2013).
Improving Surveys with Paradata: Analytic Uses of Process Information. John Wiley & Sons, Inc.
https://doi.org/10.1002/9781118596869
-
-
Kruschke, J. K. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan (2nd ed.). Academic Press.
-
-
Kuhn, T. S. (1996). The structure of scientific revolutions (3rd ed.). University of Chicago Press.
-
-
Kukull, W. A., & Ganguli, M. (2012). Generalizability: The trees, the forest, and the low-hanging fruit.
Neurology,
78(23), 1886–1891.
https://doi.org/10.1212/WNL.0b013e318258f812
-
-
Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research.
Accountability in Research,
26(3), 229–244.
https://doi.org/10.1080/08989621.2019.1580147
-
-
Laakso, M., & Björk, B.-C. (2013). Delayed open access: An overlooked high-impact category of openly available scientific literature.
Journal of the American Society for Information Science and Technology,
64(7), 1323–1329.
https://doi.org/10.1002/asi.22856
-
-
Laine, H. (2017). Afraid of Scooping – Case Study on Researcher Strategies against Fear of Scooping in the Context of Open Science.
Data Science Journal,
16, 29.
https://doi.org/10.5334/dsj-2017-029
-
-
Lakatos, I. (1978). The Methodology of Scientific Research Programmes: Vol. 1. Cambridge University Press.
-
-
Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses.
European Journal of Social Psychology,
44(7), 701–710.
https://doi.org/10.1002/ejsp.2023
-
-
Lakens, D. (2020a, May 11). Red Team Challenge.
The 20% Statistician.
http://daniellakens.blogspot.com/2020/05/red-team-challenge.html
-
-
Lakens, D. (2020b). Pandemic researchers—Recruit your own best critics.
Nature,
581(7807), 121.
https://doi.org/10.1038/d41586-020-01392-8
-
-
Lakens, D. (2021a).
Sample Size Justification [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/9d3yf
-
-
Lakens, D. (2021b). The Practical Alternative to the p Value Is the Correctly Used p Value.
Perspectives on Psychological Science,
16(3), 639–648.
https://doi.org/10.1177/1745691620958012
-
-
Lakens, D., McLatchie, N., Isager, P. M., Scheel, A. M., & Dienes, Z. (2020). Improving Inferences About Null Effects With Bayes Factors and Equivalence Tests.
The Journals of Gerontology: Series B,
75(1), 45–57.
https://doi.org/10.1093/geronb/gby065
-
-
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence Testing for Psychological Research: A Tutorial.
Advances in Methods and Practices in Psychological Science,
1(2), 259–269.
https://doi.org/10.1177/2515245918770963
-
-
Largent, E. A., & Snodgrass, R. T. (2016). Blind Peer Review by Academic Journals. In
Blinding as a Solution to Bias (pp. 75–95). Elsevier.
https://doi.org/10.1016/B978-0-12-802460-7.00005-X
-
-
Larivière, V., Desrochers, N., Macaluso, B., Mongeon, P., Paul-Hus, A., & Sugimoto, C. R. (2016). Contributorship and division of labor in knowledge production.
Social Studies of Science,
46(3), 417–435.
https://doi.org/10.1177/0306312716650046
-
-
Lazic, S. E. (2019, September 16). Genuine replication and pseudoreplication: What’s the difference?
BMJ Open Science.
https://blogs.bmj.com/openscience/2019/09/16/genuine-replication-and-pseudoreplication-whats-the-difference/
-
-
Leavens, D. A., Bard, K. A., & Hopkins, W. D. (2010). BIZARRE chimpanzees do not represent “the chimpanzee”.
Behavioral and Brain Sciences,
33(2–3), 100–101.
https://doi.org/10.1017/S0140525X10000166
-
-
Leavy, P. (2017). Research design: Quantitative, qualitative, mixed methods, arts-based, and community-based participatory research approaches. Guilford Press.
-
-
LeBel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., & Vanpaemel, W. (2018). A Unified Framework to Quantify the Credibility of Scientific Findings.
Advances in Methods and Practices in Psychological Science,
1(3), 389–402.
https://doi.org/10.1177/2515245918787489
-
-
LeBel, E. P., Vanpaemel, W., Cheung, I., & Campbell, L. (2019). A Brief Guide to Evaluate Replications.
Meta-Psychology,
3.
https://doi.org/10.15626/MP.2018.843
-
-
Ledgerwood, A., Hudson, S. T. J., Lewis, N. A., Maddox, K. B., Pickett, C., Remedios, J. D., Cheryan, S., Diekman, A., Dutra, N. B., Goh, J. X., Goodwin, S., Munakata, Y., Navarro, D., Onyeador, I. N., Srivastava, S., & Wilkins, C. L. (2021).
The Pandemic as a Portal: Reimagining Psychological Science as Truly Open and Inclusive [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/gdzue
-
-
Lee, R. M. (1993). Doing research on sensitive topics. Sage Publications.
-
-
Lewandowsky, S., & Bishop, D. (2016). Research integrity: Don’t let transparency damage science.
Nature,
529(7587), 459–461.
https://doi.org/10.1038/529459a
-
-
Lewandowsky, S., & Oberauer, K. (2021). Worldview-motivated rejection of science and the norms of science.
Cognition,
215, 104820.
https://doi.org/10.1016/j.cognition.2021.104820
-
-
Licenses & Standards | Open Source Initiative. (n.d.). Open Source Initiative. Retrieved 9 July 2021, from
https://opensource.org/licenses
-
-
Lin, D., Crabtree, J., Dillo, I., Downs, R. R., Edmunds, R., Giaretta, D., De Giusti, M., L’Hours, H., Hugo, W., Jenkyns, R., Khodiyar, V., Martone, M. E., Mokrane, M., Navale, V., Petters, J., Sierman, B., Sokolova, D. V., Stockhause, M., & Westbrook, J. (2020). The TRUST Principles for digital repositories.
Scientific Data,
7(1), 144.
https://doi.org/10.1038/s41597-020-0486-7
-
-
Lind, F., Gruber, M., & Boomgaarden, H. G. (2017). Content Analysis by the Crowd: Assessing the Usability of Crowdsourcing for Coding Latent Constructs.
Communication Methods and Measures,
11(3), 191–209.
https://doi.org/10.1080/19312458.2017.1317338
-
-
Lindsay, D. S. (2015). Replication in Psychological Science.
Psychological Science,
26(12), 1827–1832.
https://doi.org/10.1177/0956797615616374
-
-
Lindsay, D. S. (2020). Seven steps toward transparency and replicability in psychological science.
Canadian Psychology/Psychologie Canadienne,
61(4), 310–317.
https://doi.org/10.1037/cap0000222
-
-
Lintott, C. J., Schawinski, K., Slosar, A., Land, K., Bamford, S., Thomas, D., Raddick, M. J., Nichol, R. C., Szalay, A., Andreescu, D., Murray, P., & Vandenberg, J. (2008). Galaxy Zoo: Morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey.
Monthly Notices of the Royal Astronomical Society,
389(3), 1179–1189.
https://doi.org/10.1111/j.1365-2966.2008.13689.x
-
-
Liu, H., & Priest, S. (2009). Understanding public support for stem cell research: Media communication, interpersonal communication and trust in key actors.
Public Understanding of Science,
18(6), 704–718.
https://doi.org/10.1177/0963662508097625
-
-
Liu, Y., Gordon, M., Wang, J., Bishop, M., Chen, Y., Pfeiffer, T., Twardy, C., & Viganola, D. (2020). Replication Markets: Results, Lessons, Challenges and Opportunities in AI Replication.
ArXiv:2005.04543 [Cs].
http://arxiv.org/abs/2005.04543
-
-
Longino, H. E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.
-
-
Longino, H. E. (1992). Taking Gender Seriously in Philosophy of Science.
PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association,
1992(2), 333–340.
https://doi.org/10.1086/psaprocbienmeetp.1992.2.192847
-
-
Lu, J., Qiu, Y., & Deng, A. (2019). A note on Type S/M errors in hypothesis testing.
British Journal of Mathematical and Statistical Psychology,
72(1), 1–17.
https://doi.org/10.1111/bmsp.12132
-
-
Lüdtke, O., Ulitzsch, E., & Robitzsch, A. (2020).
A Comparison of Penalized Maximum Likelihood Estimation and Markov Chain Monte Carlo Techniques for Estimating Confirmatory Factor Analysis Models with Small Sample Sizes [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/u3qag
-
-
Lutz, M. (2019). Programming Python (Fourth edition). O’Reilly.
-
-
Lynch, J. G., Jr. (1982). On the External Validity of Experiments in Consumer Research.
Journal of Consumer Research,
9(3), 225.
https://doi.org/10.1086/208919
-
-
Macfarlane, B., & Cheng, M. (2008). Communism, Universalism and Disinterestedness: Re-examining Contemporary Support among Academics for Merton’s Scientific Norms.
Journal of Academic Ethics,
6(1), 67–78.
https://doi.org/10.1007/s10805-008-9055-y
-
-
Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., & Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework.
Frontiers in Psychology,
10, 2767.
https://doi.org/10.3389/fpsyg.2019.02767
-
-
Martinez-Acosta, V. G., & Favero, C. B. (2018). A Discussion of Diversity and Inclusivity at the Institutional Level: The Need for a Strategic Plan.
Journal of Undergraduate Neuroscience Education (JUNE),
16(3), A252–A260.
-
-
Marwick, B., Boettiger, C., & Mullen, L. (2018). Packaging Data Analytical Work Reproducibly Using R (and Friends).
The American Statistician,
72(1), 80–88.
https://doi.org/10.1080/00031305.2017.1375986
-
-
Masur, P. K. (2020).
Understanding the Effects of Analytical Choices on Finding the Privacy Paradox: A Specification Curve Analysis of Large-Scale Survey Data [Preprint]. Open Science Framework.
https://osf.io/m72gb/
-
-
McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan (2nd ed.). Taylor and Francis, CRC Press.
-
-
McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., Kiermer, V., Marcus, E., Pope, B. K., Schekman, R., Swaminathan, S., Stang, P. J., & Verma, I. M. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication.
Proceedings of the National Academy of Sciences,
115(11), 2557–2560.
https://doi.org/10.1073/pnas.1715374115
-
-
Medical Research Council. (2019).
Identifiability, anonymisation and pseudonymisation. Medical Research Council.
https://mrc.ukri.org/documents/pdf/gdpr-guidance-note-5-identifiability-anonymisation-and-pseudonymisation/
-
-
Medin, D. L. (2012, February 1). Rigor Without Rigor Mortis: The APS Board Discusses Research Integrity [Blog].
Association for Psychological Science.
https://www.psychologicalscience.org/observer/scientific-rigor
-
-
Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian Norms: Scientists’ Subscription to Norms of Research.
The Journal of Higher Education,
81(3), 366–393.
https://doi.org/10.1353/jhe.0.0095
-
-
Mellers, B., Hertwig, R., & Kahneman, D. (2001). Do Frequency Representations Eliminate Conjunction Effects? An Exercise in Adversarial Collaboration.
Psychological Science,
12(4), 269–275.
https://doi.org/10.1111/1467-9280.00350
-
-
Menke, C. (2015). A Note on Science and Democracy? Robert K. Merton’s Ethos of Science. In R. Klausnitzer, C. Spoerhase, & D. Werle (Eds.),
Ethos und Pathos der Geisteswissenschaften. De Gruyter.
https://doi.org/10.1515/9783110375008-013
-
-
Mertens, G., & Krypotos, A.-M. (2019). Preregistration of Analyses of Preexisting Data.
Psychologica Belgica,
59(1), 338–352.
https://doi.org/10.5334/pb.493
-
-
Merton, R. K. (1938). Science and the Social Order.
Philosophy of Science,
5(3), 321–337.
https://doi.org/10.1086/286513
-
-
Merton, R. K. (1968). The Matthew Effect in Science: The reward and communication systems of science are considered.
Science,
159(3810), 56–63.
https://doi.org/10.1126/science.159.3810.56
-
-
Meslin, E. M. (2008). Achieving global justice in health through global research ethics: Supplementing Macklin’s ‘top-down’ approach with one from the ‘ground up’. In R. M. Green, A. Donovan, & S. A. Jauss (Eds.), Global bioethics: Issues of conscience for the twenty-first century (pp. 163–177). Clarendon Press; Oxford University Press.
-
-
Michener, W. K. (2015). Ten Simple Rules for Creating a Good Data Management Plan.
PLOS Computational Biology,
11(10), e1004525.
https://doi.org/10.1371/journal.pcbi.1004525
-
-
Mischel, W. (2009, January 1). Becoming a Cumulative Science.
Association for Psychological Science.
https://www.psychologicalscience.org/observer/becoming-a-cumulative-science
-
-
Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., Coriat, A.-M., Foeger, N., & Dirnagl, U. (2020). The Hong Kong Principles for assessing researchers: Fostering research integrity.
PLOS Biology,
18(7), e3000737.
https://doi.org/10.1371/journal.pbio.3000737
-
-
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement.
PLoS Medicine,
6(7), e1000097.
https://doi.org/10.1371/journal.pmed.1000097
-
-
Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure.
PLOS Biology,
16(3), e2004089.
https://doi.org/10.1371/journal.pbio.2004089
-
-
Monroe, K. R. (2018). The Rush to Transparency: DA-RT and the Potential Dangers for Qualitative Research.
Perspectives on Politics,
16(1), 141–148.
https://doi.org/10.1017/S153759271700336X
-
-
Morabia, A., Have, T. T., & Landis, J. R. (1997). Interaction Fallacy.
Journal of Clinical Epidemiology,
50(7), 809–812.
https://doi.org/10.1016/S0895-4356(97)00053-X
-
-
Moran, H., Karlin, L., Lauchlan, E., Rappaport, S. J., Bleasdale, B., Wild, L., & Dorr, J. (2020). Understanding Research Culture: What researchers think about the culture they work in.
Wellcome Open Research,
5, 201.
https://doi.org/10.12688/wellcomeopenres.15832.1
-
-
Moretti, M. (2020, August 12).
Beyond Open-washing: Are Narratives the Future of Open Data Portals? Nightingale.
https://medium.com/nightingale/beyond-open-washing-are-stories-and-narratives-the-future-of-open-data-portals-93228d8882f3
-
-
Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., Lewandowsky, S., Morey, C. C., Newman, D. P., Schönbrodt, F. D., Vanpaemel, W., Wagenmakers, E.-J., & Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: Incentivizing open research practices through peer review.
Royal Society Open Science,
3(1), 150547.
https://doi.org/10.1098/rsos.150547
-
-
Morgan, C. (1998). The DOI (Digital Object Identifier).
Serials: The Journal for the Serials Community,
11(1), 47–51.
https://doi.org/10.1629/1147
-
-
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network.
Advances in Methods and Practices in Psychological Science,
1(4), 501–515.
https://doi.org/10.1177/2515245918797607
-
-
Moshontz, H., Ebersole, C. R., Weston, S. J., & Klein, R. A. (2021). A guide for many authors: Writing manuscripts in large collaborations.
Social and Personality Psychology Compass,
15(4).
https://doi.org/10.1111/spc3.12590
-
-
Mourby, M., Mackey, E., Elliot, M., Gowans, H., Wallace, S. E., Bell, J., Smith, H., Aidinlis, S., & Kaye, J. (2018). Are ‘pseudonymised’ data always personal data? Implications of the GDPR for administrative data research in the UK.
Computer Law & Security Review,
34(2), 222–233.
https://doi.org/10.1016/j.clsr.2018.01.002
-
-
Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.
-
-
Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach.
BMC Medical Research Methodology,
18(1), 143.
https://doi.org/10.1186/s12874-018-0611-x
-
-
Muthukrishna, M., Bell, A. V., Henrich, J., Curtin, C. M., Gedranovich, A., McInerney, J., & Thue, B. (2020). Beyond Western, Educated, Industrial, Rich, and Democratic (WEIRD) Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance.
Psychological Science,
31(6), 678–701.
https://doi.org/10.1177/0956797620916782
-
-
Naudet, F., Ioannidis, J. P. A., Miedema, F., Cristea, I. A., Goodman, S. N., & Moher, D. (2018, June 4). Six principles for assessing scientists for hiring, promotion, and tenure.
Impact of Social Sciences Blog.
http://eprints.lse.ac.uk/90753/
-
-
Navarro, D. (2020).
Paths in strange spaces: A comment on preregistration [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/wxn58
-
-
Nelson, L. D., Simmons, J. P., & Simonsohn, U. (2012). Let’s Publish Fewer Papers.
Psychological Inquiry,
23(3), 291–293.
https://doi.org/10.1080/1047840X.2012.705245
-
-
Neuroskeptic. (2012). The Nine Circles of Scientific Hell.
Perspectives on Psychological Science,
7(6), 643–644.
https://doi.org/10.1177/1745691612459519
-
-
Nichols, T. E., Das, S., Eickhoff, S. B., Evans, A. C., Glatard, T., Hanke, M., Kriegeskorte, N., Milham, M. P., Poldrack, R. A., Poline, J.-B., Proal, E., Thirion, B., Van Essen, D. C., White, T., & Yeo, B. T. T. (2017). Best practices in data analysis and sharing in neuroimaging using MRI.
Nature Neuroscience,
20(3), 299–303.
https://doi.org/10.1038/nn.4500
-
-
Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises.
Review of General Psychology,
2(2), 175–220.
https://doi.org/10.1037/1089-2680.2.2.175
-
-
Nieuwenhuis, S., Forstmann, B. U., & Wagenmakers, E.-J. (2011). Erroneous analyses of interactions in neuroscience: A problem of significance.
Nature Neuroscience,
14(9), 1105–1107.
https://doi.org/10.1038/nn.2886
-
-
Nimon, K. F. (2012). Statistical Assumptions of Substantive Analyses Across the General Linear Model: A Mini-Review.
Frontiers in Psychology,
3.
https://doi.org/10.3389/fpsyg.2012.00322
-
-
Nisbet, M. C., Scheufele, D. A., Shanahan, J., Moy, P., Brossard, D., & Lewenstein, B. V. (2002). Knowledge, Reservations, or Promise?: A Media Effects Model for Public Perceptions of Science and Technology.
Communication Research,
29(5), 584–608.
https://doi.org/10.1177/009365002236196
-
-
Nittrouer, C. L., Hebl, M. R., Ashburn-Nardo, L., Trump-Steele, R. C. E., Lane, D. M., & Valian, V. (2018). Gender disparities in colloquium speakers at top universities.
Proceedings of the National Academy of Sciences,
115(1), 104–108.
https://doi.org/10.1073/pnas.1708414115
-
-
Nosek, B. A. (2019, June 11). Strategy for Culture Change.
Center for Open Science.
https://www.cos.io/blog/strategy-for-culture-change
-
-
Nosek, B. A., & Bar-Anan, Y. (2012). Scientific Utopia: I. Opening Scientific Communication.
Psychological Inquiry,
23(3), 217–243.
https://doi.org/10.1080/1047840X.2012.692215
-
-
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution.
Proceedings of the National Academy of Sciences,
115(11), 2600–2606.
https://doi.org/10.1073/pnas.1708274114
-
-
Nosek, B. A., & Errington, T. M. (2020). What is replication?
PLOS Biology,
18(3), e3000691.
https://doi.org/10.1371/journal.pbio.3000691
-
-
Nosek, B. A., & Lakens, D. (2014). Registered Reports: A Method to Increase the Credibility of Published Results.
Social Psychology,
45(3), 137–141.
https://doi.org/10.1027/1864-9335/a000192
-
-
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability.
Perspectives on Psychological Science,
7(6), 615–631.
https://doi.org/10.1177/1745691612459058
-
-
Noy, N. F., & McGuinness, D. L. (2001).
Ontology Development 101: A Guide to Creating Your First Ontology. Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880.
https://protege.stanford.edu/publications/ontology_development/ontology101.pdf
-
-
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013).
Behavior Research Methods,
48(4), 1205–1226.
https://doi.org/10.3758/s13428-015-0664-2
-
-
Nüst, D., Boettiger, C., & Marwick, B. (2018). How to Read a Research Compendium.
ArXiv:1806.09525 [Cs].
http://arxiv.org/abs/1806.09525
-
-
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology.
Advances in Methods and Practices in Psychological Science,
3(2), 229–237.
https://doi.org/10.1177/2515245920918872
-
-
Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology.
Psychonomic Bulletin & Review,
26(5), 1596–1618.
https://doi.org/10.3758/s13423-019-01645-2
-
-
OER Commons. (n.d.). OER Commons. Retrieved 9 July 2021, from
https://www.oercommons.org/
-
-
OpenAIRE. (n.d.).
Amnesia Anonymization Tool—Data anonymization made easy. Retrieved 9 July 2021, from
https://amnesia.openaire.eu/
-
-
Open Scholarship Knowledge Base | OER Commons. (n.d.). OER Commons. Retrieved 9 July 2021, from
https://www.oercommons.org/hubs/OSKB
-
-
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science,
349(6251), aac4716.
https://doi.org/10.1126/science.aac4716
-
-
Open Source in Open Science. (n.d.). FOSTER. Retrieved 9 July 2021, from
https://www.fosteropenscience.eu/foster-taxonomy/open-source-open-science
-
-
Orben, A. (2019). A journal club to fix science.
Nature,
573(7775), 465.
https://doi.org/10.1038/d41586-019-02842-8
-
-
ORCID. (n.d.). ORCID. Retrieved 9 July 2021, from
https://orcid.org/
-
-
OSF. (n.d.). Open Science Framework. Retrieved 9 July 2021, from
https://osf.io/
-
-
StudySwap: A platform for interlab replication, collaboration, and research resource exchange. (n.d.). OSF. Retrieved 10 July 2021, from
https://osf.io/meetings/StudySwap/
-
-
Ottmann, G., Laragy, C., Allen, J., & Feldman, P. (2011). Coproduction in Practice: Participatory Action Research to Develop a Model of Community Aged Care.
Systemic Practice and Action Research,
24(5), 413–427.
https://doi.org/10.1007/s11213-011-9192-x
-
-
Our Approach | Co-Production Collective. (n.d.). Co-Production Collective. Retrieved 9 July 2021, from
https://www.coproductioncollective.co.uk/what-is-co-production/our-approach
-
-
Padilla, A. M. (1994). Research News and Comment: Ethnic Minority Scholars, Research, and Mentoring: Current and Future Issues.
Educational Researcher,
23(4), 24–27.
https://doi.org/10.3102/0013189X023004024
-
-
Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews.
BMJ, 372, n160.
https://doi.org/10.1136/bmj.n160
-
-
Patience, G. S., Galli, F., Patience, P. A., & Boffito, D. C. (2019). Intellectual contributions meriting authorship: Survey results from the top cited authors across all science categories.
PLOS ONE,
14(1), e0198117.
https://doi.org/10.1371/journal.pone.0198117
-
-
Pautasso, M. (2013). Ten Simple Rules for Writing a Literature Review.
PLoS Computational Biology,
9(7), e1003149.
https://doi.org/10.1371/journal.pcbi.1003149
-
-
Pavlov, Y. G., Adamian, N., Appelhoff, S., Arvaneh, M., Benwell, C., Beste, C., Bland, A., Bradford, D. E., Bublatzky, F., Busch, N., Clayson, P. E., Cruse, D., Czeszumski, A., Dreber, A., Dumas, G., Ehinger, B. V., Ganis, G., He, X., Hinojosa, J. A., … Mushtaq, F. (2020).
#EEGManyLabs: Investigating the Replicability of Influential EEG Experiments [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/528nr
-
-
PCI Registered Reports. (n.d.). PCI. Retrieved 9 July 2021, from
https://rr.peercommunityin.org/about/about
-
-
Peer Community In – A free recommendation process of scientific preprints based on peer-reviews. (n.d.). Retrieved 9 July 2021, from
https://peercommunityin.org/
-
-
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research.
Journal of Experimental Social Psychology,
70, 153–163.
https://doi.org/10.1016/j.jesp.2017.01.006
-
-
Peng, R. D. (2011). Reproducible Research in Computational Science.
Science,
334(6060), 1226–1227.
https://doi.org/10.1126/science.1213847
-
-
Percie du Sert, N., Hurst, V., Ahluwalia, A., Alam, S., Avey, M. T., Baker, M., Browne, W. J., Clark, A., Cuthill, I. C., Dirnagl, U., Emerson, M., Garner, P., Holgate, S. T., Howells, D. W., Karp, N. A., Lazic, S. E., Lidster, K., MacCallum, C. J., Macleod, M., … Würbel, H. (2020). The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research.
PLOS Biology,
18(7), e3000410.
https://doi.org/10.1371/journal.pbio.3000410
-
-
Pernet, C. (2016). Null hypothesis significance testing: A short tutorial.
F1000Research,
4, 621.
https://doi.org/10.12688/f1000research.6963.3
-
-
Pernet, C., Garrido, M. I., Gramfort, A., Maurits, N., Michel, C. M., Pang, E., Salmelin, R., Schoffelen, J. M., Valdes-Sosa, P. A., & Puce, A. (2020). Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research.
Nature Neuroscience,
23(12), 1473–1483.
https://doi.org/10.1038/s41593-020-00709-0
-
-
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., & Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography.
Scientific Data,
6(1), 103.
https://doi.org/10.1038/s41597-019-0104-8
-
-
Peterson, D., & Panofsky, A. (2020).
Metascience as a scientific social movement [Preprint]. SocArXiv.
https://doi.org/10.31235/osf.io/4dsqa
-
-
Petre, M., & Wilson, G. (2014). Code Review For and By Scientists.
ArXiv:1407.5648 [Cs].
http://arxiv.org/abs/1407.5648
-
-
‘Plan S’ and ‘cOAlition S’ – Accelerating the transition to full and immediate Open Access to scientific publications. (n.d.). Retrieved 9 July 2021, from
https://www.coalition-s.org/
-
-
Poldrack, R. A., Barch, D. M., Mitchell, J. P., Wager, T. D., Wagner, A. D., Devlin, J. T., Cumba, C., Koyejo, O., & Milham, M. P. (2013). Toward open sharing of task-based fMRI data: The OpenfMRI project.
Frontiers in Neuroinformatics,
7.
https://doi.org/10.3389/fninf.2013.00012
-
-
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging.
Nature Neuroscience,
17(11), 1510–1517.
https://doi.org/10.1038/nn.3818
-
-
Pollet, I. L., & Bond, A. L. (2021). Evaluation and recommendations for greater accessibility of colour figures in ornithology.
Ibis,
163(1), 292–295.
https://doi.org/10.1111/ibi.12887
-
-
Popper, K. (2010). The logic of scientific discovery (Special Indian Edition). Routledge.
-
-
Posselt, J. R. (2020). Equity in science: Representation, culture, and the dynamics of change in graduate education. Stanford University Press.
-
-
Pownall, M., Talbot, C. V., Henschel, A., Lautarescu, A., Lloyd, K., Hartmann, H., Darda, K. M., Tang, K. T. Y., Carmichael-Murphy, P., & Siegel, J. A. (2020).
Navigating Open Science as Early Career Feminist Researchers [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/f9m47
-
-
Press, W. (2007). Numerical recipes: The art of scientific computing, (3rd ed.). Cambridge University Press.
-
-
Psychological Science Accelerator. (n.d.). Psychological Science Accelerator. Retrieved 9 July 2021, from
https://psysciacc.org/
-
-
Publication bias. (2019, May 2). Catalog of Bias.
https://catalogofbias.org/biases/publication-bias/
-
-
PubPeer—Search publications and join the conversation. (n.d.). Pubpeer. Retrieved 9 July 2021, from
https://www.pubpeer.com/
-
-
R: The R Project for Statistical Computing. (n.d.). R Project. Retrieved 10 July 2021, from
https://www.r-project.org/
-
-
Rabagliati, H., Moors, P., & Heyman, T. (2020). Can Item Effects Explain Away the Evidence for Unconscious Sound Symbolism? An Adversarial Commentary on Heyman, Maerten, Vankrunkelsven, Voorspoels, and Moors (2019).
Psychological Science,
31(9), 1200–1204.
https://doi.org/10.1177/0956797620949461
-
-
Rakow, T., Thompson, V., Ball, L., & Markovits, H. (2015). Rationale and guidelines for empirical adversarial collaboration: A Thinking & Reasoning initiative.
Thinking & Reasoning,
21(2), 167–175.
https://doi.org/10.1080/13546783.2015.975405
-
-
Recommended Data Repositories | Scientific Data. (n.d.). Retrieved 10 July 2021, from
https://www.nature.com/sdata/policies/repositories
-
-
Replication Markets – Reliable research replicates…you can bet on it. (n.d.). Retrieved 10 July 2021, from
https://www.replicationmarkets.com/
-
-
ReproducibiliTea. (n.d.). ReproducibiliTea. Retrieved 10 July 2021, from
https://reproducibilitea.org/
-
-
Retraction Watch. (n.d.). Retraction Watch. Retrieved 9 July 2021, from
https://retractionwatch.com/
-
-
RIOT Science Club. (n.d.). Reproducible, Interpretable, Open, & Transparent Science. Retrieved 10 July 2021, from
http://riotscience.co.uk/
-
-
Rogers, A., Castree, N., & Kitchin, R. (2013). Reflexivity. In
A Dictionary of Human Geography. Oxford University Press.
https://www.oxfordreference.com/view/10.1093/acref/9780199599868.001.0001/acref-9780199599868-e-1530
-
-
Rolls, L., & Relf, M. (2006). Bracketing interviews: Addressing methodological challenges in qualitative interviewing in bereavement and palliative care.
Mortality,
11(3), 286–305.
https://doi.org/10.1080/13576270600774893
-
-
Rose, D. (2000). Universal Design for Learning.
Journal of Special Education Technology,
15(3), 45–49.
https://doi.org/10.1177/016264340001500307
-
-
Rose, D. (2018). Participatory research: Real or imagined.
Social Psychiatry and Psychiatric Epidemiology,
53(8), 765–771.
https://doi.org/10.1007/s00127-018-1549-3
-
-
Rose, D. H., & Meyer, A. (2002). Teaching every student in the Digital Age: Universal design for learning. Association for Supervision and Curriculum Development.
-
-
Ross-Hellauer, T. (2017). What is open peer review? A systematic review.
F1000Research,
6, 588.
https://doi.org/10.12688/f1000research.11369.2
-
-
Rossner, M., Van Epps, H., & Hill, E. (2007). Show me the data.
Journal of Cell Biology,
179(6), 1091–1092.
https://doi.org/10.1083/jcb.200711140
-
-
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2006). Publication Bias in Meta-Analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.),
Publication Bias in Meta-Analysis (pp. 1–7). John Wiley & Sons, Ltd.
https://doi.org/10.1002/0470870168.ch1
-
-
Rowhani-Farid, A., Aldcroft, A., & Barnett, A. G. (2020). Did awarding badges increase data sharing in BMJ Open? A randomized controlled trial.
Royal Society Open Science,
7(3), 191818.
https://doi.org/10.1098/rsos.191818
-
-
Rubin, M. (2021). Explaining the association between subjective social status and mental health among university students using an impact ratings approach.
SN Social Sciences,
1(1), 20.
https://doi.org/10.1007/s43545-020-00031-3
-
-
Rubin, M., Evans, O., & McGuffog, R. (2019). Social Class Differences in Social Integration at University: Implications for Academic Outcomes and Mental Health. In J. Jetten & K. Peters (Eds.),
The Social Psychology of Inequality (pp. 87–102). Springer International Publishing.
https://doi.org/10.1007/978-3-030-28856-3_6
-
-
Sagarin, B. J., Ambler, J. K., & Lee, E. M. (2014). An Ethical Approach to Peeking at Data.
Perspectives on Psychological Science,
9(3), 293–304.
https://doi.org/10.1177/1745691614528214
-
-
Salem, D. N., & Boumil, M. M. (2013). Conflict of Interest in Open-Access Publishing.
New England Journal of Medicine,
369(5), 491.
https://doi.org/10.1056/NEJMc1307577
-
-
Sato, T. (1996). Type I and Type II Error in Multiple Comparisons.
The Journal of Psychology,
130(3), 293–302.
https://doi.org/10.1080/00223980.1996.9915010
-
-
Schafersman, S. (1997, January). An Introduction to Science: Scientific Thinking and Scientific Method.
An Introduction to Science.
https://www.geo.sunysb.edu/esp/files/scientific-method.html
-
-
Schmidt, R. H. (1987). A Worksheet for Authorship of Scientific Articles.
Bulletin of the Ecological Society of America,
68(1), 8–10.
https://www.jstor.org/stable/20166549
-
-
Schneider, J., Merk, S., & Rosman, T. (2020).
(Re)Building Trust? Investigating the effects of open science badges on perceived trustworthiness in journal articles.
https://doi.org/10.17605/OSF.IO/VGBRS
-
-
Schönbrodt, F. (2019). Training students for the Open Science future.
Nature Human Behaviour,
3(10), 1031–1031.
https://doi.org/10.1038/s41562-019-0726-z
-
-
Schönbrodt, F. D., Wagenmakers, E.-J., Zehetleitner, M., & Perugini, M. (2017). Sequential hypothesis testing with Bayes factors: Efficiently testing mean differences.
Psychological Methods,
22(2), 322–339.
https://doi.org/10.1037/met0000061
-
-
Schulz, K. F., & Grimes, D. A. (2005). Multiplicity in randomised trials I: Endpoints and treatments.
The Lancet,
365(9470), 1591–1595.
https://doi.org/10.1016/S0140-6736(05)66461-6
-
-
Schwarz, N., & Strack, F. (2014). Does merely going through the same moves make for a “direct” replication? Concepts, contexts, and operationalizations.
Social Psychology,
45(4), 305–306.
-
-
Center for Open Science. (n.d.).
Open Science Badges.
https://www.cos.io/initiatives/badges
-
-
Scopatz, A., & Huff, K. D. (2015). Effective computation in physics (First Edition). O’Reilly Media.
-
-
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2001). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
-
-
Sharma, M., Sarin, A., Gupta, P., Sachdeva, S., & Desai, A. (2014). Journal Impact Factor: Its Use, Significance and Limitations.
World Journal of Nuclear Medicine,
13(2), 146.
https://doi.org/10.4103/1450-1147.139151
-
-
Shepard, B. (2015). Community practice as social activism: From direct action to direct services. SAGE Publications, Inc.
-
-
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses.
Annual Review of Psychology,
70(1), 747–770.
https://doi.org/10.1146/annurev-psych-010418-102803
-
-
Sijtsma, K. (2016). Playing with Data—Or How to Discourage Questionable Research Practices and Stimulate Researchers to Do Things Right.
Psychometrika,
81(1), 1–15.
https://doi.org/10.1007/s11336-015-9446-0
-
-
Silberzahn, R., Simonsohn, U., & Uhlmann, E. L. (2014). Matched-Names Analysis Reveals No Evidence of Name-Meaning Effects: A Collaborative Commentary on Silberzahn and Uhlmann (2013).
Psychological Science,
25(7), 1504–1505.
https://doi.org/10.1177/0956797614533802
-
-
Silberzahn, R., Uhlmann, E. L., Martin, D. P., Anselmi, P., Aust, F., Awtrey, E., Bahník, Š., Bai, F., Bannard, C., Bonnier, E., Carlsson, R., Cheung, F., Christensen, G., Clay, R., Craig, M. A., Dalla Rosa, A., Dam, L., Evans, M. H., Flores Cervantes, I., … Nosek, B. A. (2018). Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results.
Advances in Methods and Practices in Psychological Science,
1(3), 337–356.
https://doi.org/10.1177/2515245917747646
-
-
Simmons, J., Nelson, L., & Simonsohn, U. (2021). Pre‐registration: Why and How.
Journal of Consumer Psychology,
31(1), 151–162.
https://doi.org/10.1002/jcpy.1208
-
-
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.
Psychological Science,
22(11), 1359–1366.
https://doi.org/10.1177/0956797611417632
-
-
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on Generality (COG): A Proposed Addition to All Empirical Papers.
Perspectives on Psychological Science,
12(6), 1123–1128.
https://doi.org/10.1177/1745691617708630
-
-
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014a). P-curve: A key to the file-drawer.
Journal of Experimental Psychology: General,
143(2), 534–547.
https://doi.org/10.1037/a0033242
-
-
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014b). p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results.
Perspectives on Psychological Science,
9(6), 666–681.
https://doi.org/10.1177/1745691614553988
-
-
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2019). P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016).
PLOS ONE,
14(3), e0213454.
https://doi.org/10.1371/journal.pone.0213454
-
-
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2015). Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications.
SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.2694998
-
-
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2020). Specification curve analysis.
Nature Human Behaviour,
4(11), 1208–1214.
https://doi.org/10.1038/s41562-020-0912-z
-
-
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science.
Royal Society Open Science,
3(9), 160384.
https://doi.org/10.1098/rsos.160384
-
-
Smith, A. C., Merz, L., Borden, J. B., Gulick, C., Kshirsagar, A. R., & Bruna, E. M. (2020).
Assessing the effect of article processing charges on the geographic diversity of authors using Elsevier’s ‘Mirror Journal’ system [Preprint]. MetaArXiv.
https://doi.org/10.31222/osf.io/s7cx4
-
-
Smith, A. J., Clutton, R. E., Lilley, E., Hansen, K. E. A., & Brattelid, T. (2018). PREPARE: Guidelines for planning animal research and testing.
Laboratory Animals,
52(2), 135–141.
https://doi.org/10.1177/0023677217724823
-
-
Smith, G. T. (2005). On Construct Validity: Issues of Method and Measurement.
Psychological Assessment,
17(4), 396–408.
https://doi.org/10.1037/1040-3590.17.4.396
-
-
Sorsa, M. A., Kiikkala, I., & Åstedt-Kurki, P. (2015). Bracketing as a skill in conducting unstructured qualitative interviews.
Nurse Researcher,
22(4), 8–12.
https://doi.org/10.7748/nr.22.4.8.e1317
-
-
SORTEE. (n.d.).
SORTEE. Retrieved 10 July 2021, from
https://www.sortee.org/
-
-
Spence, J. R., & Stanley, D. J. (2018). Concise, Simple, and Not Wrong: In Search of a Short-Hand Interpretation of Statistical Significance.
Frontiers in Psychology,
9, 2185.
https://doi.org/10.3389/fpsyg.2018.02185
-
-
Spencer, E. A., & Heneghan, C. (2018, April 2).
Confirmation bias. Catalog of Bias.
https://catalogofbias.org/biases/confirmation-bias/
-
-
Steckler, A., & McLeroy, K. R. (2008). The Importance of External Validity.
American Journal of Public Health,
98(1), 9–10.
https://doi.org/10.2105/AJPH.2007.126847
-
-
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing Transparency Through a Multiverse Analysis.
Perspectives on Psychological Science,
11(5), 702–712.
https://doi.org/10.1177/1745691616658637
-
-
Steup, M., & Neta, R. (2020). Epistemology. In E. N. Zalta (Ed.),
The Stanford Encyclopedia of Philosophy (Fall 2020). Metaphysics Research Lab, Stanford University.
https://plato.stanford.edu/archives/fall2020/entries/epistemology/
-
-
Stewart, N., Chandler, J., & Paolacci, G. (2017). Crowdsourcing Samples in Cognitive Science.
Trends in Cognitive Sciences,
21(10), 736–748.
https://doi.org/10.1016/j.tics.2017.06.007
-
-
Stodden, V. C. (2011).
Trust Your Science? Open Your Data and Code.
https://doi.org/10.7916/D8CJ8Q0P
-
-
Strathern, M. (1997). ‘Improving ratings’: Audit in the British University system.
European Review,
5(3), 305–321.
https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4
-
-
SwissRN. (n.d.). Retrieved 10 July 2021, from
http://www.swissrn.org/
-
-
Syed, M. (2019).
The Open Science Movement is For All of Us [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/cteyb
-
-
Syed, M., & Kathawalla, U.-K. (2020).
Cultural Psychology, Diversity, and Representation in Open Science [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/t7hp2
-
-
Szollosi, A., & Donkin, C. (2021). Arrested Theory Development: The Misguided Distinction Between Exploratory and Confirmatory Research.
Perspectives on Psychological Science, 174569162096679.
https://doi.org/10.1177/1745691620966796
-
-
psyTeachR Team. (n.d.).
P | Glossary. Retrieved 9 July 2021, from
https://psyteachr.github.io/glossary
-
-
Tennant, J., Beamer, J. E., Bosman, J., Brembs, B., Chung, N. C., Clement, G., Crick, T., Dugan, J., Dunning, A., Eccles, D., Enkhbayar, A., Graziotin, D., Harding, R., Havemann, J., Katz, D. S., Khanal, K., Kjaer, J. N., Koder, T., Macklin, P., … Turner, A. (2019).
Foundations for Open Scholarship Strategy Development [Preprint]. MetaArXiv.
https://doi.org/10.31222/osf.io/b4v8p
-
-
Tennant, J., Bielczyk, N. Z., Greshake Tzovaras, B., Masuzzo, P., & Steiner, T. (2019).
Introducing Massively Open Online Papers (MOOPs) [Preprint]. MetaArXiv.
https://doi.org/10.31222/osf.io/et8ak
-
-
Tenny, S., & Abdelgawad, I. (2021). Statistical Significance. In
StatPearls [Internet]. StatPearls Publishing.
https://www.ncbi.nlm.nih.gov/books/NBK459346/
-
-
The Committee on Publication Ethics. (n.d.).
Transparency & best practice – DOAJ. DOAJ.
https://doaj.org/apply/transparency/
-
-
the CONSORT Group, Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials.
Trials,
11(1), 32.
https://doi.org/10.1186/1745-6215-11-32
-
-
The European Code of Conduct for Research Integrity | ALLEA. (n.d.). Retrieved 10 July 2021, from
https://allea.org/code-of-conduct/
-
-
The Open Definition—Open Definition—Defining Open in Open Data, Open Content and Open Knowledge. (n.d.). Open Knowledge Foundation. Retrieved 9 July 2021, from
https://opendefinition.org/
-
-
The Open Source Definition | Open Source Initiative. (n.d.). Open Source Initiative. Retrieved 9 July 2021, from
https://opensource.org/osd
-
-
The Slow Science Academy. (2010). The Slow Science Manifesto.
SLOW-SCIENCE.Org — Bear with Us, While We Think. http://slow-science.org/
-
-
Thombs, B. D., Levis, A. W., Razykov, I., Syamchandra, A., Leentjens, A. F. G., Levenson, J. L., & Lumley, M. A. (2015). Potentially coercive self-citation by peer reviewers: A cross-sectional study.
Journal of Psychosomatic Research,
78(1), 1–6.
https://doi.org/10.1016/j.jpsychores.2014.09.015
-
-
Tierney, W., Hardy, J., Ebersole, C. R., Viganola, D., Clemente, E. G., Gordon, M., Hoogeveen, S., Haaf, J., Dreber, A., Johannesson, M., Pfeiffer, T., Huang, J. L., Vaughn, L. A., DeMarree, K., Igou, E. R., Chapman, H., Gantman, A., Vanaman, M., Wylie, J., … Uhlmann, E. L. (2021). A creative destruction approach to replication: Implicit work and sex morality across cultures.
Journal of Experimental Social Psychology,
93, 104060.
https://doi.org/10.1016/j.jesp.2020.104060
-
-
Tierney, W., Hardy, J. H., Ebersole, C. R., Leavitt, K., Viganola, D., Clemente, E. G., Gordon, M., Dreber, A., Johannesson, M., Pfeiffer, T., & Uhlmann, E. L. (2020). Creative destruction in science.
Organizational Behavior and Human Decision Processes,
161, 291–309.
https://doi.org/10.1016/j.obhdp.2020.07.002
-
-
Tiokhin, L., Yan, M., & Morgan, T. J. H. (2021). Competition for priority harms the reliability of science, but reforms can help.
Nature Human Behaviour.
https://doi.org/10.1038/s41562-020-01040-1
-
-
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F. C., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J., Van den Akker, O., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A. J., … Westwood, S. J. (2020).
An integrative framework for planning and conducting Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR) [Preprint]. MetaArXiv.
https://doi.org/10.31222/osf.io/8gu5z
-
-
Transparency: The Emerging Third Dimension of Open Science and Open Data. (2016).
LIBER Quarterly,
25(4), 153–171.
https://doi.org/10.18352/lq.10113
-
-
Tscharntke, T., Hochberg, M. E., Rand, T. A., Resh, V. H., & Krauss, J. (2007). Author Sequence and Credit for Contributions in Multiauthored Publications.
PLoS Biology,
5(1), e18.
https://doi.org/10.1371/journal.pbio.0050018
-
-
Tufte, E. R. (2001). The visual display of quantitative information (2nd ed). Graphics Press.
-
-
Tukey, J. W. (1977). Exploratory data analysis. Addison-Wesley Pub. Co.
-
-
Tvina, A., Spellecy, R., & Palatnik, A. (2019). Bias in the Peer Review Process: Can We Do Better?
Obstetrics & Gynecology,
133(6), 1081–1083.
https://doi.org/10.1097/AOG.0000000000003260
-
-
Uhlmann, E. L., Ebersole, C. R., Chartier, C. R., Errington, T. M., Kidwell, M. C., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific Utopia III: Crowdsourcing Science.
Perspectives on Psychological Science,
14(5), 711–733.
https://doi.org/10.1177/1745691619850561
-
-
UK Reproducibility Network. (n.d.). UK Reproducibility Network. Retrieved 10 July 2021, from
https://www.ukrn.org/
-
-
Burnette, M., Williams, S., & Imker, H. (2016). From Plan to Action: Successful Data Management Plan Implementation in a Multidisciplinary Project.
Journal of EScience Librarianship,
5(1), e1101.
https://doi.org/10.7191/jeslib.2016.1101
-
-
van de Schoot, R., Depaoli, S., King, R., Kramer, B., Märtens, K., Tadesse, M. G., Vannucci, M., Gelman, A., Veen, D., Willemsen, J., & Yau, C. (2021). Bayesian statistics and modelling.
Nature Reviews Methods Primers,
1(1), 1.
https://doi.org/10.1038/s43586-020-00001-2
-
-
Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress.
Perspectives on Psychological Science,
13(4), 411–417.
https://doi.org/10.1177/1745691617751884
-
-
Vazire, S., Schiavone, S. R., & Bottesini, J. G. (2020).
Credibility Beyond Replicability: Improving the Four Validities in Psychological Science [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/bu4d3
-
-
Villum, C. (2014, March 10). “Open-washing” – The difference between opening your data and simply making them available – Open Knowledge Foundation blog.
Open Knowledge Foundation.
https://blog.okfn.org/2014/03/10/open-washing-the-difference-between-opening-your-data-and-simply-making-them-available/
-
-
Vlaeminck, S., & Podkrajac, F. (2017). Journals in Economic Sciences: Paying Lip Service to Reproducible Research?
IASSIST Quarterly,
41(1–4), 16.
https://doi.org/10.29173/iq6
-
-
Voracek, M., Kossmeier, M., & Tran, U. S. (2019). Which Data to Meta-Analyze, and How?: A Specification-Curve and Multiverse-Analysis Approach to Meta-Analysis.
Zeitschrift Für Psychologie,
227(1), 64–82.
https://doi.org/10.1027/2151-2604/a000357
-
-
Vuorre, M., & Curley, J. P. (2018). Curating Research Assets: A Tutorial on the Git Version Control System.
Advances in Methods and Practices in Psychological Science,
1(2), 219–236.
https://doi.org/10.1177/2515245918754826
-
-
Wacker, J. G. (1998). A definition of theory: Research guidelines for different theory-building research methods in operations management.
Journal of Operations Management,
16(4), 361–385.
https://doi.org/10.1016/S0272-6963(98)00019-9
-
-
Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., Selker, R., Gronau, Q. F., Šmíra, M., Epskamp, S., Matzke, D., Rouder, J. N., & Morey, R. D. (2018). Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications.
Psychonomic Bulletin & Review,
25(1), 35–57.
https://doi.org/10.3758/s13423-017-1343-3
-
-
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research.
Perspectives on Psychological Science,
7(6), 632–638.
https://doi.org/10.1177/1745691612463078
-
-
Wagge, J. R., Baciu, C., Banas, K., Nadler, J. T., Schwarz, S., Weisberg, Y., IJzerman, H., Legate, N., & Grahe, J. (2019). A Demonstration of the Collaborative Replication and Education Project: Replication Attempts of the Red-Romance Effect.
Collabra: Psychology,
5(1), 5.
https://doi.org/10.1525/collabra.177
-
-
Walker, P., & Miksa, T. (2019, November 26).
RDA-DMP-Common/RDA-DMP-Common-Standard. GitHub.
https://github.com/RDA-DMP-Common/RDA-DMP-Common-Standard
-
-
Wason, P. C. (1960). On the Failure to Eliminate Hypotheses in a Conceptual Task.
Quarterly Journal of Experimental Psychology,
12(3), 129–140.
https://doi.org/10.1080/17470216008416717
-
-
Wasserstein, R. L., & Lazar, N. A. (2016). The ASA Statement on
p-Values: Context, Process, and Purpose.
The American Statistician,
70(2), 129–133.
https://doi.org/10.1080/00031305.2016.1154108
-
-
Webster, M. M., & Rutz, C. (2020). How STRANGE are your study animals?
Nature,
582(7812), 337–340.
https://doi.org/10.1038/d41586-020-01751-5
-
-
Welcome to Sherpa Romeo. (n.d.). Sherpa Romeo. Retrieved 10 July 2021, from
https://v2.sherpa.ac.uk/romeo/
-
-
Wendl, M. C. (2007). H-index: However ranked, citations need context.
Nature,
449(7161), 403.
https://doi.org/10.1038/449403b
-
-
What is a reporting guideline? | The EQUATOR Network. (n.d.). Retrieved 10 July 2021, from
https://www.equator-network.org/about-us/what-is-a-reporting-guideline/
-
-
What is Crowdsourcing? (2021, April 29).
Crowdsourcing Week.
https://crowdsourcingweek.com/what-is-crowdsourcing/
-
-
What is data sharing? | Support Centre for Data Sharing. (n.d.). Support Centre for Data Sharing. Retrieved 11 July 2021, from
https://eudatasharing.eu/what-data-sharing
-
-
What is impact? - Economic and Social Research Council. (n.d.). Economic and Social Research Council. Retrieved 8 July 2021, from
https://esrc.ukri.org/research/impact-toolkit/what-is-impact/
-
-
What is Open Data? (n.d.). Open Data Handbook. Retrieved 9 July 2021, from
https://opendatahandbook.org/guide/en/what-is-open-data/
-
-
What is open education? (n.d.). Opensource.Com. Retrieved 9 July 2021, from
https://opensource.com/resources/what-open-education
-
-
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34–37.
-
-
Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking.
Frontiers in Psychology,
7.
https://doi.org/10.3389/fpsyg.2016.01832
-
-
Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship.
Scientific Data,
3(1), 160018.
https://doi.org/10.1038/sdata.2016.18
-
-
Wilson, B., & Fenner, M. (2012, May 9).
Open Researcher & Contributor ID (ORCID): Solving the Name Ambiguity Problem.
https://er.educause.edu/articles/2012/5/open-researcher--contributor-id-orcid-solving-the-name-ambiguity-problem
-
-
Wilson, R. C., & Collins, A. G. (2019). Ten simple rules for the computational modeling of behavioral data.
ELife,
8, e49547.
https://doi.org/10.7554/eLife.49547
-
-
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No Replication, No Trust? How Low Replicability Influences Trust in Psychology.
Social Psychological and Personality Science,
11(4), 454–463.
https://doi.org/10.1177/1948550619877412
-
-
Woelfle, M., Olliaro, P., & Todd, M. H. (2011). Open science is a research accelerator.
Nature Chemistry,
3(10), 745–748.
https://doi.org/10.1038/nchem.1149
-
-
Working Group 1 of the Joint Committee for Guides in Metrology JCGM. (2008).
Evaluation of measurement data—Guide to the expression of uncertainty in measurement (pp. 1–120). JCGM.
https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6
-
-
World Wide Web Consortium. (n.d.).
Home | Web Accessibility Initiative (WAI) | W3C. Web Accessibility Initiative. Retrieved 9 July 2021, from
https://www.w3.org/WAI/
-
-
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The Increasing Dominance of Teams in Production of Knowledge.
Science,
316(5827), 1036–1039.
https://doi.org/10.1126/science.1136099
-
-
Xia, J., Harmon, J. L., Connolly, K. G., Donnelly, R. M., Anderson, M. R., & Howard, H. A. (2015). Who publishes in “predatory” journals?
Journal of the Association for Information Science and Technology,
66(7), 1406–1417.
https://doi.org/10.1002/asi.23265
-
-
Yamada, Y. (2018). How to Crack Pre-registration: Toward Transparent and Open Science.
Frontiers in Psychology,
9, 1831.
https://doi.org/10.3389/fpsyg.2018.01831
-
-
Yarkoni, T. (2020). The generalizability crisis.
Behavioral and Brain Sciences, 1–37.
https://doi.org/10.1017/S0140525X20001685
-
-
Yeung, S. K., Feldman, G., Fillon, A., Protzko, J., Elsherif, M. M., Xiao, Q., & Pickering, J. (n.d.).
Experimental Studies Meta-Analysis Registered Report template: Main manuscript [Preprint]. Hong Kong University.
https://docs.google.com/document/d/1z3QBDYr86S9FxGjptZP94jJnZeeN4aQaBQP3VVT89Ec/edit#
-
-
Zenodo—Research. Shared. (n.d.). Zenodo. Retrieved 9 July 2021, from
https://www.zenodo.org/
-
-
Zurn, P., Bassett, D. S., & Rust, N. C. (2020). The Citation Diversity Statement: A Practice of Transparency, A Way of Life.
Trends in Cognitive Sciences,
24(9), 669–672.
https://doi.org/10.1016/j.tics.2020.06.009
-
-
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream.
Behavioral and Brain Sciences,
41, e120.
https://doi.org/10.1017/S0140525X17001972
-
-
diff --git a/content/glossary/vbeta/reflexivity.md b/content/glossary/vbeta/reflexivity.md
deleted file mode 100644
index f42af9dee6..0000000000
--- a/content/glossary/vbeta/reflexivity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reflexivity",
- "definition": "The process of reflexivity refers to critically considering the knowledge that we produce through research, how it is produced, and our own role as researchers in producing this knowledge. There are different forms of reflexivity: personal reflexivity, whereby researchers consider the impact of their own personal experiences, and functional reflexivity, whereby researchers consider the way in which research tools and methods may have impacted knowledge production. Reflexivity aims to bring attention to underlying factors that may impact the research process, including the development of research questions, data collection, and analysis.",
- "related_terms": ["Bracketing Interviews", "Qualitative Research"],
- "references": ["Braun and Clarke (2013)", "Finlay and Gough (2008)"],
- "alt_related_terms": [null],
- "drafted_by": ["Claire Melia"],
- "reviewed_by": ["Gilad Feldman", "Annalise A. LaPlume"]
- }
diff --git a/content/glossary/vbeta/registered-report.md b/content/glossary/vbeta/registered-report.md
deleted file mode 100644
index 3df4923501..0000000000
--- a/content/glossary/vbeta/registered-report.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Registered Report",
- "definition": "A scientific publishing format that includes an initial round of peer review of the background and methods (study design, measurement, and analysis plan); sufficiently high-quality manuscripts receive in-principle acceptance (IPA) at this stage. Typically, this Stage 1 review occurs before data collection; however, secondary data analyses are also possible in this publishing format. Following data analysis and the write-up of the results and discussion sections, the Stage 2 review assesses whether the authors sufficiently followed their study plan and reported any deviations from it (and remains indifferent to the results). This shifts the focus of the review to the study’s proposed research question and methodology and away from the perceived interest in the study’s results.",
- "related_terms": ["Preregistration", "Publication bias (File Drawer Problem)", "Results-free review", "PCI (Peer Community In)", "Research Protocol"],
- "references": ["Chambers (2013)", "Chambers et al. (2015)", "Chambers and Tzavella (2020)", "Findley et al. (2016)", "https://www.cos.io/initiatives/registered-reports"],
- "alt_related_terms": [null],
- "drafted_by": ["Madeleine Pownall"],
- "reviewed_by": ["Gilad Feldman", "Emma Henderson", "Aoife O’Mahony", "Sam Parsons", "Mariella Paul", "Charlotte R. Pennington", "Eike Mark Rinke", "Timo Roettger", "Olmo van den Akker", "Yuki Yamada", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/registry-of-research-data-repositor.md b/content/glossary/vbeta/registry-of-research-data-repositor.md
deleted file mode 100644
index f62f3cb101..0000000000
--- a/content/glossary/vbeta/registry-of-research-data-repositor.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Registry of Research Data Repositories",
- "definition": "A global registry of research data repositories from different academic disciplines. It includes repositories that enable permanent storage of, description via metadata and access to, data sets by researchers, funding bodies, publishers, and scholarly institutions.",
- "related_terms": ["Metadata", "Open Access", "Open Data", "Open Material", "Repository"],
- "references": ["https://www.re3data.org/ - Registry of Research Data Repositories."],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Mahmoud Elsherif", "Sam Parsons", "Charlotte R. Pennington", "Helena Hartmann"]
- }
diff --git a/content/glossary/vbeta/reliability.md b/content/glossary/vbeta/reliability.md
deleted file mode 100644
index f54befff82..0000000000
--- a/content/glossary/vbeta/reliability.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reliability",
- "definition": "The extent to which repeated measurements lead to the same results. In psychometrics, reliability refers to the extent to which respondents obtain similar scores when they complete a questionnaire on multiple occasions. Notably, reliability does not imply validity. Furthermore, several types of reliability exist besides internal consistency, including test-retest reliability, parallel-forms reliability, and interrater reliability.",
- "related_terms": ["Consistency", "Internal consistency", "Quality Criteria", "Replicability", "Reproducibility", "Validity"],
- "references": ["Bollen (1989)", "Drost (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Mahmoud Elsherif", "Eduardo Garcia-Garzon", "Kai Krautter", "Olmo van den Akker"]
- }
diff --git a/content/glossary/vbeta/repeatability.md b/content/glossary/vbeta/repeatability.md
deleted file mode 100644
index ed692c6ad4..0000000000
--- a/content/glossary/vbeta/repeatability.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Repeatability",
- "definition": "Synonymous with test-retest reliability. It refers to the agreement between the results of successive measurements of the same measure. Repeatability requires the same experimental tools, the same observer, the same measuring instrument administered under the same conditions, the same location, repetition over a short period of time, and the same objectives (Joint Committee for Guides in Metrology, 2008).",
- "related_terms": ["Reliability"],
- "references": ["ISO (1993)", "Stodden (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif", "Adam Parker"],
- "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Joanne McCuaig", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/replicability.md b/content/glossary/vbeta/replicability.md
deleted file mode 100644
index 842098062f..0000000000
--- a/content/glossary/vbeta/replicability.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Replicability",
- "definition": "An umbrella term, used differently across fields, covering concepts of: direct and conceptual replication, computational reproducibility/replicability, generalizability analysis and robustness analyses. Some of the definitions used previously include: a different team arriving at the same results using the original author's artifacts (Barba, 2018); a study arriving at the same conclusion after collecting new data (Claerbout and Karrenbach, 1992); as well as studies for which any outcome would be considered diagnostic evidence about a claim from prior research (Nosek & Errington, 2020).",
- "related_terms": ["Conceptual replication", "Direct Replication", "Generalizability", "Reproducibility", "Reliability", "Robustness (analyses)"],
- "references": ["Barba (2018)", "Crüwell et al. (2019)", "King (1996)", "National Academies of Sciences et al. (2011)", "Nosek and Errington (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Jamie P. Cockcroft", "Adrien Fillon", "Gilad Feldman", "Annalise A. LaPlume", "Tina B. Lonsdorf", "Sam Parsons", "Eike Mark Rinke", "Tobias Wingen"]
- }
diff --git a/content/glossary/vbeta/replication-markets.md b/content/glossary/vbeta/replication-markets.md
deleted file mode 100644
index 805ee730ac..0000000000
--- a/content/glossary/vbeta/replication-markets.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Replication Markets ",
- "definition": "A replication market is an environment where users bet on the replicability of certain effects. Forecasters are incentivized to make accurate predictions and the top successful forecasters receive monetary compensation or contributorship for their bets. The rationale behind a replication market is that it leverages the collective wisdom of the scientific community to predict which effect will most likely replicate, thus encouraging researchers to channel their limited resources to replicating these effects.",
- "related_terms": ["Citizen science", "Crowdsourcing", "Replicability", "Reproducibility"],
- "references": ["Liu et al. (2020)", "Tierney et al. (2020)", "Tierney et al. (2021)", "www.replicationmarkets.com"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Mahmoud Elsherif", "Leticia Micheli", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/replicats-project.md b/content/glossary/vbeta/replicats-project.md
deleted file mode 100644
index db1cbf0efd..0000000000
--- a/content/glossary/vbeta/replicats-project.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "RepliCATs project",
- "definition": "Collaborative Assessment for Trustworthy Science. The repliCATS project’s aim is to crowdsource predictions about the reliability and replicability of published research in eight social science fields: business research, criminology, economics, education, political science, psychology, public administration, and sociology.",
- "related_terms": ["Replicability", "Trustworthiness"],
- "references": ["Fraser et al. (2021)", "https://replicats.research.unimelb.edu.au/"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Gilad Feldman", "Helena Hartmann", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/reporting-guideline.md b/content/glossary/vbeta/reporting-guideline.md
deleted file mode 100644
index 47d2067739..0000000000
--- a/content/glossary/vbeta/reporting-guideline.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reporting Guideline",
- "definition": "A reporting guideline is a “checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” (EQUATOR Network, n.d.). Reporting guidelines provide the minimum guidance required to ensure that research findings can be appropriately interpreted, appraised, synthesized and replicated. Their use often differs per scientific journal or publisher.",
- "related_terms": ["CONSORT", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA", "STROBE"],
- "references": ["Moher et al. (2009)", "Schulz et al. (2010)", "Topor et al. (2021)", "Von Elm et al. (2007)", "https://www.equator-network.org/about-us/what-is-a-reporting-guideline/"],
- "alt_related_terms": [null],
- "drafted_by": ["Aidan Cashin"],
- "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Joanne McCuaig"]
- }
diff --git a/content/glossary/vbeta/repository.md b/content/glossary/vbeta/repository.md
deleted file mode 100644
index 7cee01be5a..0000000000
--- a/content/glossary/vbeta/repository.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Repository",
- "definition": "An online archive for the storage of digital objects including research outputs, manuscripts, analysis code and/or data. Examples include preprint servers such as bioRxiv, MetaArXiv, PsyArXiv, institutional research repositories, as well as data repositories that collect and store datasets including zenodo.org, PsychData, and code repositories such as Github, or more general repositories for all kinds of research data, such as the Open Science Framework (OSF). Digital objects stored in repositories are typically described through metadata which enables discovery across different storage locations.",
- "related_terms": ["Data sharing", "Github", "Metadata", "Open Access", "Open data", "Open Material", "Open Science Framework", "Open Source", "Preprint"],
- "references": ["https://www.nature.com/sdata/policies/repositories"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Gilad Feldman", "Connor Keating", "Mariella Paul", "Charlotte R. Pennington", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/reproducibilitea.md b/content/glossary/vbeta/reproducibilitea.md
deleted file mode 100644
index 113f8822bb..0000000000
--- a/content/glossary/vbeta/reproducibilitea.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "ReproducibiliTea",
- "definition": "A grassroots initiative that helps researchers create local journal clubs at their universities to discuss a range of topics relating to open research and scholarship. Each meeting usually centres around a specific paper that discusses, for example, reproducibility, research practice, research quality, social justice and inclusion, and ideas for improving science.",
- "related_terms": ["Grassroots initiative", "Journal club", "Open science", "Reproducibility"],
- "references": ["https://reproducibilitea.org/", "Orben (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Emma Norris"],
- "reviewed_by": ["Mahmoud Elsherif", "Gilad Feldman", "Connor Keating", "Charlotte R. Pennington", "Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md b/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md
deleted file mode 100644
index 910c49c6fb..0000000000
--- a/content/glossary/vbeta/reproducibility-crisis-aka-replicab.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reproducibility crisis (aka Replicability or replication crisis)",
- "definition": "The finding, and related shift in academic culture and thinking, that a large proportion of scientific studies published across disciplines do not replicate (e.g. Open Science Collaboration, 2015). This is considered to be due to a lack of quality and integrity of research and publication practices, such as publication bias, QRPs and a lack of transparency, leading to an inflated rate of false positive results. Others have described this process as a ‘Credibility revolution’ towards improving these practices.",
- "related_terms": ["Credibility crisis", "Publication bias (File Drawer Problem)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Replicability", "Reproducibility"],
- "references": ["Fanelli (2018)", "Open Science Collaboration (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Helena Hartmann", "Annalise A. LaPlume", "Mariella Paul", "Sonia Rishi", "Lisa Spitzer"]
- }
diff --git a/content/glossary/vbeta/reproducibility-network.md b/content/glossary/vbeta/reproducibility-network.md
deleted file mode 100644
index 51d1f0cbfc..0000000000
--- a/content/glossary/vbeta/reproducibility-network.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reproducibility Network",
- "definition": "A reproducibility network is a consortium of open research working groups, often peer-led. The groups operate on a wheel-and-spoke model across a particular country, in which the network connects local cross-disciplinary researchers, groups, and institutions with a central steering group, who also connect with external stakeholders in the research ecosystem. The goals of reproducibility networks include; advocating for greater awareness, promoting training activities, and disseminating best-practices at grassroots, institutional, and research ecosystem levels. Such networks exist in the UK, Germany, Switzerland, Slovakia, and Australia (as of March 2021).",
- "related_terms": [null],
- "references": ["https://www.ukrn.org/", "https://reproducibilitynetwork.de/", "https://www.swissrn.org/", "https://slovakrn.wixsite.com/skrn", "https://www.aus-rn.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Suzanne L. K. Stewart"],
- "reviewed_by": ["Annalise A. LaPlume", "Sam Parsons", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/reproducibility.md b/content/glossary/vbeta/reproducibility.md
deleted file mode 100644
index 0da1196305..0000000000
--- a/content/glossary/vbeta/reproducibility.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reproducibility",
- "definition": "A minimum standard on a spectrum of activities (\"reproducibility spectrum\") for assessing the value or accuracy of scientific claims based on the original methods, data, and code. For instance, when the original researcher's data and computer code are used to regenerate the results, this is often referred to as computational reproducibility (Barba, 2018). Reproducibility does not guarantee the quality, correctness, or validity of the published results (Peng, 2011). In some fields, this meaning is instead associated with the term “replicability” or “repeatability”.",
- "related_terms": ["Computational reproducibility", "Replicability", "Repeatability"],
- "references": ["Barba (2018)", "Cruwell et al. (2019)", "Peng (2011), Stodden (2011)", "Syed (2019)", "National Academies of Sciences, Engineering, and Medicine. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Helena Hartmann", "Annalise A. LaPlume", "Tina B. Lonsdorf", "Sam Parsons", "Charlotte R. Pennington", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/research-contribution-metric-p.md b/content/glossary/vbeta/research-contribution-metric-p.md
deleted file mode 100644
index 95afeebecf..0000000000
--- a/content/glossary/vbeta/research-contribution-metric-p.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research Contribution Metric (p) ",
- "definition": "Type of semantometric measure assessing similarity of publications connected in a citation network. This method uses a simple formula to assess authors’ contributions. Publication p can be estimated based on the semantic distance from the publications cited by p to publications citing p.",
- "related_terms": ["Semantometrics"],
- "references": ["Knoth and Herrmannova (2014)", "Holcombe (2019)", "Larivière et al. (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Michele C. Lim", "Jamie P. Cockcroft", "Micah Vandegrift", "Dominik Kiersz"]
- }
diff --git a/content/glossary/vbeta/research-cycle.md b/content/glossary/vbeta/research-cycle.md
deleted file mode 100644
index 6d9665f21b..0000000000
--- a/content/glossary/vbeta/research-cycle.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research Cycle",
- "definition": "Describes the circular process of conducting scientific research, with “researchers working at various stages of inquiry, from more tentative and exploratory investigations to the testing of more definitive and well-supported claims” (Lieberman, 2020, p. 42). The cycle includes literature research and hypothesis generation, data collection and analysis, as well as dissemination of results (e.g. through publication in peer-reviewed journals), which again informs theory and new hypotheses/research.",
- "related_terms": ["Research process"],
- "references": ["Bramoullé and Saint Paul (2010)", "Lieberman (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Helena Hartmann"],
- "reviewed_by": ["Jamie P. Cockcroft", "Aleksandra Lazić", "Graham Reid", "Beatrice Valentini"]
- }
diff --git a/content/glossary/vbeta/research-data-management.md b/content/glossary/vbeta/research-data-management.md
deleted file mode 100644
index a1e8e4d9c7..0000000000
--- a/content/glossary/vbeta/research-data-management.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research Data Management",
- "definition": "Research Data Management (RDM) is a broad concept that includes processes undertaken to create organized, documented, accessible, and reusable quality research data. Adequate research data management provides many benefits including, but not limited to, reduced likelihood of data loss, greater visibility and collaborations due to data sharing, demonstration of research integrity and accountability.",
- "related_terms": ["Data curation", "Data documentation", "Data management plan (DMP)", "Data sharing", "Metadata", "Research data management"],
- "references": ["CESSDA", "Corti et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Micah Vandegrift"],
- "reviewed_by": ["Helena Hartmann", "Tina B. Lonsdorf", "Catia M. Oliveira", "Julia Wolska"]
- }
diff --git a/content/glossary/vbeta/research-integrity.md b/content/glossary/vbeta/research-integrity.md
deleted file mode 100644
index 8a14ccd1ee..0000000000
--- a/content/glossary/vbeta/research-integrity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research integrity",
- "definition": "Research integrity is defined by a set of good research practices based on fundamental principles: honesty, reliability, respect and accountability (ALLEA, 2017). Good research practices —which are based on fundamental principles of research integrity and should guide researchers in their work as well as in their engagement with the practical, ethical and intellectual challenges inherent in research— refer to areas such as: research environment (e.g., research institutions and organisations promote awareness and ensure a prevailing culture of research integrity), training, supervision and mentoring (e.g., Research institutions and organisations develop appropriate and adequate training in ethics and research integrity to ensure that all concerned are made aware of the relevant codes and regulations), research procedures (e.g., researchers report their results in a way that is compatible with the standards of the discipline and, where applicable, can be verified and reproduced), safeguards (e.g., researchers have due regard for the health, safety and welfare of the community, of collaborators and others connected with their research), data practices and management (e.g., researchers, research institutions and organisations provide transparency about how to access or make use of their data and research materials), collaborative working, publication and dissemination (e.g., authors and publishers consider negative results to be as valid as positive findings for publication and dissemination), reviewing, evaluating and editing (e.g., researchers review and evaluate submissions for publication, funding, appointment, promotion or reward in a transparent and justifiable manner).",
- "related_terms": ["Credibility of scientific claims", "Error detection", "Ethics", "Open research", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Responsible Research Practices", "Rigour", "Transparency", "Trustworthy research"],
- "references": ["ALLEA (2017)", "Medin (2012)", "Moher et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ana Barbosa Mendes", "Flávio Azevedo"],
- "reviewed_by": ["Valeria Agostini", "Bradley Baker", "Gilad Feldman", "Tamara Kalandadze", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/research-protocol.md b/content/glossary/vbeta/research-protocol.md
deleted file mode 100644
index f889ccc42a..0000000000
--- a/content/glossary/vbeta/research-protocol.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research Protocol",
- "definition": "A detailed document prepared before conducting a study, often written as part of ethics and funding applications. The protocol should include information relating to the background, rationale and aims of the study, as well as hypotheses which reflect the researchers’ expectations. The protocol should also provide a “recipe” for conducting the study, including methodological details and clear analysis plans. Best practice guidelines for creating a study protocol should be used for specific methodologies and fields. It is possible to publicly share research protocols to attract new collaborators or facilitate efficient collaboration across labs (e.g. https://www.protocols.io/). In medical and educational fields, protocols are often a separate article type suitable for publication in journals. Where protocol sharing or publication is not common practice, researchers can choose preregistration.",
- "related_terms": ["Many Labs", "Preregistration"],
- "references": ["BMJ (2015)", "Nosek et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Marta Topor"],
- "reviewed_by": ["Helena Hartmann", "Bethan Iley", "Annalise A. LaPlume", "Charlotte Pennington"]
- }
diff --git a/content/glossary/vbeta/research-workflow.md b/content/glossary/vbeta/research-workflow.md
deleted file mode 100644
index a0d363885e..0000000000
--- a/content/glossary/vbeta/research-workflow.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Research workflow",
- "definition": "The process of conducting research from conceptualisation to dissemination. A typical workflow may look like the following: Starting with conceptualisation to identify a research question and design a study. After study design, researchers need to gain ethical approval (if necessary) and may decide to preregister the final version. Researchers then collect and analyse their data. Finally, the process ends with dissemination; moving between pre-print and post-print stages as the manuscript is submitted to a journal.",
- "related_terms": ["Open Research Workflow", "Research cycle", "Research pipeline"],
- "references": ["Kathawalla et al. (2021)", "Stodden (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["James E Bartlett"],
- "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Aleksandra Lazić", "Joanne McCuaig", "Timo Roettger", "Sam Parsons", "Steven Verheyen"]
- }
diff --git a/content/glossary/vbeta/researcher-degrees-of-freedom.md b/content/glossary/vbeta/researcher-degrees-of-freedom.md
deleted file mode 100644
index c57eb8b3d0..0000000000
--- a/content/glossary/vbeta/researcher-degrees-of-freedom.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Researcher degrees of freedom",
- "definition": "refers to the flexibility often inherent in the scientific process, from hypothesis generation, designing and conducting a research study to processing the data and analyzing as well as interpreting and reporting results. Due to a lack of precisely defined theories and/or empirical evidence, multiple decisions are often equally justifiable. The term is sometimes used to refer to the opportunistic (ab-)use of this flexibility aiming to achieve desired results —e.g., when in- or excluding certain data— albeit the fact that technically the term is not inherently value-laden.",
- "related_terms": ["Analytic Flexibility", "Garden of forking paths", "Model uncertainty", "Multiverse analysis", "P-hacking", "Robustness (analyses)", "Specification curve analysis"],
- "references": ["Gelman and Loken (2013)", "Simmons et al. (2011)", "Wicherts et al. (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf"],
- "reviewed_by": ["Gilad Feldman", "Helena Hartmann", "Timo Roettger", "Robbie C.M. van Aert", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/responsible-research-and-innovation.md b/content/glossary/vbeta/responsible-research-and-innovation.md
deleted file mode 100644
index 76305213c8..0000000000
--- a/content/glossary/vbeta/responsible-research-and-innovation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Responsible Research and Innovation",
- "definition": "An approach that considers societal implications and expectations, relating to research and innovation, with the aim to foster inclusivity and sustainability. It accounts for the fact that scientific endeavours are not isolated from their wider effects and that research is motivated by factors beyond the pursuit of knowledge. As such, many parties are important in fostering responsible research, including funding bodies, research teams, stakeholders, activists, and members of the public.",
- "related_terms": ["Citizen Science", "Public Engagement", "Transdisciplinary Research"],
- "references": ["European Commission (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Ana Barbosa Mendes"],
- "reviewed_by": ["Helena Hartmann", "Joanne McCuaig", "Sam Parsons", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/reverse-p-hacking.md b/content/glossary/vbeta/reverse-p-hacking.md
deleted file mode 100644
index 5f5c7b2661..0000000000
--- a/content/glossary/vbeta/reverse-p-hacking.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Reverse p-hacking",
- "definition": "Exploiting researcher degrees of freedom during statistical analysis in order to increase the likelihood of accepting the null hypothesis (for instance, p > .05).",
- "related_terms": ["Analytic flexibility", "HARKing", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Researcher degrees of freedom", "Selective reporting"],
- "references": ["Chuard et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Robert M. Ross"],
- "reviewed_by": ["Mahmoud Elsherif", "Alexander Hart", "Sam Parsons", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/riot-science-club.md b/content/glossary/vbeta/riot-science-club.md
deleted file mode 100644
index fe7c70dad0..0000000000
--- a/content/glossary/vbeta/riot-science-club.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "RIOT Science Club",
- "definition": "The RIOT Science Club is a multi-site seminar series that raises awareness and provides training in Reproducible, Interpretable, Open & Transparent science practices. It provides regular talks, workshops and conferences, all of which are openly available and rewatchable on the respective location’s websites and Youtube.",
- "related_terms": ["Early career researchers (ECRs)", "Interpretability", "Openness", "Reproducibility", "Transparency"],
- "references": ["http://riotscience.co.uk/"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze"],
- "reviewed_by": ["Helena Hartmann", "Emma Henderson", "Joanne McCuaig", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/robustness-analyses.md b/content/glossary/vbeta/robustness-analyses.md
deleted file mode 100644
index 2f616ebd7c..0000000000
--- a/content/glossary/vbeta/robustness-analyses.md
+++ /dev/null
@@ -1,10 +0,0 @@
-{
- "title": "Robustness (analyses)",
- "definition": "The persistence of support for a hypothesis under perturbations of the methodological/analytical pipeline In other words, applying different methods/analysis pipelines to examine if the same conclusion is supported under analytical different conditions.",
- "related_terms": ["Many Labs", "Multiverse analysis", "Sensitivity analyses", "Specification Curve Analysis"],
- "references": ["Goodman et al. (2016) (alternative)", "Nosek and Errington (2020)"],
- "alt_definition": "“Robustness refers to the stability of experimental conclusions to variations in either baseline assumptions or experimental procedures. It is somewhat related to the concept of generalizability (also known as transportability), which refers to the persistence of an effect in settings different from and outside of an experimental framework [...] Whether a study design is similar enough to the original to be considered a replication, a “robustness test,” or some of many variations of pure replication that have been identified, particularly in the social sciences (for example, conceptual replication, pseudoreplication), is an unsettled question” (Goodman et al., 2016).",
- "alt_related_terms": [null],
- "drafted_by": ["Tina Lonsdorf", "Flávio Azevedo"],
- "reviewed_by": ["Gilad Feldman", "Adrien Fillon", "Helena Hartmann", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/salami-slicing.md b/content/glossary/vbeta/salami-slicing.md
deleted file mode 100644
index dd43423e14..0000000000
--- a/content/glossary/vbeta/salami-slicing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Salami slicing",
- "definition": "A questionable research/reporting practice strategy, often done post hoc, to increase the number of publishable manuscripts by ‘slicing’ up the data from a single study - one example of a method of ‘gaming the system’ of academic incentives. For instance, this may involve publishing multiple studies based on a single dataset, or publishing multiple studies from different data collection sites without transparently stating where the data originally derives from. Such practices distort the literature, and particularly meta-analyses, because it is unclear that the findings were obtained from the same dataset, thereby concealing the dependencies across the separately published papers.",
- "related_terms": ["Gaming (the system)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Partial publication"],
- "references": ["Fanelli (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Tamara Kalandadze", "Charlotte R. Pennington", "Graham Reid", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/scooping.md b/content/glossary/vbeta/scooping.md
deleted file mode 100644
index 0fe859846c..0000000000
--- a/content/glossary/vbeta/scooping.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Scooping",
- "definition": "The act of reporting or publishing a novel finding prior to another researcher/team. Survey-based research indicates that fear of being scooped is an important fear-related barrier for data sharing in psychology, and agent-based models suggest that competition for priority harms scientific reliability (Tiokhin et al. 2021).",
- "related_terms": ["Novelty", "Open data", "Preregistration"],
- "references": ["Houtkoop et al. (2018)", "Laine (2017)", "Tiokhin et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["William Ngiam"],
- "reviewed_by": ["Ashley Blake", "Thomas Rhys Evans", "Connor Keating", "Graham Reid", "Timo Roettger", "Robert M. Ross", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/semantometrics.md b/content/glossary/vbeta/semantometrics.md
deleted file mode 100644
index 2211b0fcf3..0000000000
--- a/content/glossary/vbeta/semantometrics.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Semantometrics ",
- "definition": "A class of metrics for evaluating research using full publication text to measure semantic similarity of publications and highlighting an article’s contribution to the progress of scholarly discussion. It is an extension of tools such as bibliometrics, webometrics, and altmetrics.",
- "related_terms": ["Bibliometrics", "Contribution(p)"],
- "references": ["Herrmannova and Knoth (2016)", "Knoth and Herrmannova (2014)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Christopher Graham", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/sensitive-research.md b/content/glossary/vbeta/sensitive-research.md
deleted file mode 100644
index d24fcff9a5..0000000000
--- a/content/glossary/vbeta/sensitive-research.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Sensitive research",
- "definition": "Research that poses a threat to those who are or have been involved in it, including the researchers, the participants, and the wider society. This threat can be physical danger (e.g. suicide) or a negative emotional response (e.g. depression) to those who are involved in the research process. For instance, research conducted on victims of suicide, the researcher might be emotionally traumatised by the descriptions of the suicidal behaviours. Indeed, the communication with the victims might also make them re-experience the traumatic memories, leading to negative psychological responses.",
- "related_terms": ["Anonymity"],
- "references": ["Lee (1993)", "Albayrak-Aydemir (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Nihan Albayrak-Aydemir"],
- "reviewed_by": ["Valeria Agostini", "Mahmoud Elsherif", "Helena Hartmann", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/sequence-determines-credit-approach.md b/content/glossary/vbeta/sequence-determines-credit-approach.md
deleted file mode 100644
index 34a72eb7f0..0000000000
--- a/content/glossary/vbeta/sequence-determines-credit-approach.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Sequence-determines-credit approach (SDC) ",
- "definition": "An authorship system that assigns authorship order based on the contribution of each author. The names of the authors are listed according to their contribution in descending order with the most contributing author first and the least contributing author last.",
- "related_terms": ["Authorship", "First-last-author-emphasis norm (FLAE)"],
- "references": ["Schmidt (1987)", "Tscharntke et al. (2007)"],
- "alt_related_terms": [null],
- "drafted_by": ["Myriam A. Baum"],
- "reviewed_by": ["Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/sherpa-romeo.md b/content/glossary/vbeta/sherpa-romeo.md
deleted file mode 100644
index 15bb8c8d30..0000000000
--- a/content/glossary/vbeta/sherpa-romeo.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Sherpa Romeo",
- "definition": "An online resource that collects and presents open access policies from publishers, from across the world, providing summaries of individual journal's copyright and open access archiving policies.",
- "related_terms": ["Embargo period", "Open access", "Paywall", "Preprint", "Repository"],
- "references": ["https://v2.sherpa.ac.uk/romeo/"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Mahmoud Elsherif", "Christopher Graham", "Sam Parsons", "Martin Vasilev"]
- }
diff --git a/content/glossary/vbeta/single-blind-peer-review.md b/content/glossary/vbeta/single-blind-peer-review.md
deleted file mode 100644
index d5e1ec76cd..0000000000
--- a/content/glossary/vbeta/single-blind-peer-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Single-blind peer review",
- "definition": "Evaluation of research products by qualified experts where the reviewer(s) knows the identity of the author(s), but the reviewer(s) remains anonymous to the author(s).",
- "related_terms": ["Anonymous review", "Double-blind peer review", "Masked review", "Open Peer Review", "Peer review", "Triple-blind peer review"],
- "references": ["Largent and Snodgrass (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Ashley Blake", "Christopher Graham", "Helena Hartmann", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/slow-science.md b/content/glossary/vbeta/slow-science.md
deleted file mode 100644
index 2f5e05cb72..0000000000
--- a/content/glossary/vbeta/slow-science.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Slow science",
- "definition": "Adopting Open Scholarship practices leads to a longer research process overall, with more focus on transparency, reproducibility, replicability and quality, over the quantity of outputs. Slow Science opposes publish-or-perish culture and describes an academic system that allows time and resources to produce fewer higher-quality and transparent outputs, for instance prioritising researcher time towards collecting more data, more time to read the literature, think about how their findings fit the literature and documenting and sharing research materials instead of running additional studies.",
- "related_terms": ["collaboration", "Incentive structure", "Publish or Perish", "research culture", "research quality"],
- "references": ["http://slow-science.org/", "Nelson et al., (2012)", "Frith (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Sonia Rishi"],
- "reviewed_by": ["Adrien Fillon", "Tamara Kalandadze", "Sam Parsons Charlotte R. Pennington", "Robert M Ross", "Timo Roettger"]
- }
diff --git a/content/glossary/vbeta/social-class.md b/content/glossary/vbeta/social-class.md
deleted file mode 100644
index a38b4c013d..0000000000
--- a/content/glossary/vbeta/social-class.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Social class",
- "definition": "Social class is usually measured using both objective and subjective measurements, as recommended by the American Psychological Association (American Psychological Association,Task Force on Socioeconomic Status, 2007). Unlike the conventional concept, which only considers one factor, either education or income (e.g., economic variables), an individual's social class is considered to be a combination of their education, income, occupational prestige, subjective social status, and self-identified social class. Social class is partly a cultural variable, as it is a stable variable and likely to change slowly over the years. Social class can have important implications to academic outcomes. An individual may have a high socio-economic status yet identify as a working class individual. Working class students tend to have different life circumstances and often more restrictive commitments than middle-class students, which make their integration with other students more difficult (Rubin, 2021). The lack of time and money is obstructive to their social experience at university. Working class students are more likely to work to support themselves, resulting in less time for academic activities and for socializing with other students as well as less money to purchase items linked to social experiences (e.g. food).",
- "related_terms": ["Social integration"],
- "references": ["Evans and Rubin (2021)", "Rubin et al. (2019)", "Rubin (2021)", "Saegert et al. (2007)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Leticia Micheli", "Eliza Woodward", "Julika Wolska", "Gerald Vineyard", "Yu-Fang Yang"]
- }
diff --git a/content/glossary/vbeta/social-integration.md b/content/glossary/vbeta/social-integration.md
deleted file mode 100644
index 0dc7bb0230..0000000000
--- a/content/glossary/vbeta/social-integration.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Social integration",
- "definition": "Social integration is a multi-dimensional construct. In an academic context, social integration is related to the quantity and quality of the social interactions with staff and students, as well as the sense of connection and belonging to the university and the people within the institute. To be more specific, social support, trust, and connectedness are all variables that contribute to social integration. Social integration has important implications for academic outcomes and mental wellbeing (Evans & Rubin, 2021). Working class students are less likely to integrate with other students, since they have differing social and economic backgrounds and less disposable income. Thus they are not able to experience as many educational and fiscal opportunities than others. In turn, this can lead to poor mental health and feelings of ostracism (Rubin, 2021).",
- "related_terms": ["Social class"],
- "references": ["Evans and Rubin (2021)", "Rubin et al. (2019)", "Rubin (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Leticia Micheli", "Eliza Woodward", "Julika Wolska", "Gerald Vineyard", "Yu-Fang Yang", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/society-for-open-reliable-and-trans.md b/content/glossary/vbeta/society-for-open-reliable-and-trans.md
deleted file mode 100644
index a2a4a26d51..0000000000
--- a/content/glossary/vbeta/society-for-open-reliable-and-trans.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE)",
- "definition": "SORTEE (https://www.sortee.org/) is an international society with the aim of improving the transparency and reliability of research results in the fields of ecology, evolution, and related disciplines through cultural and institutional changes. SORTEE was launched in December 2020 to anyone interested in improving research in these disciplines, regardless of experience. The society is international in scope, membership, and objectives. As of May 2021, SORTEE comprises of over 600 members.",
- "related_terms": ["Society for the Improvement of Psychological Science (SIPS)"],
- "references": ["https://www.sortee.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Brice Beffara Bret", "Dominique Roche"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/society-for-the-improvement-of-psyc.md b/content/glossary/vbeta/society-for-the-improvement-of-psyc.md
deleted file mode 100644
index 1869dd0c37..0000000000
--- a/content/glossary/vbeta/society-for-the-improvement-of-psyc.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Society for the Improvement of Psychological Science (SIPS)",
- "definition": "A membership society founded to further promote improved methods and practices in the psychological research field. The society aims to complete its mission statement by enhancing the training of psychological researchers; by promoting research cultures that are more conducive to better quality research; by quantifying and empirically assessing the impact of such reforms; and by leading outreach events within and outside psychology to better the current state of research norms.",
- "related_terms": ["Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE)"],
- "references": ["https://improvingpsych.org/"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Ashley Blake", "Jade Pickering", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/specification-curve-analysis.md b/content/glossary/vbeta/specification-curve-analysis.md
deleted file mode 100644
index f24919b6d8..0000000000
--- a/content/glossary/vbeta/specification-curve-analysis.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Specification Curve Analysis ",
- "definition": "An analytic approach that consists of identifying, calculating, visualising and interpreting results (through inferential statistics) for all reasonable specifications for a particular research question (see Simonsohn et al. 2015). Specification curve analysis helps make transparent the influence of presumably arbitrary decisions during the scientific progress (e.g., experimental design, construct operationalization, statistical models or several of these) made by a researcher by comprehensively reporting all non-redundant, sensible tests of the research question. Voracek et al. (2019) suggest that SCA differs from multiverse analysis with regards to the graphical displays (a specification curve plot rather than a histogram and tile plot) and the use of inferential statistics to interpret findings.",
- "related_terms": ["Multiverse analysis", "Research synthesis", "Robustness (analyses)", "Selective reporting", "Vibration of effects"],
- "references": ["Simonsohn et al. (2015)", "Simonsohn (2020)", "Voracek et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley Baker"],
- "reviewed_by": ["Tina B. Lonsdorf", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/statistical-assumptions.md b/content/glossary/vbeta/statistical-assumptions.md
deleted file mode 100644
index 0c16cd5876..0000000000
--- a/content/glossary/vbeta/statistical-assumptions.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Statistical Assumptions",
- "definition": "Analytical approaches and models assume certain characteristics of one’s data (e.g., statistical independence, random samples, normality, equal variance,...). Before running an analysis, these assumptions should be checked since their violation can change the results and conclusion of a study. Good practice in open and reproducible science is to report assumption testing in terms of the assumptions verified and the results of such checks or corrections applied.",
- "related_terms": ["Null Hypothesis Significance Testing (NHST)", "Statistical Significance", "Statistical Validity", "Transparency", "Type I error", "Type II error", "Type M error", "Type S error"],
- "references": ["Garson (2012)", "Hahn and Meeker (1993)", "Hoekstra et al. (2012)", "Nimon (2012)"],
- "alt_related_terms": [null],
- "drafted_by": ["Graham Reid"],
- "reviewed_by": ["Jamie P. Cockcroft", "Sam Parsons", "Martin Vasilev", "Julia Wolska"]
- }
diff --git a/content/glossary/vbeta/statistical-power.md b/content/glossary/vbeta/statistical-power.md
deleted file mode 100644
index 454d20dc56..0000000000
--- a/content/glossary/vbeta/statistical-power.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Statistical power",
- "definition": "Statistical power is the long-run probability that a statistical test correctly rejects the null hypothesis if the alternative hypothesis is true. It ranges from 0 to 1, but is often expressed as a percentage. Power can be estimated using the significance criterion (alpha), effect size, and sample size used for a specific analysis technique. There are two main applications of statistical power. A priori power where the researcher asks the question “given an effect size, how many participants would I need for X% power?”. Sensitivity power asks the question “given a known sample size, what effect size could I detect with X% power?”.",
- "related_terms": ["Effect Size", "Meta-analysis", "Null Hypothesis Significance Testing (NHST)", "Power Analysis", "Positive Predictive Value", "Quantitative research", "Sample size", "Significance criterion (alpha)", "Type I error", "Type II error"],
- "references": ["Carter et al. (2021)", "Cohen (1962)", "Cohen (1988)", "Dienes (2008)", "Giner-Sorolla et al. (2019)", "Ioannidis (2005)", "Lakens (2021a)"],
- "alt_related_terms": ["Type II Error"],
- "drafted_by": ["Thomas Rhys Evans"],
- "reviewed_by": ["James E. Bartlett", "Jamie P. Cockcroft", "Adrien Fillon", "Emma Henderson", "Tamara Kalandadze", "William Ngiam", "Catia M. Oliveira", "Charlotte R. Pennington", "Graham Reid", "Martin Vasilev", "Qinyu Xiao", "Flávio Azevedo"]
- }
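As an aside to the deleted power entry above, the two applications it describes can be sketched with a normal-approximation (two-sample z-test) formula in stdlib Python. This is an illustrative sketch under assumed conventions; the function names and the d = 0.5 benchmark are not part of the glossary:

```python
from statistics import NormalDist

def a_priori_n(d, power=0.80, alpha=0.05):
    """A priori power analysis: given effect size d, how many
    participants per group are needed (two-sided, two-sample z-test)?"""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return 2 * (z_alpha + z_power) ** 2 / d ** 2

def sensitivity_d(n_per_group, power=0.80, alpha=0.05):
    """Sensitivity power analysis: given n per group, what effect size
    is detectable with the requested power?"""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * (2 / n_per_group) ** 0.5

# Classic benchmark: d = 0.5 needs roughly 63-64 participants per group
# for 80% power, and n = 64 per group detects roughly d = 0.5.
print(round(a_priori_n(0.5)))       # ~63
print(round(sensitivity_d(64), 2))  # ~0.5
```

Dedicated tools such as G*Power or simulation-based approaches handle the exact t-distribution; the normal approximation above only illustrates how alpha, power, effect size, and sample size trade off.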
diff --git a/content/glossary/vbeta/statistical-significance.md b/content/glossary/vbeta/statistical-significance.md
deleted file mode 100644
index 87cd3fef86..0000000000
--- a/content/glossary/vbeta/statistical-significance.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Statistical significance",
- "definition": "A property of a result using Null Hypothesis Significance Testing (NHST) that, given a significance level, is deemed unlikely to have occurred given the null hypothesis. Tenny and Abdelgawad (2017) defined it as “a measure of the probability of obtaining your data or more extreme data assuming the null hypothesis is true, compared to a pre-selected acceptable level of uncertainty regarding the true answer” (p. 1). Conventions for determining the threshold vary between applications and disciplines but ultimately depend on the considerations of the researcher about an appropriate error margin. The American Statistical Association’s statement (Wasserstein & Lazar, 2016) notes that “Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself” (p. 131).",
- "related_terms": ["Alpha error", "Frequentist statistics", "Null hypothesis", "Null Hypothesis Significance Testing (NHST)", "P-value", "Type I error"],
- "references": ["Cassidy et al. (2019)", "Tenny and Abdelgawad (2021)", "Wasserstein and Lazar (2016)"],
- "alt_related_terms": [null],
- "drafted_by": ["Alaa AlDoh", "Flávio Azevedo"],
- "reviewed_by": ["James E. Bartlett", "Alexander Hart", "Annalise A. LaPlume", "Charlotte R. Pennington", "Graham Reid", "Timo Roettger", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/statistical-validity.md b/content/glossary/vbeta/statistical-validity.md
deleted file mode 100644
index 70a193afc1..0000000000
--- a/content/glossary/vbeta/statistical-validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Statistical validity",
- "definition": "The extent to which conclusions from a statistical test are accurate and reflective of the true effect found in nature. In other words, whether a true relationship between two variables can be accurately detected by the conducted analyses. Threats to statistical validity include low power, violated assumptions, and unreliable measures, all of which affect the reliability and generality of the conclusions.",
- "related_terms": ["Power", "Validity", "Statistical assumptions"],
- "references": ["Cook and Campbell (1979)", "Drost (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Annalise A. LaPlume"],
- "reviewed_by": ["Jamie P. Cockcroft", "Zoltan Kekecs", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/strange.md b/content/glossary/vbeta/strange.md
deleted file mode 100644
index de365a5213..0000000000
--- a/content/glossary/vbeta/strange.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "STRANGE",
- "definition": "The STRANGE “framework” is a proposal and series of questions to help animal behaviour researchers consider sampling biases when planning, performing and interpreting research with animals. STRANGE is an acronym highlighting several possible sources of sampling bias in animal research, such as the animals’ Social background; Trappability and self-selection; Rearing history; Acclimation and habituation; Natural changes in responsiveness; Genetic make-up, and Experience.",
- "related_terms": ["Bias", "Constraints on Generality (COG)", "Populations", "Sampling bias", "WEIRD"],
- "references": ["Webster and Rutz (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Ben Farrar", "Zoe Flack", "Elias Garcia-Pelegrin", "Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/studyswap.md b/content/glossary/vbeta/studyswap.md
deleted file mode 100644
index c824130dcf..0000000000
--- a/content/glossary/vbeta/studyswap.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "StudySwap",
- "definition": "A free online platform through which researchers post brief descriptions of research projects or resources that are available for use (“haves”) or that they require and another researcher may have (“needs”). StudySwap is a crowdsourcing approach to research which can ensure that fewer research resources go unused and more researchers have access to the resources they need.",
- "related_terms": ["Collaboration", "Crowdsourcing", "Team science"],
- "references": ["Chartier et al. (2018)", "https://osf.io/view/StudySwap"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Emma Henderson", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/systematic-review.md b/content/glossary/vbeta/systematic-review.md
deleted file mode 100644
index 5e3986bcf8..0000000000
--- a/content/glossary/vbeta/systematic-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Systematic Review",
- "definition": "A form of literature review and evidence synthesis. A systematic review will usually include a thorough, repeatable (reproducible) search strategy including key terms and databases in order to find relevant literature on a given topic or research question. Systematic reviewers follow a process of screening the papers found through their search, until they have filtered down to a set of papers that fit their predefined inclusion criteria. These papers can then be synthesised in a written review, which may optionally include statistical synthesis in the form of a meta-analysis. A systematic review should follow a standard set of guidelines to ensure that bias is kept to a minimum, for example PRISMA (Moher et al., 2009; Page et al., 2021), Cochrane Systematic Reviews (Higgins et al., 2019), or NIRO-SR (Topor et al., 2021).",
- "related_terms": ["Meta-analysis", "CONSORT", "Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR)", "PRISMA"],
- "references": ["Higgins et al. (2019)", "Moher et al. (2009)", "Page et al. (2021)", "Topor et al. (2021)"],
- "alt_related_terms": [null],
- "drafted_by": ["Jade Pickering"],
- "reviewed_by": ["Mahmoud Elsherif", "Adam Parker", "Charlotte R. Pennington", "Timo Roettger", "Marta Topor", "Emily A. Williams", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/tenzing.md b/content/glossary/vbeta/tenzing.md
deleted file mode 100644
index e7f71dc706..0000000000
--- a/content/glossary/vbeta/tenzing.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Tenzing",
- "definition": "tenzing is an online web app and R package that helps researchers track and report the contributions of each team member using the CRediT taxonomy in an efficient way. Team members of a research project can indicate their contributions to each CRediT role using an online spreadsheet template, and provide additional author information (e.g., name, affiliation, order in publication, email address, and ORCID iD). When the manuscript is written, tenzing can automatically create a list of contributors for each CRediT role to include in the contributions section, and can create the manuscript’s title page.",
- "related_terms": ["Authorship", "Consortium authorship", "Contributions", "CRediT"],
- "references": ["Holcombe et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Marton Kovacs"],
- "reviewed_by": ["Balazs Aczel", "Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/the-troubling-trio.md b/content/glossary/vbeta/the-troubling-trio.md
deleted file mode 100644
index 6bb4801dd6..0000000000
--- a/content/glossary/vbeta/the-troubling-trio.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "The Troubling Trio",
- "definition": "Described as a combination of low statistical power, a surprising result, and a p-value only slightly lower than .05.",
- "related_terms": ["Replication", "Reproducibility", "Null Hypothesis Significance Testing (NHST)", "P-hacking", "Questionable Research Practices or Questionable Reporting Practices (QRPs)"],
- "references": ["Lindsay (2015)"],
- "alt_related_terms": [null],
- "drafted_by": ["Halil Emre Kocalar"],
- "reviewed_by": ["Catia M. Oliveira", "Adam Parker", "Sam Parsons", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/theory-building.md b/content/glossary/vbeta/theory-building.md
deleted file mode 100644
index 6bb961d922..0000000000
--- a/content/glossary/vbeta/theory-building.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Theory building",
- "definition": "The process of creating and developing a statement of concepts and their interrelationships to show how and/or why a phenomenon occurs. Theory building leads to theory testing.",
- "related_terms": ["Hypothesis", "Model (philosophy)", "Theory", "Theoretical contribution", "Theoretical model"],
- "references": ["Borsboom et al. (2020)", "Corley and Gioia (2011)", "Gioia and Pitre (1990)"],
- "alt_related_terms": [null],
- "drafted_by": ["Filip Dechterenko"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/theory.md b/content/glossary/vbeta/theory.md
deleted file mode 100644
index 297ad1bb28..0000000000
--- a/content/glossary/vbeta/theory.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Theory",
- "definition": "A theory is a unifying explanation or description of a process or phenomenon, amenable to repeated testing and verifiable through scientific investigation across experiments led by several independent researchers. A theory may be rejected, or deemed an unsatisfactory explanation, after rigorous testing of a new hypothesis that either explains the phenomenon better or contradicts the theory while generalising to a wider array of findings.",
- "related_terms": ["Hypothesis", "Model (philosophy)", "Theory building"],
- "references": ["Schafersman (1997)", "Wacker (1998)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aoife O’Mahony"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Charlotte R. Pennington", "Graham Reid"]
- }
diff --git a/content/glossary/vbeta/transparency-checklist.md b/content/glossary/vbeta/transparency-checklist.md
deleted file mode 100644
index 0cdcf9c579..0000000000
--- a/content/glossary/vbeta/transparency-checklist.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Transparency Checklist",
- "definition": "The transparency checklist is a consensus-based, comprehensive checklist of 36 items covering preregistration; methods; results and discussion; and data, code, and materials availability. A shortened 12-item version is also available. Checklist responses can be submitted alongside a manuscript for review. While the checklist can also serve educational purposes, it mainly aims to help researchers identify concrete actions that can increase the transparency of their research, while a disclosed checklist helps readers and reviewers gain critical information about different aspects of the transparency of the submitted research.",
- "related_terms": ["Credibility of scientific claims", "Open science", "Preregistration", "Reproducibility", "Trustworthiness"],
- "references": ["Aczel et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Barnabas Szaszi"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Mahmoud Elsherif", "Helena Hartmann", "Graham Reid", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/transparency.md b/content/glossary/vbeta/transparency.md
deleted file mode 100644
index 718d64757b..0000000000
--- a/content/glossary/vbeta/transparency.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Transparency",
- "definition": "Having one’s actions open and accessible for external evaluation. Transparency pertains to researchers being honest about theoretical, methodological, and analytical decisions made throughout the research cycle. Transparency can be usefully differentiated into “scientifically relevant transparency” and “socially relevant transparency”. While the former has been the focus of early Open Science discourses, the latter is needed to provide scientific information in ways that are relevant to decision makers and members of the public (Elliott & Resnik, 2019).",
- "related_terms": ["Credibility of scientific claims", "Open science", "Preregistration", "Reproducibility", "Trustworthiness"],
- "references": ["Elliott and Resnik (2019)", "Lyon (2016)", "Syed (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["William Ngiam"],
- "reviewed_by": ["Tamara Kalandadze", "Aoife O’Mahony", "Eike Mark Rinke", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/triple-blind-peer-review.md b/content/glossary/vbeta/triple-blind-peer-review.md
deleted file mode 100644
index a1ceb188c8..0000000000
--- a/content/glossary/vbeta/triple-blind-peer-review.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Triple-blind peer review",
- "definition": "Evaluation of research products by qualified experts where the author(s) are anonymous to both the reviewer(s) and editor(s). “Blinding of the authors and their affiliations to both editors and reviewers. This approach aims to eliminate institutional, personal, and gender biases” (Tvina et al., 2019, p. 1082).",
- "related_terms": ["Double-blind peer review", "Open Peer Review", "Single-blind peer review"],
- "references": ["Largent and Snodgrass (2016)", "Tvina et al. (2019)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Bradley Baker", "Helena Hartmann", "Charlotte R. Pennington", "Christopher Graham"]
- }
diff --git a/content/glossary/vbeta/trust-principles.md b/content/glossary/vbeta/trust-principles.md
deleted file mode 100644
index 3a379e6452..0000000000
--- a/content/glossary/vbeta/trust-principles.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "TRUST Principles",
- "definition": "A set of guiding principles that consider Transparency, Responsibility, User focus, Sustainability, and Technology (TRUST) as the essential components for assessing, developing, and sustaining the trustworthiness of digital data repositories (especially those that store research data). They are complementary to the FAIR Data Principles.",
- "related_terms": ["FAIR principles", "Metadata", "Open Access", "Open Data", "Open Material", "Repository"],
- "references": ["Lin et al. (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Aleksandra Lazić"],
- "reviewed_by": ["Jamie P. Cockcroft", "Mahmoud Elsherif", "Helena Hartmann", "Sam Parsons"]
- }
diff --git a/content/glossary/vbeta/type-i-error.md b/content/glossary/vbeta/type-i-error.md
deleted file mode 100644
index 43988d07a9..0000000000
--- a/content/glossary/vbeta/type-i-error.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Type I error",
- "definition": "“Incorrect rejection of a null hypothesis” (Simmons et al., 2011, p. 1359), i.e. rejecting the null hypothesis of no effect when the evidence actually favours retaining it (for example, a judge imprisoning an innocent person); concluding that there is a significant effect when the findings actually occurred by chance.",
- "related_terms": ["Frequentist statistics", "Null Hypothesis Significance Testing (NHST)", "Null Result", "P value", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Reproducibility crisis (aka Replicability or replication crisis)", "Scientific integrity", "Statistical power", "True positive result", "Type II error"],
- "references": ["Simmons et al. (2011)"],
- "alt_related_terms": [null],
- "drafted_by": ["Lisa Spitzer"],
- "reviewed_by": ["Mahmoud Elsherif", "Adrien Fillon", "Helena Hartmann", "Matt Jaquiery", "Mariella Paul", "Charlotte R. Pennington", "Graham Reid", "Olly Robertson", "Mirela Zaneva"]
- }
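The long-run behaviour described in the deleted Type I error entry can be demonstrated with a small stdlib-Python simulation (a sketch with assumed parameters, not part of the glossary): when the null hypothesis is true, a two-sided z-test at alpha = .05 incorrectly rejects in roughly 5% of repeated experiments.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
z_crit = NormalDist().inv_cdf(0.975)  # critical value for two-sided alpha = .05

def z_statistic(n):
    # One-sample z-test with known sigma = 1 and a true mean of 0,
    # so the null hypothesis is true in every simulated experiment.
    xs = [random.gauss(0, 1) for _ in range(n)]
    return mean(xs) * n ** 0.5

# Fraction of 2000 null experiments that (incorrectly) reject: ~alpha
false_positive_rate = mean(abs(z_statistic(30)) > z_crit for _ in range(2000))
print(false_positive_rate)
```

The estimated rate hovers around .05, illustrating that alpha is a long-run false positive rate, not a property of any single result.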
diff --git a/content/glossary/vbeta/type-ii-error.md b/content/glossary/vbeta/type-ii-error.md
deleted file mode 100644
index 4672ee74fe..0000000000
--- a/content/glossary/vbeta/type-ii-error.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Type II error",
- "definition": "A false negative result occurs when the alternative hypothesis is true in the population but the null hypothesis is accepted as part of the analysis (Hartgerink et al., 2017). That is, finding a non-significant statistical result when a true effect exists (for example, a judge passing an innocent verdict on a guilty person). False negatives are less likely to be the subject of replications than positive results (Fiedler et al., 2012), and remain an unresolved issue in scientific research (Hartgerink et al., 2017).",
- "related_terms": ["Effect size", "Null Hypothesis Significance Testing (NHST)", "Questionable Research Practices or Questionable Reporting Practices (QRPs)", "Reproducibility crisis (aka Replicability or replication crisis)", "Scientific integrity", "Statistical power", "True positive result", "Type I error"],
- "references": ["Fiedler et al. (2012)", "Hartgerink et al. (2017)"],
- "alt_related_terms": [null],
- "drafted_by": ["Olly Robertson"],
- "reviewed_by": ["Mahmoud Elsherif", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/type-m-error.md b/content/glossary/vbeta/type-m-error.md
deleted file mode 100644
index e2aead2ec4..0000000000
--- a/content/glossary/vbeta/type-m-error.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Type M error",
- "definition": "A Type M error occurs when a researcher concludes that an effect was observed with a magnitude lower or higher than the real one. For example, a Type M error occurs when a researcher claims that an effect of small magnitude was observed when in truth it is large, or vice versa.",
- "related_terms": ["Statistical power", "Type S error", "Type I error", "Type II error"],
- "references": ["Gelman and Carlin (2014)", "Lu et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Eduardo Garcia-Garzon"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Graham Reid", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/type-s-error.md b/content/glossary/vbeta/type-s-error.md
deleted file mode 100644
index ad475dea1c..0000000000
--- a/content/glossary/vbeta/type-s-error.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Type S error",
- "definition": "A Type S error occurs when a researcher concludes that an effect was observed with the opposite sign to the real one. For example, a Type S error occurs when a researcher claims that a positive effect was observed when in reality it is negative, or vice versa.",
- "related_terms": ["Statistical power", "Type M error", "Type I error", "Type II error"],
- "references": ["Gelman and Carlin (2014)", "Lu et al. (2018)"],
- "alt_related_terms": [null],
- "drafted_by": ["Eduardo Garcia-Garzon"],
- "reviewed_by": ["Helena Hartmann", "Sam Parsons", "Graham Reid", "Mirela Zaneva"]
- }
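Following Gelman and Carlin (2014), the two deleted entries above can both be illustrated by simulation: when a true effect is small relative to the noise (the values 0.1 and 0.5 below are assumptions chosen for illustration), the estimates that reach statistical significance exaggerate the true magnitude (Type M) and sometimes carry the wrong sign (Type S). A stdlib-Python sketch:

```python
import random
from statistics import NormalDist, mean

random.seed(2)
true_effect, se = 0.1, 0.5           # assumed: tiny true effect, noisy design
z_crit = NormalDist().inv_cdf(0.975)

# Simulate many study-level estimates and keep only the "significant" ones
estimates = [random.gauss(true_effect, se) for _ in range(20000)]
significant = [e for e in estimates if abs(e / se) > z_crit]

exaggeration = mean(abs(e) for e in significant) / true_effect  # Type M ratio
sign_error_rate = mean(e < 0 for e in significant)              # Type S rate
print(round(exaggeration, 1), round(sign_error_rate, 2))
```

With these assumed values, significant estimates overstate the true effect by roughly an order of magnitude, and over a quarter of them point in the wrong direction, which is why conditioning on significance in underpowered designs is hazardous.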
diff --git a/content/glossary/vbeta/under-representation.md b/content/glossary/vbeta/under-representation.md
deleted file mode 100644
index f68d482441..0000000000
--- a/content/glossary/vbeta/under-representation.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Under-representation",
- "definition": "The situation in which not all voices, perspectives, and members of a community are adequately represented. Under-representation typically occurs when the voices or perspectives of one group dominate, resulting in the marginalization of another. It often affects groups that are a minority with respect to certain personal characteristics.",
- "related_terms": ["Equity", "Fairness", "Inequality", "WEIRD"],
- "references": [null],
- "alt_related_terms": [null],
- "drafted_by": ["Madeleine Pownall"],
- "reviewed_by": ["Mahmoud Elsherif", "Helena Hartmann", "Bethan Iley", "Adam Parker", "Charlotte R. Pennington", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/universal-design-for-learning-udl.md b/content/glossary/vbeta/universal-design-for-learning-udl.md
deleted file mode 100644
index b18da55a58..0000000000
--- a/content/glossary/vbeta/universal-design-for-learning-udl.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Universal design for learning (UDL)",
- "definition": "A framework for improving learning and optimising teaching based upon scientific insights into how humans learn. It aims to make learning inclusive and transformative for all people, focusing on the differing needs of individual students. It is often regarded as an evidence-based and scientifically valid framework to guide educational practice, consisting of three key principles: engagement, representation, and action and expression. UDL is also included in the Higher Education Opportunity Act of 2008 (Edyburn, 2010).",
- "related_terms": ["Equal opportunities", "Inclusivity", "Pedagogy", "Teaching practice"],
- "references": ["Hitchcock et al. (2002)", "Rose (2000)", "Rose and Meyer (2002)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Valeria Agostini", "Mahmoud Elsherif", "Graham Reid", "Mirela Zaneva", "Flávio Azevedo"]
- }
diff --git a/content/glossary/vbeta/validity.md b/content/glossary/vbeta/validity.md
deleted file mode 100644
index 8892a572bd..0000000000
--- a/content/glossary/vbeta/validity.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Validity",
- "definition": "Validity refers to the application of statistical principles to arrive at well-founded concepts, conclusions or measurements, i.e., ones likely to correspond accurately to the real world. In psychometrics, validity refers to the extent to which something measures what it intends or claims to measure. Under this generic term there are different types of validity (e.g., internal validity, construct validity, face validity, criterion validity, diagnostic validity, discriminant validity, concurrent validity, convergent validity, predictive validity, external validity).",
- "related_terms": ["Causality", "Construct validity", "Content validity", "Criterion validity", "External validity", "Face validity", "Internal validity", "Measurement", "Questionable Measurement Practices (QMP)", "Psychometry", "Reliability", "Statistical power", "Statistical validity", "Test"],
- "references": ["Campbell (1957)", "Borsboom et al. (2004)", "Kelley (1927)"],
- "alt_related_terms": [null],
- "drafted_by": ["Tamara Kalandadze", "Madeleine Pownall", "Flávio Azevedo"],
- "reviewed_by": ["Eduardo Garcia-Garzon", "Halil E. Kocalar", "Annalise A. LaPlume", "Joanne McCuaig", "Adam Parker", "Charlotte R. Pennington"]
- }
diff --git a/content/glossary/vbeta/version-control.md b/content/glossary/vbeta/version-control.md
deleted file mode 100644
index 9f8d4b172d..0000000000
--- a/content/glossary/vbeta/version-control.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Version control",
- "definition": "The practice of managing and recording changes to digital resources (e.g. files, websites, programmes, etc.) over time so that you can recall specific versions later. Version control systems are designed to record the history of changes (who, what and when), and help to avoid human errors (e.g. working on the wrong version). For example, the Git version control system is a widely used software tool that originally helped software developers to version control shared code and is now used across many scientific disciplines to manage and share files.",
- "related_terms": ["Git", "Reproducibility", "Software configuration management", "Source code management", "Source control"],
- "references": ["https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Sarah Ashcroft-Jones", "Thomas Rhys Evans", "Helena Hartmann", "Matt Jaquiery", "Adam Parker", "Charlotte R. Pennington", "Robert M. Ross", "Timo Roettger", "Andrew J. Stewart"]
- }
diff --git a/content/glossary/vbeta/webometrics.md b/content/glossary/vbeta/webometrics.md
deleted file mode 100644
index d53b5d13e6..0000000000
--- a/content/glossary/vbeta/webometrics.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Webometrics",
- "definition": "Webometrics is the study of online content, focusing on the numbers and types of hyperlinks between different online sites. Such approaches have been considered a type of altmetrics. “The study of the quantitative aspects of the construction and use of information resources, structures and technologies on the Web drawing on bibliometric and informetric approaches” (Björneborn & Ingwersen, 2004).",
- "related_terms": ["Altmetrics", "Bibliometrics"],
- "references": ["Björneborn and Ingwersen (2004)"],
- "alt_related_terms": [null],
- "drafted_by": ["Charlotte R. Pennington"],
- "reviewed_by": ["Christopher Graham", "Mirela Zaneva"]
- }
diff --git a/content/glossary/vbeta/weird.md b/content/glossary/vbeta/weird.md
deleted file mode 100644
index a657caaee1..0000000000
--- a/content/glossary/vbeta/weird.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "WEIRD",
- "definition": "This acronym refers to Western, Educated, Industrialized, Rich and Democratic societies. Most research is conducted on, and conducted by, relatively homogeneous samples from WEIRD societies. This limits the generalizability of a large number of research findings, particularly given that WEIRD people are often psychological outliers. It has been argued that “WEIRD psychology” started to evolve culturally as a result of societal changes and religious beliefs in the Middle Ages in Europe. Critics of this term suggest it presents a binary view of the global population and erases variation that exists both between and within societies, and that other aspects of diversity are not captured.",
- "related_terms": ["Bias", "BIZARRE", "Diversity", "Generalizability", "Populations", "Sampling bias", "STRANGE"],
- "references": ["Henrich (2020)", "Henrich et al. (2010)", "Muthukrishna et al. (2020)", "Syed and Kathawalla (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Mahmoud Elsherif"],
- "reviewed_by": ["Zoe Flack", "Matt Jaquiery", "Bettina M. J. Kern", "Adam Parker", "Charlotte R. Pennington", "Robert M. Ross", "Suzanne L. K. Stewart"]
- }
diff --git a/content/glossary/vbeta/z-curve.md b/content/glossary/vbeta/z-curve.md
deleted file mode 100644
index 3430b9ef1f..0000000000
--- a/content/glossary/vbeta/z-curve.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Z-Curve",
- "definition": "Z-curve is a statistical approach mainly used to estimate the ‘Estimated Replication Rate’ (ERR) and ‘Expected Discovery Rate’ (EDR) for a set of reported studies. Calculating a z-curve for a set of statistically significant studies involves converting reported p-values to z-scores, fitting a finite mixture model to the distribution of z-scores, and estimating mean power based on the mixture model. Z-curve analysis can be performed in R through a dedicated package: https://cran.r-project.org/web/packages/zcurve/index.html.",
- "related_terms": ["Altmetrics", "File drawer ratio", "P-curve", "P-hacking", "Replication", "Statistical power"],
- "references": ["Bartoš and Schimmack (2020)", "Brunner and Schimmack (2020)"],
- "alt_related_terms": [null],
- "drafted_by": ["Bradley J. Baker"],
- "reviewed_by": ["Kamil Izydorczak", "Sam Parsons", "Charlotte R. Pennington", "Mirela Zaneva"]
- }
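The first step of the z-curve procedure in the deleted entry, converting two-sided p-values to z-scores, is a simple inverse-normal transform; the mixture-model fitting is handled by the R zcurve package mentioned above. A minimal stdlib-Python sketch of just that conversion (the function name is an assumption for illustration):

```python
from statistics import NormalDist

def p_to_z(p):
    """Convert a two-sided p-value into the corresponding absolute z-score."""
    return NormalDist().inv_cdf(1 - p / 2)

# p = .05 corresponds to |z| of about 1.96; smaller p-values map to larger z
for p in (0.05, 0.01, 0.005):
    print(round(p_to_z(p), 2))
```

The fitted mixture over these z-scores is what yields the ERR and EDR estimates; this snippet only shows the input transformation.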
diff --git a/content/glossary/vbeta/zenodo.md b/content/glossary/vbeta/zenodo.md
deleted file mode 100644
index 42a1899d04..0000000000
--- a/content/glossary/vbeta/zenodo.md
+++ /dev/null
@@ -1,9 +0,0 @@
-{
- "title": "Zenodo",
- "definition": "An open science repository where researchers can deposit research papers, reports, data sets, research software, and any other research-related digital artifacts. Zenodo creates a persistent digital object identifier (DOI) for each submission to make it citable. The platform was developed under the European OpenAIRE programme and is operated by CERN.",
- "related_terms": ["DOI (digital object identifier)", "figshare", "Open data", "Open Science Framework", "Preprint"],
- "references": ["www.zenodo.org"],
- "alt_related_terms": [null],
- "drafted_by": ["Ali H. Al-Hoorie"],
- "reviewed_by": ["Sara Middleton"]
- }
diff --git a/layouts/404.html b/layouts/404.html
new file mode 100644
index 0000000000..71ed8cfbcf
--- /dev/null
+++ b/layouts/404.html
@@ -0,0 +1,79 @@
+{{- define "main" -}}
+
+{{ $css := resources.Get "css/404.css" | minify | fingerprint }}
+