Research approvals iceberg: how a ‘low-key’ study in England needed 89 professionals to approve it and how we can do better

A Correspondence to this article was published on 24 December 2019

Abstract

Background

The red tape and delays around research ethics and governance approvals frequently frustrate researchers yet, as the lesser of two evils, are largely accepted as unavoidable. Here we quantify aspects of the research ethics and governance approvals for one interview- and questionnaire-based study conducted in England, which used the National Health Service (NHS) procedures and the electronic Integrated Research Application System (IRAS). We demonstrate the enormous impact of existing approvals processes on the costs of studies, including the opportunity cost of time diverted from the substantive research, and suggest directions for radical system change.

Main text

We have recorded 491 exchanges with 89 individuals involved in research ethics and governance approvals, generating 193 pages of email text excluding attachments. These are conservative estimates (e.g. only records of the research associate were used). The exchanges were conducted outside IRAS, which is expected to be the platform where all necessary documents are provided and questions addressed. Importantly, the figures exclude the actual work of preparing the ethics documentation (such as the ethics application, information sheets and consent forms). We propose six areas of work to enable system change: 1. Support the development of a broad range of customised research ethics and governance templates to complement generic, typically clinical trials-orientated, ones; 2. Develop more sophisticated and flexible frameworks for study classification; 3. Link with associated processes for assessment, feedback, monitoring and reporting, such as ones involving funders and patient and public involvement groups; 4. Invest in a new generation IT infrastructure; 5. Enhance system capacity through increasing online reviewer participation and training; and 6. Encourage researchers to quantify the approvals processes for their studies.

Conclusion

Ethics and governance approvals are burdensome for historical reasons and not because of the nature of the task. There are many opportunities to improve their efficiency and analytic depth in an age of innovation, increased connectivity and distributed working. If we continue to work under current systems, we are perpetuating, paradoxically, an unethical system of research approvals by virtue of its wastefulness and impoverished ethical debate.

Background

The gaze of scrutiny in the relationship between research studies and Research Ethics Committees (REC), Institutional Review Boards (IRB) and other bodies which grant permissions for (funded) research to proceed is typically unidirectional. Such organisations and their members review research studies for the ethical, legal and governance issues they raise. Occasionally, roles are reversed: research studies engage in probing and critical exploration of the work of those gatekeepers. This paper represents one such example. We build on existing literature and the striking findings of an audit-type study of the ethics and governance approvals of one of our recent projects (“Prepared to Share?: a study of patient data sharing in complex conditions and at the end of life” [1]) and offer proposals for change.

The paper is structured as follows:

An introduction sets the scene of researchers’ attitudes towards ethics and governance approvals processes.

We then present an extensive review of the empirical and critical literature on RECs, IRBs and ethics and governance approvals for health research (here understood to include biomedical research, health services research, and health-related research informed by the behavioural and social sciences and the humanities). We present a more detailed account of the background literature than typical of an empirical paper, as the literature is dispersed and little known. We include scholarly work which goes further than the mainstream storyline around the ethical regulation of research, namely how vital the work of ethics committees is to prevent a repetition of, for instance, the Nazi atrocities in the name of science, the Tuskegee syphilis experiment or experiments like those of Milgram and Zimbardo. The publications that transcend this narrative do not (obviously) question the need for ethical research, but probe into or challenge the extent to which the current systems are achieving that goal.

The respective scholarly field is far broader than appears at first sight, yet it lacks both ‘self-awareness’ as a distinctive area of study with core assumptions, questions, typical methods, exemplar studies, etc. and meaningful visibility to readers, at least outside the realm of bioethics. At the same time the work of RECs, IRBs and other permissions granting bodies affects all “human subjects research”. It is an oft-forgotten determinant of the contents and boundaries of our scientific knowledge.

Following the literature review, we present our empirical findings. These are from an audit-type study of the ethics and governance approvals of a project on patient data sharing at the end of life in a single locality in England. The trigger words of ‘data sharing’ and ‘end of life’ notwithstanding, this was, in the words of a research governance officer, a “low-key” (low-risk) study in terms of the ethical and governance issues it raised.

A further clarification of scope and terminology may be helpful. Most of the background literature addresses the ethics approvals of studies. However, study approvals are a broader enterprise, variably termed (at least in the UK) “research ethics and governance approvals”, “ethics and R&D (research and development) approvals”, “study assurances”, etc. Additional sign-offs are needed by R&D departments of participating organisations or university research offices (if the ethics approval has been granted by a committee unattached to a university) around compliance with regulations or the capacity of an organisation to host a project. Researchers involved in human subjects research often use “ethics approvals” as a synecdoche for this broader class of research ethics and governance approvals, not least because the latter are typically contingent on the former and because the bulk of the documentation is first prepared for the ethics review. Our paper concerns this broader class of approvals.

Finally, we outline six proposals for change in the systems of ethics and governance approvals, with potentially global application. Key features of these proposals are that they emphasise the enrichment of the ethical debate at least as much as the efficiency gains and that they rely strongly on opportunities provided by the growing sophistication and use of IT.

The negative set point of researchers’ expectations

Most researchers conducting human subjects research soon internalise an emotional response to the REC/IRB process and the broader system of research approvals. While researchers variably like or dislike different phases of a study, we have yet to meet one looking forward to ethics and governance approvals applications. More accurately, we have yet to meet a researcher mentioning that they are about to embark on those processes without pulling a face or changing their tone of voice. They/we may later comment that the process went smoothly, was not as bad as expected, and was even quite useful. In some cases, they/we may feel resentful about being ‘told’ what we can or cannot do when we think we ‘know better’. For fully valid or rather suspect reasons, the set point of researchers’ expectations of the ethics and governance approvals for research studies is often remarkably negative.

This emotional response may be more local and less generalisable than it appears to us, fostered by the systems and institutions we have been working in. We hope readers will be willing to join a debate, both on the scope of problems of research approvals and on new solutions to them from a variety of perspectives and experiences. It is, nonetheless, a sad state of affairs, even if experienced only in pockets of the academic world. First, in research as in daily life, ethical dilemmas (typically the starting point of approvals after research has been funded) are amongst the most engaging and enlivening topics of conversation. Second, the people behind the bureaucracy are often thoughtful, well-intentioned individuals willing to make a difference to research and research subjects. REC and IRB board members are generally volunteers. Some of them are, or have been, researchers themselves. Many members of staff working on broader governance approvals, for instance in clinical research networks, individual healthcare organisations or universities, consistently go the extra mile to support researchers in navigating complex systems. Were they only ‘working to rule’, within the explicit remit of their (paid) roles, they could easily introduce inordinate delays or even bring the whole enterprise down.

Yet for many researchers with no insider experience in such processes, ethics and governance approvals are hardly ever about ethical deliberation and motivating exchanges with people supporting you to conduct a better study. More often than not, researchers experience them as exercises of form filling; of engaging with a tiresome and wasteful bureaucracy; of learning the rules and language of a game and sticking with your prescribed roles in it; and, more cynically, of somebody out there caring about ethical conduct only insofar as litigation and reputational damage can be avoided. As a result, many of us working in areas or with methods that may be “exempt” from REC/IRB review would consider fitting a study into an exempt category and avoiding the process altogether, while upholding ethical research standards as an integral part of taking professional responsibility (more or less successfully).

Such impressionistic accounts of researchers’ discontent are borne out by formal studies. In a survey of social researchers working on health and applying for NHS ethics approval (UK system under the National Health Service), Richardson and McMullan [2] found that 78% of their respondents (101) agreed or strongly agreed that the time for preparing an application inhibited social research in health and 75% (98) either disagreed or strongly disagreed that the amount of paperwork required was reasonable. Over half (51%, 64 individuals) reported they had modified their research design for the worse in order to secure ethical approval; 45% (58) had modified their research design to avoid the process; 30% (40) had abandoned their research before applying and 12% (16) during the application process (though we cannot assume that the ethics approval process was the sole reason). Twenty-one percent (26) revised their research design after obtaining approval and did not seek further permissions. Fifteen percent (19) had started research needing NHS research ethics approval without applying and 12% (14) had commenced their research before the formal approval was given. More positively, 32% (41) of respondents had made changes for the better as a result of the ethics approval process [2].

Interestingly, for those of us who internalise such negativism, this happens almost entirely through socialisation in our research community and overlaying our personal experiences on the socialisation process. We do not learn to be critical of the ethics and governance approvals of human subjects research by reading academic and professional literature presenting critical arguments or worrying empirical findings. Such literature is sparse, dispersed and hardly ever a syllabus item.

The sparse and fragmented literature on ethics and governance approvals

To our knowledge, there are only two well-known book-length critical explorations of IRBs (with their popularity still confined to bioethics circles rather than those of medical and health researchers): Zachary Schrag’s Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009 (2010) [3] and Laura Stark’s Behind Closed Doors: IRBs and the Making of Ethical Research (2011) [4]. Schrag’s Ethical Imperialism explores the regulation of research in the social sciences and humanities by IRBs and how derivative this regulation is of the concerns, ethical codes, rules and blueprints of legislation and regulation pertaining to biomedical and, to some extent, psychological research. It illustrates how limited or unsuccessful the involvement of social scientists in the debates was (with them practically “left howling outside the door” and becoming “objects of policy rather than its authors”, op.cit., p. 4, p. 8). Like any good history book, it shows the contingency of much of what we now take for granted and ethical in human subjects research (e.g. the very concept of prior/prospective review; the risks-benefits framework; even the “do not harm” principle once applied outside of patient populations, such as in studies aiming to increase public accountability). Much of the regulation wielding “totemic influence over the practice of research ethics” (op. cit., p. 78) has also been shaped by time and staff limitations; by the moment-to-moment availability or lack of particular types of expertise; by public and political pressures; by tactics and personalities; by the supposedly neutral work of redrafting and editing; and by the gradualism of introducing changes, which keeps them off the radar of those most affected by them.
Schrag’s book is an unlikely page-turner for work so meticulous in tracing and representing historical detail, so careful in crafting precise and information-dense sentences, and so concerned with getting the history right as opposed to advancing a righteous argument.

Published shortly after Schrag’s monograph, Laura Stark’s Behind Closed Doors is an ethnographic study (again from the US) based on observations of IRB meetings and interviews with IRB members, with some elements of a historical exploration. The historical account focuses on the origins of the method for making decisions about research ethics: group review by experts required to reach a consensus. After over 60 years of using this approach, it may appear to be the most reliable and overall ‘best’ way of making research ethics decisions. Yet group reviews are also a consequence of a particular historical configuration: IRBs are “direct descendants of the Clinical Research Committee that met inside NIH’s Clinical Center starting in the 1950s” [op. cit., p. 157]. Stark also argues that rather than having group review imposed on researchers by outsiders (such as bioethicists or activists, as the story often goes), it was primarily researchers and NIH leaders who developed, justified and expanded this mechanism “as a technique for promoting research and preventing lawsuits” [op.cit., p. 8].

Core findings from Stark’s ethnographic work concern the IRBs’ methods for making decisions. For example, board members appeared to be reading “researchers’ application documents like tea leaves for signs of good character” (p. 6). Most notably, the researcher’s apparent attention to detail was translated into judgments of his or her professional competence and trustworthiness (pp. 17–18). Board members used “warrants” to justify their recommendations and although some pertained to matters of fact and professional expertise, a surprising number came from their private lives. The people who featured there came to serve “as stand-ins for research participants”, thus reinforcing the biases of IRB membership (p. 14). IRBs typically invoked local precedents (p. 47 onwards) in their decision-making processes as opposed to abstract ethical rules. This, Stark argued, ensured efficiency and internal consistency of decisions over time, but was also the source of the much-lamented variability of decisions across IRBs. (Of course, we need to acknowledge the critical stance of “Behind Closed Doors”, meaning that it draws more attention to unacknowledged and problematic aspects of the system as opposed to its strengths and good processes.)

The empirical and critical explorations of research approvals in published papers have rather limited overlap with the work of Schrag and Stark. No relevant literature review seems to have been conducted, whether systematic or traditional. We offer a preliminary summary of key themes and approaches, with a focus on the last 20 years.

“Horror stories” appear to be one of two main ‘genres’ in the literature, with the term first used by Bradford Gray in 1978 for the “sad tales told by scholars in their letters to the National Commission and in their testimony at the IRB hearings of 1977” (p. 161) [3]. Mary Dixon-Woods et al. call them “wounding encounters” [5]. More neutrally, these are case studies of challenging ethics review processes, due to time delays and inconsistencies of decisions across committees; [6,7,8,9,10,11] bureaucracy, form filling, paperwork and over-concern with minor language and grammar issues; [12,13,14] lack of appropriate expertise, which may lead to inappropriate rejections, criticisms and recommendations; [15] blocking of important research [16] or compromising its rigour, e.g. through negative effects on recruitment [17]. Mounting costs are also a source of frustration. Most strikingly, Kano et al. estimate the personnel costs of obtaining the 84 approvals for their multi-site project at US$121,344 [18] (note, however, that the system for multisite projects in the US has now changed).

Some of those publications are strongly emotionally charged. Mentions of “horror stories” and “wounding encounters” are also typical elements in polemical texts of scathing criticism and radical dismissal of the value of the existing systems of ethics review, especially if applied outside of biomedicine, such as to social sciences research (e.g. see Dingwall and to some extent Derbyshire [19, 20]). Many publications describing significant challenges, however, aim to remain objective and constructive, giving specific recommendations for improvements rather than questioning the overall value of the system.

There are also studies which, while arising from a background awareness of the challenges around ethics approvals, go straight into exploring and evaluating models for optimising the work involved. Recent research of this type concerns, for instance, the introduction of an ethics officer, [21] the use of structured tools to evaluate the work of RECs/IRBs, whether through self-assessment or researcher assessment, [22, 23] models for simplifying standard operating procedures, [24] and approaches built around specific conditions and enabling better separation between the scientific and ethical appraisal within the ethics review [25].

More broadly, audits and evaluations of RECs/IRBs, typically survey-based, appear to be the other leading genre in the literature. This may be a historical path dependence dating back to 1975, when the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (the commission which created most of the foundations for regulating human subjects research) commissioned the original survey of researchers and IRB members (see p. 63 onwards in Schrag [3]). In the last decade, topics include the overall availability and profile (composition, arrangements for their work, etc.) of IRBs and RECs, primarily in areas where ethical regulation of research is new; [26,27,28,29,30,31,32] comparisons of committee members’ and researchers’ views on ethical issues in complex areas; [33,34,35] practices, policies and requirements of IRBs/RECs; [36,37,38] committee members’ knowledge of and attitudes to ethically and legally complex topics [39, 40] and researchers’ perceptions of the review process, outcomes and impact [2, 41, 42].

The literature described above makes little explicit use of theory. In contrast, there is an identifiable niche of work on ethics approvals drawing on theory from the social sciences [5, 9, 43,44,45]. It problematises the foundations of the new regimes of control and warns against their dangers, particularly in the context of the social sciences. Narrower topics are also explored. For instance, Dixon-Woods et al. look into the social functions of REC letters, including ‘stabilising “what is ethical” in a world where ethical matters are in fact indeterminate’; providing institutional displays of “care, thoroughness and thoughtfulness”; and structuring the relationship between the REC and applicants, reinforcing a ritualised supplicant-authority relationship [5]. Or, against expectations that REC decision-making is yet another impersonal process of a modern bureaucracy, Hedgecoe argues that local knowledge, trust and “facework” (roughly, the direct face-to-face interactions between applicants and committee members which serve to build trust) are surprisingly important for the decision-making processes of local RECs [44].

Some recent trends in the literature can also be identified. There is a fast growing global dimension to it (see, for instance, [46,47,48,49]; about a fifth of the 100 most recent publications retrieved by “ethics research committees” as a major heading in PubMed, Jan 2018, were from developing world countries or countries with smaller research communities). There is also attention to ethics approvals challenges and solutions for newer or newly popular study types, such as participatory research or co-design studies, [50,51,52,53] biospecimens and biobanking studies, [54,55,56] digital health research [57,58,59] and research in disaster and developing world contexts [60,61,62,63,64,65].

Finally, there are red threads of issues which do not become the focus of texts but are picked up and interwoven in much of the critical literature. It often reminds us that ‘there is no single “right” answer to (most) ethical issues’ (p. 795); [5] that what constitutes ethics, ethical practice and ethical decisions is contested, contextual, theory-dependent, subjective, personal, relational, evolving … (e.g. in 2, 9, 43). In contrast, what filters down to ethics review forms and processes is often an “impoverished ethics” (Carl Schneider quoted in Schrag [3], p.3). The shift of power from scholars to administrators and the rise of a bureaucratic and self-serving “machine” are lamented. Appeals are made to trust researchers and their professionalism more and to enable them to take more responsibility for the ethical decisions they make. Attention is repeatedly drawn to serious flaws in, or the lack of, relevant definitions, classifications and boundaries, which have a profound impact on what is being regulated, resulting, for instance, in journalists conducting (and being praised for) investigations which social scientists would never be permitted to do, or in the state performing observations that are far more secretive and sinister than any researcher would ever consider engaging in.

Against this backdrop, the empirical aspect of our paper is a case study of ethics and governance approvals, with a focus on quantifying aspects of the process, one of which does not seem to have been quantified before (the number of “approvers” involved). Although some of the findings are striking, we did not have the experience of a “horror story” or “wounding encounter”. We did not start an audit because we were frustrated. Our study arose out of a table we kept in the project Master File to avoid excessive printing and to justify not printing (communication concerning project approvals must be printed and stored in a physical Master File). If it were not for this table, we would not have paid any further attention to the approvals process once we had gone through it.

This is one of the important points we want to make: much of the burden of ethics and governance approvals and the complexity of the system remain too vague or subliminal to be perceived clearly, like the underwater part of an iceberg. We fail to change course in steering ahead, as we can hardly discern what lies beneath, and we end up damaging the integrity of our research.

Inevitably, by virtue of the position we are taking, as researchers who have found that, largely unknowingly, we have been part of a rather inefficient process, our presentation of issues may feel or be one-sided. We acknowledge that there are many more sides to the story that need to be heard before we arrive at a balanced view of the advantages and disadvantages of current systems. This is one of them.

Main text

The substantive project (study being approved)

The substantive project whose ethics and governance approvals we are discussing was a relatively small mixed methods study, “Prepared to Share?” [1] on patient data sharing in complex conditions, advanced and progressive disease and/or at the end of life, conducted by one Principal Investigator (PI, SB) and one Research Associate (RA, MP). It was an evaluation of the development, implementation and outcomes of a local Electronic Palliative Care Coordination System (EPaCCS) nested in broader research on attitudes to patient data sharing. As the EPaCCS service development project was underway while the study was being carried out, the study was contributing to the EPaCCS project, but the two remained largely independent. Primary data collection was carried out between 2013 and 2015 in Cambridgeshire & Peterborough Clinical Commissioning Group, England’s second largest CCG. It included:

1) qualitative interviews: 40 with 44 primary participants – project developers; General Practitioners (GPs) in their role as data sharing initiators; data sharing recipients including palliative care specialists, out-of-hours, hospital and 111 staff (111 is a non-emergency medical helpline in England, Scotland and parts of Wales); patients and carers; key informants from the local area; and a practice administrator. (There is a discrepancy between the number of interviews and participants as some patient and carer interviews involved more than one person and the two project leads were interviewed twice; paper in preparation.) No question touching on end of life care was included in the interview schedules for patients and current carers; the focus was fully on data sharing, although end of life issues were discussed when raised by interviewees;

2) a questionnaire concerning GPs’ and Practice Managers’ (PMs) familiarity, engagement with and perceptions of patient data sharing (405 responses, 64% response rate); [66]

3) ethnographic observations, mostly from the process of developing the data sharing system, but also from use of the data sharing tools in healthcare settings;

4) document analysis, using reports, meeting minutes and email communication of the clinical service development project.

Three papers have been published from the study, [66,67,68] with more in preparation.

While we could have attempted to represent the study as an evaluation (an exempt category), as its primary aim was to evaluate a service, we decided against this course of action. There were vulnerable individuals in our intended sample, regardless of how a study is classified relative to a debatable and often arbitrary distinction. We wanted to benefit from expert feedback in a full review (as opposed to a proportionate one) and also to ensure that the team was fully protected, should untoward events occur. The benefit of saving time was also uncertain. Exempt studies need a form of approval, be it minimal, as it is advisable to have them confirmed exempt through a formal letter (which may be required by journals before considering publication). In addition, not going through an ethics approval would have still meant going through all other governance approvals, which are often facilitated and/or expedited if preceded by an ethics approval.

The project team

As the relative ease or difficulty of going through ethics and governance approvals is affected by the team members’ research experience, we add a brief section to that effect. SB is an experienced, internationally recognised palliative and end of life care researcher, university lecturer and GP, with all of his senior-level career spent in the locality of the study whose approvals we discuss. MP is a late-early to mid-career researcher, with some of her education and research experience strongly ethics-focused (an MA in Philosophy and Ethics of Mental Health, for which she even wrote coursework on the value of research ethics committees, and several years of experience on a research programme in “values-based practice” as complementary to evidence-based practice).

The system used for study approval and its relationship to HRA approval

Study documentation was submitted through the Integrated Research Application System (IRAS), the UK’s single electronic system for applying for permissions and approvals for health and social care research. The application went through the Coordinated System for gaining NHS Permission (CSP), a simplified route available primarily to studies on the National Institute for Health Research (NIHR) Portfolio, which was phased out in March 2016. It is important to note that the processes we used have now been modified. Below we present the key claims concerning the changes made. We give the new system the benefit of the doubt but maintain that it will not resolve many of the issues we encountered and, under individual findings, explain why. We also excluded our data on time delays, although they were a substantial concern (including 7 months for the NIHR Portfolio application and 7 weeks for minor ethics amendments). Dealing with delays was one of the strongest motivators for introducing the new system and a parameter showing improvements as a result of it.

The new Health Research Authority (HRA) approval process for England was rolled out on 11th May 2015. HRA Approval is represented as a single “radically simplified” approval system for all studies involving the NHS [69, 70]. At the heart of the reform has been closer integration of the systems for REC review and NHS R&D permissions. This would remove the need for repeated R&D permissions, as granted by each individual NHS organisation involved in a study. The changes are designed to reduce system complexity, timelines and costs; eliminate duplication; make the UK a more attractive place for health research; and free up capacity for research delivery and implementation. [70] In June 2016, the HRA announced a backlog of 3000 amendments [71] – pending approvals of changes to existing projects. Bedding-in difficulties can be expected with any significant reform, and at least the backlog seems to have been resolved. A performance report from September 2017 [72] showed that HRA Approval timelines (after final REC favourable opinion) reduced significantly between November 2016 and March 2017, after which they levelled out. The report also stated, however, that only 14% of the current HRA caseload were studies actively processed by the HRA team. The remaining 86% were awaiting other approvals or a response from the applicant [op. cit.]. There is something amiss in a system which controls the outcome of less than a seventh of its cases.

The approvals audit

This audit-type study arose out of a summary table of approvals communications maintained for the Master File of “Prepared to Share?”. Our first recorded communication about ethics and governance approvals is from February 2013. The last document was issued in February 2016. (Chronologically, some of the approvals were granted at a time when HRA Approval was already functional. However, the new system was phased in gradually and our study was fully approved under the old system.) During the most intense period of approvals we had no intention to formalise the emerging evidence, i.e. bias attributable to intentional observations of system inefficiencies is unlikely. Findings are based on records kept by the RA (MP) only.

Eighty-nine individuals (a conservative estimate) have been involved in the study approvals

The study team have communicated with 81 named individuals for research ethics and governance approval purposes through email, telephone or in person, with 8 unnamed individuals mentioned in communications as having provided further advice. These numbers leave out seven categories of individuals and contributions of which we are aware, primarily because communication has been too indirect (such as with staff who processed criminal records checks) or not strictly required (such as with gatekeepers who do not have the authority to grant or withhold permissions formally but could facilitate or block the study progress in other ways). There are also individuals involved in the permissions process who remain invisible to researchers. (In some cases, different individuals performed the same role, if working as part of a team or if post-holders changed. However, the same individual could also perform several roles.)

Thirty-five out of the 44 participants interviewed were health professionals, decision makers and project developers and 9 were patients or family members. This corresponds to roughly two approvers per participant, with all the necessary caveats added about the meaningfulness of such statistics. In one hospital, 10 individuals and 101 exchanges outside the supposedly “one-stop” IRAS platform were needed to interview 4 consultants, a group whose decision-making capacity is hardly a matter of debate. Table 1 provides further details of the distribution of individuals across organisations.

Table 1 Individuals and exchanges involved in the ethics and R&D approvals of the substantive study

Reducing the involvement of individual NHS sites would still leave 59 individuals granting formal approvals. Many of those staff whose signatures will no longer be required will nonetheless be enabling a study’s conduct at a site – a lesser gatekeeping hurdle to overcome, but still an important one.

Four hundred and ninety-one exchanges with those individuals were recorded outside the Integrated Research Application System (IRAS)

We have recorded 491 exchanges with these 89 individuals, generating 193 pages of email text, excluding attachments. These attachments, whose total volume we have not quantified, could contain key information (e.g. amendments requested) or be voluminous (e.g. one of the first emails we were sent by the local R&D team had 76 pages of attachments; a core set of study documentation which we have been sending out numbered 78 pages). In theory, the Integrated Research Application System should act as the platform where all information necessary for research approvals is requested and provided. In practice, information not requested in IRAS forms becomes relevant, especially with non-standard studies; documents are uploaded on IRAS but are not accessible to certain groups; or documents are managed as hard copies and only sent to the study team who then need to pass them on (REC favourable opinion letters were a case in point).

While human inefficiencies on both sides will have had some impact (however much self-serving bias may tempt us to minimise our own), it would be problematic to explain the full volume of correspondence external to the IRAS system through the incompetence of its users, whether ours or that of our colleagues managing the approvals. We recognise that IRAS is a living platform undergoing continuous improvement, but its current version is still largely the same as the one we used. There is more that an integrated research application system should enable if it is to deserve its name.

The 31-page IRAS application and key study documentation appeared to have been read in detail at least 13 times for one-and-a-half pages of mostly technical suggestions

The 31-page IRAS application and accompanying documentation have been read in detail at least 13 times – a conservative estimate based on instances when informed questions were asked or comments provided (6 REC, 4 local R&D, 2 Sponsor, 1 specific NHS site). We cannot estimate the number of people who have read specific sections from the study documents (a typical set consisted of 78 pages and a minimal set of 23 pages). The total written concerns raised by these 13 people amounted to just one-and-a-half pages of text and addressed almost exclusively minor omissions or clarifications of research procedures. We introduced more changes to our study documentation as a result of consulting the study Lay User Group than as a result of the ethics and R&D review processes.

The most significant ethical issues we experienced arose outside of the NHS ethics committee mandate

The most significant ethical issues we experienced in conducting the research related to interviews with members of the service development team working on the data sharing system evaluated as part of the study. The interviews revealed unexpected sources of dissatisfaction and internal conflict. As both of us are also members of the service development team, devising informative yet sensitive ways of feeding back findings was challenging. There was information we agreed to withhold at this stage and re-consider for publication in the future, as it was deemed potentially damaging to team relationships.

It is hard to tell whether the risks of stirring intragroup conflict could have been picked up more effectively if research on staff were formally reviewed as part of the NHS ethics review process. Hindsight will often mislead us in answering such questions. No matter how thorough an ethics review, it will never be able to cover the whole complexity, risks and eventualities arising in the real world. But it is worth noting that NHS ethics review does not formally consider research on healthcare professionals and non-NHS staff, even though we were required to submit information on this aspect of the study for ethics approval, by virtue of it being part of the broader project.

The study was often unclassifiable under existing frameworks, leading to lengthier debates and timeframes

There were four types of classification difficulties associated with this study which generated protracted and recurrent discussions and delays: 1) the study had both research and evaluation aspects; 2) it involved both patients and health professionals; 3) it combined traditional research and service development elements and was set both within a University and a healthcare setting; and 4) it was funded through an open competition in a geographically confined area, i.e. an open competition within a closed competition. The decision pathways and outcomes for the “pure cases”, here combined in a single study, clashed. For instance, while ‘patients’ and ‘research’ (unless on staff) necessitate NHS REC approval, ‘evaluation’ and ‘staff’ are exemption criteria. Or, since the grant, being primarily for service development, was held by an NHS-related organisation rather than the University as a research grant, sponsorship responsibilities could not be attributed unequivocally. Finally, the research project was funded in two open competitions within narrow geographical and organisational boundaries. This led to protracted discussions concerning financial and infrastructure support from the Clinical Research Network, only available to studies which have undergone high quality peer review and been funded through an open competition.

Implications of the findings

Our findings confirm both empirical findings from previous research on ethics and governance approvals and impressions which are shared informally amongst many researchers working in human subjects research. These concern system complexity, duplication, excessive paperwork, attention to trivial detail in applications while missing core ethical issues, and classification difficulties with significant consequences for the approvals process and outcomes.

We also suggest two new points. The first is that the size of the approvals machine and the nature of the work of its individual parts are largely unknown even to people who are working in it, including those working to reform it. Colleagues and reviewers from within the system have been struck by the figures (as we were) and even doubtful of our data. This is largely because the ethics and governance approvals system is a single whole only by virtue of its outcomes – permissions for a study to proceed. Other than that, it is at best a collage, at worst a ‘bag’ of elements belonging to many different organisations where only some are well connected to others.

The second point also concerns issues around visibility and knowability. While many researchers may be experiencing ethics and governance approvals processes as burdensome and time- and resource-consuming, much of that burden and investment remains under the radar. As we were moving through the hoops of the approvals process, we felt we knew how to navigate it, believed we were managing it well, and found the staff and signatories on the other side unfailingly helpful, even if not always efficient. Our frustrations were only occasional and fleeting. We were completely unprepared for some of the figures generated by the case study. The scale of the work involved is largely invisible when done piecemeal. It is worth re-emphasising that the work described here excludes the core task of obtaining ethics and R&D approvals – completing the paperwork (such as the IRAS form) and preparing the accompanying materials (such as information sheets). As also mentioned, this case study was based on records held by the RA only, document attachments were not counted, and estimates as a whole were conservative. The impact on the cost of studies and loss of opportunity to focus on the substantive research is enormous, even when considering an incomplete picture.

Steps towards a truly “radically simplified” system

Even though HRA Approval seems to be achieving some improvements, we argue that more needs to change before researchers in England experience the system as “radically simplified”. We propose six priority steps for enabling such change. We believe that these steps are likely to be generalisable to other contexts, as many of the challenges they aim to address are shared, as the literature review has demonstrated. In addition, our proposals are predicated on flexible and dispersed working processes, which are becoming increasingly natural with the growing sophistication of information technologies.

Support the development of a broad range of customised research ethics and governance templates rather than rely on generic, predominantly clinical trials orientated, templates

Templates for soliciting ethically relevant information for health research, at least in England, are typically modelled on templates for clinical trials of investigational medicinal products (CTIMPs). While screening questions from other ethically charged areas are also asked, for instance concerning research on vulnerable groups (e.g. children, prisoners, individuals lacking mental capacity), utilising human tissue, involving radiation, etc., these questions are limited. Any area of health and medical research involves its own particular ethical issues. In our case, issues around palliative care research, evaluations of active projects and data sharing received limited or no attention, not least because the proformas through which the information was collected were generic. Ethical and regulatory challenges will be far more effectively captured if templates are customised to different study types, through endeavours informed by domain experts, bioethicists and patients.

There are examples of successful introduction of subject-specific templates for IRBs and their positive impact [73]. There are also settings where health research informed by the behavioural and social sciences and the humanities is reviewed by committees within the respective academic departments, where forms are likely to be more appropriate. The need to improve template relevance is not universal, but it is acute in some contexts. Inevitably, such work will create its own challenges, most notably template proliferation and version control. But unless we become more accommodating of the huge variety of studies being conducted, we will continue to stretch and shrink studies to the Procrustean bed of CTIMPs.

Improve study classification, particularly in view of the growing number of innovative and mixed study types

The liminal nature of our study on at least four criteria (e.g. research – evaluation, study of patients – study of health professionals, etc.) was a cause of uncertainty and delays in decisions. To have a system that is equitable and efficient across study types, we need new and expanded decision pathways which can accommodate the growing number of non-typical and multi-method studies.

The scope for missing ethical and governance challenges on the basis of typological distinctions should also be reduced. Currently, service evaluations and NHS employee research are considered to pose low ethical risks. This is an oversimplification. Most likely, there are no essential typological differences between research and evaluation studies in the ethical challenges they pose. There are simply lower litigation risks associated with studies classified as evaluations. Apart from missing relevant ethical considerations, the current research-evaluation distinction is a source of unequal treatment of projects (i.e. when two very similar projects go down different routes, one of evaluation, the other of research) and is open to manipulation. Even the most responsible amongst researchers are tempted to consider whether we can “pass” a study as an evaluation to avoid the complex NHS approvals process.

Link with associated processes for assessment, feedback, monitoring and reporting

The ethics and R&D approvals of a study are only one of the spaces where the methods, design, supporting materials, staff competence, the adequacy of funding and other features of a study are reviewed. Funders, the scientific reviewers whose feedback they solicit, patient and public involvement (PPI) groups and research support services often review significantly overlapping, even the same, documentation. On the monitoring and reporting side, funders, ethics and R&D officers, university administration, impact and public engagement officers, and PPI groups also have overlapping needs and requirements. While the availability of alternatives can ensure the robustness of the system, creating opportunities to rectify omissions not identified previously, it also creates wasteful redundancy and duplication of effort. Better alignment of the work of these different organisations and networks will lighten the overall “meta-research burden”, i.e. all those tasks associated with the research support infrastructure as opposed to the substantive scholarly work.

Invest in a new generation IT system

If HRA Approval, or any ethics and governance approval system, strives to be radically more efficient, work on developing the underpinning IT system should be prioritised, with an ambitious vision. Until all study documents are located in one place and easily searchable, voluminous email exchanges outside of the system will continue. Easier access to full study information will enable de-localising and de-synchronising aspects of the approvals process that need not be geographically and temporally fixed. For instance, comments on the ethical aspects of study protocols or the quality of patient information materials may be more efficiently provided by appropriately qualified but geographically dispersed individuals rather than a local REC. Broader lay representation will also be facilitated in this way. Apart from improving efficiency, such broadening of participation could also enhance the quality of ethical debate.

Furthermore, rather than each PI sending to non-university bodies credentials which are identical for every study within a university (such as insurance documents, which also periodically expire and are renewed, resulting in further communication), a university can have “an institutional legal, ethical and governance profile for research” and access to all of its studies on a shared system for approvals. Similarly, rather than repeatedly attaching the same CVs and Good Clinical Practice certificates, PIs and RAs can have personal “ethical and governance awareness profiles”, also linking to established platforms such as ORCID, ResearchGate, LinkedIn or Convey (a platform for conflict of interests disclosure [74, 75]). Finally, the non-proprietary, non-sensitive information concerning studies and their ethical credentials can be made available to the public through the same system using different (or similar) viewing rights. Study participants in particular should have easy access to such documentation and opportunities to report on the ethical conduct and adverse effects of being in a study.

Enhance system capacity through increasing online participation and training

One of the reasons for the slow ethics and R&D approvals process is that it is highly localised. In the case of NHS ethics reviews, this is still focused around an ethics committee that meets physically to discuss studies and thus has a limited membership, often worryingly professionalised, and a high workload. If we broaden the number of potential participants in the review processes through a sophisticated IT system and accredited training, we can dramatically increase the capacity of the system. Some possible directions for recruitment efforts include medical charities and patient groups, whose involvement in research has been growing steadily in recent years, postgraduate students in ethical disciplines, and, perhaps most importantly, past research participants. Involving the latter, however, requires us to step up efforts to keep participants informed of the outcomes of studies, recognise their contribution, and nurture their willingness to support research in a variety of ways.

Perhaps, in the not too distant future, existing Research Ethics Committees will turn into local training and advice centres, while much larger ethics review teams do most of the work online.

Encourage researchers to quantify the approvals process they are going through

Finally, we hope that more colleagues will be willing to undertake work similar to that presented here and that more structural support for bigger and more rigorous studies, as opposed to single case studies, can be provided by funding bodies. If researchers can offer evidence concerning ethics and governance approval processes, rather than primarily express frustration amongst ourselves, we will have much stronger arguments to insist on radical improvements and better ideas to enable them.

Conclusions

The broad variety of stakeholders who experience the impact of burdensome research ethics and governance approvals processes – researchers, R&D staff, health professionals involved in research, research participants, funders, administrators, etc. – have a responsibility to facilitate and demand substantial improvements of the system. Unless we contribute to such change, we will continue to uphold principles of research ethics in a way which is deeply unethical, by virtue of its wastefulness on the one hand and impoverished ethics on the other.

Abbreviations

HRA: Health Research Authority

IRAS: Integrated Research Application System

NHS: National Health Service

NIH: National Institutes of Health

R&D: Research and development

REC: Research Ethics Committee

References

  1. “Prepared to Share” study website. https://www.phpc.cam.ac.uk/pcu/research/research-projects-list/prepared-to-share/. Accessed 5 Nov 18.

  2. Richardson S, McMullan M. Research ethics in the UK: what can sociology learn from Health? Sociology. 2007;41(6):1115–32. https://doi.org/10.1177/0038038507082318.

  3. Schrag Z. Ethical imperialism: institutional review boards and the social sciences, 1965–2009. Kindle Edition. Baltimore: The Johns Hopkins University Press; 2010.

  4. Stark L. Behind Closed Doors: IRBs and the Making of Ethical Research. Morality and society series. Kindle edition. Chicago and London: The University of Chicago Press; 2011.

  5. Dixon-Woods M, Angell E, Ashcroft R, Bryman A. Written work: the social functions of research ethics committee letters. Soc Sci Med. 2007;65:792–802. https://doi.org/10.1016/j.socscimed.2007.03.046.

  6. Harries UJ, Fentem PH, Tuxworth W, Hoinville GW. Local research ethics committees. Widely differing responses to a national survey protocol. J R Coll Physicians Lond. 1994;28(2):150–4.

  7. Ahmed AH, Nicholson KG. Delays and diversity in the practice of local research ethics committees. J Med Ethics. 1996;22(5):263–6.

  8. Maskell NA, Jones EL, Davies RJO, on behalf of the BTS/MRC MIST steering committee. Variations in experience in obtaining local ethical approval for participation in a multi-Centre study. QJM. 2003;96(4):305–7 https://doi.org/10.1093/qjmed/hcg042.

  9. McDonach E, Barbour RS, Williams B. Reflections on applying for NHS ethical approval and governance in a climate of rapid change: prioritising process over principles. Int J Soc Res Methodol. 2009;12(3):227–41 https://doi.org/10.1080/13645570701606127.

  10. Freed MC, Novak LA, Killgore WDS, Rauch SAM, Koehlmoos TP, Ginsberg JP, et al. IRB and research regulatory delays within the military Health system: do they really matter? And if so, why and for whom? Am J Bioeth. 2016;16(8):30–7 https://doi.org/10.1080/15265161.2016.1187212.

  11. White VM, Bibby H, Green M, Anazodo A, Nicholls W, Pinkerton R, et al. Inconsistencies and time delays in site-specific research approvals hinder collaborative clinical research in Australia. Intern Med J. 2016;46(9):1023–9. https://doi.org/10.1111/imj.13191.

  12. Jamrozik K. Research ethics paperwork: what is the plot we seem to have lost? BMJ. 2004;329(7460):286–7. https://doi.org/10.1136/bmj.329.7460.286.

  13. Wald DS. Bureaucracy of ethics applications. BMJ. 2004;329:282 https://doi.org/10.1136/bmj.329.7460.282.

  14. Martyn C. The ethical bureaucracy. Q J Med. 2003;96:323–4. https://doi.org/10.1093/qjmed/hcg060.

  15. Stewart P, Stears A, Tomlinson JW, Brown MJ. Regulation – the real threat to clinical research. BMJ. 2008;337(a1732):1085–7 https://doi.org/10.1136/bmj.a1732.

  16. Saxton DI, Brown P, Seguinot-Medina S, Eckstein L, Carpenter DO, Miller P, Waghiyi V. Environmental health and justice and the right to research: institutional review board denials of community-based chemical biomonitoring of breast milk. Environ Health. 2015;14:90. https://doi.org/10.1186/s12940-015-0076-x.

  17. Ward HJT, Cousens SN, Smith-Bathgate B, Leitch M, Everington D, Will RG, Smith PG. Obstacles to conducting epidemiological research in the UK general population. BMJ. 2004;329(7460):277–9. https://doi.org/10.1136/bmj.329.7460.277.

  18. Kano M, Getrich CM, Romney C, Sussman AL, Williams RL. Costs and inconsistencies in US IRB review of low-risk medical education research. Med Educ. 2015;49(6):634–7. https://doi.org/10.1111/medu.12693.

  19. Dingwall R. The ethical case against ethical regulation in humanities and social science research. 21st Century Society: Journal of the Academy of Social Sciences. 2008;3(1):1–12. https://doi.org/10.1080/17450140701749189.

  20. Derbyshire S. The ethical dilemma of ethical committees. Sociol Compass. 2008;2(5):1506–22. https://doi.org/10.1111/j.1751-9020.2008.00143.x.

  21. Dixon-Woods M, Foy C, Hayden C, Al-Shahi Salman R, Tebbutt S, Schroter S. Can an ethics officer role reduce delays in research ethics approval? A mixed-method evaluation of an improvement project. BMJ Open. 2016;6(8):e011973. https://doi.org/10.1136/bmjopen-2016-011973.

  22. Jaoko W, Bukusi E, Davis AM. An evaluation of the Middle East research training initiative tool in assessing effective functioning of research ethics committees. J Empir Res Hum Res Ethics. 2016;11(4):357–63. https://doi.org/10.1177/1556264616665952.

  23. Hall DE, Hanusa BH, Ling BS, Stone RA, Switzer GE, Fine MJ, Arnold RM. Using the IRB researcher assessment tool to guide quality improvement. J Empir Res Hum Res Ethics. 2015;10(5):460–9. https://doi.org/10.1177/1556264615612195.

  24. Ouwe Missi Oukem-Boyer O, Munung NS, Tangwa GB. Small is beautiful: demystifying and simplifying standard operating procedures: a model from the ethics review and consultancy committee of the Cameroon Bioethics Initiative. BMC Med Ethics. 2016;17(1):27. https://doi.org/10.1186/s12910-016-0110-8.

  25. Knopman D, Alford E, Tate K, Long M, Khachaturian AS. Patients come from populations and populations contain patients. A two-stage scientific and ethics review: the next adaptation for single institutional review boards. Alzheimers Dement. 2017;13(8):940–6. https://doi.org/10.1016/j.jalz.2017.06.001.

  26. Panichkul S, Mahaisavariya P, Morakote N, Condo S, Caengow S, Ketunpanya A. Current status of the research ethics committees in Thailand. J Med Assoc Thail. 2011;94(8):1013–8.

  27. Koepsell D, Brinkman WP, Pont S. Human research ethics committees in technical universities. J Empir Res Hum Res Ethics. 2014;9(3):67–73. https://doi.org/10.1177/1556264614540596.

  28. Sabio MF, Bortz JE. Structure and functioning of research ethics committees in the Autonomous City of Buenos Aires and greater Buenos Aires. Salud Colect. 2015;11(2):247–60. https://doi.org/10.1590/S1851-82652015000200008.

  29. Abrar S, Ronis KA, Khan S, Siraj S, Safdar W, Khalid Y, et al. Status of ethical review boards in medical colleges of Khyber Pakhtunkhwa. J Ayub Med Coll Abbottabad. 2015;27(2):411–4.

  30. Janakiram C, Venkitachalam R, Joseph J. Profile of institutional ethics committees in dental teaching institutions in Kerala, India. Account Res. 2016;23(4):219–29. https://doi.org/10.1080/08989621.2015.1114887.

  31. Hernandez R, Cooney M, Dualé C, Gálvez M, Gaynor S, Kardos G, et al. Harmonisation of ethics committees’ practice in 10 European countries. J Med Ethics. 2009;35(11):696–700. https://doi.org/10.1136/jme.2009.030551.

  32. Campbell EG, Vogeli C, Rao SR, Abraham M, Pierson R, Applebaum S. Industry relationships among academic institutional review board members: changes from 2005 through 2014. JAMA Intern Med. 2015;175(9):1500–6. https://doi.org/10.1001/jamainternmed.2015.3167.

  33. Duffett M, Burns KE, Kho ME, Lauzier F, Meade MO, Arnold DM, et al. Academy of Critical CAre: Development, Evaluation, and MethodologY (ACCADEMY) and Canadian Critical Care Trials group. Consent in critical care trials: a survey of Canadian research ethics boards and critical care researchers. J Crit Care. 2011;26(5):533.e11–22. https://doi.org/10.1016/j.jcrc.2010.12.009.

  34. Rodrigue JR, Feng S, Johansson AC, Glazier AK, Abt PL. Deceased donor intervention research: a survey of transplant surgeons, organ procurement professionals, and institutional review board members. Am J Transplant. 2016;16(1):278–86. https://doi.org/10.1111/ajt.13482.

  35. Stryjewski TP, Kalish BT, Silverman B, Lehmann LS. The impact of institutional review boards (IRBs) on clinical innovation: a survey of investigators and IRB members. J Empir Res Hum Res Ethics. 2015;10(5):481–7. https://doi.org/10.1177/1556264615614936.

  36. Gong MN, Winkel G, Rhodes R, Richardson LD, Silverstein JH. Surrogate consent for research involving adults with impaired decision making: survey of institutional review board practices. Crit Care Med. 2010;38(11):2146–54. https://doi.org/10.1097/CCM.0b013e3181f26fe6.

  37. Ricci DS, Broderick ED, Tchelet A, Hong F, Mayevsky S, Mohr DM, et al. Global requirements for DNA sample collections: results of a survey of 204 ethics committees in 40 countries. Clin Pharmacol Ther. 2011;89(4):554–61. https://doi.org/10.1038/clpt.2010.319.

  38. Roche E, King R, Mohan HM, Gavin B, McNicholas F. Payment of research participants: current practice and policies of Irish research ethics committees. J Med Ethics. 2013;39(9):591–3. https://doi.org/10.1136/medethics-2012-100679.

  39. Beskow LM, Check DK, Namey EE, Dame LA, Lin L, Cooper A, et al. Institutional review boards' use and understanding of certificates of confidentiality. PLoS One. 2012;7(9):e44050. https://doi.org/10.1371/journal.pone.0044050.

  40. Nadig P, Joshi M, Uthappa A. Competence of ethics committees in patient protection in clinical research. Indian J Med Ethics. 2011;8(3):151–4.

  41. Edwards KL, Lemke AA, Trinidad SB, Lewis SM, Starks H, Quinn Griffin MT, Wiesner GL. Attitudes toward genetic research review: results from a survey of human genetics researchers. Public Health Genomics. 2011;14(6):337–45. https://doi.org/10.1159/000324931.

  42. Fischer BA, George P. The investigator and the IRB: a survey of depression and schizophrenia researchers. Schizophr Res. 2010;122(1–3):206–12. https://doi.org/10.1016/j.schres.2009.12.019.

  43. Boden R, Epstein D, Latimer J. Accounting for ethos or programmes for conduct? The brave new world of research ethics committees. Sociol Rev. 2009;57(4):727–49.

  44. Hedgecoe A. Trust and regulatory organisations: the role of local knowledge and facework in research ethics review. Soc Stud Sci. 2012;42(5):662–83. https://doi.org/10.1177/0306312712446364.

  45. Hedgecoe A. Reputational risk, academic freedom and research ethics review. Sociology. 2016;50(3):486–501. https://doi.org/10.1177/0038038515590756.

  46. Motari M, Ota MO, Kirigia JM. Readiness of ethics review systems for a changing public health landscape in the WHO African Region. BMC Med Ethics. 2015;16(1):82. https://doi.org/10.1186/s12910-015-0078-9.

  47. Suzuki M, Sato K. Description and evaluation of the research ethics review process in Japan: proposed measures for improvement. J Empir Res Hum Res Ethics. 2016;11(3):256–66. https://doi.org/10.1177/1556264616660644.

  48. Abdulrahman M, Nair SC. Overall assessment of human research and ethics committees in the United Arab Emirates. J Empir Res Hum Res Ethics. 2017;12(2):71–8. https://doi.org/10.1177/1556264617697522.

  49. Regmi PR, Aryal N, Kurmi O, Pant PR, van Teijlingen E, Wasti SP. Informed consent in health research: challenges and barriers in low-and middle-income countries with specific reference to Nepal. Dev World Bioeth. 2017;17(2):84–9. https://doi.org/10.1111/dewb.12123.

  50. Goodyear-Smith F, Jackson C, Greenhalgh T. Co-design and implementation research: challenges and solutions for ethics committees. BMC Med Ethics. 2015;16:78. https://doi.org/10.1186/s12910-015-0072-2.

  51. Tamariz L, Medina H, Taylor J, Carrasquillo O, Kobetz E, Palacio A. Are research ethics committees prepared for community-based participatory research? J Empir Res Hum Res Ethics. 2015;10(5):488–95. https://doi.org/10.1177/1556264615615008.

  52. Yanar ZM, Fazli M, Rahman J, Farthing R. Research ethics committees and participatory action research with young people: the politics of voice. J Empir Res Hum Res Ethics. 2016;11(2):122–8. https://doi.org/10.1177/1556264616650114.

  53. Calzo JP, Bogart LM, Francis E, Kornetsky SZ, Winkler SJ, Kaberry J. Engaging institutional review boards in developing a brief, community-responsive human subjects training for community partners. Prog Community Health Partnersh. 2016;10(3):471–7. https://doi.org/10.1353/cpr.2016.0053.

  54. Rivera SM, Goldenberg A, Rosenthal B, Aungst H, Maschke KJ, Rothwell E, et al. Investigator experiences and attitudes about research with biospecimens. J Empir Res Hum Res Ethics. 2015;10(5):449–56. https://doi.org/10.1177/1556264615610199.

  55. Mungwira RG, Nyangulu W, Misiri J, Iphani S, Ng'ong'ola R, Chirambo CM, et al. Is it ethical to prevent secondary use of stored biological samples and data derived from consenting research participants? The case of Malawi. BMC Med Ethics. 2015;16(1):83. https://doi.org/10.1186/s12910-015-0077-x.

  56. Kaye J, Briceño Moraia L, Curren L, Bell J, Mitchell C, Soini S, et al. Consent for biobanking: the legal frameworks of countries in the BioSHaRE-EU project. Biopreserv Biobank. 2016;14(3):195–200. https://doi.org/10.1089/bio.2015.0123.

  57. Nebeker C, Lagare T, Takemoto M, Lewars B, Crist K, Bloss CS, Kerr J. Engaging research participants to inform the ethical conduct of mobile imaging, pervasive sensing, and location tracking research. Transl Behav Med. 2016;6(4):577–86. https://doi.org/10.1007/s13142-016-0426-4.

  58. Thayer EK, Rathkey D, Miller MF, Palmer R, Mejicano GC, Pusic M, et al. Applying the institutional review board data repository approach to manage ethical considerations in evaluating and studying medical education. Med Educ Online. 2016;21(1):32021. https://doi.org/10.3402/meo.v21.32021.

  59. Torous J, Nebeker C. Navigating ethics in the digital age: introducing connected and open research ethics (CORE), a tool for researchers and institutional review boards. J Med Internet Res. 2017;19(2):e38. https://doi.org/10.2196/jmir.6793.

  60. Eckenwiler L, Pringle J, Boulanger R, Hunt M. Real-time responsiveness for ethics oversight during disaster research. Bioethics. 2015;29(9):653–61. https://doi.org/10.1111/bioe.12193.

  61. De Crop M, Delamou A, Griensven JV, Ravinetto R. Multiple ethical review in North-South collaborative research: the experience of the Ebola-Tx trial in Guinea. Indian J Med Ethics. 2016;1(2):76–82. https://doi.org/10.20529/IJME.2016.022.

  62. Hunt M, Tansey CM, Anderson J, Boulanger RF, Eckenwiler L, Pringle J, Schwartz L. The challenge of timely, responsive and rigorous ethics review of disaster research: views of research ethics committee members. PLoS One. 2016;11(6):e0157142. https://doi.org/10.1371/journal.pone.0157142.

  63. Mezinska S, Kakuk P, Mijaljica G, Waligóra M, O'Mathúna DP. Research in disaster settings: a systematic qualitative review of ethical guidelines. BMC Med Ethics. 2016;17(1):62. https://doi.org/10.1186/s12910-016-0148-7.

  64. Klitzman RL. US IRBs confronting research in the developing world. Dev World Bioeth. 2012;12(2):63–73. https://doi.org/10.1111/j.1471-8847.2012.00324.x.

  65. Klitzman R. How US institutional review boards decide when researchers need to translate studies. J Med Ethics. 2014;40(3):193–7. https://doi.org/10.1136/medethics-2012-101174.

  66. Petrova M, Barclay M, Barclay SS, Barclay S. Between “the best way to deliver patient care” and “chaos and low clinical value”: General Practitioners’ and Practice Managers’ views on data sharing. Int J Med Inform. 2017;104:74–83. https://doi.org/10.1016/j.ijmedinf.2017.05.009.

  67. Petrova M, Riley J, Abel J, Barclay S. Crash course in EPaCCS (Electronic Palliative Care Coordination Systems): 8 years of successes and failures in patient data sharing to learn from. BMJ Support Palliat Care. Published online first: 16 Sep 2016. https://doi.org/10.1136/bmjspcare-2015-001059.

  68. Petrova M, Barclay S. Something’s awry (again) in the debate on patient data sharing. Br J Gen Pract. 2018;68(668):133. https://doi.org/10.3399/bjgp18X695081.

  69. Health Research Authority. HRA Approval website.

  70. Messer J for the NHS Health Research Authority. Summary of Plans for Health Research Authority Assessment and Approval. 2014.

  71. NHS Health Research Authority. Update on performance of HRA Approval and related processes. 2016.

  72. NHS Health Research Authority. Update on performance of HRA Approval – September 2017.

  73. DeMeo SD, Nagler A, Heflin MT. Development of a Health professions education research-specific institutional review board template. Acad Med. 2016;91(2):229–32. https://doi.org/10.1097/ACM.0000000000000987.

  74. Convey – Global Disclosure System website. http://www.convey.org/. Accessed 5 Nov 2018.

  75. Perkel J. Convey simplifies conflicts of interest disclosure. Naturejobs Blog. 2017. http://blogs.nature.com/naturejobs/2017/03/22/convey-simplifies-conflict-of-interest-disclosure/. Accessed 5 Nov 2018.

Acknowledgements

We would like to thank all REC and R&D colleagues with whom we worked to obtain the approvals for the substantive study. Time and time again we were amazed by the dedication, exceptional efficiency, competence and helpfulness of many of them. This is not a paper criticising individuals, but one addressing systemic issues that affect not only researchers but also REC and R&D officers, amongst many other stakeholders. We would also like to thank our reviewers and the editors for helping to improve this paper significantly.

Funding

This paper presents independent research funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research & Care (CLAHRC) East of England, at Cambridgeshire and Peterborough NHS Foundation Trust, and the Health Innovation and Education Cluster (HIEC) hosted by Cambridge University Health Partners (CUHP). It has also been funded through the Marie Curie Design to Care programme, a service improvement programme; this research forms part of its design phase, which is funded by Marie Curie, the UK’s leading charity caring for people living with any terminal illness and their families. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health and Social Care, HIEC, CUHP or Marie Curie. None of the funders had any involvement in the conduct or reporting of this work.

Availability of data and materials

The datasets generated and analysed during the current study are not publicly available, as a significant proportion of them identify people and organisations, but anonymised subsets are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Authors

Contributions

MP conceived of the study, carried out the data collection and analysis, and drafted the manuscript. SB supported the interpretation of findings and revised several versions of the manuscript. MP is the guarantor for the data. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Mila Petrova.

Ethics declarations

Ethics approval and consent to participate

For the “Prepared to Share?” study: 13/EE/0291, NRES Committee East of England – Cambridge East. Written informed consent was obtained from interview participants. For questionnaire respondents, return of the questionnaire was taken to indicate consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Petrova, M., Barclay, S. Research approvals iceberg: how a ‘low-key’ study in England needed 89 professionals to approve it and how we can do better. BMC Med Ethics 20, 7 (2019). https://doi.org/10.1186/s12910-018-0339-5


Keywords