Exploring how embedding mini independent research projects in computational chemistry modules influences students’ employability
Main concern
My core concern is that students’ opportunities to experience authentic “research-like” work in the chemistry curriculum at UK universities are currently deteriorating. At the same time, institutional and sector narratives in the UK increasingly emphasise employability, often characterised through short-term metrics that may not capture longer-run career development1 or non-linear career trajectories2,3. In my local context, colleagues and I have observed concrete curriculum changes in this direction: compression and replacement of independent research projects, including the planned replacement of individual final-year projects with group work and the shrinking of postgraduate dissertation/project components. This runs against the disciplinary expectation that UK HE Chemistry curricula include a major component of assessed practical and project work4,5.
I arrived at this concern through a structured evaluation of my practice that combined: (i) review of assessment artefacts and marking rubrics across recent iterations; (ii) scrutiny of module evaluation comments and informal student feedback (particularly around confidence, “not knowing what counts as good”, and difficulty transferring skills beyond a single assignment); (iii) reflective logs of teaching and assessment decision-making; and (iv) conversations with colleagues teaching adjacent modules about where students seem to “plateau” (for example, in formulating research questions, designing computational experiments, interpreting ambiguous outputs, and writing results beyond a template). This view aligns with my “informed observations”: that students engage well with technical content yet often struggle to connect methods to a broader investigative purpose, and that they need more repeated, structured practice in independent inquiry that resembles what chemists actually do.
A coherent response is to design learning that connects students to research and inquiry as a normal mode of study. Connected curriculum approaches explicitly argue for integrating research-based education as a transformational organising principle: all undergraduates should experience learning through and about research and inquiry, approximating the practices of disciplinary researchers. The high-impact practices literature also consistently identifies undergraduate research experiences as especially powerful for engagement and development, particularly for underserved students. In STEM, undergraduate research experiences have been associated with gains in learning and intentions toward scientific careers, and more recent work links research experiences to students’ perceived employability development6,7. Within the chemical sciences specifically, recent UK/Ireland discussion of skills gaps reinforces the need for curricula that build adaptable, workplace-relevant capabilities rather than narrow task performance8,9.
Students are the primary stakeholders: reductions in research/project experiences risk limiting access to “high-impact” learning that supports deep learning and career development10,11. Notably, this is not evenly distributed: students with less cultural capital, fewer networks, or higher external commitments may rely more on embedded curriculum opportunities to build identity, belonging, and confidence through authentic work (hence the explicit inclusion of belonging/identity in the initial diagnostic work)12,13.
Colleagues and teaching teams are stakeholders because project work is resource-sensitive: it demands careful design, assessment calibration, and support for diverse learners. If project experiences are removed primarily to reduce complexity, then a realistic alternative is to embed smaller, time-bounded research experiences (“mini research sprints”) inside existing modules, rather than treating projects as a single large capstone that is easiest to cut when pressure rises.
The programme/department and institution are stakeholders too, as they have quality assurance and enhancement obligations. Disciplinary frameworks such as RSC accreditation4 explicitly value graduates’ competence in dealing with challenging projects and responsibility for decision-making in variable contexts, while the QAA Chemistry Benchmark5 positions chemistry graduates as combining disciplinary knowledge with practical and transferable skills applicable across activities, projects, and careers. A curriculum drifting away from authentic project work therefore risks misalignment with the expectations that shape programme design and review.
Employers, alumni, and the HE sector are stakeholders because employability discourse is often dominated by short-term outcome measures, while employers continue to report gaps in problem solving, independence, and communication. This creates a strategic challenge: how do we design learning that builds genuine capability while also helping students articulate that capability in employability narratives? Bringing these strands together, my research concern is:
In a curriculum context where large research projects are being reduced or reshaped, how does embedding assessed “mini independent research projects” within computational chemistry modules influence students’ employability-relevant development (autonomy, problem solving, and communication), and their sense of belonging/identity as emerging scientists, across a diverse student cohort?
This concern is also inseparable from ethical and professional responsibilities: because the evaluation will involve identifiable student/alumni/employer perspectives and potential dissemination, the study will follow institutional ethics processes and BERA-aligned principles of informed consent, opt-out, and data minimisation/anonymity.
Action Research
In the education literature, action research is typically framed as practitioner-led, systematic inquiry embedded in practice and oriented towards improvement through change. One influential definition14 describes it as self-reflective inquiry by participants in social situations “to improve the rationality and justice of their own practices”, their understanding of those practices, and the situations in which they occur. This emphasis matters: action research is not “research about teaching” done in isolation; it is research for (and with) educational practice, where inquiry and improvement are entangled. Across domains, reviews of action research converge on core characteristics: it is problem-focused, involves a change intervention, is collaborative/participatory, and proceeds through cyclical processes15. It also has an explicit democratic and participatory commitment: those affected by the practice should, in principle, participate in planning, acting, observing, and reflecting.
The canonical “cycle” is usually summarised as plan -> act -> observe -> reflect, repeated iteratively. This is often presented as a spiral rather than a closed loop: Lewin’s formulation is commonly paraphrased as a spiral of planning, action, and fact-finding about the results of action16. Carr & Kemmis make this logic explicit: plans are prospective, reflection is retrospective, observation links action to evaluation, and reflection then informs new planning14. However, recent accounts also warn against treating the cycle as a tidy recipe. Kemmis and colleagues note that the “spiral of cycles” representation can oversimplify what happens; in real projects the stages overlap and boundaries are fuzzy17. This is useful for my project because curriculum interventions rarely unfold as clean experimental “treatments”: action research gives a disciplined way to work with that messiness rather than pretend it isn’t there.
Action research is explicitly positioned as a way of producing actionable knowledge: knowledge that is good enough to guide decisions, improve practice, and support collective learning in a specific context. The pragmatist core is therefore that the purpose and outcomes include improved action alongside personal and collective learning16. Action research can similarly be framed as inquiry that describes/interprets social situations while executing a change intervention aimed at improvement15. Crucially, action research can still contribute to scholarly knowledge, but usually through:
- well-evidenced accounts of mechanisms (what changed, for whom, under what conditions),
- design principles (transferable “if…then…” heuristics rather than universal laws),
- theorised narratives linking practice-based evidence to wider educational debates (e.g., research-based learning, employability, and student identity and belonging in this specific case).
It is worth remembering that action research is not optimised for strong causal attribution in the way a controlled trial or tightly designed experiment is. When assessment design, group structures, staff guidance, and student motivation all change simultaneously, it is usually impossible to isolate a single causal lever with high confidence (and action research generally does not pretend otherwise). Action research resists narrow definitions and checklists: it is a complex, contextually situated process that can be misunderstood if judged by criteria designed for other methodologies15.
It also carries particular insider-research risks: role duality, organisational politics, and ethics are not side issues but central methodological constraints when the researcher is also a lecturer assessing students18. Finally, the “cycle” can become superficial if treated as a procedural loop rather than a deep inquiry stance: simple cycles of action and review can fail to generate sufficient depth of understanding for demanding change contexts16.
In this proposal, I explicitly suggest an intervention in my own teaching system: redesigning summative coursework into assessed “mini research sprints”, then evaluating effects on employability-relevant capabilities, autonomy, and student narratives. That is exactly the kind of problem where action research fits: it allows me to (i) improve practice in real time, (ii) involve stakeholders (students, colleagues, alumni/partners) in shaping and interpreting the change, and (iii) generate credible, theory-connected evidence without pretending I can “freeze” the educational world into a lab experiment.
Following this cycle, my action research project will implement the following steps (summarised in Figure 1):
- Plan: I will design the intervention using analysis of sector/discipline expectations (e.g., QAA/RSC guidelines), consultation with module colleagues and programme stakeholders, alumni/industry perspectives on valued skills, baseline student surveys (experience, employability perspectives, belonging/identity), and the reflective diary.
- Act: I will run the redesigned assessment as assessed mini research sprints: student groups frame a small investigation, generate/analyse data with modelling tools, and communicate findings via report/poster/oral outputs.
- Observe: I will collect multiple forms of evidence: post-project surveys mapped to SEEC/QM Graduate Attributes, rubric-based assessment targeting autonomy/problem-solving, student reflective logs, at least one peer observation, and follow-up stakeholder interviews plus Graduate Outcomes comparisons where feasible.
- Reflect: I will integrate quantitative and qualitative insights with my reflective diary, identify what to keep/change, and feed this into the next cycle (next cohort/module run). I will also disseminate locally (teaching fora, practice guide) and externally (conference/journal), consistent with an action research aim of shared professional learning.
Figure 1. Action Research Cycle in the context of this proposal.
Literature and scholarship
A core tension underpinning this study is that the local curriculum is shifting away from substantial independent project work (for example, replacing individual final-year projects with group formats and compressing or reducing dissertation weightings) and risks drifting out of alignment with disciplinary expectations for Chemistry degrees in the UK. I explicitly frame this as part of a broader UK-HE pattern of “separated, single-aimed modules” displacing research projects, motivating the idea of embedding “mini research sprints” inside taught computational chemistry modules at QMUL as a partial corrective.
Two documents are especially load-bearing in setting what “counts” as a Chemistry degree in the UK: the QAA Subject Benchmark Statement (SBS) for Chemistry5 and the Royal Society of Chemistry (RSC) accreditation requirements4. The QAA SBS positions hands-on practical work as “essential” and describes a “developmental trajectory” in which students move from highly scaffolded practicals toward independence and problem-solving under minimal supervision. This matters because the expectation is not merely that students do lab work, but that they progressively acquire autonomy, investigative judgement, and resilience when experiments do not behave “as expected”, which are precisely the kinds of dispositions that are hard to cultivate through narrowly specified, single-outcome assessments.
RSC accreditation is even more explicit about assessed investigative work. For BSc programmes, the RSC requires “student-led independent investigative methodology” that is open-ended and involves students managing their own learning; these activities may take forms including research projects, collaborative project work, placements, or open-ended practicals, and must be assessed rigorously. The guidance notes that this kind of project work would “typically account for ~25% of final-year workload”, and emphasises that practical/project work cannot be compensated or condoned if competence is not demonstrated. In short, if a curriculum reduces the space for authentic investigation, that is not simply a stylistic choice: it risks eroding a hallmark of disciplinary formation in Chemistry (at least in the UK).
A key implication for this action research design is that embedding mini-projects in taught modules can be framed not as a stylistic choice but as an attempt to preserve (and distribute earlier in the programme) the investigative outcomes expected by discipline benchmarks. This is particularly relevant in computational modules, where “hands-on” can mean producing code, designing workflows, generating/analysing data, and defending methodological choices19,20.
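To make this concrete, below is a minimal sketch of the kind of small, scoped computational task a sprint could open with; the 12-6 Lennard-Jones model and the argon-like parameter values are illustrative assumptions, not a prescribed exercise.

```python
import numpy as np

# Hypothetical opening task: estimate the equilibrium separation of an
# argon-like dimer from a 12-6 Lennard-Jones model, then defend the choice
# of grid and parameters. Epsilon (kJ/mol) and sigma (angstrom) are
# illustrative textbook-style values, not module-specified ones.
def lennard_jones(r, epsilon=0.997, sigma=3.40):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r = np.linspace(3.0, 8.0, 2001)   # the distance grid is a student "choice point"
energy = lennard_jones(r)
r_min = r[np.argmin(energy)]      # numerical estimate of the minimum
print(f"Numerical minimum: {r_min:.3f} A; analytic: {2 ** (1 / 6) * 3.40:.3f} A")
```

The open-ended element then comes from the choice points (grid resolution, parameter provenance, how to report uncertainty), not from the code itself.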
Curriculum scholarship has long argued that students learn best when they do not only consume research outputs but participate in inquiry21,22. Connected Curriculum thinking formalises this as a design principle: students should learn through research and enquiry, with assessment and teaching organised around authentic knowledge practices rather than detached content coverage. This aligns closely with the “mini research sprint” concept presented here.
Similarly, Healey & Jenkins argue for “undergraduate research and inquiry” as a mainstream curricular experience (not a privilege for a small subset), and distinguish research-led teaching from research-based approaches where students actively engage in inquiry processes and communication. The point is strategic for the narrative presented here: my intervention is not “extra”; it is a mechanism to enact research-based curriculum principles within the constraints of taught modules.
The empirical literature on undergraduate research experiences (UREs)23 and course-based undergraduate research experiences (CUREs)24 consistently associates research participation with gains in engagement, skills, persistence intentions, and identity-related outcomes, while also warning that causal claims are tricky because many studies rely on self-report and selection effects. Importantly, the literature highlights both the promise of UREs and the need for better-designed evidence on which features matter for which students.
CUREs are especially relevant for this proposal because they are designed to work at scale: they embed research-like experiences in courses/modules, broadening access beyond students who can secure individual placements. Importantly, there is also an equity argument: bringing research into required coursework can reduce the exclusivity of traditional apprenticeship-style research opportunities.
For disciplinary anchoring, there is strong, chemistry-specific evidence: a review of chemistry CUREs surveys implementations and outcomes, signalling that chemistry education research has matured past “single case” claims and is converging on repeatable design patterns and evaluation measures. Complementary chemistry education scholarship on project-based and authentic research tasks reports improvements in engagement and higher-order outcomes (while often still leaning on short-term measures)12. This supports the central idea that mini research projects can be credible vehicles for scientific reasoning, teamwork, and communication even when full-scale dissertations are under pressure.
Authentic assessment scholarship argues that we as educators should directly assess the kinds of intellectual performances we value (e.g., framing questions, choosing methods, handling uncertainty), rather than using proxy tests that only approximate those performances25. In parallel, sustainable assessment work argues that assessment should build students’ capacity to make judgements and learn beyond the course, that is assessment aligned with long-term learning rather than short-term task completion26. Taken together, these sources provide a rationale for why mini-projects (with iterative feedback, reflection, and public communication like presentations/posters) are not just more engaging, but actually better aligned with what we claim Chemistry graduates should be able to do.
This observation circles back to the QAA Chemistry SBS, which explicitly recognises assessment forms such as project work, posters, reflective logs, and outputs involving specialist modelling and data tools. Therefore, my proposed intervention aligns well with recognised disciplinary assessment techniques.
Universities are under increasing pressure to maximise the “employability” of their graduates, yet there are competing concepts of what actually counts as impact in employability. A risk in employability-led curriculum design is collapsing employability into a checklist of skills. Yorke & Knight’s framing defines employability in terms of achievements (skills, understandings, attributes) that make success more likely27. That framing is common in institutional strategy documents, such as QMUL’s “Graduate Attributes”28, but it can also be viewed as incomplete.
Holmes argues that employability is also a process of identity, positioning, and interpretation within labour markets: it is not only something a student “has” but something they perform and narrate in context29. Tomlinson similarly reviews employability as bound up with changing labour markets, graduate experiences, and forms of “capital”, cautioning against narrow policy metrics30. Bridgstock’s work31 further argues that career management capabilities are often the “missing” graduate attribute, suggesting that students may need explicit support to translate learning into credible career narratives. These arguments back a key design decision in this project: mini research projects should include structured reflection and translation (e.g., “what did you do that an employer would recognise?”), not just technical outputs.
However, these views sit in tension with policy pressure, under which short-horizon outcomes dominate. UK Graduate Outcomes data are collected around 15 months after graduation, shaping league tables and institutional narratives about “career prospects”32. This underpins another premise of the proposed project: short-term snapshots can under-represent non-linear careers and the slower-emerging value of identity formation and research confidence.
Finally, this project targets not only competence but students’ sense of themselves as legitimate participants in the discipline. Communities of practice theory frames learning as participation, with identity as a central outcome of engagement in authentic practices33. Science identity scholarship provides a sharper analytic lens for how recognition, competence, and performance interact, particularly relevant when thinking about who thrives in “open-ended” tasks and who is excluded by hidden norms34. This reinforces another point that research experiences may support belonging and long-term engagement.
Nevertheless, there are several opposing perspectives and constraints:
- Minimal guidance critique: Inquiry and project-based learning can fail when students lack prior knowledge; evidence-based critiques argue that minimally guided instruction is less effective for novices, because it overloads working memory and privileges students with higher prior knowledge35. This matters because computational chemistry modules often have steep tool-learning curves. The designed research sprints therefore need to be guided inquiry, not “throwing students into research”.
- Assessment reliability and group work: Group projects raise fairness issues (free-riding, conflict, uneven contribution)36. The cooperative learning literature offers concrete mitigations (individual accountability, structured peer evaluation, explicit team-process teaching), which cannot be ignored and need to be properly implemented.
- Evidence limits: The CURE/URE literature often relies on self-report, short time horizons, and limited controls, so claims about employability outcomes must be cautious and framed as plausible mechanisms rather than guaranteed effects37.
Taken together, the literature provides a coherent framework: (i) disciplinary standards (QAA/RSC) expect progressive independence and assessed investigative work; (ii) research-based curricula and authentic/sustainable assessment provide a defensible pedagogic rationale; (iii) CUREs offer a scalable model with a growing chemistry-specific evidence base; (iv) employability should be treated as both skills and identity/positioning, which means that the intervention must build reflective translation; and (v) known critiques demand careful scaffolding and robust assessment design.
Methods to inform and evaluate action
This study uses a mixed-methods, action research-aligned design in which Evidence 1 informs the shape of the “mini independent research sprints”, and Evidence 2 evaluates their effects once implemented. The choice of methods supports the overall aim of this project: redesign summative coursework into mini-research projects and evaluate impacts on employability-relevant development, autonomy/problem solving, and belonging/identity.
All data collection involving identifiable student/alumni/employer inputs will be reviewed through institutional ethics, with explicit alignment to BERA’s 2024 ethical guidelines38 (consent, minimisation of harm, power relations, confidentiality, and data stewardship).
Evidence 1: methods to inform the action
E1.1 Document analysis (QAA + RSC).
Purpose: Establish disciplinary expectations for assessed investigative/project work and translate these into design requirements for the mini-sprints.
Method: Systematic document analysis of the QAA Subject Benchmark Statement: Chemistry (2022) and the RSC accreditation guidance (latest booklet), extracting relevant statements and coding for intended outcomes, assessment expectations, and progression/independence.
Output: A “design brief” mapping sprint learning outcomes and assessment features to QAA/RSC expectations and a shortlist of “non-negotiables” (e.g., assessed investigative work with transparent criteria).
Limitations: Risk of selective reading (“cherry-picking”), and documents don’t automatically specify how to implement in constrained modules.
Mitigation: Maintain an audit trail (extract -> code -> design decision), and cross-check interpretations with colleagues (E1.2).
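One lightweight way to make this audit trail concrete is a structured record per extract; the sketch below assumes a CSV output, and the field values are invented for illustration (the quoted extract is the RSC phrase discussed in the literature section).

```python
import csv
from dataclasses import dataclass, asdict

# Minimal E1.1 audit-trail record: extract -> code -> design decision.
@dataclass
class AuditRecord:
    source: str           # document and location of the extracted statement
    extract: str          # verbatim statement from the benchmark/guidance
    code: str             # analytic code assigned during document analysis
    design_decision: str  # how the extract shaped the sprint design

records = [
    AuditRecord(
        source="RSC accreditation guidance",
        extract="student-led independent investigative methodology",
        code="assessed open-ended investigation",
        design_decision="each sprint is an assessed, open-ended investigation",
    ),
]

# Write the trail out so design decisions stay traceable and shareable.
with open("audit_trail.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```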
E1.2 Discussions with colleagues (constraints and feasibility)
Purpose: Clarify module constraints (cohort size, contact hours, assessment load, software/hardware access, marking capacity) and secure co-ownership of feasible scaffolding.
Method: Semi-structured staff conversations focused on constraints, risks, marking reliability, and student support load. Notes will be anonymised and summarised into a feasibility matrix that directly informs sprint scope and assessment design.
Output: Feasibility matrix and agreed scope for sprint length, scaffolding, assessment load, and marking procedures.
Limitations: Social desirability / local politics; risk of over-weighting the loudest voices.
Mitigation: Use a consistent prompt guide; anonymise comments; crosscheck against student baseline data (E1.4).
E1.3 Conversations with alumni + industrial partners (valued capabilities)
Purpose: Identify the skills employers recognise (and how graduates narrate evidence of them), plus perceptions of the role of research-like experiences in readiness for work.
Method: Short semi-structured interviews or structured conversations, capturing concrete examples of valued behaviours (e.g., handling ambiguity, communicating uncertainty, teamwork under deadlines).
Output: A capability list + “translation phrases” (how students can credibly evidence capability in CV/interviews), feeding sprint design and reflective prompts.
Limitations: Small, potentially biased sample; partner views may reflect specific sub-sectors.
Mitigation: Purposive sampling across roles/sectors; treat outputs as design inputs not universal claims; compare with benchmark expectations (E1.1).
E1.4 Baseline student survey (starting point and cohort needs)
Purpose: Capture prior research experience, confidence with open-ended tasks, employability perceptions, and belonging/identity indicators at the start of the modules.
Method: An online questionnaire with (i) fixed-response items and (ii) a small number of open prompts. To reduce ambiguity and improve response quality, survey design will follow evidence-based principles (clarity, low burden, trust-building) associated with the tailored design approach41.
Output: Cohort profile to calibrate sprint scaffolding (e.g., how much structure is needed; what anxiety points exist).
Limitations: Non-response and self-report bias; students may answer to please the lecturer.
Mitigation: Anonymity; concise survey; clearly state it does not affect grades; repeat key part in post-intervention for comparison (E2.1).
E1.5 Lecturer reflective diary (reflexive practice evidence)
Purpose: Surface assumptions, decision points, and institutional constraints shaping what is “possible”, and record emergent issues during design and delivery.
Method: Prompted weekly entries treated as reflexive data to support transparency and confirmability, and triangulated against other evidence streams.
Output: Reflexive audit trail linking decisions to evidence, material to interpret implementation fidelity and unintended effects.
Limitations: Subjectivity; risk of post-hoc rationalisation.
Mitigation: Crosscheck with peer observation (E2.4) and student data (E2.1-E2.2); keep a “negative cases evidence” log.
Evidence 2: methods to evaluate the action
E2.1 Post-project student survey (change + student sense-making)
Purpose: Measure perceived development, confidence with inquiry, and employability translation after the sprint; compare with baseline descriptively (pre/post).
Method: An online questionnaire with level descriptors mapped to SEEC and QM Graduate Attributes, plus items on autonomy/problem solving and employability narrative confidence.
Output: Descriptive pre/post comparisons; identification of which sprint components students experienced as most developmental.
Limitations: Self-report and “halo” effects; improvements may reflect general maturation rather than the sprint.
Mitigation: Crosscheck with reflective logs (E2.2), peer observation (E2.4) and assessment artefact analysis (E2.3).
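As a minimal sketch of the intended descriptive comparison between E1.4 and E2.1 (file names and item columns are hypothetical; the real items sit in the appendices):

```python
import pandas as pd

# Anonymised survey exports; names are assumptions for illustration.
pre = pd.read_csv("baseline_survey.csv")
post = pd.read_csv("post_sprint_survey.csv")
items = ["autonomy", "problem_solving", "communication", "belonging"]

summary = pd.DataFrame({
    "pre_mean": pre[items].mean(),
    "post_mean": post[items].mean(),
})
summary["shift"] = summary["post_mean"] - summary["pre_mean"]
# Descriptive cohort-level shifts only: no causal claim is intended.
print(summary.round(2))
```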
E2.2 Student reflective logs (process + identity/belonging)
Purpose: Capture how students experienced inquiry, uncertainty, teamwork, and recognition, which are mechanisms that surveys often miss.
Method: Thematic analysis of the logs with a transparent coding process.
Output: Theorised themes (e.g., “from compliance to judgement”; “legitimate participation”; “what counts as evidence of skill”).
Limitations: Performative reflection; unequal writing confidence; grading can distort honesty.
Mitigation: Grade for completion only (or clearly separate from grading); provide examples of honest reflection; allow multimodal options where feasible (short bullet logs).
E2.3 Rubric-based analysis of student work (autonomy + problem solving)
Purpose: Evaluate demonstrated performance (not only perceptions): research question framing, methodological justification, handling of uncertainty, interpretation, and communication.
Method: Rubrics designed specifically for the given exercise, with double marking.
Output: A profile of performance on autonomy/problem-solving/communication dimensions; a refined rubric for the next cycle.
Limitations: Marking reliability; potential conflict of interest (lecturer assessing own intervention).
Mitigation: Calibration with a colleague (see the agreement-check sketch after E2.5); anonymise samples for second-marker review where possible; pre-register rubric descriptors before marking.
E2.4 Peer observation (implementation + learning design quality)
Purpose: Obtain an external view on how the sprint is implemented and marked (scaffolding clarity, student agency, inclusion, and feedback moments).
Method: An observation form and a pre-/post-observation conversation.
Output: Fidelity notes (what happened vs planned) and actionable improvements for the next cycle.
Limitations: Observer effects; inconsistent observers.
Mitigation: Use the same form; choose a developmental (not evaluative) model; repeat with at least two observations across the sprint.
E2.5 Follow-up interviews with selected alumni (longer-run translation)
Purpose: Assess whether students can later articulate sprint experiences as employability evidence.
Method: Short semi-structured interviews or structured conversations.
Output: Evidence about durability/transfer of learning and about what language works externally (e.g., in applications and interviews).
Limitations: Attrition; small sample; recall bias.
Mitigation: Keep interviews short; recruit from multiple cohorts; crosscheck with Graduate Outcomes/progression indicators (E2.6).
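For the double-marking calibration in E2.3 (forward-referenced above), a simple inter-rater agreement statistic can make marking reliability visible across cycles; a sketch, assuming rubric bands are recorded per script (the scores below are invented placeholders):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric bands (0-4) awarded by two markers to the same scripts.
marker_a = np.array([3, 2, 4, 1, 3, 2, 0, 4])
marker_b = np.array([3, 3, 4, 1, 2, 2, 1, 4])

# Quadratic weighting penalises adjacent-band disagreements less than large
# ones, which suits ordinal rubric scales.
kappa = cohen_kappa_score(marker_a, marker_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```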
E2.6 Graduate Outcomes + internal progression data (contextual indicators)
Purpose: Provide contextual, cohort-level indicators (not causal proof) of whether student trajectories shift across pre/post cohorts.
Method: Analysis of Graduate Outcomes data provided by the Higher Education Statistics Agency (HESA), alongside internal progression data.
Output: Contextual “signals” that can corroborate or challenge interpretations from student-level data.
Limitations: Major confounding; time-lag; small sample sizes for programme-level slicing.
Mitigation: Avoid causal claims; interpret alongside nearer-term evidence (E2.1-E2.5).
The tools for methods E1.1, E1.4, E2.1, E2.2, E2.4 and E2.5 are given in Appendices A-F. They were informed by (i) established belonging measures and HE belonging reviews39, (ii) science identity theory (competence, performance, recognition)34, (iii) self-perceived employability research40, (iv) career decision self-efficacy as a construct, and (v) survey design principles (keep it short, easy entry, respondent experience)41.
Ethics
This project goes beyond the “zone of accepted practice” (routine module evaluation) by generating research outputs, analysing student work/identity data, and potentially publishing results. Therefore, I will seek formal ethics review via QMUL’s human-participant research routes (QMERC/JRMO guidance) and comply with Queen Mary policy on research with human participants42.
- Ethical Concern 1: Power imbalance and perceived coercion (lecturer-student relationship)
- Risk: students may feel obliged to participate or respond positively.
- Mitigation: clear statement that participation is voluntary and has no impact on marks; use opt-out language; collect surveys anonymously; schedule post-sprint research surveys after submission (and ideally after marking decisions are finalised); avoid recruiting interview participants directly during assessment-critical moments. BERA emphasises voluntary informed consent and attention to power relationships.
- Ethical Concern 2: Informed consent and right to withdraw
- Risk: consent not truly informed; difficulty withdrawing once data are anonymised.
- Mitigation: layered Participant Information Sheet + consent tick-boxes; explicit withdrawal window and explanation that fully anonymised data cannot be removed once aggregated/analysed, following BERA guidance on consent, transparency, and participant rights.
- Ethical Concern 3: Use of assessed student work (reports/posters/code) as research data.
- Risk: students may feel their assessed output is being repurposed without consent.
- Mitigation: separate consent for use of assessed artefacts; request permission after grading; analyse only de-identified excerpts; do not include identifiable text/code in dissemination without explicit additional consent. Confidentiality/anonymity are treated as the norm in BERA.
- Ethical Concern 4: Anonymity, and re-identification in small cohorts
- Risk: Students identifiable by role, project topic, or quote.
- Mitigation: suppress/aggregate small-n reporting; remove or disguise indirect identifiers; use “fictionalising” only with transparent explanation where needed. BERA explicitly mentions fictionalising approaches and privacy entitlements.
- Ethical Concern 5: Data protection (GDPR), storage, retention, and privacy notices
- Risk: collecting personal data (e.g., interview contact details), mishandling retention.
- Mitigation: comply with QMUL Data Protection Policy/GDPR guidance; store data on approved QMUL systems; separate contact details from survey data; follow QMUL privacy notice guidance and records retention schedule; define retention and deletion in the ethics application.
- Ethical Concern 6: Peer observation in sessions
- Risk: students feel monitored; incidental capture of student behaviour as “data”.
- Mitigation: observation focuses on teaching/learning design (implementation fidelity), not individual student evaluation; no recording of students; brief session notice + opt-out seating/alternative where feasible; anonymise any incidental student examples.
- Ethical Concern 7: Participant wellbeing and potential harm
- Risk: open-ended “research sprints” may increase anxiety; reflective prompts may surface stress.
- Mitigation: low-stakes reflective logs (completion-based); clear signposting to support services; avoid invasive personal questions; allow students to skip any prompt. This is based on the BERA principle of minimising harm.
Ethics tools:
- QMUL ethics application form + risk assessment (QMERC/JRMO route)
- Participant Information Sheets + Consent Forms (students; alumni/industry)
- Data management plan + privacy notice text (GDPR/retention)
- Recruitment scripts, debrief sheet, interview consent/audio-recording consent, and anonymisation plan
Monitoring & evaluation of the action, impact assessment and dissemination of results
I will implement one action-research cycle (Plan -> Act -> Observe -> Reflect) by embedding a mini independent research sprint as an assessed component within a computational chemistry module. The sprint will include: (i) a scoped research question with explicit choice points; (ii) staged milestones (planning, first results, interpretation, communication); (iii) structured scaffolds (templates/exemplars/checkpoints); and (iv) an authentic communication output (poster/viva or short research brief).
This is a complex educational intervention (multiple interacting components, variable student contexts), so evaluation will combine process and outcome evidence rather than rely on a single metric. This follows UK Medical Research Council guidance43 that robust evaluation needs to document not only whether an intervention “worked” but how it was implemented, what mechanisms operated, and how effects vary by context. I will also adopt a light realist evaluation stance44, framing findings as “what works, for whom, in what circumstances, and why” and reporting transparently, using relevant realist standards where applicable. Evaluation will address the following strands:
- Process monitoring (implementation, reach, and fidelity).
Criteria:
- Reach/participation: survey response rates; milestone completion; attendance at key sprint sessions; submission rates.
- Fidelity: whether planned choice points, scaffolds, and feedback loops were enacted as intended.
- Equity signals: differential engagement/experience across student subgroups (where data allow ethically and legally).
Sources:
- Peer observation form (Appendix E), lecturer diary, basic administrative indicators (milestones/submissions), brief checks during the sprint.
- Mechanisms and student experience (how and why it works).
Criteria:
- Evidence students are practising judgement under uncertainty: decision rationales, troubleshooting, articulation of limitations.
- Belonging/identity signals: comfort asking questions, recognition, meaningful participation in teams.
Sources:
- Student reflective logs (Appendix D), open responses in the post-survey, and peer-observation notes. Qualitative analysis will follow reflexive thematic analysis good-practice guidance to produce credible, theory-linked mechanism accounts rather than anecdotal impressions.
- Outcomes (what changed).
To avoid overstating claims, outcomes will be reported at multiple levels (adapted from the Kirkpatrick training evaluation logic: reaction -> learning -> behaviour -> results).
Criteria:
- Reaction: student perceptions of clarity, fairness, workload, authenticity (post-sprint survey).
- Learning: rubric-based indicators in assessed outputs (quality of question framing, method justification, interpretation, communication).
- Behaviour: evidence of independent decision-making and self-regulation during the sprint (reflective logs; observation).
- Results (contextual only): internal progression/attainment patterns and Graduate Outcomes indicators, treated descriptively and cautiously given confounding and time-lags.
This study will not claim clean causal attribution (“the sprint caused X”) because cohort differences, concurrent module changes, and wider institutional pressures create unavoidable confounds. Instead, I will treat impact as triangulated plausibility: convergent patterns across process, mechanism, and outcome evidence; transparent reporting of context; and explicit attention to “negative cases” where the sprint did not work or created inequities. This framing aligns with SoTL guidance45 that credible educational knowledge is contextual, methodologically sound, and appropriately public for critique and reuse.
Dissemination plan:
Internal (Department of Chemistry, School of Physical and Chemical Sciences, QMUL)
- Short summary report to department teaching fora and relevant email lists.
- Discussion at School teaching meetings (focus on feasibility, equity, assessment reliability).
- A concise practice guide (“How to run a mini research sprint in a computational module”) with templates/rubrics and a documented “minimum viable” version plus extensions.
External
- Presentation at the Festival of Education to share the model as a scalable approach to embedding inquiry within taught modules.
- Submission to a discipline-facing venue (e.g., Chemistry Education Research and Practice or Journal of Chemical Education) framed around (i) alignment with discipline expectations for investigative work, (ii) a replicable sprint design, and (iii) mechanism-informed evaluation.
References
1. Universities UK. Graduate employment: its limits in measuring the value of higher education. Available at: https://www.universitiesuk.ac.uk/sites/default/files/field/downloads/2022-02/uuk-graduate-employment-metrics.pdf Accessed 16/01/2026
2. Tomlinson M., Journal of Education and Work 2007. Available at: https://www.tandfonline.com/doi/full/10.1080/13639080701650164 accessed 16/01/2026
3. Hordosy R. & Clark T., Social Sciences 2018. Available at: https://eprints.whiterose.ac.uk/id/eprint/136502/1/socsci-07-00173.pdf Accessed 16/01/2026
4. Royal Society of Chemistry. Accreditation of degree programmes (2025). Available at: https://www.rsc.org/standards-and-recognition/accreditation/degree-accreditation Accessed 21/11/2025
5. QAA Subject Benchmark Statement: Chemistry (2022). Available at: https://www.qaa.ac.uk/docs/qaa/sbs/sbs-chemistry-22.pdf?sfvrsn=46b1dc81_6 accessed 21/11/2025.
6. Lopatto D., CBE - Life Sciences Education 2008. Available at: https://www.lifescied.org/doi/epdf/10.1187/cbe.07-06-0039 [Accessed 21/11/2025].
7. Carpenter et al., Biochem. Molecular Bio. Educ. 2022. Available at: https://iubmb.onlinelibrary.wiley.com/doi/epdf/10.1002/bmb.21586 [Accessed 21/11/2025].
8. Milan, J., Journal of International Scientific Publications 2022. Available at: https://durham-repository.worktribe.com/output/1946264/developing-problem-solving-skills-in-chemistry-students-through-project-based-learning [Accessed 21/11/2025].
9. McLaughlin, S., Journal of Chemical Education 2024. Available at: https://pubs.acs.org/doi/pdf/10.1021/acs.jchemed.3c01184?ref=article_openPDF [Accessed 21/11/2025].
10. Kuh, G. High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter. American Association of Colleges & Universities Press (2008). Available at: https://navigate.utah.edu/_resources/documents/hips-kuh-2008.pdf [Accessed 21/11/2025].
11. Bhattacharyya P., et al. How Novice Researchers See Themselves Grow. International Journal for the Scholarship of Teaching and Learning (2018). Available at: https://digitalcommons.georgiasouthern.edu/cgi/viewcontent.cgi?article=1689\&context=ij-sotl [Accessed 21/11/2025].
12. Bangera G. & Brownell S. E. CBE - Life Sciences Education 2014. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC4255347/ Accessed 16/01/2025
13. Finley A. & McNair T., Association of American Colleges and Universities 2013. Available at: https://files.eric.ed.gov/fulltext/ED582014.pdf
14. Carr W., & Kemmis S. “Becoming Critical” 2004. Available at: https://www.asu.edu.bh/wp-content/uploads/2024/12/2-Becoming-Critical-Education-Knowledge-and-Action.pdf Accessed 16/01/2025
15. Waterman et al., Health Technol. Assess. 2001. Available at: https://www.journalslibrary.nihr.ac.uk/hta/HTA5230 Accessed 16/01/2025
16. The SAGE Handbook of Action Research 2008. Available at: https://raggeduniversity.co.uk/wp-content/uploads/2025/03/The-Sage-Handbook-of-Action-Research_compressed.pdf
17. Kemmis et al., Action Research Planner. Available at: https://educons.edu.rs/wp-content/uploads/2020/05/2014-The-Action-Research-Planner.pdf Accessed 16/01/2025
18. Coghlan D. & Brannick T. Doing Action Research In Your Own Organization. Available at: https://kyptraining.com/wp-content/uploads/2020/05/DOING_ACTION_RESEARCH_IN_YOUR_OWN_ORGANI.pdf Accessed 16/01/2025
19. Wilson et al., Plos Biology 2014. Available at: https://journals.plos.org/plosbiology/article?id=10.1371%2Fjournal.pbio.1001745 Accessed 16/01/2025
20. Fuchs et al., J. Chem. Educ. 2024. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11327961/ Accessed 16/01/2025
21. Brew A., Higher Education Research and Development 2003. Available at https://www.tandfonline.com/doi/abs/10.1080/0729436032000056571 Accessed 16/01/2025
22. Fung et al., Connected Curriculum for Higher Education 2017. Available at: https://uclpress.co.uk/book/a-connected-curriculum-for-higher-education/ Accessed 16/01/2025
23. National Academies Report, available at: https://www.nationalacademies.org/news/new-report-examines-the-impact-of-undergraduate-research-experiences-for-stem-students Accessed 16/01/2025
24. Dolan E., Course-based Undergraduate Research Experiences: Current knowledge and future directions Available at: https://sites.nationalacademies.org/cs/groups/dbassesite/documents/webpage/dbasse_177288.pdf
25. Wiggins G., Practical Assessment, Research, and Evaluation 1990. Available at: https://rpgroup.org/Portals/0/Documents/Projects/BRIC/The%20Case%20for%20Authentic%20Assessment%20-%20Wiggins.pdf
26. Boud D. & Falchikov N., Assessment and Evaluation in Higher Education 2006. Available at: https://www.tandfonline.com/doi/abs/10.1080/02602930600679050 Accessed 16/01/2025
27. Yorke M. & Knight P., Embedding employability into the curriculum. Available at: https://www.qualityresearchinternational.com/esecttools/esectpubs/yorkeknightembedding.pdf Accessed 16/01/2025
28. https://www.qmul.ac.uk/queenmaryacademy/educators/resources/graduate-attributes/ accessed 16/01/2025
29. Holmes L., Studies in Higher Education 2013. Available at: https://www.tandfonline.com/doi/abs/10.1080/03075079.2011.587140 Accessed 16/01/2025
30. Tomlinson M., Higher Education Policy 2012. Available at: https://link.springer.com/article/10.1057/hep.2011.26 Accessed 16/01/2025
31. Bridgstock R., Higher Education Research and Development 2009. Available at: https://www.tandfonline.com/doi/abs/10.1080/07294360802444347 Accessed 16/01/2025
32. Graduate Outcomes Survey Methodology Statement, HESA Available at: https://www.hesa.ac.uk/files/Graduate_Outcomes_Methodology_Part2_FINAL.pdf Accessed 16/01/2025
33. Wenger E., Communities of Practice: Learning as a Social System. Available at: https://thesystemsthinker.com/communities-of-practice-learning-as-a-social-system/ Accessed 16/01/2025
34. Carlone H., Johnson A., JRST 2007. Available at: https://onlinelibrary.wiley.com/doi/10.1002/tea.20237 Accessed 16/01/2025
35. Kirschner et al., Educational Psychologist 2006. Available at: https://www.tandfonline.com/doi/abs/10.1207/s15326985ep4102_1 Accessed 16/01/2025
36. Oakley B. et al., Journal of Student Centered Learning 2004. Available at: https://engr.ncsu.edu/wp-content/uploads/drive/1ofGhdOciEwloA2zofffqkr7jG3SeKRq3/2004-Oakley-paper(JSCL).pdf Accessed 16/01/2025
37. Appel N. et al., CBE - Life Sciences Education 2024. Available at: https://www.lifescied.org/doi/10.1187/cbe.22-05-0096 Accessed 16/01/2025
38. BERA Guidelines 5th edition. Available at: https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-fifth-edition-2024 Accessed 16/01/2025
39. Goodenow C., Psychology in the Schools 1993. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/1520-6807%28199301%2930%3A1%3C79%3A%3AAID-PITS2310300113%3E3.0.CO%3B2-X Accessed 16/01/2025
40. Rothwell A. et al., Journal of Vocational Behavior 2008. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0001879107001285 Accessed 16/01/2025
41. Dillman D. et al., Guiding Principles for Mail and Internet Surveys. Available at: https://www.une.edu/sites/default/files/Microsoft-Word-Guiding-Principles-for-Mail-and-Internet-Surveys_8-3.pdf Accessed 16/01/2025
42. https://www.jrmo.org.uk/media/jrmo/docs/about-us/our-policies/2b-Queen-Mary-policy-on-research-with-human-participants-a.pdf Accessed 16/01/2025
43. Moore G., BMJ 2015. Available at: https://www.bmj.com/content/350/bmj.h1258 accessed 16/01/2025
44. https://assets.publishing.service.gov.uk/media/60f7fdf7d3bf7f56824cc634/Brief_introduction_to_realist_evaluation.pdf Accessed 16/01/2025
45. Felten P., Teaching and Learning Inquiry 2013 Available at: https://journalhosting.ucalgary.ca/index.php/tli/article/view/57376 Accessed 16/01/2025