Intro to LLMs for Researchers — Training Session
We recently ran a practical training session — Intro to LLMs for Researchers — aimed at helping researchers across the Norwich Research Park get to grips with Large Language Models in their day-to-day work.
Large Language Models have gone from niche research topics to everyday tools embedded in search engines, office suites, and coding environments in just a couple of years. Researchers now routinely encounter LLMs when drafting manuscripts, analysing data, or writing code, yet many have had little structured training in how these systems actually work or how to use them responsibly.
This short, practical session was designed to give participants a solid conceptual grounding and, more importantly, hands-on experience with LLMs in a research context — without assuming any background in machine learning or computer science.
What the session covered
Over a single session, we:
- Demystified what an LLM is — from “next word prediction” through to modern transformer-based models like GPT, Llama, and others.
- Explained in plain language how training and prompting work, and why these models can be both impressively useful and confidently wrong at the same time.
- Explored concrete use cases for researchers: literature exploration, summarisation, drafting and editing, coding assistance, and qualitative data support.
- Discussed limitations, biases, and issues around privacy, copyright, and research integrity, with examples relevant to life sciences and data-driven research.
- Ran guided, hands-on exercises so participants could practise effective prompting and critically evaluate model outputs.
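The "next word prediction" idea from the first bullet can be sketched in a few lines of Python: at each step a model assigns a score (logit) to every candidate next token, and a softmax turns those scores into probabilities from which the next word is chosen. The vocabulary and scores below are invented purely for illustration; a real LLM does the same thing over tens of thousands of tokens, with scores produced by a transformer.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for completing "The mitochondria is the ..."
vocab = ["powerhouse", "nucleus", "banana"]
logits = [4.0, 1.5, -2.0]

probs = softmax(logits)
prediction = vocab[probs.index(max(probs))]

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
print("Predicted next word:", prediction)
```

Picking the single highest-probability token (greedy decoding, as above) is only one strategy; real systems often sample from the distribution instead, which is one reason the same prompt can produce different answers on different runs.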
Training materials
All materials are available online at https://quadram-institute-bioscience.github.io/ai-training, including slides, exercises, and further reading. Whether or not you attended the session, the materials are self-contained and freely reusable.