This course provides a comprehensive introduction to Large Language Models (LLMs) and their practical applications.
You’ll learn the fundamentals of machine learning, focusing on why LLMs are not intelligent and can confidently
generate wrong information (albeit in polished English!).
At the same time, with some precautions, we can benefit from language models, and we will even explore how to run some (smaller)
models locally (on your computer, without sending any data to third parties).
This is a companion website with a short summary of the topics covered, using the following icons:
| Icon | Meaning |
|---|---|
| | Further reading material; the most important parts are highlighted by this icon |
| | Key points highlighted by this icon |
| | Exercise |
| | Work in progress |
Topics
- Foundations of Large Language Models (LLMs) and how they generate text for us
- Accuracy and limitations of LLMs
- Ethical and integrity considerations
- Privacy concerns and local use of LLMs
- AI-powered literature mining tools
- Overview of coding assistants and coding agents
| | Course details |
|---|---|
| Target Audience | Everyone at NBI (students, researchers, staff) |
| Duration | 2.0 hours |
| Format | Classroom |
| Attendees | 6 - 18 participants |
| Structure | Lectures and discussions |
After the course, learners will be able to:
| # | Learning Outcome |
|---|---|
| 1 | Critically evaluate LLM outputs by identifying potential hallucinations, biases, and knowledge limitations |
| 2 | Make informed decisions about data privacy, academic integrity, and appropriate use of AI in scholarly work |
| 3 | Select appropriate deployment models (local vs. cloud) based on data sensitivity and privacy requirements |
| 4 | Effectively prompt and utilize generative AI tools (Claude, ChatGPT, Perplexity) for research-appropriate tasks |
| 5 | Utilize AI-powered literature mining tools (such as Elicit) and understand their limitations |