Participants:

Jes Frellsen, Chaudhary Ilyas, Rasmus Ørtoft Aagaard, A. Emilie Wedenborg, Tommy Alstrøm, Qianliang Li, Søren Føns, Lina Skerath, Finn Årup Nielsen, Jakob Eg Larsen, Susanne Winter, Ivana Konvalinka, Tobias Andersen, Søren Hauberg, Teresa Scheidt, Kristoffer Stensbo-Smidt, Lenka Tetkova, Camilla Narine, Mikkel N Schmidt, Georgios Avanitidis, Vagn L Hansen, Kyveli Kompatsiari, Fabian Mager, Vassilis Lyberatos, Morten Mørup, Nicki Skafte, Federico Delusso, Hiba Nassar, Hanlu He, Beatrix Miranda Nielsen, Lasse Skytte Hansen, Sune Lehmann, Laura Alessandretti, Jonas Vestergaard, Tiberiu-Ioan Szatmari, Antonio Desiderio, Laurits Fredsgaard, Per Bækgaard, Lars Kai Hansen

Background & general comments – specific course comments below

It is widely recognized that Generative AI will impact all dimensions of higher education[1].

In this workshop we focus on the redesign of DTU courses prompted by students' and teachers' access to state-of-the-art generative AI tools.

The engagement with generative AI can happen at many levels of teaching[2]. For teachers, AI may assist with the definition of course contents and the curriculum, the creation of learning objectives, materials, and exercises, the establishment of well-aligned assessment forms, peer feedback, and the evaluation of test results. For students, it may include the identification of relevant courses and the structuring and enhancement of the learning experience, including tutoring, self-evaluation, exam preparation, and post-exam debriefing.

DTU/university courses aim at teaching a mixture of specific personal competences and problem-solving abilities. Assessment/exams serve multiple roles, including monitoring of learning progress and “certification”[3]. Specific competences, such as an active vocabulary, math, programming, and science foundations, largely need to be learned and assessed individually, hence without the use of generative AI. Problem-solving courses at DTU will typically apply state-of-the-art tools, which now include generative AI. For constructive alignment of course content and assessment, the application of state-of-the-art tools must then be part of the assessment of project work.

If state-of-the-art tools are invoked, how could a student report their use, and how could that use be assessed? If AI tools are considered equivalent to laboratory tools, their application can be described in the “methods section” along with the descriptions of other experimental tools and methods; the use can then be assessed along with the application of the other methods. If an AI tool takes the role of a collaborator/mentor, then a reference like “personal communication” would be a useful attribution. Such strategies should be communicated to students and teachers along with other checklists[4].

When applying tools like search, lexical services (e.g., Wikipedia), or generative AI, it is of paramount importance to exercise critical thinking. Such critical thinking includes careful examination of sources, e.g., scientific references. Examination also includes evaluating the evidence presented and the credibility of the source, and applying common sense (a rich set of critical common-sense tools is presented in “Calling BS”[5]). It is recommended that DTU create a checklist for critical assessment of generative AI.

Considering the level of engagement with generative AI in DTU courses, multiple strategies may be envisioned:

** “No-no” courses that teach students basic personal competences, with limited use of AI during the course and no use of AI in exams, e.g., basic math, programming, etc.

** AI-adapted courses, where the learning objectives encourage students to use AI, yet AI use is partially prohibited in exams, e.g., 02450 Introduction to machine learning, where the use of tools is tested in project reports, while a final personal assessment is carried out as a multiple-choice exam without AI tools.

** AI-first courses, i.e., courses that encourage the use of AI in all phases of the course, both during learning and in the exams/evaluation.

The latter two strategies may harvest an “AI bonus” and engage students at more complex scientific levels than pre-AI courses.

The resilience of current exam forms to AI attacks is unclear and changing rapidly:

Oral exams: For the time being oral exams are considered resilient.

Written exams with all aids/closed network: Such exams may be compromised by generative AI running locally on the laptop.

Written exams / pen and paper: Considered resilient.

Written exams / monitored environment: Considered resilient.

Note: There are no tools that allow a teacher to check whether a given exam form is resilient to AI attacks, simply because the AI tools are developing rapidly. Nor are there tools available for checking whether generative AI was used in report writing.

For questions and comments please contact Lars Kai Hansen, Feb 15, 2024 (lkai@dtu.dk)

Workshop outcome regarding specific courses

As an initial attempt to reflect on updated course content, learning objectives, and assessment forms, we selected six Cogsys courses. In the workshop, groups discussed and redesigned the learning objectives, course content, and exam/evaluation models, as seen below.

These are: