I'd like to share the key takeaways from this interesting book, "Responding to AI in assessment in universities: A practical guide for lecturers," published by Stellenbosch University & the University of Cape Town (2026).
1/ The guide argues that AI in assessment is a wicked problem with no single solution. Instead of starting with rules to ban or allow tools, lecturers should start with clear learning outcomes. The goal is to decide which skills must be shown independently and which can be supported by AI to serve an educational purpose.
2/ Because AI can easily create polished final versions of assignments, the authors suggest redesigning tasks to focus on the process. Capturing how students think and develop their work makes learning visible and strengthens the validity of the assessment. This approach moves away from just looking at the end result.
3/ A major "Don't" is relying on AI detection tools. The authors state these tools are unreliable, produce false positives, and can unfairly affect certain student groups. Instead of trying to catch students, the focus should be on building a culture of integrity through authentic tasks and open discussion.
4/ Lecturers should not assume all students use AI to cheat; many use it for feedback or drafting. Clear and early guidance is essential to reduce student anxiety. Decisions about AI must also consider equity, such as whether all students have equal access to paid tools, stable internet, or electricity.
5/ The guide stands out for its "Do's, Don'ts, and Don't Knows" framework, which offers immediate steps while admitting what remains unclear. It prioritizes pedagogical integrity over surveillance and encourages collaboration among staff to avoid giving students conflicting rules.
6/ The authors acknowledge several “Don’t Knows” that represent gaps in current knowledge. These include the long-term impact of AI on disciplinary knowledge, the risk of deskilling if marking is automated, and the specific expectations of professional regulators. Detailed ethical and privacy solutions for using “unbounded” AI tools are also still being developed.
7/ More details: Adendorff, H., Cilliers, F., Huang, C. W., Lester, S., Strydom, S., & Walji, S. (2026). Do’s, don’ts and don’t knows. Responding to AI in assessment in universities: A practical guide for lecturers. Stellenbosch University & the University of Cape Town.
8/ Happy reading ^^
© mhsantosa (2026)
