Numbas – The Gateway to Formulaic Assessments

In the world of STEM, assessing students digitally can quickly become a problem. At lower levels the questions are relatively simple, but even then you need to handle adaptive marking, multiple parts, partial credit, showing working…the list goes on. By the time you’re up to higher-level topics, the questions can take a long time to answer, and only being able to assess the final number isn’t much use for providing feedback.

The main tool for e-assessment on Minerva is Blackboard’s own test tool and, although simple to use, it has limited functionality for more complex assessments. There are a number of tools out there with the extended functionality needed, but today I want to focus on Newcastle University’s Numbas.

[Screenshot: the Numbas homepage banner – “Really versatile maths e-assessment”]

As succinctly stated in their banner, Numbas is versatile: anything not offered by the standard set of tools, you can code in yourself via JavaScript. But don’t worry, the standard set is very extensive, and it’s unlikely you’ll need to dip into coding unless you’re doing something extremely obscure.
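To give a flavour of what that looks like: in the question editor you define a custom function, pick JavaScript as its language, and write the function’s body; the parameters are available by name and you return the result. A minimal sketch (the function, its name and its formatting job are my own illustration, not a Numbas built-in):

```javascript
// Body of a custom function defined in the question editor:
//   name: formattime, parameter: hours (number), output type: string.
// Numbas wraps this code as the function body, so the returned value
// is what the question sees when it calls formattime(hours).
var h = Math.floor(hours);
var m = Math.round((hours - h) * 60);
if (m === 60) { h += 1; m = 0; }  // avoid results like "1 h 60 min"
return h + " h " + m + " min";
```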

The question types available are:

  • Mathematical expression – For assessing an algebraic statement (there’s a sketch of this one just after the list)
  • Number entry – Your standard, simple answer type
  • Matrix entry – For a question that’s answered by a matrix
  • Match text pattern – For text-based questions
  • Choose one/several from a list / Match choices with answers – The standard set of multiple-choice style questions
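
To take the first of those as an example: the expected answer for a mathematical expression part is itself written in JME, Numbas’s expression language, and can use the question’s variables. A hypothetical sketch (the expression and variable names are mine):

```
// A "correct answer" for a mathematical expression part, in JME.
// a and b are question variables; Numbas compares the student's
// expression against this one by evaluating both at randomly
// chosen sample points.
(x + a)^2 - b
```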

The questions themselves can be laid out in an exam-style fashion: each question can have several parts, parts can be broken down into steps, and error carried forward is supported between parts.

[Screenshot: an example Numbas question, “Forces on a car”, with randomised values]

To me, the strongest part of Numbas is its variable handling. Each question has a shared pool of variables that you define once and use throughout – in the question text, the answers, and the marking. They can be derived from each other, or generated with built-in functions that do things like produce random values for each attempt. For example, in the image above, all three supplied values for the question are generated within bounds, so students get a slightly different question each time.
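As a sketch of how that might look for a question like the one above (the names, ranges and units are my own guesses at the setup, written one variable per line as name: definition):

```
// A fresh set of values is drawn for each attempt; "#" sets the step.
mass: random(800..1200#50)          // kg, a random multiple of 50
acceleration: random(1.5..3#0.1)    // m/s^2, to one decimal place
force: mass*acceleration            // derived from the two above
```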

These variables can be used both to ask and to mark the questions, giving a robust framework for complex, multi-step problems. This is also where the adaptive marking comes into play: a student’s answer to an earlier part can replace a variable when marking a later part, so their working is followed through instead of being marked purely against the model values.
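As a concrete (and hypothetical) sketch: suppose part a asks for the force and part b asks for the work done over some distance. With a variable replacement on part b pointing the variable force at part a, the marking would run along these lines:

```
// Expected answer for part b, in JME:
force * distance
// The variable replacement swaps "force" for the student's part a
// answer when marking part b, so a slip in part a doesn't also cost
// them part b if their follow-through is correct.
```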

Once you’re done, assessments can be exported as a SCORM package and uploaded to Minerva with ease. The marks from the assessment are also captured by the module’s Grade Centre, allowing you to track students’ performance.

I could keep talking about features, but it’s one of those things where you’ll get a much better picture by just giving it a try. Accounts are completely free to make, so I’d say just go and have a play with it. I’ve made a couple of example questions that you can pull apart here and here to see how they work, but there are tons of other questions in the public domain from other educators, and there’s help documentation too.

UoL Staff: As always, if you’ve got any questions, or want some help on setting up your first (or hundredth) assessment, just contact your Faculty’s Learning Technologist and we’ll get you started.
