Friday saw a Future Teacher event at the Moray House School of Education on the subject of learning analytics and its role in the future of the university. Much promise, much potential, and plenty of messy but encouraging developments.
The event was organised by Anne-Marie Scott of Learning, Teaching and Web Services, and the speakers were Dragan Gasevic, Yi-Shan Tsai, and Jeremy Knox. There were approximately 20 attendees, mostly staff with some students, representing a range of fields across the campus: education, psychology, geosciences, law, biology, and more.
Dragan and Yi-Shan first provided context, polling the attendees as to their level of familiarity with learning analytics and their understanding of its definition. Answers varied considerably, suggesting a field that is still emerging in its scope and application. Most of us were, by any definition, novices in the field of learning analytics.
Dragan discussed the history of learning analytics and how it finds itself shifting from its original position as a deficit model (retention) towards something more proactive and formative (strengthening feedback loops, primarily). Some of the earliest work was discussed, particularly Signals at Purdue University and how it was an important, if ultimately critiqued, project.
Many of these earlier projects used dashboard models and a relatively small set of indicators to achieve some sort of impact: for Signals, a sample of 5000 students was grouped into three categories of high, medium, and low risk of failing a particular course. These three groups were translated into traffic lights, giving teachers an easy way to recognise those in danger and, presumably, to offer more or a different form of feedback. There was some success with this approach, but the feedback itself needed bolstering: the traffic light alone didn’t give enough feedback to change teaching practices.
More recent projects place a new emphasis on 21st-century data skills, some form of data literacy, data and privacy protection, and more. The principles are shifting as well: data is never complete, analytics can perpetuate bias, humans must always remain in the loop, and whether and how projects should be scaled up needs careful thought. Learning purposes vary dramatically too, spanning quality, equity, personalised feedback, student experience, skills, and efficiency. A very complex tailoring of data to purpose and principle. This was where most of the group discussion sat: this idea of tailoring, of having a very specific feedback and guidance system in place, of bespoking it all to disciplinary or domain-specific needs, and of understanding how feedback can bolster or undermine student engagement and resilience. Much to work through here.

Dragan and Yi-Shan transitioned to three applications, more or less in their infancy, and asked us to give them a try.
Loop, On Task, and LARC.
Loop is a learning analytics application that provides access to page views, course content, forums, and assignments, presumably plugging in via API to an LMS like Moodle or Learn. It tracks, to some degree, a student’s engagement record, scores for assessments, and more. Dragan referred to Moore’s transactional distance as we were toying with the application, and to research suggesting that, depending on the context, increased faculty interaction may or may not lead to positive outcomes. With its clusters, bar charts, and more, Loop felt complex, and Dragan emphasised that data can be interpreted in many ways; interpreted poorly, it can have a negative impact on effort and outcomes. He pointed to research (Khan & Pardo, 2016) suggesting that student dashboards were mostly ineffective. Ultimately, these applications need to provide capacity for task-specific language and appropriate levels of guidance: the feedback can’t be merely summative.
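We didn’t see Loop’s internals, but to picture what a “relatively small set of indicators” pulled from an LMS might look like, here is a minimal sketch. The event fields, values, and thresholds are assumptions for illustration, not Loop’s actual data model or API:

```python
from collections import Counter
from datetime import datetime

# Hypothetical activity events exported from an LMS (Moodle/Learn style);
# NOT Loop's actual schema, just an illustration of simple indicators.
events = [
    {"student": "s001", "type": "pageview", "when": "2016-10-03T09:15"},
    {"student": "s001", "type": "forum_post", "when": "2016-10-03T09:40"},
    {"student": "s002", "type": "pageview", "when": "2016-10-04T14:02"},
]

def engagement_summary(events, student_id):
    """Summarise a few crude indicators for one student: page views,
    forum posts, and the number of distinct days with any activity."""
    own = [e for e in events if e["student"] == student_id]
    counts = Counter(e["type"] for e in own)
    active_days = {datetime.fromisoformat(e["when"]).date() for e in own}
    return {
        "pageviews": counts.get("pageview", 0),
        "forum_posts": counts.get("forum_post", 0),
        "active_days": len(active_days),
    }

print(engagement_summary(events, "s001"))
# -> {'pageviews': 1, 'forum_posts': 1, 'active_days': 1}
```

Even a toy summary like this makes the interpretation problem obvious: the numbers say nothing on their own about why a student is or isn’t engaging.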
On Task took a different approach, dividing large cohorts of students into quartiles (or whatever cut was deemed appropriate) and drafting text feedback snippets for categories of feedback (particular answers, passages, outcomes, etc.). Each category is translated to a set text, and feedback is then given based on the quartile. This offers some degree of granularity while still being general enough to reach some level of scale. The feedback itself is devoid of numbers; it is just guidance. On Task seemed to have some merit for large courses (MOOCs or other scaled course structures).
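We didn’t see under the hood of On Task, but the logic as described, cut the cohort into quartiles and send each quartile a drafted, number-free feedback text, is simple enough to sketch. The scores, cut points, and feedback wording below are invented for illustration, not On Task’s actual categories or texts:

```python
import statistics

# Drafted feedback snippets per quartile -- invented examples, not On Task's wording.
FEEDBACK = {
    1: "You seem to be struggling with the recent material. Please revisit the readings and come to the drop-in session.",
    2: "You're making progress, but the discussion activities would benefit from more of your input.",
    3: "Good, consistent work. Try extending your forum posts with examples from the readings.",
    4: "Excellent engagement. Consider supporting peers in the discussion forum this week.",
}

def quartile(score, all_scores):
    """Return 1-4 depending on where a score falls among the cohort's quartile cut points."""
    q1, q2, q3 = statistics.quantiles(all_scores, n=4)
    if score <= q1:
        return 1
    if score <= q2:
        return 2
    if score <= q3:
        return 3
    return 4

cohort = {"s001": 42, "s002": 71, "s003": 88, "s004": 55, "s005": 63}
for student, score in cohort.items():
    q = quartile(score, list(cohort.values()))
    # The message sent to the student carries no numbers, only the guidance text.
    print(student, "->", FEEDBACK[q])
```

The appeal for large courses is clear: the teacher drafts four (or however many) texts once, and every student still receives something that reads as addressed to them.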
Jeremy Knox then spoke of the Learning Analytics Report Card (LARC), a project that asks: ‘How can University teaching teams develop critical and participatory approaches to educational data analysis?’ It seeks to develop ways of involving students as research partners and active participants in their own data collection and analysis, as well as foster critical understanding of the use of computational analysis in education. It captures data from an individual student’s course-related activity, and presents a summary of their academic progress in textual and visual form.
However, there is some customisation available here: students can choose what is included or excluded, when the report is generated, and how it is presented. It attempts both to empower the individual student and to surface some of the hidden power structures (such as algorithms) that increasingly underpin and govern educational decision-making.
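We only saw the interface briefly, but the core idea, the student deciding what goes into their own report card, can be pictured along these lines. The section names and summaries are placeholders rather than LARC’s actual analytics or categories:

```python
from datetime import date

# Placeholder analytics for one student -- not LARC's real categories or data.
analytics = {
    "attendance": "Attended 8 of 10 scheduled sessions.",
    "forum_activity": "Posted 5 times; replied to 3 peers.",
    "reading": "Accessed 12 of 15 core readings.",
    "assessment": "Two of three formative tasks submitted.",
}

def generate_report(analytics, include, as_of=None):
    """Build a textual report containing only the sections the student opted in to."""
    as_of = as_of or date.today()
    lines = [f"Learning report as of {as_of.isoformat()}"]
    for section in include:  # the student decides what is included or excluded
        if section in analytics:
            lines.append(f"- {section}: {analytics[section]}")
    return "\n".join(lines)

# A student who excludes attendance and assessment from their own report card:
print(generate_report(analytics, include=["forum_activity", "reading"]))
```

The point of the sketch is simply that the inclusion list sits with the student, not the institution, which is where the empowerment (and the surfacing of otherwise hidden analytic choices) comes from.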
The first draft of the Learning Analytics Report Card interface is complete and ready for testing with Moodle data and the phase 1 analytics. The interface sits behind the EASE login, which will restrict access to the identified pilot …