#DataEthics

In advance of our panel discussion as part of the Ufi VocTech Showcase, we caught up with Patrick Dunn, Project Account Manager at Ufi, for his thoughts on the challenges and opportunities when it comes to Data + Ethics.

You can find out more about the panel discussion below, or visit the Showcase event page.

"Over a number of decades, two things have most troubled me, as a designer of learning and training: firstly, I want to know what people need to learn, and secondly, when they’ve done it, how well they’ve learned it. To put it another way, I want to know their “starting state” at the beginning of a learning experience, and how they’ve changed, their “end state” afterwards.

In the pre-learning-technology era, this was really tough. We used questionnaires beforehand, maybe interviewed a few people, did some observations if budgets allowed, then used tests of various types afterwards. But assessing the “starting state” for large numbers was close to impossible, and it was rarely done. Mass testing after the event was expensive and usually came down to the simplest, cheapest option.

Not only did we not know their “starting state”, we often didn’t know when they were starting, or why. We often didn’t know when, how or why their learning experience ended, or how they got on.

The huge leap forward - a real game-changer, and a mind shift for learning designers everywhere - has been access to rich data about our learners.

Whether we like it or not, we’re swimming in an ocean of data, much of which is our own. We routinely give our data away, not just through obvious means such as Google searches or sharing our photos on Facebook, but through much more subtle means: the games we play and how we play them, the messages we send each other, the goods we buy and the decisions we make while buying them.

Sometimes, it feels like the systems “out there” know us better than we know ourselves – our whims, strengths, weaknesses and dreams. And this last point, from a training point of view, is critical: a classic blunder for trainers used to be to take the views of learners and trainees as accurate, when in fact learners’ understanding of their own needs was often, at best, limited. We are biased to believe that we are correct, even when we are not, and that our world view is naïvely realistic, when it is a construct of our unique experience and perspective. How, then, do we get to conscious incompetence, where we recognise our learning gaps better? Can technology help us go beyond our human failings?

Two Ufi projects illustrate the benefits of using AI to profile learners and assess their needs – their “start and end states”. Game Academy uses profile data provided by gamers to identify the career-relevant skills of each individual. The gamers don’t have to do anything other than keep playing: the system works out, invisibly in the background, their levels of, for example, team working, collaboration and problem solving. It then identifies the gaps they need to fill and provides guidance on how to fill them. Fluence’s “Passive Accreditation” system takes a wide range of data inputs relating to prisoners – including hand-written submissions, prison staff documentation, reviews and so on – and generates individual learning plans and assessments. Even in such a complex, secure environment, where individual needs are diverse and difficult to ascertain, it has been shown to cut the time prison staff spend producing such assessments to a fraction of what it was, and to drastically improve accuracy for each individual prisoner.

Of course, there are a range of complex ethical issues surrounding this. It’s fine to use AI to point an advertisement for a new brand of socks at a customer.

But what happens, in a learning environment, if a non-intrusive system, running in the background and invisible to the learner, judges that a key worker’s skills are safe and sufficient, when they are not?

Or offers a disabled learner a course that is dangerous for them? How do we, as humans, assess the degree of “I” in an AI system if, as is increasingly common, the system is able to develop its own range of judgements? And what happens when those judgements are biased from the start about who the learners are?

And where do we draw the line in terms of what data is made available to a system? In relatively well-contained environments, such as Game Academy and Fluence, where the scope of the data sets is known, this is a manageable issue. But the “ocean of data” that we all swim in daily has many shorelines, and human beings aren’t particularly well equipped to juggle vast amounts of data about themselves, or even to know where that data is.

So although trainers like myself may celebrate our new ability to assess learner needs and performance to a previously unimaginable level of refinement, we must also acknowledge the limitations of blindly using intelligent data systems. Human intervention will be essential for some time to come. To quote Aldous Birchall of PwC: “AI is software built by humans. It doesn't exist in a context free setting. To assign responsibility for an adverse event caused by AI, we need to establish a chain of causality from the AI agent back to the person or organisation that could be reasonably held responsible for its actions.”"

Join the discussion on Wednesday 17th November

As part of the Ufi VocTech Showcase, Patrick and others will be exploring the role of data and ethics in the development of vocational technology and vocational education.

Panellists for the discussion include: