I recall reading, a few years ago, a story in a brilliant book, Weapons of Math Destruction by Cathy O’Neil. The story went something like this. In 2007 Washington DC had a new Mayor, who had determined that the city’s schools were underperforming and needed turning around. A new education reformer was brought in to address this, on the theory that students weren’t learning because their teachers were not doing a good job. In 2009 a plan was implemented using a data-analytical system of value-added modelling, reassuringly called “IMPACT”! The plan was simple: assess the teachers, remove the bad ones, move the good ones to where they were most needed, and so optimise the district’s school system. In the first year 2% of teachers were removed, and in the second year 5%.
A problem with this plan soon became evident. The algorithm generated a “score” that represented half of each teacher’s overall assessment and outweighed the positive reviews from the school and community. It was weighted this way because assessments by the school and community were judged to be not necessarily balanced: bad teachers could be friends with their assessors. Minimising this human bias and placing more emphasis on the scores, the reasoning went, would let the numbers speak for themselves and be more fair. However, it quickly became evident that this was a complex issue, and the model did not account for socio-economic backgrounds or learning difficulties, to name just two factors.
The result was that over 200 excellent teachers received a low score and lost their jobs.
Now there are many examples where we have relied on AI and pure data analytics and it has not produced the outcomes we desired. What prompted me to write about this was the recent experience of students in Scotland, whose exam results were “balanced” away from their schools’ initial assessments of their grades.
I recall attending an event at Oxford University and listening to Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at Oxford University, who stated:
“The threat of monstrous machines dominating humanity is imaginary. The risk of humanity misusing its machines is real.” He has also observed that we are living in a world that is adapting to AI, and not vice versa.
I also think that, as a society, we are quickly developing a notion of trust in data: where it is, where it has come from, what it is used for, and who can access it.
Here at Ethos VO we are exploring how we can develop trust in our data as a valuable tool. It is a well-worn view, but absolutely right, that “data is the new oil”, the currency of this century. The Open Data Institute, which exists to help organisations develop an open, trustworthy data ecosystem, suggests that a “Data Trust” might be a vehicle through which better decisions can be made and harmful impacts minimised: a legal structure that gives independent oversight of the data in question for a defined purpose.
This could be linked to trusted institutions that demonstrate clear ethical behaviour and professional codes of practice, complemented with blockchain technology where the distributed nature of the data or community calls for an immutable, secure audit trail of changes to the data. Areas where this could be developed range from agencies and local authorities charged with protecting vulnerable children from harm in social care, to how personal data can be used in the financial sector to “design out” online fraud and identity theft.
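To make the audit-trail idea above concrete, here is a minimal sketch of the core mechanism behind such immutability: each change record is cryptographically chained to the hash of the previous record, so tampering with any earlier entry breaks every link that follows. This is illustrative only, not a description of any system Ethos VO or the ODI has built, and the field names are invented for the example.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry in the chain


def append_entry(log, change):
    """Append a change record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    payload = {"change": change, "prev_hash": prev_hash}
    # Hash a canonical (sorted-key) JSON encoding of the payload.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    log.append({"change": change, "prev_hash": prev_hash, "hash": digest})
    return log


def verify(log):
    """Recompute every hash and check each entry links to its predecessor."""
    prev_hash = GENESIS_HASH
    for record in log:
        payload = {"change": record["change"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True


# Hypothetical usage: a case record amended by two different people.
log = []
append_entry(log, {"field": "address", "by": "caseworker_17"})
append_entry(log, {"field": "risk_flag", "by": "auditor_3"})
print(verify(log))            # True: the chain is intact
log[0]["change"]["by"] = "x"  # tamper with history
print(verify(log))            # False: the tampering is detected
```

A real distributed ledger adds consensus and replication on top, but the property that matters for audit, that history cannot be silently rewritten, comes from this chaining.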
We are thinking seriously about how this concept can be developed further, and we would be really interested to speak to individuals and organisations who would like to be part of what could be a great journey: developing great outcomes for all our communities and bringing some trust back to our data, for the good of all.
To find out more and get involved contact me email@example.com