There is a common perception that data is neutral, but we are gradually realising that this is a myth. Data is a living, breathing organism, growing exponentially every day, and it can cement histories of inequality. We have seen this in countless examples: the facial-recognition failures illustrated by Joy Buolamwini’s experience in the film Coded Bias; the predictive policing of PredPol and Palantir, under which a number of Black and brown men were wrongfully arrested; and the unchecked algorithms that helped crash the global economy in September 2008.
What are the effects of how we collect, examine, curate and disclose data? Each step in the chain of custody has some impact on a person. Traditionally, professionals such as lawyers, doctors and accountants have adhered to a code of ethics because the information they handle is so precious that it must be treated with the utmost care. So do we need to start developing a data code of ethics, and a rubric to evaluate distinct types of data?
Most companies subscribe to a code of ethics published on their website (the front door), but this is often a poor reflection of what goes on behind closed doors. How do we reconcile this corporate cognitive dissonance? In the rush to get products to market, some companies have chosen to ignore the research on good ethical practice, and the misinterpretation or careless curation of data has come at too high a price. Frances Haugen, the Facebook whistleblower who said Facebook was knowingly prioritising growth over children’s safety, and Safiya Noble’s work on algorithms of oppression are well-publicised examples of the harm that has been caused to society. This zero-sum approach, in which duty to shareholders takes precedence over society, is not sustainable. The consequences are now starting to affect companies’ bottom lines, but they have also affected internal morale (we have seen walkouts at Google, protests at Amazon and strikes by Uber drivers) and, most importantly, amplified public mistrust. So how can we do better?
Deeply entrenched cultural bias against women, the widespread lack of accessibility of goods and services, and the environmental impact of storing so much data are just a few examples that dispel the myth that data is neutral. Some 300 cognitive biases have been unearthed: https://www.visualcapitalist.com/every-single-cognitive-bias/
How can we avoid repeating this history? Which biases do we choose to eliminate and how do we code them out?
This takes thoughtful consideration. A good place to start is to examine these biases at the intersection of existing legal frameworks such as equality, consumer-protection and product-safety legislation. Another option is to look at human-centric design. Collectively, we need to ask ourselves: what problem can we solve with this data? How can we achieve a better and fairer society? Ethics needs to be credible, objective and adjudicated by an independent third party.
What would good human-centric design look like? One approach is to take a multi-stakeholder view. Focus in this area has so far been on Western philosophers, highbrow academics, top data scientists, security gurus and state-of-the-art middleware. This is not working.
We need to include more diverse voices and opinions: the behavioural scientist, the customer, the user, the actual class of individuals impacted by an algorithm or AI system. When building any data-based product, the question we need to ask is: “Who is this not serving?” Edge-case reasoning will have to prevail if we are to build a fully inclusive infosphere; thinking about edge cases and social impact will create a more sustainable product. Diversity and inclusion can no longer be a policy; it has to be a practice. It is my hope that we see the rise of the ethics committee, overseen by a Chief Ethics Officer. Why a committee? It supports diversity of thought and opinion, and avoids cognitive bias, sunk-cost bias and a single point of failure. A combined top-down and bottom-up approach will embed a culture of ethics: we avoid silos, and systemised ethical checking ensures that one myopic view does not exert power over another.
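Asking “who is this not serving?” can be made concrete with a simple audit. As a hypothetical illustration (not a complete fairness methodology), the sketch below compares an algorithm’s positive-decision rate across demographic groups and flags any group falling below four-fifths of the best-served group — the “four-fifths rule” used in US employment-discrimination guidance is just one possible rubric, and the function and threshold names here are my own:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-decision rate per group, e.g. loan approvals or job-screen passes."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest group's rate
    (the four-fifths rule; other thresholds and metrics are equally defensible)."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Toy example: group "b" is approved far less often than group "a".
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = selection_rates(preds, groups)   # {"a": 0.75, "b": 0.25}
flags = disparity_flags(rates)           # {"a": False, "b": True}
```

A check like this is only a starting point — it surfaces a disparity for the ethics committee to interrogate; it does not decide whether the disparity is justified.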
At the time of writing, ethics is seeking to build a bridge between the GDPR and the EU AI Act, which is still under consideration. We live in a hyper-connected world, and with remote working we also live in an untethered world. It is up to each of us to build ethical frameworks, standards and disciplines, and to commit to be bound by them, or we risk losing everything.
Thanks for reading and please stay tuned for more blog insights!
Privacy Counsel, Altada Technology Solutions
Follow us on Twitter @altadasolutions
Connect with us on LinkedIn @AltadaTechnologySolutions