Doing algorithmic review as a team: a practical guide

Products that rely on data science run the risk of incorporating societal biases into the algorithms that power them — potentially causing unintended harms to their users. At Wellcome Data Labs we have been thinking about, and openly sharing, our journey towards surfacing and mitigating those harms. We split our review process into two streams: a product impact analysis, which looks into harms that can be caused by the product irrespective of the algorithm, and an algorithmic review process, which looks into harms introduced by the model or data irrespective of the product. We have written about product impact analysis in the past, so this post is going to discuss the second part of our review process in more detail....

March 24, 2021 · 7 min

Assessing the fairness of our machine learning pipeline

My day-to-day job is to develop technologies that automate different processes at Wellcome through data science and machine learning. As a builder of these digital products, my team and I have to consider the unintended consequences they may have on users and on society more broadly. We know this is important because of famous cases where the consequences weren't considered and harm was done — for example, the Google Photos tagging system and the Amazon hiring algorithm, both of which have been accused of unintended racist and sexist bias....

March 10, 2020 · 7 min