On page 549 of “Algorithmic Accountability and Public Reason”, Binns argues that the more decision-making we delegate to algorithms, the more we need a system of accountability, and that the political theory of public reason can help supply a framework for one. Public reason refers to universal rules that can be justified on public grounds and shared by reasonable people, without appeal to beliefs that are religious, metaphysical, moral, or political in nature. That is, those domains contain content that cannot serve as the basis for universal laws. The author suggests this theory is useful in limiting the harms of algorithmic decision-making because it restricts decision-makers to acting on specific standards that are acceptable to all reasonable people.

Given this context, I pose the question: who are these reasonable people? The question matters because those engaged in public reasoning make universal rules that apply to everyone, reasonable or not, in the same way. One possible answer is that reasonable people are those who agree on universal laws grounded in human rights such as liberty, opportunity, and equitable distribution of wealth. But if we take the example of India, gender equality is not treated there as an objective truth, which suggests that every region has its own set of truths. This is because culture, public opinion, and upbringing shape what people regard as objective truths, and consequently determine how they think. If I am born into a place where people think only in a certain way, say, that men and women are not equal, then my thinking would be conditioned in the same direction, and I might not be able to fathom a framework in which such equality exists. Further, within a region there may be subregions that think differently from one another; abortion laws in the United States are an example. The claim that a mother gets to choose what to do with her own body is not agreed upon everywhere: some states have banned abortion and would punish her for the crime, others permit it, and some actively support it.

It is not that these places are unreasonable, but that there are different kinds of reasonable thought. The problem is that we want to hold algorithms accountable in the same way everywhere, yet to do so we would have to exclude any place that does not follow the exact same reasonable thought process. We cannot account for every region’s opinions with custom accountability measures when creating a universally accountable algorithmic system, and it is unclear where to draw the boundaries between such regions, since two regions might differ on one question while agreeing on another. We have no settled definition of who the universally reasonable people are in the context of public reason, because region shapes basic moral values, and these differ across the globe. Therefore, the practical implementation and scalability of such an accountability system for algorithms would not hold.