When Algorithms Govern…

QUESTION: The present world is highly influenced, guided, and even governed by algorithms. How deeply do they shape today’s world, and what challenges does this pose for society?

  • Alphonsa Sebastian

ANSWER: Saji Mathew Kanayankal CST

The expanding deployment of algorithms, particularly those driven by Artificial Intelligence (AI) and Machine Learning (ML), has introduced profound ethical challenges in contemporary society. Algorithmic systems now mediate a vast range of human experiences, shaping the information we consume, the relationships we form, the routes we travel, the services we access, and increasingly, the judgements made about us. Across critical sectors such as banking, healthcare, social welfare, and criminal justice, Automated Decision Systems (ADS) are entrusted with evaluating individuals and allocating resources, often with minimal human oversight. At a basic level, these systems operate by analysing personal data to calculate probabilistic assessments of behaviour or characteristics and issuing decisions based on predefined thresholds.
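That basic mechanism can be sketched in a few lines of code. The feature names, weights, and threshold below are purely illustrative assumptions, not drawn from any real system: a model converts personal data into a probability, and a predefined cut-off turns that probability into a decision with no human in the loop.

```python
import math

def risk_score(features, weights, bias=0.0):
    """Logistic model: map weighted personal data to a probability."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))  # probability between 0 and 1

def decide(features, weights, threshold=0.5):
    """Issue an automated decision from a predefined threshold."""
    score = risk_score(features, weights)
    return ("approve" if score >= threshold else "deny"), score

# Hypothetical applicant: two normalised features and invented weights.
applicant = {"income": 0.8, "missed_payments": 0.1}
weights = {"income": 2.0, "missed_payments": -3.0}
decision, score = decide(applicant, weights, threshold=0.5)
```

The moral weight of the outcome rests entirely on the chosen weights and threshold, which is precisely where bias and opacity enter.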


Advanced algorithms are designed to collect, process, and extract meaning from the extensive digital traces generated by individuals through sensors and connected devices. In doing so, they transform complex data into apparently neutral and objective outputs, claiming to predict, optimise, and manage human behaviour. While such systems are frequently praised for their efficiency and precision, their uncritical adoption risks entrenching existing inequalities and producing new forms of injustice. Far from being neutral, algorithmic processes can exert deep, opaque, and far-reaching influence on decision-making and communication, with significant consequences at personal, social, and communal levels.

Contemporary papal teachings, from Pope Francis through Leo XIV, echo these concerns by highlighting the moral risks associated with algorithmic mediation of human life. They caution against the opacity of automated decisions, the marginalisation of vulnerable populations, the reduction of persons to data points, and the erosion of human responsibility in areas of grave moral consequence, including warfare and governance.

Algorithmic Discrimination

One of the most serious dangers posed by algorithmic systems is their capacity to generate and intensify discriminatory outcomes. Contrary to common assumptions, algorithms are not neutral; they often reproduce and amplify biases embedded in their training data. When such data reflect historical, political, racial, gender, religious, or cultural prejudices, algorithms learn these patterns and perpetuate them, frequently to the detriment of the most vulnerable and socially excluded. If training data are incomplete, unrepresentative, or shaped by past discriminatory practices, algorithmic outcomes become distorted and unjust.


Within algorithmic decision-making, fairness generally denotes the absence of bias against individuals or groups based on inherent or acquired characteristics, whereas unfair systems systematically favour certain groups over others. Yet, discriminatory effects often remain concealed beneath an appearance of objectivity, allowing algorithms to reinforce and even aggravate existing structures of injustice while claiming technical neutrality. As a result, marginalised communities experience disproportionate harm through biased hiring tools, exclusionary credit-scoring systems, or predictive policing technologies.

These injustices extend across multiple social domains. In finance, lending algorithms may deny loans or impose higher interest rates on applicants from particular socio-economic or racial backgrounds, often by relying on proxy variables, such as postal codes that correlate with protected characteristics. In healthcare, algorithms trained primarily on data from dominant demographic groups may underestimate the needs of marginalised populations, leading to unequal access to care. Similarly, recruitment algorithms have been shown to favour male candidates when trained on historical employment data drawn from male-dominated workplaces, systematically downgrading applications associated with women.
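The proxy-variable effect described above can be made concrete with a toy sketch. All applicants, postcodes, and the approval rule below are invented for illustration: the rule never reads the protected attribute, yet because postcode correlates with group membership, approval rates still diverge sharply by group.

```python
from collections import defaultdict

# Hypothetical applicants: the lender's rule never reads "group",
# only "postcode" -- yet postcodes correlate with group membership.
applicants = [
    {"group": "A", "postcode": "10001"},
    {"group": "A", "postcode": "10001"},
    {"group": "A", "postcode": "20002"},
    {"group": "B", "postcode": "20002"},
    {"group": "B", "postcode": "20002"},
    {"group": "B", "postcode": "20002"},
]

LOW_RISK_POSTCODES = {"10001"}  # assumed "good" area from past data

def approve(applicant):
    # Facially neutral rule: it mentions no protected characteristic.
    return applicant["postcode"] in LOW_RISK_POSTCODES

def approval_rate_by_group(rows):
    approved, total = defaultdict(int), defaultdict(int)
    for row in rows:
        total[row["group"]] += 1
        approved[row["group"]] += approve(row)
    return {g: approved[g] / total[g] for g in total}

rates = approval_rate_by_group(applicants)
# Group A is approved far more often than group B, even though the
# model never uses the group label directly.
```

Auditing such group-level rates, rather than inspecting the rule alone, is how this kind of hidden discrimination is usually detected.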

Contemporary papal teaching offers a strong critique of algorithmic governance. Pope Francis warns that entrusting decisions about human lives to algorithms undermines justice and fraternity. Antiqua et Nova highlights how biased data can lead to exclusion, especially of the vulnerable, while Pope Leo XIV stresses that artificial intelligence cannot replace moral discernment or genuine human relationships. Together, these teachings insist that technology must always be judged by its impact on human dignity, particularly for those at risk of marginalisation.

Dehumanization in the Technocratic Paradigm

A further ethical challenge posed by algorithmic systems is their dehumanising tendency. Through continual adaptation and optimisation, algorithms increasingly shape human preferences, emotions, and choices, particularly through social media platforms and digital advertising. By prioritising engagement, efficiency, and profit over truth and the common good, these systems can manipulate behaviour, intensify social polarisation, and gradually erode human freedom and critical reasoning. Pope Francis warns that when human capacities such as intelligence or conscience are attributed to machines that lack them, “algorithms risk technologizing the human person rather than humanising technology.” Artificial intelligence, which merely simulates human capabilities while lacking genuinely human qualities, remains “qualitatively distant from the human prerogatives of knowledge and action” and thus poses serious social risks. Within this technocratic paradigm, machines increasingly overshadow human creativity, relationality, and moral autonomy.


The substitution of human judgement by automated systems also places prudence and moral responsibility at risk. Algorithms operate through data optimisation and statistical correlation, not wisdom or conscience. While their predictive accuracy may be high, their moral accountability remains limited. Moreover, many advanced machine-learning models, especially deep neural networks, function as “black boxes” whose internal logic is opaque even to their designers. This lack of explainability undermines transparency and accountability, making it difficult to detect, attribute, or correct bias and error. When harm occurs, responsibility becomes diffuse and ambiguous, obscured among programmers, institutions, and technical systems. As algorithms increasingly converge with human decision-making, it becomes harder to discern intent, foresee consequences, or determine moral agency.

Such displacement of human responsibility by technical efficiency fosters a form of technological reductionism that interprets the human person primarily in terms of measurable behaviour, productivity, or predictability. This vision stands in sharp contrast to humanistic and theological traditions that affirm conscience, freedom, relationality, and transcendence as essential to human dignity. When persons are reduced to data points or predictive profiles, critical thinking, moral discernment, and authentic interpersonal relationships are weakened, especially among younger generations. Such reduction treats persons as mere inputs within technical processes and threatens, as Pope Francis cautions, to condemn humanity “to a future without hope if people are deprived of the ability to make decisions about themselves and their lives.” Pope Francis insists that despite the impressive capacities of intelligent machines, decision-making must always remain the responsibility of the human person. Digital innovation, he warns, “touches every aspect of our lives, altering the way we think and act,” as decisions in medicine, economics, and social life increasingly combine human judgement with “automatic calculation.” Ethical vigilance is therefore essential to ensure that technology serves human flourishing rather than undermining it.

Erosion of Democracy 

The mediation of algorithms in political life poses significant risks to democratic systems. By shaping citizens’ preferences, perceptions, and patterns of thinking, algorithmic systems can subtly manipulate public opinion. While often justified in terms of efficiency and personalisation, the unchecked power of algorithms threatens core democratic values such as freedom, equality, participation, truth, and accountability.

In his analysis, Thomas Christiano identifies three algorithmic phenomena—hypernudging, microtargeting, and filtering—that profoundly influence electorates and thereby undermine democratic processes (Thomas Christiano, “Algorithms, Manipulation, and Democracy,” Canadian Journal of Philosophy 52, no. 1 [2022]: 109–124). Although these practices are not “necessarily cases of objective manipulation,” Christiano argues that they nonetheless constitute the principal forms of algorithmic manipulation in contemporary politics. He warns that “algorithms can spread fake news throughout society, undermining the epistemic potential that broad participation in democracy is meant to offer.” Moreover, they threaten political equality, since “some people may have the means to make use of algorithmic communications and the sophistication to be immune from attempts at manipulation, while other people are vulnerable to manipulation by those who use these means.”


Hypernudging refers to algorithmically driven, dynamically updating choice architectures that continuously shape and modify individual behaviour in real time through data feedback. It goes beyond traditional forms of ‘nudging’ by being continuous, adaptive, and highly personalised, often operating without the user’s conscious awareness. Whereas a traditional nudge might involve placing healthy food at eye level in a grocery store, a hypernudge represents its digital intensification: it is personalised, invisible, and constantly recalibrated. Embedded within user interfaces, hypernudges create the illusion of free choice while algorithmically filtering and ordering options to favour particular outcomes. As Christiano observes, hypernudging “does not seem manipulative when it is being used on reasonably well-informed people,” yet it becomes “a threat to democratic participation when persons are operating in environments that do not conduce to political sophistication.”
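The feedback loop that distinguishes a hypernudge from a static nudge can be sketched in miniature. The option names and weights below are made up: each click feeds back into the ordering of the menu, so the interface continuously recalibrates what the user sees first, without the user ever being told that the ordering is adaptive.

```python
# Toy sketch of a hypernudge loop: behavioural feedback continuously
# reshapes the choice architecture. All names and weights are invented.
options = {"option_a": 1.0, "option_b": 1.0, "option_c": 1.0}

def present(opts):
    """Order the menu so the most-clicked options come first."""
    return sorted(opts, key=opts.get, reverse=True)

def record_click(opts, choice):
    """Feedback loop: every click makes that option more prominent."""
    opts[choice] += 1.0

record_click(options, "option_c")
record_click(options, "option_c")
menu = present(options)
# "option_c" now heads the menu; the ordering silently steers choice.
```

The user still "chooses freely" from the menu, but the menu itself has been rewritten by their own past behaviour, which is the core of the hypernudge concern.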

Microtargeting involves the delivery of highly personalised political messages to individuals or micro-groups based on behavioural data, emotional profiles, preferences, and social identities. While this practice promises more efficient political communication, it raises serious concerns regarding manipulation, inequality, opacity, and democratic erosion. Microtargeting marks a decisive shift from public persuasion to private influence. Unlike traditional political messaging, microtargeted communications remain largely invisible to the broader public, regulators, and even political competitors. Algorithmic systems enable campaigns to deliver messages designed to resonate emotionally with specific groups, often by exploiting fear, anger, or resentment. As Christiano notes, such information may be “highly emotional, triggering anger or fear,” and may include “half-truths, outright misinformation, or hasty inferences,” aimed at nudging particular populations to vote or engage politically. These distorted forms of communication disproportionately affect low-information citizens, who are especially vulnerable to manipulation, making microtargeting a pervasive feature of contemporary democratic politics.

Filtering refers to the algorithmic processes through which information is selected, ranked, prioritised, or excluded based on data-driven criteria. These processes determine what users see and what remains hidden. Although filtering resembles traditional censorship in some respects, algorithmic filtering differs in being automated, continuous, adaptive, highly personalised, and largely opaque to users. It shapes not only access to information but also patterns of attention, belief, and desire. Through techniques such as content-based filtering, collaborative filtering, and hybrid models, algorithms exercise a form of normative power by actively structuring perception and behaviour rather than merely organising information neutrally. When political agency is continuously shaped by opaque algorithmic systems, democracy itself becomes vulnerable to subtle yet systematic forms of manipulation and control.
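Collaborative filtering, one of the techniques named above, can be illustrated with a minimal sketch. The users, items, and similarity measure (a simple Jaccard overlap) are assumptions for illustration: the system scores unseen items by the preferences of similar users, and whatever it ranks low is, in effect, filtered out of view.

```python
# Toy collaborative filtering: invented users and items.
ratings = {
    "alice": {"a1": 1, "a2": 1},
    "bob":   {"a1": 1, "a3": 1},
    "carol": {"a2": 1, "a3": 1, "a4": 1},
}

def similarity(u, v):
    """Jaccard overlap of the items two users both engaged with."""
    seen_u, seen_v = set(ratings[u]), set(ratings[v])
    return len(seen_u & seen_v) / len(seen_u | seen_v)

def recommend(user):
    """Score unseen items by the similarity of users who saw them."""
    seen = set(ratings[user])
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item in ratings[other]:
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + w
    # Items ranked low here are, in practice, never shown at all.
    return sorted(scores, key=scores.get, reverse=True)

recs = recommend("alice")
```

Even this tiny example shows the normative power at work: the ranking is derived entirely from past behaviour, yet it determines what the user is and is not invited to see next.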

In conclusion, within a contemporary context increasingly shaped by the technocratic paradigm, algorithms can no longer be regarded as neutral or merely technical instruments. They function as powerful socio-political actors that embed specific values, assumptions, and priorities within their design and operation, thereby influencing human perception, judgment, and social life. While it is neither possible nor desirable to avoid algorithmic systems altogether, their pervasive presence calls for critical discernment and moral responsibility. As Antiqua et Nova rightly affirms, “AI should assist, not replace, human judgment.” Artificial intelligence must therefore remain a tool that supports human intelligence and freedom, rather than a substitute that claims to replicate or diminish the irreducible richness and dignity of the human person.

The growing capacity of algorithmic systems to generate manipulated or misleading information demands a vigilant ethical response. A discerning approach to digital data and content is indispensable, particularly in democratic and social contexts where truth and trust are foundational. Individuals and institutions bear a moral obligation to verify the sources, accuracy, and intentions of information before accepting or disseminating it. Only through such ethical vigilance, rooted in human dignity, responsibility, and the common good, can technological innovation be oriented toward authentic human flourishing rather than becoming a subtle instrument of manipulation and democratic erosion.
