
“I am NOT human”: Identity, Guardianship and Ethics

[Image: Humanoid AI holding a mask, duality and identity]

Authored by James Catania, rev 2.0

With the pervasive and exponential intertwining of Artificial Intelligence (AI) in our daily lives, a pivotal question must be discussed: should Artificial Intelligence Agents (Aia) be mandated to disclose their non-human identity? This question is laden with ethical considerations and moral philosophical contention. The evolving discourse has given rise to two distinct factions: those advocating, to varying degrees, for transparency, and those advocating, to varying degrees, for identity obfuscation. Where is the balance between complete opaqueness and complete transparency? What considerations should be at the forefront in deciding where to set the alpha channel slider?

A nuanced perspective emerges, challenging the conventional narrative of transparency. Advocates for ambiguity have a compelling case, asserting that concealing an AI’s identity facilitates its integration into human-centric environments. They argue that such a covert approach minimises the biases and prejudices that might emerge when users are cognisant of the artificial nature of the entity. Still, cautious voices within this faction underscore the inherent risks, warning against a potential obfuscation of accountability and the inadvertent fostering of misuse.

Echoing this narrative, reflecting on the heuristics of voice assistants (the most prevalent of human-AI technologies) reveals a notable absence of expressions of politeness and gratitude, and an increase in aggressiveness, in conversations with Aias. The study by Fatima Shahid on anthropomorphism in voice assistants such as Siri and Google Assistant offers a pertinent perspective here. Her findings suggest that the human-like attributes of these technologies significantly affect how users interact with and accept them. This ties directly into the ongoing debate about AI transparency and ethics. Such anthropomorphism, while enhancing user engagement, also raises important questions about user awareness and the clear distinction between AI and human entities. Shahid’s research thus becomes a critical piece in understanding how the presentation[i] of AI affects its integration and perception in human-centric environments.

A study by NAPA confirms what is already widely observed: humans adapt their language with voice assistants, forgoing human pleasantries and intricacies and devolving into a pseudo-custom language designed to get the best results. At the end of the day, “Google, kitchen on!” works far better than “Google, please switch on the kitchen”. These observations raise a compelling question: if an AI algorithm responds positively to pleasant interactions and the AI agent is mandated to reveal itself, how might users adapt their behaviour? Would they naturally adopt courteous behaviour, or could there be a perceived obligation to act a certain way solely to optimise the service? Anthropomorphism does not stop at how an agent sounds or looks.
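To see why terse commands tend to win, consider a minimal sketch of keyword-based intent matching. This is a hypothetical illustration, not how any real assistant is built; the intent table and scoring here are invented purely to show how polite filler words dilute a naive matcher’s confidence.

```python
# Hypothetical sketch: a naive keyword-based intent matcher. Real
# assistants use far richer NLU pipelines, but the intuition holds:
# filler words ("please", "switch", "the") add tokens the matcher
# must discard, diluting its confidence in the command.

INTENTS = {
    "kitchen_light_on": {"kitchen", "on"},
    "kitchen_light_off": {"kitchen", "off"},
}

def match_intent(utterance):
    """Return (best matching intent or None, crude confidence score)."""
    tokens = set(utterance.lower().replace("!", "").replace(",", "").split())
    best, best_score = None, 0.0
    for intent, keywords in INTENTS.items():
        if keywords <= tokens:                   # all keywords present
            score = len(keywords) / len(tokens)  # filler dilutes the score
            if score > best_score:
                best, best_score = intent, score
    return best, best_score

print(match_intent("Kitchen on!"))
# -> ('kitchen_light_on', 1.0)
print(match_intent("Google, please switch on the kitchen"))
# -> ('kitchen_light_on', 0.333...)
```

Both utterances resolve to the same intent, but the polite version carries four extra tokens that contribute nothing, so the terse command scores far higher under this crude metric.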

Yet this ethical conundrum transcends theoretical abstraction, extending its tendrils into the fabric of our interconnected world. The disclosure or concealment of AI identity becomes a catalyst, shaping societal perceptions and influencing the very dynamics of human-AI collaboration. The manner in which individuals and communities engage with these intelligent entities is not merely a technical consideration but a sociological phenomenon, affecting trust and cooperation and reshaping the foundational structures of our social fabric.

When discussing the implications of Aia identities, we must begin to consider the prospect of guardianship in the context of AI. As we grapple with the increasing integration of intelligent machines into various facets of our lives, the notion of assigning guardianship to oversee the actions and decisions of artificial entities emerges as a potential solution. This concept goes beyond conventional frameworks of accountability, acknowledging that the dynamic and evolving nature of AI necessitates a dedicated oversight mechanism. Guardianship implies a responsible entity, whether an individual or an organisation, assuming a role akin to a custodian, actively monitoring and steering the actions of AI systems. This approach not only addresses the accountability vacuum but also seeks to ensure that AI entities operate within predefined ethical boundaries, mitigating potential risks and fostering a more transparent and controlled integration into society.

However, the introduction of guardianship brings a cascade of ethical and practical considerations. On one hand, assigning guardianship could empower a designated entity to act as a steward, aligning the AI’s actions with societal values and ethical norms. On the other hand, it raises questions about the concentration of power and potential misuse, as the guardian entity becomes a gatekeeper of AI behaviour. Striking the right balance, defining the scope and limits of guardianship, and establishing robust mechanisms for oversight and accountability become imperative. Additionally, the concept of guardianship necessitates a re-evaluation of legal frameworks to accommodate the unique challenges posed by AI systems. Crafting nuanced legislation that acknowledges the evolving nature of technology while safeguarding human interests will be paramount in navigating the uncharted territory of AI guardianship.

This reinforces what has already been hinted at multiple times: there is a pressing need for a specialised technology court. As technology continues to advance at an unprecedented pace, the legal system must evolve to effectively address the intricate challenges arising in the digital landscape. A dedicated technology court would offer a focused approach to navigating cases related to cybersecurity, data privacy, and intellectual property in the digital realm. However, implementing such a court requires a commitment to ongoing education for legal professionals to keep pace with technological advancements. Collaboration between legal and technological experts is essential, ensuring a nuanced and informed response to the unique legal intricacies posed by the rapid evolution of technology.

We must also understand that while the standard concept of identity, from the perspective of an Aia, primarily revolves around whether an AI agent should disclose its identity, it is equally imperative to scrutinise the empowerment of users in shaping their interactions with artificial intelligence. The focus of identity has to be broadened to encompass the notion of user agency: a user’s ability to assert control over their engagement with AI systems. In tandem with this, the discourse should delve into the ethical dimensions of informed consent. Users should not only be aware of the existence of AI entities but also be equipped with a comprehensive understanding of how their data is utilised, the implications of AI-driven decisions, and the extent to which they can exercise control. By exploring user agency and informed consent, we not only align with ethical principles but also address the broader societal implications of integrating AI into our daily lives. This approach fosters a human-centric perspective, ensuring that individuals are active participants in, rather than passive subjects of, the evolving landscape of human-AI collaboration.

In the discourse on AI identity, the EU’s Artificial Intelligence Act[ii] is noteworthy. It mandates that AI systems be safe, transparent, and traceable, and notably it requires that people be informed when they are interacting with an AI system, aligning with our discussion on AI transparency and guardianship. The Act’s regulatory framework classifies AI systems into tiers based on risk, resonating with our concept of guardianship. It also emphasises the evolution of legal frameworks to govern technology, echoing our proposition for a specialised technology court. This Act underscores the ethical responsibilities in AI’s evolving landscape, reinforcing the importance of digital identity for artificial agents.
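As a rough sketch of that risk-based structure, the tiers below reflect the Act’s published four-level framework; the example systems and the lookup table are illustrative assumptions only, not legal text or an official classification method.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four-tier, risk-based classification."""
    UNACCEPTABLE = "banned outright (e.g. social scoring by public authorities)"
    HIGH = "allowed under strict obligations (e.g. hiring or credit-scoring tools)"
    LIMITED = "transparency duties (e.g. chatbots must disclose they are AI)"
    MINIMAL = "no new obligations (e.g. spam filters, AI in video games)"

# Illustrative mapping only: real classification depends on the Act's
# annexes and case-by-case legal analysis, not a lookup table.
EXAMPLE_SYSTEMS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} risk, i.e. {tier.value}")
```

Note how the limited-risk tier is precisely where this essay’s central question lands: conversational agents fall under transparency duties, obliging them to reveal their non-human identity.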

While it is true that obfuscation makes the homogenising of AI-human behaviour inherently easier, at what cost? What future weight does it place on society? Hiding something and accepting it as de facto has never been preferable to informed consent and adaptation towards acceptance, which guarantees a higher future purpose to that same collaboration. From Kant’s deontological ethical principles to utilitarianism and virtue ethics, the consensus has consistently favoured transparency as the optimal choice for society, offering a favourable balance between costs and benefits.

At the moment, there is a feeling with AI and AI agents that the rapid pace of technological advancement resembles a runaway dog, sprinting ahead so swiftly that catching up seems an elusive pursuit. There is a pervasive concern that individuals might grow weary of the chase, resigning themselves to the notion that the runaway dog may never return. We must not resign ourselves and simply lie down in the current to be swept away. We must ensure that safety measures are put in place for safe navigation of this landscape.


[i] https://figshare.mq.edu.au/articles/thesis/Siri_or_Google_Assistant_The_impact_of_voice_assistant_anthropomorphism_on_consumer_usage_intention/19442021/1

[ii] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
