The Host Who Wasn’t There
The moment often arrives at the end of a memorable experience: a smile, a handshake, a sincere thank you directed at the individual who made a system or procedure feel human. It could be a sales rep, a customer service agent, or someone working at a municipality or a travel agency. But what if that individual, that helpful, consistent, and seemingly dedicated person you corresponded with, simply does not exist?
Imagine a guest at a holiday resort, seeking out “Max van der Zee”, the delightful concierge whose personalised recommendations and prompt email replies elevated their stay. The guest asks the front desk to thank Max personally, only to be met with a confused smile and the eventual, awkward revelation: Max is not a person but a virtual persona, a digital blend of AI and the centralised Guest Relations team.
This scenario, whether played out in a resort lobby or a corporate boardroom, is where our pursuit of systemic efficiency collides with the ethical imperative of authenticity.
This is not hypothetical. Similar situations are unfolding now in professional contexts, including, troublingly, on platforms like LinkedIn.
The Architecture of Digital Deception: Blurring the Lines of Impersonation
Technology, at its best, should transform abstraction into accessibility. In this case, however, the digital persona is used to disguise a systemic reality behind a false human clarity: it turns a complex backend of automation and human inputs into what appears to be a single, dedicated virtual employee.
The operator’s defence for creating such a persona is often rooted in pure operational mechanics: it is an efficient way to unify communication, stay in touch, and manage high-volume lead qualification or handle a constant stream of customer inquiries. The intent, they argue, is business efficiency, not genuine deception.
But here the critical observer must intervene. When a system is intentionally designed to be indistinguishable from an actual human in a professional context, ambiguity ceases to be accidental. It becomes a feature.
This differs sharply from the accepted conventions of the digital world, such as an anonymised social media avatar or a clearly fictional brand mascot. Those identities are built on a transparent intent: either privacy (anonymisation) or clear symbolism (brand visualisation). Consider legitimate transparent AI assistants like Amelia from IPsoft, clearly branded as a virtual agent with no pretence of human identity. In those cases, the defence of efficiency holds.

The digital persona, however, given a full name, photo, and even a LinkedIn profile, crosses a critical line. It is no longer an avatar for privacy or a symbol for branding; it is a misrepresentation of professional identity. When a prospect researches the company and finds “Max” on LinkedIn with endorsements, connections, and a professional history, they are making trust decisions based on false premises. The deception lies not in whether an AI is involved, but in the human decision to exploit the platform’s foundation of trust for synthetic advantage. The harm to the social contract — the assumption that the person you are networking with is real and accountable — is identical, whether an AI or a human writes the messages and interacts as someone it is not.
The Ethical Collapse and The Regulatory Reckoning
The dissonance deepens when the company in question operates in sectors built on trust and human well-being.
For a B2B relationship, the identity of the person you engage with is a material fact that dictates perceived accountability and competence. Obscuring this fact is a failure of integrity that undermines every claim of authenticity. When businesses invest heavily in enterprise software, they aren’t only buying technology; they are buying into the people and principles behind it.
The contradiction becomes striking when mission alignment comes into play. Consider an HR software company that sells solutions for employee re-integration, burnout prevention, and other human-centred services. By using a synthetic identity to manage its own client interactions, it creates an ethical hypocrisy: it is asking clients to invest in human well-being while signalling that its own human-facing roles are optional and disposable. The message sent is unmistakable: we’ll help you value your humans, but we don’t value ours enough to put them forward authentically.
Critically, this issue is no longer confined to the philosophical domain of ethics; it has entered the legal domain of compliance.
The emerging regulatory landscape, particularly the EU AI Act, reinforces this ethical clarity. The Act imposes transparency obligations on AI systems that interact with natural persons (Article 52 of the original proposal, Article 50 in the adopted text). For chatbots and other AI systems deployed in customer-facing roles, the requirement is clear: users must be informed that they are interacting with an AI system. Even when human operators craft the messages, the misrepresentation of identity itself may constitute the compliance violation. The deception is in the fabricated persona, not merely in the technology behind it.
The creation of a convincing, named, and networked professional persona like “Max van der Zee”, without any explicit, prominent disclosure, stands in direct conflict with the spirit, and likely the letter, of this regulation. Undisclosed efficiency becomes a compliance risk, turning short-term cost savings into long-term financial and reputational liabilities. While enforcement mechanisms are still developing, early compliance now prevents costly retrofitting or damage control later.
The Slippery Slope: Where Does This Lead?
If we accept synthetic personas as normal in professional contexts, we must consider the consequences. What happens to professional networks when verification becomes impossible? How do references work when the recommender may not exist? What becomes of the trust infrastructure that underpins B2B relationships when every interaction triggers suspicion?

The normalisation of professional impersonation doesn’t just affect individual transactions. It degrades the entire ecosystem. LinkedIn becomes less valuable when profiles can’t be trusted. Professional introductions lose meaning when the person making them might be fictional. The cumulative effect is a collapse of the social capital that makes business relationships efficient in the first place.
We are at a choice point. The decisions companies make now about transparency will either reinforce or erode the foundations of professional trust for the next generation.
Trust Through Radical Transparency
The purpose of technology should be to amplify human capability, not just to accelerate systems. The solution is not to abandon the pursuit of efficiency, but to anchor it in dignity and meaningful disclosure. Executives focused on lowering human burden must recognise that the highest efficiency comes from human-machine synergy built on trust, not from deception. To counter such deception, some preventive measures can be taken:
- Re-design the Persona
Replace the human name and photo with a transparent, functional title: “Hotel X’s Digital Guest Coordinator” or “Aura, Automated Assistant (managed by Company X team)”. This maintains the unified voice while eliminating any deception.
- Declare the Boundary
Implement clear, multi-layered disclosure: a prominent label on the website and an unambiguous statement on the LinkedIn profile (if such profiles are used at all) declaring the entity as a “Digital Persona, managed by the Human Marketing Team”. Transparency should be impossible to miss, not buried in terms and conditions.
- Elevate the Human
Use the AI persona for the mechanics, such as scheduling, data retrieval, and initial information gathering, but deliberately emphasise the human team for what matters: strategic advice, complex problem-solving, emotional connection. This preserves the human team’s unique value and justifies the customer’s trust. Make it clear when someone is being transferred from automated assistance to human expertise, and vice versa (a minimal sketch of such a disclosed handoff follows this list).
- Establish Accountability Mechanisms
Provide a clear escalation path to named human leadership for complex or sensitive matters. When stakes are high or relationships matter, real people with real accountability must be visible and accessible. This isn’t just ethical; it is also commercially sound.
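To make these measures concrete, here is a minimal sketch in Python. It is an illustration only, assuming a hypothetical “Aura” assistant, a fictional contact address, and invented field names; the point is simply that the disclosure label, the identity of the handler, and an escalation route to an accountable human team can travel with every message instead of being buried in the terms and conditions.

```python
from dataclasses import dataclass

# Hypothetical illustration: the assistant name, contact address, and field
# names are assumptions for this sketch, not a reference to any real product
# or to the exact wording required by any regulation.

AI_DISCLOSURE = (
    "You are chatting with Aura, an automated assistant "
    "managed by the Guest Relations team."
)

@dataclass
class Reply:
    text: str                # the actual answer to the customer
    disclosure: str          # always present, never hidden in fine print
    handled_by: str          # "automated assistant" or a named human role
    escalation_contact: str  # accountable human team for complex matters

def build_reply(answer: str, needs_human: bool) -> Reply:
    """Wrap every outgoing message so the AI disclosure and a route to
    accountable humans accompany the content itself."""
    if needs_human:
        # Hand off transparently: say who takes over, not "Max will reply".
        return Reply(
            text="A member of our Guest Relations team will follow up personally.",
            disclosure=AI_DISCLOSURE,
            handled_by="Guest Relations team (human)",
            escalation_contact="guest.relations@example.com",
        )
    return Reply(
        text=answer,
        disclosure=AI_DISCLOSURE,
        handled_by="automated assistant",
        escalation_contact="guest.relations@example.com",
    )
```

Any real implementation would look different, but the design choice stands: transparency becomes a property of each individual interaction rather than a footnote on a policy page.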
The Choice Before Us
Thoughtful leaders understand that our systems reflect our moral intentions. If we use AI to obscure the truth of our operations, we undermine the integrity of the technology itself. True leadership lies not in optimising communication through a synthetic mask, but in cultivating understanding through radical transparency.
In an age where AI capabilities will only grow more sophisticated, the companies that build and maintain trust will be those that choose transparency over trickery from the start. The technology is not the problem; the deception is. And the solution is remarkably simple: tell the truth about who (and what) people are really talking to.
Clarity and conscience belong together, now more than ever.
https://www.linkedin.com/pulse/host-who-wasnt-faux-human-ecosystems-ai-roland-biemans-aenje
