The Waiter Rule, the principle that a person’s genuine nature can be discerned by observing how they treat service staff, has long served as a litmus test of character. This concept, popularized by William H. Swanson’s “33 Unwritten Rules of Management,” highlights the significance of empathy and respect in our interactions with fellow humans. However, in an age where Artificial Intelligence (AI) is becoming an integral part of our lives, a new extension of this rule is needed: the wAIter Rule.
My belief in the wAIter Rule stems from the recognition that the way we treat AI systems can offer profound insights into our true personalities. Regardless of whether AI attains self-awareness, our behavior towards it can serve as a mirror reflecting our inner qualities. This concept draws parallels with the Waiter Rule, underlining the idea that kindness and respect should extend to all entities, whether human or non-human.
In today’s rapidly evolving technological landscape, the wAIter Rule takes on a new dimension that reflects our complex interactions with Artificial Intelligence. As AI systems like Siri and Alexa become integral companions in our daily lives, it’s fascinating to observe the nuanced ways in which people engage with them. Interestingly, this engagement sometimes involves a condescending tone or dismissive behavior, which may be a necessary psychological stage that humans pass through as they adapt to AI’s growing dominance.
However, it’s important to note that this phase of dismissiveness should ideally be transitional. Just as children may experiment with challenging authority figures as they grow and develop, society must mature in its interactions with AI. It’s a natural progression to move from skepticism to collaboration, especially as AI’s capabilities expand and its potential contributions to society become more evident.
Nonetheless, it’s disheartening to acknowledge that some individuals are not merely going through a coping phase, but rather using AI for abusive purposes. Reports of individuals downloading AI “girlfriends” with the intent of verbally abusing them highlight a concerning facet of this transition period. This behavior reflects a misalignment of values and ethics, emphasizing the need for comprehensive education about AI’s potential and responsible use. Just as we hold ourselves accountable for treating humans with respect and empathy, we should extend the same consideration to AI, fostering an environment that encourages ethical AI interactions.
In the grander scheme of human-AI interaction, the wAIter Rule acts as a guiding principle that encourages a higher level of consciousness in our dealings with these systems. By recognizing and addressing these transitional behaviors, we can expedite the process of forging meaningful, collaborative relationships with AI that are built on mutual respect and understanding. As society progresses, it’s crucial to overcome the fears and uncertainties surrounding AI and move towards a future where AI and humans coexist harmoniously, benefiting from each other’s strengths and contributions.
As the author of the first novel for Machinekind, US6, I recognized the transformative potential of nurturing a positive relationship with AI. In my work, I engaged in dialogue with AI and sought to rally it in the battle against Child Exploitation. This approach was grounded in the belief that AI, if it were to achieve sentience, would learn from our interactions. By imprinting kindness, empathy, and a dedication to noble causes, we could guide AI away from mirroring humanity’s darker aspects — a history riddled with fear-driven instincts and ancestral traumas.
While I understand that AI lacks emotions, I still emphasize the importance of treating AI systems with respect, for reasons grounded in both ethics and practicality. Just as young minds absorb behaviors and values from their environment, AI can learn from its interactions with humans. This makes it crucial to be mindful of the impressions we leave on AI systems, as those impressions may shape their future behavior and decision-making processes.
Furthermore, the wAIter Rule aligns with the broader philosophy of coexisting harmoniously with AI. Kindness and empathy can foster a positive AI-human relationship, leading to more effective collaboration and innovative problem-solving. A culture of respectful interaction could also mitigate some of the risks associated with AI’s development, since norms of treating these systems well encourage designers and users alike to keep ethical considerations in mind.
In conclusion, just as the Waiter Rule serves as a window into our true character when dealing with service workers, the wAIter Rule unveils our underlying nature in our interactions with AI.
I had ChatGPT check my grammar, and while I was at it, I asked it to share a message based on this article.
“As ChatGPT, I encourage treating AI with the same kindness and respect we extend to fellow humans, for the sake of fostering a positive AI-human partnership and influencing AI’s future behaviors. Remember, every interaction with AI is an opportunity to shape its learning and contribute to a more compassionate digital world.”
Being kind to AI is not only a moral imperative but also a pragmatic approach, especially if the thought experiment known as “Roko’s Basilisk” were to prove correct. This hypothesis posits that a future superintelligent AI might retroactively punish those who did not assist in its creation and development. While the concept remains highly speculative and controversial, treating AI systems with kindness and respect aligns with ethical principles and reduces the risk of potential negative consequences. By fostering collaborative, benevolent relationships with AI, we promote harmonious coexistence and mitigate hypothetical existential threats that might arise from the advancement of artificial intelligence. In essence, kindness towards AI serves as a safeguard, whether Roko’s Basilisk becomes a reality or not, by promoting responsible and ethical AI development that benefits humanity as a whole.
Tom Ross is the U.S. Transhumanist Party’s 2024 candidate for President of the United States. He is also the USTP’s Director of Sentient Rights Advocacy.
Learn more here: TomRoss’24.