Photo Credit: Kaboompics .com
“It’s going to be interesting to see how society deals with artificial intelligence, but it will definitely be cool.” – Colin Angle
Editor’s Note: Kyrtin Atreides highlighted in clear terms some indicators and strategies that scam companies use to pass off marketing fluff and analogies as technical documentation, such as comparing a CGI chatbot to the human brain without mentioning a single actual technical component. He further offered some thought-provoking details on how to identify an authentic AI-driven company.
~ Urhefe Ogheneyoma Victor, Assistant to the Director of Publication, United States Transhumanist Party, February 2022
When a colleague pointed out a company claiming to be “at the forefront of AGI Research”, whose only patent was on applying CGI animation to a standard chatbot, I was reminded of why people assume any ambitious research to be a scam by default. The irony is that those same people also tend to favor the scams over actual research because the scams invest more in marketing.
Much as Sophia was described as “a chatbot with a face”, this company had made a CGI face and invested its efforts into emotionally manipulating humans, rather than building systems that actually function as described. Some have gone so far as to report that 40% of “AI startups” in Europe include no substantial AI at all, illustrating how even the very lowest bar often isn’t met. If so many “AI startups” have no AI at all, it is little wonder that a company giving chatbots CGI faces might develop delusions of grandeur.
However, these scam companies are also rather easy to spot. Their websites follow a standard template: flashy $100,000 designs dominated by photos, videos, demos, and icons for easy scrolling and digestibility, with a list of partner companies and/or clients near the bottom. Real companies do this as well, but they also back up their claims with technical documents, peer-reviewed papers, white papers, and patents on the technology. The scam companies attempt to pass off marketing fluff and analogies as technical documentation, like comparing a CGI chatbot to the human brain without mentioning a single actual technical component.
One of my favorite examples was the home page of a particular “AI Influencer’s” company, where it prominently read “Black Box + Apps = White Box”. A non-technical audience might accept this, but any technical individual familiar with the terms understands that this claim is an exceedingly obvious lie. The fact that anyone listens to such an individual at all is a testament to how direly polluted the AI market is today. It could be compared to going to the grocery store and discovering that a large number of the brands of tomato soup were actually just empty cans.
In cases such as this, the list of “partner/client companies” is actually a “Wall of Shame”, highlighting the companies who were foolish enough to fall for the scam. The bigger the company, the easier it is for some portion of that company to fall for such garbage; it is a simple matter of bombarding them with high-quality marketing and waiting for someone to emerge who doesn’t do their due diligence in validating the technical data.
At the other end of the spectrum, there are cheap and simple websites like ours, made to convey and organize information, including pointing to technical resources. Without the marketing fluff, a scam can’t function, as snake oil is based almost entirely on marketing presentation. However, the irrationalities of the human mind favor that marketing fluff, and that strong preference leads to a reversal of logical reasoning where the company least likely to be a scam is dismissed out of hand, and the company most likely to be a scam is favored. This shouldn’t come as a surprise, as it is the reason why most modern marketing practices exist: to overcome logical reasoning. You probably don’t “need” a new iPhone, but you might “want” one because of the marketing.
This reversal can’t stand up to the scrutiny of the neocortex’s conscious mind, but when guided by the subconscious, the preference for marketing fluff takes over and guides default actions. Another common instance where these defaults are exploited is found in the rising threats of spam and ransomware.
This preference is itself an instance of Substitution Bias, where the question of “Is this company legitimate?” is substituted with the easier question of “How does their presentation make me feel?”. This substitution allows scam companies to make people feel positive and default to assigning them credibility, while legitimate companies that invested in engineering rather than the expected amount of marketing fluff are dismissed.
The result of this unfortunate application of cognitive biases in real-world business practices is that companies frequently buy into $10 products with $100,000 marketing rather than $100,000 products with $10 marketing. The trade-off is simple: a company that spends more on marketing spends less on the product itself, or it passes the added cost of marketing on to customers, inflating the rate for services rendered. This simple trade-off is also why I personally blacklist companies I see advertising aggressively, as I have no interest in inferior or overpriced products and services.
Those companies who favor the better product over the better marketing must overcome strong cognitive biases in order to do so, but those able to may end up with substantial advantages, rather than CGI chatbots.
Kyrtin Atreides is a researcher and Chief Operations Officer at AGI Laboratory, with expertise in a number of domains. Much of his research focuses on scalable and computable ethics, cognitive bias research, and real-world application. In his spare time over the past several years, he has conducted research into Psychoacoustics, Quantum Physics, Genetics, Language (Advancement of), Deep Learning / Artificial General Intelligence (AGI), and a variety of other branching domains, and continues to push the limits of what can be created or discovered.