Keen to capitalise on the AI boom, some companies are exaggerating their AI capabilities. Founder Shield’s Jonathan Selby explains the risks AI washing poses for companies.
The allure of AI has captivated businesses across industries, promising increased efficiency, enhanced decision-making and competitive advantages. As AI technologies advance, companies face mounting pressure to appear innovative and technologically savvy. These dynamics have led to the emergence of AI washing, a practice where businesses exaggerate or misrepresent their AI capabilities to attract customers and investors.
For example, the retail industry is a culprit, with many companies claiming AI-powered recommendation systems that are essentially advanced filtering algorithms. Similarly, in healthcare, certain providers boast AI-powered diagnostics that are little more than pattern recognition software. Financial institutions may advertise AI-driven investment strategies that rely more on traditional statistical models than true machine learning.
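To make that distinction concrete, the sketch below contrasts the two approaches in Python: the first function is a fixed filtering rule of the kind sometimes marketed as ‘AI-powered recommendations’, while the second actually learns from behavioural data by counting item co-occurrences in purchase histories. The function names and data shapes are illustrative assumptions, not any particular retailer’s system.

```python
# Hypothetical sketch: fixed filtering rules vs a recommender that learns from data.

def rule_based_recommendations(products, user):
    # Hard-coded filters -- no learning happens here, despite the 'AI' label.
    return [p for p in products
            if p["category"] in user["preferred_categories"]
            and p["price"] <= user["budget"]][:5]

from collections import defaultdict
from itertools import combinations

def cooccurrence_scores(purchase_histories):
    # Count how often pairs of items are bought by the same customer.
    scores = defaultdict(int)
    for items in purchase_histories:
        for a, b in combinations(set(items), 2):
            scores[(a, b)] += 1
            scores[(b, a)] += 1
    return scores

def learned_recommendations(purchase_histories, basket, k=5):
    # Rank items by how often they co-occur with what is already in the basket.
    scores = cooccurrence_scores(purchase_histories)
    candidates = defaultdict(int)
    for bought in basket:
        for (a, b), count in scores.items():
            if a == bought and b not in basket:
                candidates[b] += count
    return sorted(candidates, key=candidates.get, reverse=True)[:k]
```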
Unsurprisingly, media hype and public fascination with AI have inadvertently fuelled this trend. Sensationalised headlines and a limited understanding of AI’s complexities create an environment where companies can easily overstate their technological prowess. Consumers eager for cutting-edge solutions may need to scrutinise these claims more closely.
Regulatory risks
Regulatory bodies like the US Federal Trade Commission (FTC) and Securities and Exchange Commission (SEC) are increasingly scrutinising AI-related claims. The FTC has the authority to act against deceptive practices, while the SEC focuses on investor protection and accurate disclosures.
As expected, penalties for false or misleading AI claims can be severe. Companies may face hefty fines, mandatory corrective actions and reputational damage. In extreme cases, executives could face criminal fraud charges.
The FTC has issued warnings about exaggerated AI capabilities in products and services. The SEC has investigated companies for potentially overstating their AI implementations in financial reports. Indeed, SEC chair Gary Gensler himself has warned companies about the dangers of AI washing.
It’s no surprise that industry-specific regulations add layers of complexity. For example, many healthcare organisations must ensure AI claims comply with HIPAA and FDA guidelines. Financial institutions face scrutiny under existing fintech regulations. Automotive companies must address safety concerns in AI-assisted driving claims.
Compliance challenges include maintaining accurate documentation of AI capabilities, implementing robust testing protocols, and ensuring marketing materials align with actual product functionality. Companies must also stay abreast of rapidly evolving regulations across jurisdictions.
Legal consequences
Consumer protection laws play a critical role in addressing AI washing. These laws prohibit false or misleading advertising, ensuring that companies accurately represent their products’ AI capabilities. Violations can lead to legal action by regulatory bodies or consumers themselves.
False advertising and misrepresentation claims are massive risks. Companies overstating their AI capabilities may face class-action lawsuits from consumers who feel deceived. Consequently, these legal actions can snowball into bigger issues, such as financial penalties and court-mandated corrective measures.
Liability issues surface when AI systems fail to perform as advertised. If a company claims AI-driven decision-making but relies on traditional algorithms, it may be liable for errors or biases in those systems. It’s not unheard of for product liability claims to unfold, particularly in high-stakes applications like healthcare or finance.
AI washing severely impacts brand reputation and customer trust. Once exposed, it can lead to a loss of credibility in the market. Customers may question the company’s integrity across all product lines, not just those involved in AI washing. This erosion of trust can result in long-term financial consequences, including decreased sales and difficulty attracting investors.
Standalone AI vs third-party LLMs: The dilemma
Standalone AI refers to proprietary systems developed in-house, while third-party LLMs are pre-trained language models, such as GPT, accessed via APIs or integrated into applications.
For standalone AI tools, regulatory and legal risks primarily revolve around data privacy, algorithmic bias and transparency. Companies must ensure compliance with regulations such as GDPR and CCPA, as well as FTC guidance, implement rigorous testing for bias, and provide clear explanations of how their AI makes decisions. Failure to do so can result in regulatory penalties and legal liability.
Third-party LLMs present unique challenges in risk assessment and mitigation. Companies using these models have limited control over their training data or underlying algorithms, which can make it difficult to guarantee compliance with industry-specific regulations or fully understand potential biases. Additionally, reliance on external providers introduces risks related to service disruptions or changes in terms of use.
Naturally, due diligence is crucial when using third-party AI. Companies should thoroughly vet providers, understanding their data sources, model updates and security measures. Clear contracts outlining liability and compliance responsibilities are essential. Regular audits of AI outputs and performance are necessary to catch potential issues early.
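As a rough illustration of what such an audit might look like in practice, the sketch below wraps a third-party model call, logs every prompt and response to an append-only file and flags outputs that breach a simple policy list. The call_llm callable, policy phrases and file name are assumptions; a real implementation would use the provider’s own client library and the company’s actual compliance policies.

```python
# Hypothetical sketch of a lightweight audit wrapper around a third-party LLM call.
import json
from datetime import datetime, timezone

BANNED_PHRASES = ["guaranteed returns", "medically proven"]  # example policy list

def audited_call(call_llm, prompt, log_path="llm_audit.jsonl"):
    response = call_llm(prompt)
    # Flag outputs that breach the (hypothetical) marketing-claims policy.
    flags = [p for p in BANNED_PHRASES if p in response.lower()]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "policy_flags": flags,
    }
    # Append-only log so outputs can later be compared with what was advertised.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response, flags

# Usage with a stand-in model; swap in the real provider call in production.
response, flags = audited_call(
    lambda p: "This fund offers guaranteed returns.", "Describe the fund."
)
```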
Mitigating risks
Developing clear AI communication strategies is crucial. Companies should create guidelines for accurately describing AI capabilities in marketing materials, product documentation and investor communications. This approach includes avoiding hyperbole and distinguishing between current capabilities and future aspirations.
Establishing internal AI governance and compliance frameworks helps ensure consistency and accuracy. This involves creating cross-functional teams to oversee AI-related claims, implementing approval processes for public statements about AI capabilities and regularly updating policies to reflect technological advancements and regulatory changes.
Furthermore, conducting thorough AI audits and assessments is critical. Leaders should perform regular evaluations of AI systems to verify their capabilities and limitations. This approach includes testing for bias, accuracy and reliability. Third-party audits can provide additional credibility and identify potential oversights.
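As one example of what testing for bias might involve, the minimal Python sketch below compares a model’s accuracy across demographic groups and flags any gap above a chosen threshold. The toy data, group labels and 0.05 threshold are illustrative assumptions rather than a regulatory standard.

```python
# Hypothetical sketch: compare accuracy across groups and flag large disparities.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(y_true, y_pred, groups, max_gap=0.05):
    rates = accuracy_by_group(y_true, y_pred, groups)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > max_gap  # True means the gap warrants investigation

# Example usage with toy data.
rates, needs_review = flag_disparity(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
)
```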
On that same note, leveraging transparency and accountability is key to building trust. Companies should be open about their AI development processes, including data sources and model limitations. Explaining how AI is used in products or services helps manage customer expectations. Implementing mechanisms for addressing AI-related issues or complaints demonstrates a commitment to responsible AI use.
By implementing these strategies, companies can reduce the risk of AI washing allegations, maintain regulatory compliance, and build long-term trust with customers and stakeholders.
Jonathan Selby is the tech industry lead at Founder Shield. He transitioned from traditional brokerage to a leadership role at Founder Shield, where he specialises in client strategy and cultivating a high-service culture for fast-growing companies.