September 29, 2025 • 7 minute read

Reversing the AI Trust Crisis: 7 Tenets for Ethical AI Use

AI is at a critical juncture. Its capabilities are advancing faster than regulatory bodies can respond, and while both businesses and consumers are voicing concerns about AI, its use is still skyrocketing.

Consumers have embraced AI for research, productivity, and conversation; ChatGPT alone has more than 700 million weekly users. Yet a majority of consumers polled by Pew Research Center say they’re extremely or very concerned about misuse of personal data (71%), inaccurate results (66%), and bias (55%).

On the business side, 78% of organizations used AI in 2024, a sharp increase from 55% in 2023. At the same time, Gartner recently placed Gen AI in the “Trough of Disillusionment” of its Hype Cycle framework, as business leaders express disappointment with ROI. On the trust side, 55% of AI experts lack confidence that U.S. companies will develop and use AI responsibly, and 55% doubt that the U.S. government will regulate AI effectively.

There are many reasons for this AI trust crisis. AI is changing how we work, how we learn, and how we access information. Bad actors, poor training, and poor implementation can all lead to the issues listed above. And yet, to the average user, it can seem like no one is slowing down to address current problems before steaming ahead with new advancements, like agentic AI.

In such an environment, every AI constituent — companies that produce AI platforms, organizations that utilize them, and end consumers — is at the mercy of the worst actors unless ethical and experienced AI companies take a stand.

Now Is the Time to Commit to Responsible AI

Interactions has led AI innovation in customer care since 2004, a lengthy tenure in a space that’s filled with startups leaping into the opportunities afforded by AI advancements. Two decades of honing our platform — a sophisticated blend of predictive, generative, and agentic AI with human intelligence — have produced 97% accuracy rates and a product that delivers effortless customer experiences from day one. Our team boasts an impressive roster of AI experts, including many PhDs, whose work has earned 130+ patents.

The reason we’re tooting our own horn is that we believe it’s up to companies like ours to take a leadership position on AI ethics and trust. Every day, we face a gauntlet of tough questions from prospects, customers, auditors, and partners on how we handle AI training and customer data. This pressure (which we welcome!), along with our company ethos of always using AI with the utmost integrity, means that we’ve already thought through the many ethical questions that AI raises, developed governance and accountability measures, engineered critical security and data safety processes, and documented the procedures. 

This work is ongoing. New advancements, use cases, and regulatory frameworks will crop up, and we’ll continue to center trust and ethics in developing responsible AI solutions. 

7 Tenets for Ethical AI Development and Use

A one-size-fits-all approach to AI ethics isn’t feasible, as every organization differs in whether and how it develops and uses AI. Still, there are core responsible AI principles that we believe every organization should consider as it creates its own AI rulebook.

  • Put people first. For each new use case, examine the potential positive and negative effects on your customers, employees, and humans in general. For example, will your use of AI increase inclusivity and standardize care for all customers? How are you testing for bias and errors? How are humans involved in AI oversight processes? What value do you provide in exchange for the use of customer data?
  • Take an Ethical by Design approach. This stance, inspired by Secure by Design principles, accounts for AI risks from the moment each new AI app, use case, or platform purchase is conceived. All decisions should be weighed against your company’s AI guidelines before any risks become reality and you head down roads that are costly to reverse.
  • Demonstrate integrity through transparency and accountability. Define and assign accountability at every stage, maintain accessible documentation on your responsible AI procedures, and always be ready to demonstrate that you are following your own rules. Regulatory requirements, third-party audits, and probing customer questions should be considered welcome opportunities for demonstrating your trustworthiness, rather than intrusions.
  • Engineer for the strictest standards and protect by default. Building to the most stringent standards, whether or not they currently apply to your region or industry, enables you to proactively protect your customers and future-proof your business against scrambling to meet evolving regulations like the EU’s AI Act and various U.S. legislative actions. Similarly, offering high levels of data protection by default takes the onus off your customers and shows your commitment to their privacy.
  • Select the right AI for the right task. When newer AI technologies like generative AI and agentic AI capture our imaginations, they can quickly become the go-to goal for company innovation. However, using the right AI for the right task is both an ethical and a risk consideration. For example, using Gen AI when a deterministic solution is more appropriate can introduce unnecessary bias and error risks and consume more computational power; see the brief sketch after this list for one way to frame that choice. This isn’t to say that Gen AI doesn’t have its place. Rather, its use should be carefully considered in relation to your needs, goals, and risk tolerance. (To read more about how to identify the right use cases for agentic AI, check out this complimentary Gartner® report.)
  • Consider the company you keep. Every company has either become or will become an AI customer, as AI features become the default across the technology landscape. This is true of AI producers as well. Create procedures to properly vet your AI partners for trustworthiness and integrity. Not only do you want to be sure these companies have similar ethical stances, protective processes, and AI stack vetting procedures, but you also want to know that they’ll do what they say when it comes to working with your data and your customers’ data.
  • Prioritize learning, questioning, and taking a stand. AI, much like other heavily hyped technologies such as cloud computing, is incredibly complex, making it difficult to keep pace with its acceleration. Constant learning, curiosity, discussion, and vigilance are key. Form an AI council to navigate thorny AI questions, balancing the risks and rewards of AI implementation with executive priorities. This council should include members with expertise in business strategy, data, risk, ethics, IT, and customer experience. Additionally, all employees should receive continuous education, not only on new AI initiatives (to improve adoption) but also on AI threats from bad actors, good AI practices (like keeping proprietary data out of public models), and the benefits of powerful, reputable AI platforms.
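To make the “right AI for the right task” tenet a bit more concrete, here’s a minimal, purely illustrative sketch in Python. Everything in it is an assumption made for demonstration purposes: the intents, the keyword rules, and the function names (deterministic_intent, generative_fallback, route) are invented, and it does not reflect how our platform or any particular vendor’s platform works. The idea is simply that well-defined, high-volume requests can be handled deterministically, with Gen AI reserved as a fallback for open-ended ones.

```python
# Purely illustrative: route each request to the simplest adequate tool.
# All intents, keywords, and function names are invented for this sketch.

RESET_KEYWORDS = {"reset", "password", "unlock"}
BALANCE_KEYWORDS = {"balance", "owe", "bill"}


def deterministic_intent(utterance: str) -> str | None:
    """Handle well-defined, high-volume intents with simple rules.

    A deterministic path is auditable and repeatable, so it carries
    less bias and error risk (and less compute cost) than a
    generative model for tasks like these.
    """
    words = set(utterance.lower().split())
    if words & RESET_KEYWORDS:
        return "Starting the password-reset flow."
    if words & BALANCE_KEYWORDS:
        return "Looking up your current balance."
    return None  # No confident rule-based match; escalate.


def generative_fallback(utterance: str) -> str:
    """Stand-in for a Gen AI call, reserved for open-ended requests.

    A real system would invoke an LLM here, accepting the extra
    guardrails, review, and compute cost that choice implies.
    """
    return f"[Gen AI would draft a response to: {utterance!r}]"


def route(utterance: str) -> str:
    # Prefer the deterministic path; fall back to Gen AI only when needed.
    return deterministic_intent(utterance) or generative_fallback(utterance)


if __name__ == "__main__":
    print(route("I need to reset my password"))   # deterministic path
    print(route("How do your two service plans differ?"))  # Gen AI fallback
```

In practice, the deterministic layer might be a dialog flow or a trained intent classifier rather than keyword rules, but the ordering holds: reach for the simplest adequate tool first, and reserve generative models for the problems that genuinely need them.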

The AI Trust Council

The seven tenets outlined above aren’t just theory — they’re the foundation of how we build and deploy AI at Interactions. We’ve codified this approach to trustworthy, ethical AI by forming the AI Trust Council, established by Interactions experts who are deeply engaged with and knowledgeable about AI and its impacts.

The Council’s mission is to help our company, our customers, and the general business community stay educated, keep asking the tough questions, and, above all, prioritize trust and humans as AI continues to evolve. From discussing the ethics of new use cases, to monitoring regulatory changes, to publishing practical guidance, the AI Trust Council shows how responsible AI principles can be embedded into day-to-day decision-making.

For Interactions, the Council is both an internal compass and an external voice — a way to ensure we stay true to our commitments while helping others navigate the complex and fast-moving AI landscape.