While business boons from AI promise to be many — chiefly substantial gains in productivity, investments, and innovation — it’s irresponsible and shortsighted to discuss them without also acknowledging risk factors and planning how to use AI ethically and securely. Especially when you’re considering introducing generative AI tools into roles and operations that will touch your customers.

When building an AI-assisted go-to-market organization to drive efficient growth, especially one that relies on large language models (LLMs), a few key areas of concern need to be addressed to implement AI ethically and securely.

Information security and data protection

Risk: Your company’s and your clients’ internal data are of utmost importance and should be handled with care. Open, public generative AI tools can retain the prompts they’re given and use them to train future versions of the model, meaning your input may resurface in answers to other users’ queries, regardless of who’s asking the question.

So it’s no wonder data security and accuracy top the list of concerns surrounding the corporate use of generative AI. 

This example from CSO’s “Sharing sensitive business data with ChatGPT could be risky” says it all:

Imagine working on an internal presentation that contained new corporate data revealing a corporate problem to be discussed at a board meeting. Letting that proprietary information out into the wild could undermine stock price, consumer attitudes, and client confidence. Even worse, a legal item on the agenda being leaked could expose a company to real liability.

Outside of exposing your own private information, you risk giving customers wrong information due to generative AI’s tendency to “hallucinate,” or create real-seeming falsehoods in its attempt to answer your question.

From OpenAI:

ChatGPT has no external capabilities and cannot complete lookups. This means that it cannot access the internet, search engines, databases, or any other sources of information outside of its own model. It cannot verify facts, provide references, or perform calculations or translations. It can only generate responses based on its own internal knowledge and logic.

Solution: Stick to proprietary solutions with extra governance and cybersecurity in place to prevent exposing your confidential company data to open and public repositories, and read the terms of service.

When you have a contract with a software vendor, you can keep your data safely behind the protections of that agreement and control how the vendor’s AI uses it, keeping it under lock and key.

You can also train proprietary LLMs on your own internal data, reducing the risk that the copy they generate contains hallucinations and misinformation that may cause problems with customers later on.

Business ethics and legal compliance

Risk: As explained by Harvard Business Review in “Generative AI Has an Intellectual Property Problem,” the unfettered and unconsidered use of AI-generated content in public materials can result in plagiarism and theft of intellectual property; as HBR puts it, “substantial infringement penalties can apply.”

Above and beyond IP concerns, generative AI is also accused of posing “existential threats” to many people and their livelihoods — and these threats have recently been met with strikes and legal action.

Solution: When it comes to AI ethics, follow two overarching rules to avoid conflict down the road:

  1. For best outcomes, don’t try to use AI for tasks only humans can do. Do you have small teams biting off more than they can chew? An AI tool or two will take some work off their plates. Attempting to completely eliminate your sales and/or marketing functions? It won’t work, and less-than-profitable repercussions await down that route.
  2. Don’t knowingly pass others’ published or copyrighted work off as your own. While internal assets like AI-generated solution briefs or even a few AI-assisted sales emails won’t set off any alarms, avoid pulling paragraphs and ideas wholesale from unedited AI output and publishing it under your company’s domain.

Customer experience and brand equity

Risk: Another mistake business leaders can make — and one your sellers, marketers, customer success professionals, and even your customers are wary of — is to assume you can flip a switch on generative AI and have it produce all of your company’s content and communications. 

While LLMs can speed up and assist with certain tasks such as research, generating drafts of emails, or summarizing information they’ve been fed, their output requires human supervision and intervention to produce customer-worthy content.

A loss of quality in your content and customer interactions has a direct impact on your brand equity and your bottom line.

Not to mention, completely removing the human support system your customers have come to rely on also sends them a signal they aren’t worth your time or investment — and you risk losing their trust, ultimately devaluing your brand. 

Solution: The equity of your business and your brand lies in your talent — creative, original marketing attracts prospects, positive and skillful sales interactions turn prospects into buyers, and stable products and reliable relationships turn them into brand loyalists.

In “Top Predictions for 2023 and Beyond,” Aragon Research offers this advice:

For content creation, best practices will emerge that depend on AI to eliminate busywork, and free up human agents to focus on creative direction, design, and other higher-order conceptual work.

To ensure a positive buying experience that not only maintains but grows your brand equity, let AI take on the right tasks — the ones that take the monotony out of your team’s daily work so they can focus their energy on building customer rapport and engaging, valuable content.

Executive action items for generative AI:

  • Don’t blindly pursue short-term gains; educate yourself and your people on the short- and long-term risks.
  • Come up with a plan and a task force for addressing risks ahead of implementation.
  • Seek a thorough understanding of the potential risks and rewards of generative AI for each team or function and solicit advice from experts in those roles to successfully supplement their efforts.
  • If implementing AI as part of your own offerings, double down on risk prevention and security and be transparent with your customers about how it works and whether/how their data will be used. Offer opt-outs for clients in compliance- and confidentiality-heavy industries like defense and healthcare.

More so than with other recent advancements in technology, business leaders need to tackle the ethical and security implications of AI, because those implications extend beyond our offices and computers and into the real world.

But approach AI tactfully and ethically, with a focus on risk mitigation and long-term brand and business development, and you could be running a next-generation go-to-market organization powered by AI in a matter of months. Ask us how.