How To Create An Effective AI Policy: A Business Owner’s Guide

TBD Conference
4 min read · May 29, 2023

As AI tools become common in the workplace, having a generative AI use policy doesn't just make sense; it could be the thing that keeps you in business.

Employee social media errors keep getting people fired and companies into hot water; whether it's TikTok firings or leaked internal policies, everything goes viral faster and lasts forever. AI poses a different issue, a reputational risk that goes beyond the individual, so companies need to make sure employees understand which tools are acceptable, when they can be used, and what can be put into them.

Your IP and data could walk out the door, and you wouldn't even know it. Samsung, Apple, and Goldman Sachs have all banned the use of generative AI tools with company data. And the problem just got a whole lot bigger with the launch of the ChatGPT mobile app (US/iOS only for now).

[Image: example prompts and use cases from App Store screenshots provided by OpenAI]

So what do the big tools do with your data?

OpenAI, creator of ChatGPT and DALL-E, the most popular AI tools, says it "may use the data you provide us to improve our models". ChatGPT does not currently train itself on the data you input, but it is entirely possible that future models will. For this reason alone, companies concerned about their own privacy (and their employees' privacy) should opt out of having their data used to train OpenAI's models; you can do this by changing your settings at privacy.openai.com.

Google, on the other hand, saw which way the wind was blowing with public opinion and opted for big "privacy first" messaging around Bard at its recent developer conference (probably because it has more contextual data and a larger user base to draw from). Bard, accordingly, does not train itself on the queries you put into the system.
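For teams that want the productivity without the training-data exposure, one pragmatic route (as of this writing) is to send work through OpenAI's API rather than the consumer ChatGPT app, since OpenAI states that data submitted via the API is not used to train its models by default. A minimal sketch in Python, assuming the `openai` package and an `OPENAI_API_KEY` environment variable:

```python
# Minimal sketch: querying OpenAI through the API rather than the consumer
# ChatGPT app. As of March 2023, OpenAI says data sent via the API is not
# used to train its models by default (unlike consumer ChatGPT).
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this press release in three bullets."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The usual caveat applies: "not used for training" is a policy promise, not a technical guarantee, so rules about what goes into a prompt still matter.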

Why is this a problem?

Beyond simply typing in text, AI tools increasingly encourage uploading files for greater productivity and can now fully access the public (and dark) web. Even setting those capabilities aside, if you don't guide employees on how you'd like them to use (or not use) the tools available, you risk them pasting company data into prompts that you'd prefer no one ever saw; there's a sketch of one technical guardrail below. What about partner data? That's a lawsuit waiting to happen. Training staff now is risk mitigation for both the short and the long term. Beyond that, you are upskilling and empowering staff, which the data shows will keep them around for longer.
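Policy is the foundation, but some IT teams also add a technical backstop: a pre-flight filter that redacts obviously sensitive strings before a prompt ever leaves the network. A hypothetical sketch follows; the patterns, labels, and `redact` helper here are illustrative assumptions, not a product.

```python
# Hypothetical pre-flight filter: strip obviously sensitive strings from a
# prompt before it is sent to any external AI tool. Illustrative only; real
# deployments need far more robust detection, plus legal and IT review.
import re

# Example patterns; these labels and regexes are assumptions for the sketch.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "codename": re.compile(r"\bProject [A-Z][a-z]+\b"),
}

def redact(prompt: str) -> str:
    """Replace any match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Email jane.doe@acme.com the Project Falcon forecast."))
# -> Email [REDACTED EMAIL] the [REDACTED CODENAME] forecast.
```

A filter like this catches the careless case, not the determined one, which is exactly why the policy and training described below still matter.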

So, what should the document look like? As short as possible is a good start. Take a leaf from Starbucks's social media policy: a couple of pages, links to resources, a clear contact, good language, and a flow that sets out guidelines, not handcuffs. It's personable and reflects the brand ethos; yours should be too. There's a more technical skeleton here.

Some things to consider when creating your AI policy document:

  • Guardrails or handcuffs? You can go nuclear (like the companies mentioned above) or empower your people to see the tools as "useful but flawed, so we need to be careful". The time to educate, not scare, is now. You might want to start by asking who knows what the tools are, what they do, and whether they have been used already. Be clear about who the policy affects and when it comes into effect.
  • Include multiple departments when drafting it. At the bare minimum, expect to consult legal, compliance, IT, management, and HR.
  • Watch your language: use a positive tone, not a fearful or aggressive one. Use catch-all statements like the ones newsrooms are using, such as "No company employee will use AI to create content that is harmful, biased, misleading or discriminatory." Other elements you might want to cover include copyright and trademark infringement, both knowing and unknowing. Help your employees know what's right and what's not.
  • Be concise and future-focused — you’ll update the policy frequently as the tools evolve. Make sure people know what version is the most recent one.
  • Up front, be clear about the risks and the consequences of ignoring the policy, including specific employment-related ones.
  • Give them a clear point of contact to learn more, ask questions and get clarification.
  • Finally, launch it properly and give it the respect it deserves. Don't just send the policy in an email or bark it down the chain; that's the last thing you should do with something as prevalent as generative AI will become (right now, just 1–5% of the world's population knows about these tools). Present it, position it, and give it the senior-leadership backing AI deserves. Make it interactive, invite criticism and, above all, make clear that different departments have fed into it and that this is just version one.

If you have something to lose, you have enough to protect.

Generative AI will transform most, if not all, industries in some way. Getting the balance wrong could mean the difference between keeping and losing clients, between being sued and not being sued. Equally, an untrained staff cohort won't give you the innovation you need to stay competitive and spot opportunities, so look to enable and empower before you block the tools entirely.

'What Did OpenAI Do This Week?' covers everything the number one AI brand is up to now, next and future. Sign up today for $99 a year or $15 a month.

Written by TBD Conference

Technology. Behaviour. Data. Only the honest get asked on this stage. Only the brave accept. Find out more > thetbdconference.com
