Developing an AI use policy

How businesses can benefit from using AI while minimising risks

If you use ChatGPT or other AI tools for work-related tasks, does your boss know? If you answered “No”, you’re not alone. Almost seven out of ten professionals (68 per cent) using AI at work are doing so secretly, according to a survey by the workplace communication platform Fishbowl. Don’t force your staff to do the same, says Iain Simmons at corporate legal services provider Arbor Law. Instead, develop an AI use policy.

The potential impact of AI on our lives is one of the most discussed and debated topics of our time. Since the launch of ChatGPT in November 2022, the development of AI tools for business and personal use has accelerated. Companies are assessing the resulting wave of new and emerging AI services to determine which of them can help cut costs, for example by automating more arduous tasks.

Last year, three in four companies (74 per cent) had started testing generative AI technologies and two in three (65 per cent) were already using them internally, according to Deloitte’s State of Ethics and Trust in Technology survey. With more and more AI tools and applications coming to market, these numbers may now be much higher, and businesses need to move quickly to ensure they have the policies and procedures in place to benefit from this technology while minimising the risks.

Shadow AI

Unsanctioned use of AI tools at work, known as shadow AI, is a growing issue. Essentially unauthorised technology implemented without any controls in place, shadow AI could pose security threats through potential data breaches, undermine the quality and consistency of work delivered, disrupt operations and even violate industry regulations.

To address this challenge, company rules and procedures need to keep up with the rise of AI, and employees need to be educated about what is and is not permissible. Key to this is the development and communication of clear policies and guidelines concerning the use of AI within business operations. This starts with an AI use policy.

An AI use policy is designed to ensure that any AI technology your business uses is deployed in a safe, reliable and appropriate manner that minimises risk. It should inform and guide your employees on how AI can be used within your business.

It would be impractical to list all potential rules for using AI in the workplace, but there are a few bases that any AI use policy must cover.

Purpose and scope

In the AI use policy’s introduction and purpose section(s), it is always helpful to set the scene. Define the overall context, purpose and scope of the policy, including which staff and tasks it applies to. Are there any related company policies that could be referenced?

Approval process

List any pre-approved AI tools (e.g. OpenAI’s ChatGPT, Google Gemini) and consider including any tools built on them, such as Microsoft’s Copilot in Edge, which runs on OpenAI’s GPT models. What is the process for approving other or new AI tools? Consider setting out the relevant evaluation criteria in the policy, for example a high-level minimum standard such as “the AI tool should be legally compliant, transparent, accountable, trustworthy, safe, secure and ethical”. Other things to consider include evaluating vendors, reviewing terms and conditions and conducting a risk-benefit analysis.
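To make this concrete, the sketch below shows one way a pre-approved tools register and the evaluation checklist could be recorded, here in Python. The class name, criteria fields and pass/fail values are illustrative assumptions only, not a recommended standard or a real assessment.

```python
from dataclasses import dataclass

# Illustrative evaluation checklist mirroring the policy's minimum standard.
# Field names and criteria are example assumptions; adapt them to your own policy.
@dataclass
class AIToolAssessment:
    name: str
    vendor: str
    legally_compliant: bool = False
    transparent_and_accountable: bool = False
    safe_and_secure: bool = False
    ethical_and_trustworthy: bool = False
    terms_reviewed: bool = False       # vendor terms and conditions checked
    risk_benefit_done: bool = False    # risk-benefit analysis completed

    def pre_approved(self) -> bool:
        """A tool is only pre-approved when every criterion is satisfied."""
        return all([
            self.legally_compliant, self.transparent_and_accountable,
            self.safe_and_secure, self.ethical_and_trustworthy,
            self.terms_reviewed, self.risk_benefit_done,
        ])


# Example register entry; the values are placeholders, not a real assessment.
chatgpt = AIToolAssessment(
    name="ChatGPT", vendor="OpenAI",
    legally_compliant=True, transparent_and_accountable=True,
    safe_and_secure=True, ethical_and_trustworthy=True,
    terms_reviewed=True, risk_benefit_done=True,
)
print(f"{chatgpt.name}: {'pre-approved' if chatgpt.pre_approved() else 'pending review'}")
```

Even a simple register like this gives the approval process a single place to record which tools have been assessed, by whom and against which criteria.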

Rules of use

Perhaps the most important part for the majority of your employees, this section should set specific dos and don’ts for inputs and outputs, to ensure compliance with data security, privacy and ethical standards. For example: “Don’t input any company-confidential, commercially sensitive or proprietary information”, “Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias” and “Don’t input any customer’s or co-worker’s personal data”.
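As a purely illustrative example, and not a substitute for proper data loss prevention controls, a business could back these input rules with a simple check that flags obvious red flags before a prompt is pasted into an AI tool. The markers and pattern below are assumptions chosen for the sketch.

```python
import re

# Naive, illustrative pre-submission check for prompts. Real data loss
# prevention tooling is far more sophisticated; this only flags obvious
# markers so staff pause before pasting text into an AI tool.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CONFIDENTIAL_MARKERS = ("confidential", "commercially sensitive", "proprietary")

def flag_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, why a prompt may breach the input rules."""
    reasons = []
    lowered = prompt.lower()
    for marker in CONFIDENTIAL_MARKERS:
        if marker in lowered:
            reasons.append(f"contains the marker '{marker}'")
    if EMAIL_PATTERN.search(prompt):
        reasons.append("appears to contain an email address (personal data)")
    return reasons

print(flag_prompt("CONFIDENTIAL: draft pricing terms for jane.doe@example.com"))
# flags both the confidentiality marker and the email address
```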

For outputs, guidance can reiterate to staff the potential for misinformation or ‘hallucinations’ generated by AI. Consider rules such as “Clearly label any AI-generated content”, “Don’t share any output without careful fact-checking” or “Make sure that a human has the final decision when using AI to help make a decision which could impact any living person (for example, employees/applicants, or customers)”.
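The output rules can be made similarly tangible. The sketch below, again a hypothetical illustration rather than a prescribed workflow, records who fact-checked an AI-assisted output and who owns any decision it informs, and prepends the kind of label the policy asks for.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record of an AI-assisted output, reflecting the output rules:
# label the content, fact-check it, and keep a named human accountable for
# any decision that could affect a living person. Field names are examples.
@dataclass
class AIAssistedOutput:
    content: str
    ai_tool: str                           # which approved tool produced the draft
    affects_a_person: bool                 # e.g. decisions about employees, applicants, customers
    fact_checked_by: Optional[str] = None  # who verified the output before sharing
    decision_owner: Optional[str] = None   # human with the final say, where needed

    def labelled(self) -> str:
        """Rule: clearly label any AI-generated content."""
        return f"[AI-generated using {self.ai_tool}, reviewed {date.today()}] {self.content}"

    def ready_to_share(self) -> bool:
        """Rule: fact-check before sharing; a human owns any decision about a person."""
        if self.fact_checked_by is None:
            return False
        if self.affects_a_person and self.decision_owner is None:
            return False
        return True


draft = AIAssistedOutput(
    content="Summary of shortlisted applicants...",
    ai_tool="ChatGPT",
    affects_a_person=True,
    fact_checked_by="HR manager",
    decision_owner="Hiring manager",
)
print(draft.labelled())
print("Ready to share:", draft.ready_to_share())
```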


Developing an AI use policy will help mitigate the risks of shadow AI, ensuring your business can benefit from the rich rewards of AI while remaining suitably protected and operating within legal and regulatory boundaries. For advice and support on drafting and implementing AI use policies, visit www.Arbor.Law.com

About the Author

Iain Simmons is a senior commercial lawyer at Arbor Law, with experience across multiple sectors and jurisdictions, specialising in technology, media and telecoms (TMT). Iain has held senior in-house, Head of Legal and General Counsel positions at several platform, technology and data businesses, from start-ups/scale-ups to multi-nationals.

He has gained a breadth of legal experience, including contract/consumer law, corporate law and governance, M&A, IP, AI/technology, Data Privacy/GDPR and dispute management & resolution.
