
The Only Complete, One-Size-Fits-All Employee A.I. Guidelines Your Company Will Ever Need (No, Not Really…)

AI, AI, AI. Artificial Intelligence is all we hear about these days. Ever notice that if you repeat the letters several times and add an exclamation point, it sounds like an exasperated shriek or doomed battle cry? “AIAIAIAIAI!” 😱 I feel your pain. As a lawyer practicing in technology and intellectual property—where new legal developments hurtle toward you faster than you can say “Hey Siri”—adding AI to the mix turns the learning curve into a vertical line.

Companies are trying to keep up and develop appropriate policies for AI use by their employees, especially when the AI is provided by outside vendors (like OpenAI, for example). I’ve received numerous inquiries since ChatGPT began making headlines. How can employees use AI at work? What should they be allowed to do with it, and what should be prohibited? How should generative output be used? And what are the potential legal consequences?

There’s no one correct answer to these questions yet. (Is there ever when lawyers get involved?) The answers, like the questions themselves, will change and evolve as AI continues to push boundaries. A few trends are emerging, though, that can provide employers with some general guidance for now about what employees should do when using certain types of AI in their jobs. And different kinds of AI used in different industries will require customized policies.

For instance, there’s the much-talked-about generative AI, which creates content like text, images, audio, video, and software code. There’s also predictive AI, which identifies data patterns in past actions and events to forecast future ones. Monitoring or observational AI analyzes performance analytics and detects anomalies in real time. Decisional AI automates or augments human decision-making by using data-driven insights to set product pricing, target advertisements, screen job candidates, and evaluate creditworthiness, for example.

For the purposes of this post, though, I’m focusing just on generative AI like ChatGPT and DALL-E, since it has currently captured the public’s imagination (or fear). And the trite saying that “Content is King” is at least as relevant now as it was back in the mid-90s, in the early commercial days of the internet. Perhaps even more so, since AI consumes massive amounts of third-party data—including your data—to function and generate its output.

Many policies I’ve reviewed are somewhat amorphous and unhelpful when it comes to setting firm boundaries and specific best practices. Catch-all language like “employees must use AI systems responsibly and ethically,” “employees must use AI systems in compliance with all applicable laws and regulations,” or “employees must not use AI systems to discriminate against any individuals” is more aspirational in nature and offers little concrete guidance. These generic principles are about as helpful as a company driving policy dictating that employees shouldn’t crash their cars into trees or close their eyes when driving. More is needed.

I’ll try to give some specific guidance. Four areas currently carry the most serious legal implications for a company using generative AI. These should be clearly addressed in employee guidelines because they are the most legally problematic—which in lawyer-speak means the company can be sued or lose certain rights. And even if the legal impact is minimal, a situation that turns into a viral public relations fiasco can sometimes be worse than any legal consequences. These four areas are:


(1)  Input data used to produce generative output may, for example, be confidential or contain trade secrets, incorporate third-party copyrighted content, or violate the rights of third parties;

(2)  Output is not currently protectable under patent or copyright law;

(3)  Output could violate the confidentiality, intellectual property, or other rights of third parties; and

(4)  Output is biased, incomplete, or just wrong (a “hallucination”), which employees then rely upon when making various business decisions.


I’ll discuss each and give my perspective upfront. And of course, some technology-based solutions have already emerged to address these issues too. But like anything else, caution is warranted before relying on them too heavily.

1. Input: Employees Cannot Use Confidential Company Information or Other Confidential Third-Party Information as Input Data


Nothing you put into generative AI like ChatGPT remains confidential. That shouldn’t be a shock to learn these days. Even if the AI’s creator claims that your data won’t be saved or integrated into their system as training data, do you really trust it? I don’t. Not now… and maybe not ever.

Once confidential information is disclosed, it’s not confidential anymore. Once a trade secret is disclosed, it’s no longer a secret, and it’s gone forever. Information is always confidential and secure. Until it’s not. There are almost daily news stories about data breaches, whether the culprit is a tech-savvy teenager, a criminal syndicate, or a state actor with malign intent. So even if an AI company tells you that your data won’t be disclosed or integrated into its system for training, healthy skepticism is warranted. My somewhat informed view? Don’t disclose to AI!

Most well-drafted existing employee policies already contain prohibitions against disclosing confidential information and trade secrets, so extending them to cover AI isn’t a big stretch. But it needs to be made very clear to employees that using generative AI can result in disclosure. I’ve encountered many people—including fellow lawyers with ethical obligations to keep client information confidential—who are simply unaware of how generative AI treats their input data. Your employee policy needs to explain it clearly.

What is third-party confidential information? It could be several things, but here’s a common scenario: information from outside the company that a current employee has a legal or contractual duty to keep confidential. For example, someone hired from another company may have signed a non-disclosure agreement with their prior employer to prevent disclosure of its trade secrets. If the employee uses that information as part of an AI inquiry, those obligations may have been violated. While that’s definitely a problem for the employee, it could also be one for their new employer. The policy needs to make clear that confidential information can include non-company information as well.

2. Input: Employees Must Inform the Company Before Using Third-Party Copyrighted Content as Input Data to Produce “Business-Critical” Content


This is a tough one to enforce since there’s so much copyrightable content that people use as input data. Using it isn’t legally problematic per se (and cases addressing this issue are pending, thanks to AI’s own massive and ongoing use of such content). All sorts of factors come into play, too numerous to mention here. Yes, you’re creating a copy of that content for ChatGPT to process, and creating other output which could then also potentially violate a copyright owner’s rights. But your use may also qualify as a “fair use,” a complicated defense that I won’t delve into now.

What could such business-critical content be? It could be marketing or media content that will be disseminated publicly. It might be content related to the development of new products or services. It might be training materials for employees. It could be computer code that an employee wants to enhance, re-write, or re-purpose for the company’s use. It’s anything that can be essential to a company’s operation or success.

For example, if you take a competitor’s unique marketing content and ask AI to rewrite or modify it to feature your company’s product instead, is that a problem? Have you infringed—say, by making an unauthorized derivative work—or perhaps given rise to an unfair competition claim? Possibly, but it depends upon what you use the output for and its similarity to the underlying work. If it’s just to generate some ideas that you then build upon using your own creativity and expression, then maybe not. But the company should be the one to make that call.

This can create an administrative headache, so limiting the requirement to “business-critical” content or only certain types of content is sensible. An employee needs to know when to seek guidance from the company. And again, there are now technology tools that can manage AI inputs and, for example, prevent certain information from being sent to outside AI vendors or inform the company when particular prompts are made. It may be a pain at times, but it’s a necessary one for now.
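For readers curious what that kind of guardrail might look like under the hood, here is a minimal, purely illustrative sketch. The patterns and function names are hypothetical (this isn't any particular vendor's product); the idea is simply to screen a prompt for markers of confidential or restricted material before it ever leaves the company for an outside AI service.

```python
import re

# Hypothetical, illustrative patterns only. A real deployment would rely on the
# company's own data-classification labels, DLP rules, or a vendor tool.
RESTRICTED_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\btrade secret\b",
    r"(?i)\battorney[- ]client\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # looks like a U.S. Social Security number
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a prompt bound for an outside AI vendor."""
    matches = [p for p in RESTRICTED_PATTERNS if re.search(p, prompt)]
    return (not matches, matches)

if __name__ == "__main__":
    allowed, matches = screen_prompt("Summarize this CONFIDENTIAL pricing model.")
    if not allowed:
        # This is where a real tool would block the request and notify the
        # company, consistent with the policy discussed above.
        print("Prompt blocked; flagged patterns:", matches)
```

Again, this is only a sketch of the concept, not a recommendation of any particular tool; the practical point is that the check happens before the prompt leaves the building.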

3. Output: Employees Must Not Create Business-Critical Output that the Company Seeks to Own and Protect

While many legal issues surrounding AI are still evolving, one has become clear: output that is 100% AI-generated cannot be patented or copyrighted. (Trademarks are another matter.) Patent law requires a human inventor, and copyright law requires a human author. And AI isn’t considered human (at least not yet—Cylon, maybe?). This may change if the law changes, but that’s where things stand now.

For example, if a company’s engineering team is using AI to help design new products or improve existing ones, any potentially patentable ideas generated entirely by the AI can’t be patented. The same is true if the company uses AI to write its software. If someone in the company’s marketing department uses AI to generate images, videos, or advertising copy, those can’t be copyrighted either.

What does this mean practically? If you can’t copyright and own it, then you can’t prevent others from using it either (or make them pay for its use)—it’s effectively in the public domain upon creation. Thus, if the company wants to keep a competitor from using that cool new advertisement, or comprehensive new training manual, or attention-getting image that AI generated, it will be out of luck. The rule is a simple one: no business-critical company content should be created using AI, especially if the company wants to own and protect it.

What about mixed AI- and human-generated content? That’s where AI generates some content, but a human creator then modifies or enhances it. Is that copyrightable? This is a more complex issue, depending upon the nature of the contributions and the resulting content. You can read about that in my earlier blog post here. I expect we’ll see some litigation over the next few years on this issue. For now, though, it’s essential that employees themselves actually create the content the company plans to own and use.

4. Output: No Reliance Upon Output for Business-Critical Company Decisions Without (Human!) Review


AI-generated content should be reviewed by at least one other person before anyone relies on it for important decisions (especially if the content will be used publicly). Yes, this slows the process down if management needs to review and analyze it first, but better safe than sorry. There can be legal consequences for skipping that step.

For example, if an employee uses AI to draft the company’s website privacy policy and it misleads the site’s users about the company’s privacy practices or how it uses their data, it can prompt an enforcement action by the Federal Trade Commission or a state Attorney General’s office for unfair or deceptive business practices. And those can come with hefty penalties.

If AI is used to pre-screen job applicants and selects only white men under the age of 50 from a diverse pool, that could lead to a discrimination claim. And I’ve seen several news stories about lawyers who included AI-generated case citations in court filings, only for the citations to turn out to be fictitious, resulting in sanctions—not to mention angry (former) clients.

Business-critical content must be checked by at least one real, honest-to-goodness, actual human being at the company who can apply those painfully slow organic critical thinking skills. Yes, it takes more time for electrical impulses to jump from one neuron to the next, but it can help avoid major legal headaches too. And if your company employs people anyway, why not use them?


As a technology lawyer, I’ve learned (sometimes the hard way) that no matter how cool, innovative, or transformative a new technology claims to be, you should never rely on it 100% of the time. The human element should never be removed entirely from the equation, especially because it’s flawed humans who create that technology—at least for now, until AI does that for us too. Therefore, I won’t be as quick as others to genuflect and declare fealty to our coming AI overlords. I wonder if ChatGPT can tell me when the best time to do that would be?