As AI continues its meteoric rise in the 2020s, American businesses are using it to revolutionize operations and the way they serve customers. In addition to automating tasks, analyzing data, and driving sales and marketing, AI is also boosting recruitment, accounting/finance, cybersecurity, logistics, and just about every other function or process.
“Artificial intelligence is rapidly reshaping the business environment by fostering innovation, enhancing efficiency, and creating competitive advantages,” says Karen Painter Randall, partner and chair of the cybersecurity, data privacy and incident response group at the law firm of Connell Foley LLP, headquartered in Roseland. “However, these advancements introduce complex legal considerations that organizations must address through comprehensive training, effective tools, and robust operational safeguards.”
Steven Teppler, a partner at Roseland-based Mandelbaum Barrett and chair of the firm’s cybersecurity and data privacy practice group, points to the impact of AI on areas like data governance, ensuring transparency, and protecting proprietary algorithms and training data, saying, “AI is transforming business operations at unprecedented – and increasing – speed, forcing companies to grapple with novel legal issues surrounding compliance, privacy, liability, and accountability.”
Experts emphasize the need for companies to establish an AI Use policy and make sure it’s clearly communicated to employees. Connell Foley, for example, helps clients evaluate AI integration risk and put together clear guidelines for acceptable use, workforce training, accountability measures, and AI incident response protocols to ensure security and compliance controls.
“Employees using public AI tools like ChatGPT may inadvertently expose sensitive or proprietary data. The Samsung code leak is a prime example,” says Randall, referring to a 2023 incident when Samsung employees accidentally leaked private information, making the company’s trade secrets part of ChatGPT’s training data. “We help clients establish AI governance frameworks, conduct risk assessments and bias audits, review vendor contracts, advise on regulatory compliance, develop internal AI Use policies and educate executives and employees to promote a culture of transparency and accountability and avoid security, privacy, and contractual breaches.”
According to Nick Duston of Bridgewater-based Norris McLaughlin, P.A., a member of the litigation practice group and chair of the firm’s AI committee, a company’s AI policy should explain potential risks and lay out a well-defined plan for mitigating them, including:
Making sure employees are clear on what data may or may not be exposed to AI.
Establishing a protocol for verifying AI-generated content to avoid issues with hallucinations (responses that contain false or misleading information presented as fact).
Setting a policy for when employees should disclose to others what information has been generated by AI.
Carefully vetting and selecting only those AI tools that mitigate risk through proper data security practices, while ensuring employees don’t use other, unapproved AI tools – particularly those available for free.
Another major factor in using AI is determining who is going to own the information – both what’s input into the AI and what’s output from it – as well as how it can be used and kept safe. According to Wendi Uzar, patent, trademark and copyright partner at Madison-based Riker Danzig, companies should consider the terms and conditions of the AI tool and make sure they maintain ownership of the material put into it.
“In negotiating a contract with an AI company, if a business is creating its own AI tool, or if you’re reviewing the terms and conditions of an AI tool that’s publicly available, you have to confirm the AI tool is not going to share your information,” says Uzar, whose clients include publishing, insurance, and software companies. “Typically, we demand all search queries and inputted information be deleted within a certain time frame – such as three to six months.”
In creating marketing campaigns, Uzar advises clients to make sure the AI tool is not regurgitating false information; she says many attorneys have gotten into trouble for citing cases that don’t exist – or for providing photos copyrighted or owned by a third party. Further, any copyrighted information a company puts on its website, including downloadable articles, forms, and other data, should be accompanied by terms and conditions prohibiting a third party from using this information to train any AI tool.
“There are many cases currently pending in the district courts, and it’s not clear how they will be decided,” she says. “Therefore, if you have terms and conditions on your website governing how your information can be handled, then at least you have an enforceable breach of contract claim to keep that information confidential and protected.”
In addition to internal AI policies, companies should be concerned about their vendors and whether the AI services they deliver could put the company at risk. For example, Teppler has become aware of phone service providers that use AI to evaluate a company’s client voicemails and rank them by “temperature,” or how friendly, unfriendly, or even urgent a message is.
For healthcare providers, using such a service to process what is likely patient information could expose them to liability for potential HIPAA violations imposed by federal regulators like the Department of Health and Human Services.
Even if this client information is not covered by federal laws, such processing could also violate any number of state statutes, such as the New York SHIELD Act, which requires businesses to protect the private information of clients who are New York residents, or this year’s New Jersey Data Privacy Act, which provides consumers who are New Jersey residents with enhanced rights over their “sensitive” personal information and imposes requirements on businesses that collect it.
“This is a frontier issue: If your business, or a vendor to your business, uses AI to analyze or process biometric information – and, in particular, if that information also includes sensitive or private information – you need to take appropriate safeguards and measures to ensure you’re not subject to a vendor vulnerability that would put your business at risk,” Teppler says.
Duston further clarifies the matter, saying, “Users of AI must be careful to protect any information they have an obligation to protect, such as attorneys’ maintenance of privileged information, doctors’ duties under HIPAA, or any financial institution’s handling of customers’ personal and banking information.”
According to Randall, most companies could benefit from strategic counsel in navigating the rapidly evolving AI landscape. “Bringing together a multidisciplinary team of legal, business, and technology leaders enables management to better understand and navigate risks, facilitating an innovative, safe, and effective implementation of AI within the organization,” she says. “Using a holistic approach helps to prevent blind spots and reduce the chance of costly missteps.”