Considerations When Utilizing Generative AI
Generative AI tools have become the shiny new toy and perhaps rightfully so. There are various ways teams and organizations may utilize them to help scale and drive business but the one I see most often is as an “assistant” of sorts for Sales and DevOps.
For example, these tools may provide immense relief to these teams when it comes to administrative tasks and allow team members to focus on customer interactions and product value. In doing so, individuals may be tempted to share personal and/or company proprietary information with the tool. We recommend organizations consider the following concepts to mitigate the assumed risks of using these tools.
Restrictions When Using Publicly Available Tools
Establish guidelines for employees/contractors that wish to use publicly available tools.
- Prohibit the use of customer PII or other confidential information
- Tie to other policies (i.e., data classification and definitions of “PII” and “confidential information”) to make communication consistent
- Prohibit the use of organizational data (i.e., product information, personnel)
- Prohibit the use of proprietary/confidential data sourced from other third parties
- If an available option, opt out of allowing the tool to train using the information provided
Contractual Agreements
Establish commercial relationships with tools prior to use. Both sides agree on the service level commitments made and there is (hopefully) an emphasis on the security and confidentiality of the data/information shared with the tool.
AI Vendor Procurement Process
Explicitly require approval of a generative AI tool prior to integrating with your system(s). This does not have to be a standalone policy and can be included in your Vendor/Third-Party management policy.
This may already be part of your vendor management program but it’s also wise to classify the tools (if using multiple) by their allowed use cases.
- Unrestricted – may be used freely
- Restricted – may only be used in specific circumstances (define use cases as specifically as you can)
- Prohibited – drawback to this is that it may become cumbersome to maintain and may create more questions as more and more tools become available
Security Awareness Training
Hopefully security awareness training is already part of your cybersecurity program. If you’ve ever helped construct a security awareness training exercise, you are aware that much of the information is also found in company policies. As noted previously, your organization should establish policy and procedure with respect to AI but also be sure to communicate this to your employees and contractors. We suggest adding training regarding AI tooling to your current program that includes, but is not limited to:
- Background – what generative AI/LLMs are and how they work
- Vendors – what tools are allowed and which are not
- Risks – how the tools may be used in your organization and what types of data may be input
Disclaimer
If you’re using AI to generate responses to inquiries or any other type of output, it’s wise to add a disclaimer to the communication. This lets the reader know the response was generated by using AI and that there is some responsibility on them to proof, review, and/or edit before they use it themselves.
Technical Considerations
- If submitting source code to AI tools, ensure the information will not be stored and/or used for training purposes
- Be careful to not include code samples that mimic or reflect your organization’s proprietary information
- Engineering teams should conduct input validation testing before release to a production environment in an effort to mitigate the risk of prompt injection
- Monitor/audit – identify and authenticate users of the service to prevent malicious accounts from gaining access
These recommended considerations are not exhaustive. Organizations should perform their own risk assessment with respect to AI tools actually used and implement appropriate control activities to mitigate the risks to acceptable levels.
Taylor Gavigan is a Manager at Dansa D’Arata Soucia LLP. Taylor is responsible for managing the firm’s attestation department, which includes SOC 1, SOC 2, SOC for Cybersecurity, SOC for Supply Chain, regulatory compliance examinations (i.e., HIPAA, GDPR), and ISO 27001 internal audit engagements.