Generative AI and Cybersecurity

We would like to extend our appreciation to Ian L Paterson for joining us virtually to discuss the cybersecurity implications of AI usage. As CEO of the cybersecurity firm Plurilock, Ian is deeply familiar with both the advantages and the risks of using AI to assist with white-collar tasks.

While AI tools can help us work more quickly and effectively, the catch is that they can leak the data they are given, which is a major concern if your organization handles private or confidential information. Ian identified five ways for organizations to protect their data: awareness, governance, access control, guardrails, and evolution.

- Awareness: teaching users about the risks specific to their context and spelling out the possible consequences of a data leak.
- Governance: using policy to set specific restrictions on how AI may be used, along with consequences for violations. Plurilock has a sample AI policy available on their website for companies to reference.
- Access controls: software solutions that help enforce AI use policy by blocking access to AI tools unless permission has been granted.
- Guardrails: software that protects data within the AI tool itself, for example by automatically anonymizing or redacting sensitive information before it is submitted (see the sketch below).
- Evolution: because AI tools change rapidly, policies and protective tools must be continuously updated to keep pace.
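Ian did not walk through implementation details, but to make the guardrails idea concrete, here is a minimal sketch of prompt redaction in Python. Everything in it is an illustrative assumption: the patterns, the placeholder format, and the `redact` helper are hypothetical, and production guardrails (including Plurilock's) use far more sophisticated detection than simple regular expressions.

```python
import re

# Illustrative patterns only; a real guardrail would use more robust detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SIN":   re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN format
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labeled placeholders
    before the text leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = ("Draft a reply to jane.doe@example.com, who called from "
          "867-555-0199 about SIN 123-456-789.")
print(redact(prompt))
# Draft a reply to [EMAIL REDACTED], who called from [PHONE REDACTED]
# about SIN [SIN REDACTED].
```

The key design point is that redaction happens on the user's side, before anything is sent to the AI service, so sensitive values never enter the tool's training or logging pipeline in the first place.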

Thank you, Ian, for providing these insights into protecting our data while taking advantage of the possibilities of AI!