We were honoured to be joined by Katrina Ingram, founder and CEO of Ethically Aligned AI, to discuss the increasingly relevant issue of AI ethics.
While any new technology comes with the potential for unintended consequences, the rapid growth and uptake of generative AI has created a number of ethically complex situations. AI ethics centres mainly on issues of privacy, fairness, transparency, explainability and accountability. These issues are particularly relevant when AI is used for surveillance, such as sharing Ring camera footage with police, or for deception, such as creating deepfakes of celebrities and political figures. AI models used in education, hiring, banking and social services can harm individuals if the models are influenced by biases. Such biases can arise because models are trained on data, and data is an imperfect representation of reality: missing data, historical bias and biases introduced by the instruments used to collect it can all distort how information is represented in a large-scale dataset. Queries to AI models such as ChatGPT also carry a significantly higher environmental impact than a basic Google search.
Katrina posits that we can change the ways we use AI to address these ethical concerns. As users, we can be more discriminating about when we use AI and say no to unethical or unnecessary uses. We can also advocate for better regulation and for product choices that safeguard our data, uphold human rights and minimize environmental impact.
Thank you, Katrina, for your thoughtful contribution to our AI learning series!