Steve Elcock, Founder and CEO of elementsuite, the leading SaaS HR and Workforce Management platform, shares his opinion on the AI Safety Summit 2023
As a nation, the UK has a history of shaping the future. But it’s essential that we put in place regulation of AI’s uses so that, as a country, we can manage its potential and its risks in equal measure. This week’s AI Safety Summit has been a strong starting point for that industry discussion, and a chance to boost UK tech investment as we edge closer to responsible use of this powerful technology.
Perhaps the most notable outcome of the summit so far is the Bletchley Declaration, which states:
“We welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.
Yet, despite this acknowledgement of the dangers and of the crucial factors that must be considered in the use of AI, the discussion ultimately remains rather indeterminate. Data bias needs to be meticulously addressed to ensure that AI systems are fair, ethical, and truly beneficial to society. Failure to do so not only perpetuates discrimination and inequality, but also erodes trust in AI technologies, inviting what Elon Musk aptly called “regulations that inhibit the positive side of AI”.
Data bias and current AI threats
Data bias refers to the presence of systematic and unfair inaccuracies or prejudices in data. This can lead to incorrect, discriminatory, or skewed outcomes. Data bias has already emerged as a problem, showing up in organisations as discrimination, loss of talent and innovation, and legal and regulatory risks.
Organisations must be able to identify, mitigate, and prevent bias in their data and algorithms to ensure fairness, equity, and diversity within the workplace. This means investing in diverse and representative datasets, using “fairness-aware” machine learning techniques, and ensuring transparency in data collection and modelling. Continual monitoring and auditing of systems and data are vital to detect and correct bias as it arises; a simple example of such a check is sketched below.
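To make this concrete, here is a minimal sketch of one such monitoring check: comparing selection rates across demographic groups in a system’s outcomes (a basic demographic parity audit). The dataset, group labels and the 0.1 tolerance are hypothetical illustrations only; a real audit would use far larger samples and multiple fairness metrics.

```python
# A minimal sketch of one "fairness-aware" check: measuring the demographic
# parity gap in a model's outcomes. Data and threshold are hypothetical.

from collections import defaultdict

def selection_rates(records):
    """Return the positive-outcome rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, selected in records:
        counts[group][0] += 1 if selected else 0
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical screening outcomes: (group, was_shortlisted)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(outcomes)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {parity_gap:.2f}")

# Flag for human review if the gap exceeds an agreed tolerance.
if parity_gap > 0.1:  # threshold chosen purely for illustration
    print("Warning: outcomes differ across groups; investigate for bias.")
```

Run continually over live decisions, even a simple gap measure like this gives auditors an early signal that outcomes are drifting apart across groups before the disparity becomes entrenched.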
AI also presents a particular privacy challenge in the context of GDPR: the “right to be forgotten” cannot easily be implemented in AI models, because personal data absorbed during training cannot simply be deleted from a trained model without retraining it. In security terms, “AI as UI” opportunities need to be counterbalanced with increased security around future voice and facial recognition technologies, as these can be spoofed via deepfakes.
Strategising to mitigate AI risks
Alongside those mentioned in the Declaration, there are some key considerations for the UK Government as it strategises for safer AI systems:
- Assembling an expert AI taskforce – This taskforce should be multi-dimensional, drawing from across the industry and beyond to include AI researchers, neuroscientists, legal experts, philosophers, sociologists and psychologists; only the widest range of representatives will suffice.
- Discussions must involve open source – It’s not enough to have support from the big players, such as Google and OpenAI. Independent academic papers must also be treated as key sources of information, and the open source community, which remains strong, must be a critical voice in AI risk mitigation.
- Investing in UK tech will power our AI expertise – Greater support for and investment in the growth of UK tech in general will drive the growth of AI expertise. The UK seems to be behind in understanding the capacity of tech industries to drive economic growth, as is evident in the lack of growth funding available to tech businesses in their start-up and scale-up phases. Bridging the gap between academia and industry with strong UK academic partnerships (as in the US) will develop the talent the UK is lacking, namely data scientists, prompt engineers, and model trainers.
- Prioritising critical considerations for privacy and safety – Even accepting that runaway artificial superintelligence may be a possibility in the future, there is a greater need to prioritise the real near-term AI risks. These include data bias, the very real possibility of deepfakes, and the impact on older, less tech-savvy generations, who are especially vulnerable to privacy and security issues.
- Shaping regulation around AI in data – Effective regulation will involve spending as much on “adversarial” AI as on “generative/positive” AI. A sensible approach is to mandate an ISO standard that requires organisations to document their policies, guidelines and business practices, much like the ISO 27001 standard does for information security management systems.
The gathering of global industry experts at the AI Safety Summit has undoubtedly sparked necessary discussions on AI responsibility, in step with the rapidly advancing technology. However, whilst it has kicked off some tactical conversations, the connection between these regulatory starting blocks and the tech agenda in the public domain is yet to be made. As post-summit discussions continue, it is crucial that real and imminent resolutions are found for many of today’s big challenges in an evolving AI world.