
Thoughts on the AI Summit 2023: A discussion of near-term data risks with Steve Elcock

Thoughts on the AI Summit

Steve Elcock, Founder and CEO of elementsuite, the leading SaaS HR and Workforce Management platform, shares his opinion on the AI Safety Summit 2023.

As a nation, the UK has a history of shaping the future. But it is essential that we put in place regulation of the uses of AI so that, as a country, we can manage AI’s potential and its risks in equal measure. This week’s AI Safety Summit has been a strong starting point for this industry discussion and a chance to boost UK tech investment as we edge closer to responsible use of this powerful technology.

Perhaps the most notable outcome of the summit thus far is the Bletchley Declaration, which notes:

“We welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.”

Yet, despite an acknowledgement of the dangers and crucial factors that need to be considered in the use of AI, the discussion ultimately remains rather indeterminate in nature. Data bias needs to be meticulously addressed to ensure that AI systems are fair, ethical, and truly beneficial to society. Failure to do so not only perpetuates discrimination and inequality, but also erodes trust in AI technologies and invites, as Elon Musk aptly put it, “regulations that inhibit the positive side of AI”.

Data bias and current AI threats

Data bias refers to the presence of systematic and unfair inaccuracies or prejudices in data. This can lead to incorrect, discriminatory, or skewed outcomes. Data bias has already emerged as a problem, showing up in organisations as discrimination, loss of talent and innovation, and legal and regulatory risks.

Organisations must be able to identify, mitigate, and prevent bias in their data and algorithms to ensure fairness, equity, and diversity within the workplace. This means investing in diverse and representative datasets, using “fairness-aware” machine learning techniques, and ensuring transparency in their data collection and modelling. Continual monitoring and auditing of systems and data are vital to detect and correct bias as it arises.
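To make the monitoring and auditing point concrete, here is a minimal sketch of one common bias check: comparing selection rates across groups in hiring-style outcomes and flagging any group that falls below the widely cited “four-fifths” rule of thumb. This is illustrative only, not elementsuite’s implementation; the function names and sample data are hypothetical.

```python
# Minimal bias-audit sketch (illustrative): compare selection rates per group
# and flag disparities under the "four-fifths" rule of thumb.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, e.g. ("A", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Return groups whose selection rate is below `threshold` x the highest rate."""
    highest = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * highest}

# Hypothetical data: (group label, whether the candidate was shortlisted)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(sample)
print(rates)                   # e.g. {'A': 0.67, 'B': 0.25}
print(flag_disparity(rates))   # groups whose outcomes warrant investigation
```

In practice, a check like this would run as part of a regular audit over live workforce data, alongside fairness-aware training techniques and dataset reviews, rather than as a one-off test.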

Another particular challenge that AI presents concerns privacy in the context of GDPR, as the “right to be forgotten” cannot easily be implemented in AI models. In security terms, “AI as UI” opportunities need to be counterbalanced by increased security around future voice and facial recognition technologies, as these can be spoofed via deepfakes.

Strategising to mitigate AI risks

Alongside those mentioned in the Declaration, there are some key considerations for the UK Government when strategising to ensure safer AI systems:

  1. Assembling an expert AI taskforce – This squad of AI professionals should be multi-dimensional, drawing from across the industry to include AI researchers, neuroscientists, legal experts, philosophers, sociologists and psychologists; only the widest range of industry representatives will suffice.
  2. Discussions must involve open source – It’s not enough to have support from the big players, such as Google and OpenAI. Independent academic papers must also be considered as key sources of information. The open source community is strong and must also be a critical voice in AI risk mitigation.
  3. Investing in UK tech will power our AI expertise – Giving greater support and investment to the growth of UK tech in general will drive the growth of AI expertise. The UK seems to be behind in understanding the capacity of tech industries to drive economic growth. This is evident in the lack of funding available for tech businesses in their start-up and scale-up phases. Bridging the gap between academia and industry with strong UK academic partnerships (as in the US) will develop the skills the UK is lacking, namely data scientists, prompt engineers, and model trainers.
  4. Prioritising critical considerations for privacy and safety – Even accepting that there may be a chance of runaway artificial superintelligence in the future, there is a greater need to prioritise the real near-term AI risks. These include data bias, the very real possibility of deepfakes, and the impact on older, less tech-savvy generations, who are becoming increasingly vulnerable to privacy and security issues.
  5. Shaping regulation around AI in data – Effective regulation will involve spending as much on “adversarial” AI as on “generative/positive” AI. A sensible approach is to mandate an ISO standard that requires organisations to document their policies, guidelines and business practices, much like the ISO 27001 standard for information security management systems.

The congregation of global industry experts at the AI Safety Summit has undoubtedly sparked the necessary discussions on AI responsibility in line with the rapidly advancing technology. However, whilst it has kicked off some tactical discussions, the connection between these regulatory starting blocks and the tech agenda in the public domain is yet to be made. As post-summit discussions continue, it is crucial that they deliver real and timely resolutions to many of today’s big challenges in an evolving AI world.
