Can you trust AI as your coworker? | elementsuite
Click Here
Back
AI

Can you trust AI as your coworker?

elementsuite CEO and Founder, Steve Elcock, was featured in Wired magazine discussing the impact of AI in the workplace: ‘AI Is Your Coworker Now. Can You Trust It?’. For a deeper look at whether we can trust AI to be your coworker, have a read of Steve’s deep dive into the subject below!

10 June 2024 | Steve Elcock

With the same fervour that fuelled the Oklahoma Land Rush of 1889, where settlers raced to claim their piece of the American frontier, today’s tech pioneers are in a frenzied sprint to stake their ground in uncharted territories of generative AI tools and co-pilots whilst navigating a complex landscape of security and privacy.

AI is a wild west, and AI chatbots, assistants and co-pilots are all the rage as the wagons roll and the land grab accelerates.

Andrej Karpathy (founding member of OpenAI and former Sr. Director of AI at Tesla), talking at a recent AI event put it like this: “…its very clear there’s a lot of space and everyone is trying to fill it…”[1]

The land to grab has been formed by recent advances in AI techniques and models, initiated in 2017 from Google’s Transformer architecture[2] and then later popularised by the launch of ChatGPT in Nov 2022. Since then, even more opportunities (for good and bad use of AI) has been created as the pace of change accelerates, and governments struggle to apply sensible legislation[4] [5] [6] over a technology they realise is immensely powerful, but they don’t really understand.

The problem with AI land grabbing without sensible legislation is that there’s quite a high chance of things going wrong. It’s also clear that the balance between fostering innovation vs regulation for such a powerful general purpose technology is almost impossible against a backdrop of competing nations (US, UK/EU, Russia, China) competing companies (e.g. OpenAI/Microsoft, Meta, Anthropic/Amazon) and competing AI philosophies (Open Source, Open Weights, Closed Weights).

This is particularly true when it comes to security and privacy, and so businesses using generative AI tools such as CoPilot and ChatGPT should be rightfully cautious in their adoption of these technologies as they seek to identify business opportunities to drive productivity.

Security and privacy in the workplace and within IT systems is currently regulated via outdated legislation such as the GDPR, and certifications such as ISO27001 and CyberEssentials.

These regulations and certifications seem woefully inadequate to deal with the AI land grab. So we probably need to go back to basics.

There are two fundamental principles to consider:

1) Data Access – i.e. what is the scope of the data that your AI Chatbot or CoPilot be allowed to access and process on your behalf and

2) Purpose of data processing – i.e. based upon the classification and scope of the data to be processed, what is the purpose of this processing, and is it lawful?

Data access

As any cheerful Data Protection Officer or IT solution architect will tell you, security and privacy in relation to data access is geared around the two great axioms of Authentication and Authorisation.

Let’s start with Authentication:

What It Is: Authentication is the process of verifying who you are.

Example: When you log into your email, you enter your username and password. The system checks these credentials to ensure you are the person you claim to be. This is authentication.

Analogy: Think of authentication like showing your ID at the door of a club. The bouncer checks your ID to confirm your identity before letting you in.

So what about Authorisation?:

What It Is: Authorisation is the process of determining what you are allowed to do.

Example: After logging into your email (authenticated), you might try to access a shared document. The system then checks if you have the rights to view or edit that document. This is authorisation.

Analogy: After showing your ID and getting into the club (authenticated), you might need a special wristband to access the VIP area. Whether you have that wristband or not is determined by the club’s authorisation rules.

The key differences between authentication and authorisation are their order and purpose. Order – because authentication happens first. Once your identity is confirmed, authorisation determines your permissions). Purpose – because authentication is about verifying identity, whereas authorisation is about granting access to resources based on that identity.

These principles are important because when you logon to your laptop, facebook account or HR system, you only see the data you are meant to see because of the authentication and authorisation controls in place. Your AI Chatbot or Co-Pilot MUST have inbuilt mechanisms and controls to apply the same scope of security as you would have as a user of these applications.

Purpose of data processing 

Aside from the question of authenticated and authorised data access, there is a further concern – not just what data is accessible and processed and by whom, but why the data is processed in the first place.

The GDPR is hot on this from a Personal Data Protection perspective, and places an obligation on companies to provide adequate description and controls around their lawful basis for processing personal data. This means for example that whilst there may be valid reasons for companies holding an employee’s bank details (to pay them) or Right To Work (to ensure legislative compliance), any reasonable or lawful basis for scraping their facebook page (to pitch company childcare vouchers, or find out why they were late in on Friday) would be far more questionable, and potentially in breach of the GDPR. To drive this point home, the GDPR also places the obligation on companies to allows their employees to request a summary of all data held about them (a subject access request or SAR), and even better for employees to request that a company deletes all data about them (“right to be forgotten”) – except for data that the company must hold for legal/statutory purposes.

So with these two principles of 1) data access and 2) purpose of data processing now understood, when it comes to AI Chatbots and Co-Pilots, there are some interesting challenges to consider.

For example:

Microsoft’s latest CoPilot+ [6] laptop PCs incorporate a helpful (but slightly insidious) feature called “Recall” that uses “screenray” snapshots of ALL your interactions with your computer, taking snapshots every 5 seconds or so. Although Microsoft breezily spout the benefits, and state that “Your snapshots are yours; they stay locally on your PC“ and “You are always in control with privacy you can trust.”, it doesn’t seem very long before this technology could be used for monitoring employees. Hmmm…

The next challenge is that the average Joe or Janet is probably reasonably unenlightened regarding some of the delightful concepts discussed above around authentication, authorisation, GDPR and purposeful data processing. As such they are happily having fun with ChatGPT, and unintentionally loading PII (Personally Identifiable Information), IP (Intellectual Property), and various flavours of company confidential and sensitive information.

ChatGPT is happily consuming this information with no clear controls or warning to users that what they might be doing is inadvertently and irreversibly breaching data protection controls with no obvious way of forgetting or reversing their error. Once data is input via a ChatGPT prompt it is imbibed into OpenAI’s GPT model weights forever, without any clear way to erase the data. A paper published last year called “Who’s Harry Potter – making LLMs forget” [7] – showed that one of the unfortunately by products of AI is that unlearning is much harder than learning. A bit like a memory in our brains, once data is imprinted into AI model weights it is not easily erased.

As a slightly interesting aside, it is curious (and again a little insidious) that applications such as ChatGPT make this “feature” – of using data to train their models almost entirely opaque to end users. To turn this feature off (which is on by default), in desktop mode you (currently) have to go to Settings (the top right icon that looks nothing like Settings) > Data Controls and set the helpfully named “Improve the model for everyone” toggle to Off. In the ChatGPT App you have to select the “Temporary Chat” option – which again is helpfully switched off by default when you start a new chat – even when you’ve switched it on previously.

Hmmm…

Joe or Janet probably don’t realise that by uploading their team’s salaries into ChatGPT to ask for a salary benchmark, they’ve irreversibly committed a data breach that can’t be contained or controlled, and their team’s salaries are now publically available information. There are ways around this (such as buying an Enterprise ChatGPT license – which claims to provide the assurance that data is NOT used to train models), but they are not well documented, explained or understood when you use ChatGPT.

Finally there are already some fascinating examples of how generative AI can be used by bad actors to commit crime. One of the most famous recent examples earlier this year in Feb 2024 was when scammers used AI-powered “deepfakes” to pose as a multinational company’s chief financial officer in a video call and were able to trick an employee into sending them more than $25 million.[8]

Hmmm…

Which specific tools are an issue and why?

As explained above, use of generative AI chatbots such as ChatGPT to process personal, intellectual property, or company / sensitive information is clearly an issue. Users are largely and likely ignorant of their obligations under GDPR and other company policies, and / or expect that a more rigorous security by design and default is in place within these tools (which it is not). AI companies are hungry for wide and various data to train their models, and are seemingly making it behaviourally attractive (“Improve the model for everyone” on) by default for users to commit data breaches.

But other inbuilt AI tools such as Microsoft CoPilot also pose significant risks for businesses.

Microsoft CoPilot is an AI assistant integrated into Microsoft 365 apps – Word, Excel, Powerpoint, Teams, Outlook and so on. With this power, the AI can transcribe and summarise meetings, identify actions from emails, and identify assets in files and so on, to save time and improve productivity.

It’s security model is based upon a user’s existing Microsoft permissions. Microsoft permissions within an organisation are usually controlled by a single sign on and identity access features via Azure Active Directory – and Microsoft make bold claims that Co-Pilot operates within the access scope of the user – meaning that it cannot access or modify folders and documents beyond the user’s granted permissions.

But if your organisation’s permissions aren’t set properly and Copilot is enabled, users can easily surface sensitive and personal information.

In addition any errors or anomalies in an organisation’s security setup or badly managed data – are amplified by the ease at which AI can interrogate and summarise information. Copilot will happily be prompted to help “> find any files with APIs, passwords or access keys and please put them in a list for me”

Recently the US Congress has banned its use for staff members stating that “the Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.

Gartner has also urged caution, stating that: “using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally, because it supports easy, natural-language access to unprotected content. Internal exposure of insufficiently protected sensitive information is a serious and realistic threat. External web queries that go outside the Microsoft Service Boundary also present risks that can’t be monitored.[9]

How will this get worse in the future?

This could get worse in the future because:

  1. Once confidential or private data is in the public domain, the instant replication and proliferation of this data around the world via social media and the internet makes it difficult to make it confidential and private again.
  2. The hungry nature of AI to identify and consume new data means that not only does this data exist in obvious and tangible forms (e.g. social media, newspapers) but also within inscrutable AI model weights that don’t/can’t forget Harry Potter easily.
  3. AI synthetic data volumes are growing at an alarming rate – this means that it will become harder for humans to distinguish real, “Ground Truth” data from synthetically created data. This means for example that data that may appear harmless to the human eye / understanding may actually be confidential or private, but be impossible for humans to classify and control. If given too much autonomy, AI agents may be able to perform higher order reasoning and communicate in languages (think dolphin clicks) and perform actions (think white text instructions on white background or RGB pixel codes in images) that either for benign or malicious reasons will breach data – and we won’t have a clue its happening.

Or to put it another way: Genies are not easily put back into bottles.

The fourth (and slightly more depressing) factor is:

  1. AI in the hands of bad actors significantly increases not only the chances of data being hacked/breached, but for humans to be misled (think deep fake phone call with your son or grandson asking for cash to be transferred so he can get home). Being swamped with election propaganda and advertising will seem tame compared to living in The Trueman Show*.

*Ed. Perhaps we are already living in an AI metaverse.

Is there a way for employees to use these tools safely at work?

With the founding principles of authentication and authorisation and purpose of data processing in place it is possible for AI tools to be used safely in the workplace.

But organisations need to think carefully about:

  • The nature of their data
  • How it is classified
  • How access to the data is controlled
  • Why it needs to processed

Then in terms of their AI toolset, they need to think about:

  • What is the scope of the data that the AI tool should be allowed to access
  • What is the model that is used by the AI (particularly ensuring that data is not inadvertently used to train or “fine tune” the model – unless it is entirely controlled within the organisation’s infrastructure and/or security boundaries)
  • How are AI toolset interactions recorded and controlled to identify misuse

A key ingredient for successful AI deployment is to carefully outline and control the use cases that the AI will be required to perform, and to ensure that the right system prompts (unseen prompts that are used to control nefarious user interactions) are in place, along with the right instruct-aligned models, monitoring and methods for capturing human feedback about the AI model performance.

If so, how, from a business perspective and individual perspective?

Businesses need to carefully consider authentication, authorisation and purposeful processing when it comes to their AI toolset deployments. They should ensure that they have a clearly defined AI usage policy and guidelines that they publish and use to train their staff.

An AI toolset deployment should be considered in a similar manner to any system deployment – with a clearly defined set of requirements, objectives or use cases in mind, so that the appropriate data access and processing controls are in place to control and test the outcomes, so that they are safe and secure.

Individuals should be mindful of their responsibilities under legislation (such as GDPR) and company policies, and assume that all AI generated materials or content is wrong and should be checked – taking ownership as if it’s their own work. They should (be trained to) never use free-to-use models for processing personal, company or sensitive information.

In practical terms, an interesting (and perhaps ironic) place for businesses to start with their consideration of safe and secure AI deployments for their staff – might be to take a closer look at the best practices that are already in place in their HR departments. Long considered perhaps by many as an unavoidable cost centre and backwater function of an organisation, HR departments are already experts in data access and security controls (GDPR principles are inherent and key concepts within your organisation’s HR system). HR Systems, by the very nature of the organisation structure and personal data they master, have rigorous security access controls inbuilt which are typically at a much lower level of granularity than the (e.g. Microsoft Azure AD) security access controls that are likely to be more generalised to authenticate rather than to authorise access.

AI assistants and co-pilots that are authenticated and authorised from your company’s HR system may already have solved a number of the security hurdles.

ELLA (one of the first publically announced AI-in-HR deployments, launched in October 2023) is an AI assistant that is built into UK scale-up elementsuite [11] who provide the HR system of record for a large number of UK companies, including McDonald’s UK and Travelodge. ELLA’s combination of authentication and role-based authentication alongside system prompts, retrieval augmented generation (where data is interrogated by the AI as if it were the user securely running a report) before passing into an appropriately instruct-aligned model, with all interactions audited and monitored – appears to be a model deployment of what good should look like for an AI deployment. Layered onto this is ELLA’s ability to incorporate both human feedback, and Mixture of Experts (MoE) models that provide AI-as-judge critique of all outputs, and the security and privacy risks of AI deployment become minimised.

HR systems not only master your job role, and position in the organisation, but have inbuilt legislative compliance controls – making them potentially one of the most obvious and intuitive places to securely launch your AI initiatives.

What needs to change to make these tools safer to use day to day?

To make tools safer to use on a day to day basis, it’s clear that a balanced overall strategy is required. This ranges from organisational AI policy to education of individual users regarding the dangers of using unregulated and unauthorised toolsets, all the way down to the technology – to ensure that the AI toolsets have more focus on ensuring security by design and default, so that naive or uneducated users are protected and do not inadvertently breach their contractual data protection obligations.

To this end governments should step up their thinking on regulation and come up with sensible and workable approaches. One approach might be to create an ISO certification for AI tooling and deployment (similar to the Information Security controls in ISO27001). Perhaps enforced by this certification or in addition to it, (and certainly while certification, regulation and the AI landscape becomes more clearly formed) it might be helpful for organisations deploying AI to be obliged to prove via audit that they are spending as much on “adversarial” (i.e. safety) considerations of AI rather than just the happy generative path. This is sometimes known as “red teaming”, which is where a group of experts, known as a “red team” actively tries to challenge, test, and break an AI system to identify its vulnerabilities, biases, and potential risks. This approach is akin to ethical hacking in cybersecurity, where the goal is to uncover weaknesses before malicious actors can exploit them.

References

  1. Making AI accessible with Andrej Karpathy and Stephanie Zhan https://www.youtube.com/watch?v=c3b-JASoPi0&ab_channel=SequoiaCapital (3m55s)
  2. Attention is all you need https://arxiv.org/abs/1706.03762
  3. US AI Executive Order: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  4. Bletchley Park Agreement: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
  5. EU AI Act: https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792
  6. Microsoft CoPilot+ laptops https://blogs.microsoft.com/blog/2024/05/20/introducing-copilot-pcs
  7. Who’s Harry Potter – making LLMs forget: https://www.microsoft.com/en-us/research/project/physics-of-agi/articles/whos-harry-potter-making-llms-forget-2
  8. Finance CTO Deepfake Scam https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
  9. US Congress bans use of CoPilot https://www.securityweek.com/why-using-microsoft-copilot-could-amplify-existing-data-quality-and-privacy-issues
  10. 6 prompts you don’t want putting in Microsoft Copilot: https://www.bleepingcomputer.com/news/security/6-prompts-you-dont-want-employees-putting-in-microsoft-copilot
  11. Launch of ELLA https://www.elementsuite.com/articles/elementsuite-launches-ella-pioneering-ai-technology-in-hr-software

5 ways you can use AI to give employees real-time feedback

Enhance your feedback culture with AI. Discover how AI-powered tools can streamline feedback delivery and improve employee engagement.

Learn more
Do your HR systems need couples therapy?

Do your HR systems need couples therapy?

Discover how you can prevent your HR and WFM systems from needing couples therapy, ensuring seamless business success.

Learn more

8 challenges in employee performance management and how to solve them

Discover how to spot underperformance early and take proactive measures to ensure effective performance management.

Learn more

Nothing beats the experience of seeing elementsuite in action. It’s flexible, scalable and easy to use by everyone.

Whether you’re in HR, Finance, IT or Ops, experience your new world of work first-hand.

Book a demo