
AI Risk Management for HR: 3 Key Risks To Manage & HR Actions To Take 

By Dr Marna van der Merwe, Dr Dieter Veldsman

While AI use in fields like HR began in the mid-2010s, it has made remarkable strides in recent years, particularly with the growing popularity of Generative AI tools. These tools, a subset of AI, focus on creating new content, data, or information by analyzing patterns in existing data.

Despite these advancements, the adoption of AI within HR departments remains slow. For example, currently, 34% of marketing departments regularly use Generative AI tools, whereas only 12% of HR departments do. Additionally, only one-third of HR leaders are actively exploring potential use cases for Generative AI.

Why the sluggish adoption by HR?

AIHR’s research indicates that major adoption barriers are a lack of digital skills (91%) and uncertainty about which tools are best for HR needs. However, more significantly, HR professionals are concerned about the safety and secure use of AI in the department and across the organization.

In this article, we will discuss the risks of AI, HR’s role in mitigating these risks, and actions HR can take in managing AI risks to overcome barriers to adoption.


Risk management is not compliance

A key misconception about AI risk management is that it’s just about compliance. HR professionals often look for clear rules to comply with and the do’s and don’ts to follow, but with AI, simply complying with the rules is not enough. Managing AI risk requires a proper understanding of the risks involved and how to identify, mitigate, and manage those risks. 

Risk management should be an essential part of any AI adoption strategy. Instead of just providing a checklist of rules, it focuses on areas that need close monitoring based on the organization’s risk tolerance and exposure level. This approach helps spot risks early, allowing organizations to address them during the adoption process.

By taking this proactive approach, organizations can take advantage of AI’s full potential while safeguarding against unintended consequences. It also helps build trust in AI, ensures compliance with legal standards, and helps maintain organizational integrity, positioning AI as a valuable tool for long-term success rather than a source of unforeseen challenges.

3 AI risks to manage and why it matters to HR

AI-related risks can be grouped into three broad categories, each posing a unique set of challenges to the organization and affecting HR in different ways. A holistic risk management framework should acknowledge these risks and provide guidance on how to address them at various levels.

However, the foundation of a holistic risk framework begins with an understanding of the type and nature of the risks associated with AI.

1. Inherent risks of AI 

This category covers issues rooted in the AI technology itself. These inherent risks arise from how the technology works rather than how it is applied, and they can surface wherever AI is integrated into business processes.

Common inherent risks include: 

  • Bias
  • Lack of transparency
  • Unintended consequences of automated decision-making.

Often seen as the shadow side of AI’s benefits, these risks are directly tied to AI technology’s capabilities and limitations.

Bias and fairness risks

AI systems can unintentionally perpetuate bias if they are trained on biased data. For example, if an AI model used for hiring is trained on past recruitment data that reflects a preference for certain demographics (e.g., more men than women), it may continue that unintended bias by favoring candidates from those groups.

Bias in hiring or performance evaluations can lead to unfair decisions, resulting in discrimination and damaging the organization’s reputation. It can also expose the company to legal risks and create a less diverse and inclusive workplace. A case in point is the ongoing lawsuit against Workday, in which an applicant alleges that its screening software unfairly discriminated against them at various employers using the technology.
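To make the mechanism concrete, here is a minimal sketch, using entirely synthetic data and an assumed two-feature model, of how a model trained on skewed hiring history learns to score two otherwise identical candidates differently:

```python
# Illustrative sketch only: all data is synthetic, and the "gender penalty"
# is deliberately built into the historical labels to show how a model
# inherits it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)          # 0 or 1, a protected attribute
skill = rng.normal(50, 10, n)           # the factor that *should* matter
# Past recruiters hired mostly on skill, but penalized group 1.
hired = (skill - 5 * gender + rng.normal(0, 5, n)) > 50

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([gender, skill]), hired)

# Two candidates identical except for the protected attribute:
print(model.predict_proba([[0, 55], [1, 55]])[:, 1])
# The group-1 candidate receives a visibly lower hiring probability.
# The model has faithfully learned the historical penalty, not just skill.
```

Nothing in this pipeline is malicious; the skew in the training labels alone is enough to produce biased scores.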

When AI got it wrong

Amazon and Google previously used AI applications to screen candidates that favored white male applicants because of the data they were trained on. These examples show the inherent bias risk in action: if an AI system is trained on data that already reflects bias, that bias will carry through to actions such as candidate screening.

Opacity and explainability

Some AI systems, especially complex models such as deep learning, operate like “black boxes,” making it hard to understand how decisions are made. This lack of transparency can be problematic, particularly when AI is used in HR functions like recruitment or employee assessments.

If HR professionals cannot explain how AI reaches certain decisions, it could create a lack of trust and accountability. Employees may question whether the system is making fair choices, and it can be difficult to justify decisions to regulators or in legal contexts.
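Full transparency is not always achievable with complex models, but some techniques can approximate an explanation. The sketch below, with synthetic data and assumed feature names, uses scikit-learn’s permutation importance to estimate how much each input actually drives a model’s decisions:

```python
# A minimal sketch of one explainability technique: permutation importance,
# which measures how much shuffling each feature degrades model performance.
# The data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, 1000)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["years_experience", "test_score", "referral"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
# "test_score" should dominate, and a feature like "referral" contributing
# nothing is exactly the kind of evidence HR needs to show a regulator.
```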

An example beyond HR is Google DeepMind’s AlphaGo, which defeated world Go champion Lee Sedol in 2016. The sequences of moves it chose became so complex that even experts could not predict what it would do next. While this was a breakthrough in how AI technologies learn and handle complexity, it also illustrates the risk of AI operating in ways humans cannot fully anticipate or control.

Autonomy risks and unintended outcomes

AI systems, especially when left to operate autonomously, can sometimes behave unpredictably in complex situations. For example, an AI tool might misinterpret candidate qualifications or incorrectly assess employee sentiment, leading to poor hiring decisions or misguided HR actions.

Unintended outcomes in HR processes can lead to incorrect hiring decisions, employee dissatisfaction, or the mismanagement of talent. This could impact the organization’s productivity, lead to poor employee experiences, and potentially harm the company’s culture and reputation.

When AI got it wrong

iTutorGroup was recently sued by the U.S. Equal Employment Opportunity Commission (EEOC) because of its autonomous screening of remote tutors. In this case, the AI was programmed to reject candidates older than a specific age without any human intervention in the process. The software ultimately rejected more than 200 qualified applicants simply based on age. 

Actions for HR

  • Use diverse training data: Use diverse, representative data when training AI models to minimize bias.
  • Regular bias audits: Periodically audit AI decisions to check for patterns of bias.
  • Human oversight: Ensure human review in AI-driven decisions, especially in critical areas like hiring, promotions, and evaluations. Verify AI outcomes before taking action.
  • Bias mitigation tools: Use tools designed to detect and reduce bias in AI models.
  • Use explainable AI models: Choose AI models that clearly explain their decisions whenever possible.
  • Documentation: Ensure that the AI’s decision-making process is well documented so it can be explained when necessary.
  • Transparency: Be transparent with employees and candidates about how AI is used in decision-making processes.
  • Test unusual scenarios or outliers: To minimize unpredictable behavior, test AI models extensively, especially on unusual scenarios (edge cases); see the test sketch after this list.
  • Monitoring and feedback loops: Continuously monitor AI systems in real-time and collect feedback on their performance to quickly identify unexpected outcomes.
  • Fallback mechanisms: Implement safeguards where AI decisions can be overridden or revised by humans when necessary.
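As referenced above, edge-case testing can be as simple as unit tests around the scoring function. This is a hedged sketch: `score_candidate`, its fields, and its scoring rule are hypothetical stand-ins, not a real vendor API:

```python
# Hedged sketch: edge-case tests for a hypothetical resume-scoring function.
# Run with: pytest test_screening.py

def score_candidate(candidate: dict) -> float:
    """Stand-in for the real model; returns a 0-1 suitability score."""
    years = candidate.get("years_experience") or 0
    return min(max(years, 0), 40) / 40

def test_empty_resume_does_not_crash():
    assert 0.0 <= score_candidate({}) <= 1.0

def test_negative_experience_is_clamped():
    assert score_candidate({"years_experience": -3}) == 0.0

def test_implausibly_long_career_is_capped():
    assert score_candidate({"years_experience": 90}) == 1.0
```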

2. Application-based risks 

This category covers risks that arise from how AI is used and applied. Application-based risks relate to how AI is deployed, implemented, and managed within HR processes. Even if all inherent risks are well managed and mitigated, poor use or a lack of proper oversight can lead to errors, unethical outcomes, or damage to the organization’s reputation.

Application-based risks are largely human and behavioral risks, stemming from how people interact with AI in their work.

These risks often relate to: 

  • Ethical considerations
  • Reputational impact, and
  • Balancing AI-driven decision-making and human oversight.

Managing these risks requires clear guidelines and guardrails for AI use, together with intentional behavior change.

Misalignment with organizational values

When AI makes decisions that are misaligned with your organization’s values or ethical standards, it can conflict with your company culture. For instance, if your company prioritizes diversity and inclusion, but the AI’s hiring decisions reduce diversity, it directly undermines those core values.

When AI contradicts company values, it can harm internal morale, disrupt the company culture, and create confusion about what the organization stands for. It may also lead to ethical concerns that could affect the trust employees and candidates have in the company.

When AI got it wrong

Chatbot Tay, released by Microsoft on Twitter in 2016, was intended to engage in conversation with users and showcase how AI can drive positive interactions aligned with Microsoft’s values of trustworthiness, inclusivity, and innovation. Within 16 hours of release, however, Tay began posting racist and offensive responses it had learned from other users, leading Microsoft to remove it from the platform and issue an apology.

Reputational damage

If AI is used insensitively or inappropriately, it can lead to bad press and public backlash. For example, using AI to handle mass layoffs without human oversight could come off as cold or impersonal, damaging the company’s image.

A damaged reputation can affect employee trust, candidate attraction, customer loyalty, and overall business performance. Bad publicity about how AI was used can affect the company’s public image and potentially lead to a loss of business or legal issues.

Ride-hailing app Uber is a good example: its AI-based surge pricing algorithm has been criticized for raising prices during natural disasters and even terrorist attacks. Although the algorithm performed as designed, it lacked the context of why people were suddenly seeking rides, and the resulting public backlash damaged Uber’s reputation.

Over-reliance on automation

Relying too much on AI for HR tasks without involving humans can lead to impersonal or flawed decisions. AI may miss important context or emotional nuances critical in HR situations, such as conflict resolution or employee grievances.

HR decisions often require empathy, emotional intelligence, and human judgment—qualities AI lacks. Over-reliance on AI can make employees feel undervalued or misunderstood, causing dissatisfaction or turnover. It could also result in poor decision-making when the AI misinterprets complex situations.

Actions for HR

  • Define ethical guidelines: Clearly outline the company’s values and ensure that AI tools are programmed and monitored to reflect those values.
  • Human oversight: Always include a human review of AI-driven decisions to ensure they align with the company’s ethical and cultural standards.
  • Regular ethical audits: Periodically audit AI outputs to ensure they don’t conflict with the company’s mission, vision, or values.
  • Use AI thoughtfully: Apply AI in areas where it can assist humans, but don’t use it for sensitive decisions (e.g., layoffs) without human involvement.
  • Communication and transparency: Be open with employees and the public about how AI is used and ensure decisions are communicated empathetically.
  • PR and legal safeguards: Work closely with PR and legal teams to ensure that AI use aligns with the company’s public-facing strategy and ethical responsibilities.
  • Balance AI and human input: Use AI to assist with routine tasks (e.g., screening resumes), but make sure humans handle decisions that require empathy or complex judgment (e.g., employee performance reviews or disciplinary actions); a simple routing sketch follows this list.
  • Regular monitoring: Continuously review AI-driven decisions to catch errors or areas where the AI might lack proper judgment.
  • Training for HR teams: Train HR staff on how to use AI effectively as a tool, ensuring they understand where human intervention is essential.
  • Structured AI technology implementation: Support AI tool rollouts with change management and upskilling to drive lasting behavior change.
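As referenced above, balancing AI and human input can be made explicit in code. This is a minimal sketch, assuming illustrative thresholds, case types, and field names, of a rule that routes sensitive or low-confidence cases to a person:

```python
# A minimal sketch of a human-in-the-loop routing rule: AI handles routine,
# high-confidence cases, while sensitive or uncertain ones go to a person.
# Thresholds, case types, and field names are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85
SENSITIVE_CASES = {"layoff", "disciplinary", "grievance", "performance_review"}

@dataclass
class Decision:
    case_type: str
    ai_recommendation: str
    ai_confidence: float

def route(decision: Decision) -> str:
    """Return who finalizes the decision: 'auto' or 'human_review'."""
    if decision.case_type in SENSITIVE_CASES:
        return "human_review"   # empathy and judgment required: never automate
    if decision.ai_confidence < CONFIDENCE_FLOOR:
        return "human_review"   # the model is unsure: escalate
    return "auto"               # routine and high-confidence

print(route(Decision("resume_screen", "advance", 0.95)))  # -> auto
print(route(Decision("grievance", "dismiss", 0.99)))      # -> human_review
```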

3. Compliance-related risks

In recent years, various laws have been drafted to regulate the use of AI. These laws typically establish conditions for responsible use, along with regulatory and reporting obligations.

For example, New York City’s Local Law 144, which took effect in 2023, requires organizations to conduct bias audits on automated employment decision tools (AEDTs) used in hiring, promotions, and performance evaluations. On a larger scale, the EU Artificial Intelligence Act is one of the most comprehensive attempts to regulate AI across sectors.
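The bias audits Local Law 144 requires center on a simple calculation: each category’s selection rate divided by the rate of the most-selected category. A minimal sketch, with illustrative data and categories:

```python
# A minimal sketch of the impact-ratio calculation behind NYC Local Law 144
# bias audits. The data and categories here are purely illustrative.
import pandas as pd

hiring = pd.DataFrame({
    "sex": ["M", "M", "F", "F", "M", "F", "F", "M"],
    "selected": [1, 1, 0, 1, 1, 0, 0, 1],
})

rates = hiring.groupby("sex")["selected"].mean()   # selection rate per group
impact_ratios = rates / rates.max()                # relative to top group
print(impact_ratios)
# A ratio well below 1.0 for any category (the EEOC's informal "four-fifths
# rule" treats under 0.8 as a red flag) is what an audit would flag for review.
```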

Compliance-related risks in AI are tied to legal and regulatory standards, particularly concerning data protection and employment laws. Companies using AI for recruitment or performance evaluations must ensure compliance with transparency, risk management, and data governance requirements, and non-compliance can result in significant penalties.

These risks often relate to: 

  • Data privacy violations
  • Discrimination and employment law compliance, and
  • Auditing and documentation management requirements.

When using AI in HR, it’s critical to comply with both local and global regulations to avoid breaches that could lead to legal penalties or sanctions. These risks can be monitored and managed by ensuring that AI policies and practices align with legislative requirements.

Data privacy violations 

AI systems often handle sensitive personal data, such as employee information or candidate details. This increases the risk of data breaches or failing to comply with data protection laws like the General Data Protection Regulation (GDPR), which sets strict rules on how personal data should be collected, stored, and used.

Violating data privacy laws can lead to severe penalties, lawsuits, and damage to the organization’s reputation. It can also erode trust between employees, candidates, and the organization, making it harder to attract and retain talent.
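Two basic habits reduce this exposure before data ever reaches an AI tool: dropping fields the model does not need (data minimization) and pseudonymizing direct identifiers. A hedged sketch with illustrative field names; it is not a substitute for a full GDPR compliance program:

```python
# Hedged sketch of data minimization and pseudonymization before records
# reach an AI tool. Field names and the salt are illustrative assumptions.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """One-way hash so records stay linkable without exposing identity."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

raw_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "years_experience": 6,
    "test_score": 82,
}

NEEDED_FIELDS = {"years_experience", "test_score"}   # what the model uses
clean_record = {k: v for k, v in raw_record.items() if k in NEEDED_FIELDS}
clean_record["candidate_id"] = pseudonymize(raw_record["email"], salt="s3cr3t")
print(clean_record)   # no name or email ever leaves the HR system
```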

When AI got it wrong

A notable example of data privacy violations involving AI in HR occurred with HireVue, an AI-driven recruitment platform. HireVue used AI algorithms to analyze candidates’ video interviews, evaluating factors such as facial expressions, tone of voice, and word choice. Privacy advocates raised concerns that the system collected and processed sensitive biometric data without explicit consent or adequate transparency.

Discrimination and employment law compliance 

AI systems that make hiring or performance evaluation decisions risk discriminatory results (e.g., favoring one group over another based on race, gender, or age). This can lead to legal issues, including lawsuits or regulatory actions against the organization.

Discrimination in hiring or employment practices can also result in significant financial penalties, reputational damage, and a toxic workplace culture. It can also hinder the organization’s efforts to promote diversity and inclusion.

When AI got it wrong

IBM implemented an AI system to help manage promotion and performance reviews. However, internal reports and subsequent lawsuits alleged that the system disproportionately discriminated against older employees, favoring younger workers for promotions. IBM was accused of perpetuating age bias through the AI tool, leading to a class-action lawsuit filed by employees.

Auditing and documentation requirements 

In some states or regions, laws may require companies to document how their AI systems work and how decisions are made. This includes being able to explain AI decisions to ensure they are made fairly and legally.

Failure to maintain proper documentation and auditing can lead to non-compliance with legal standards, resulting in fines and legal challenges. It can also hinder transparency and accountability within the organization, making it difficult to defend decisions made by AI systems.

Actions for HR

  • Implement data protection policies: Establish clear policies for handling sensitive data and ensure all employees are trained on these policies.
  • Data minimization: Only collect and use the data that is absolutely necessary for AI applications, minimizing the risk of exposure.
  • Regular audits and monitoring: Conduct audits of data handling practices to ensure compliance with laws and identify potential vulnerabilities.
  • Use encryption and security measures: Employ strong security measures to protect sensitive data, including encryption, secure access controls, and regular security assessments.
  • Legal compliance checks: Work with legal experts to ensure AI-driven processes comply with all relevant employment laws and regulations.
  • Adjust algorithms: Modify algorithms to minimize bias and ensure they are designed to promote equitable outcomes.
  • Maintain comprehensive records: Keep detailed documentation of how AI systems are used, including data sources, decision-making processes, and any changes made to the algorithms; a minimal record sketch follows this list.
  • Regular compliance reviews: Conduct regular reviews to ensure documentation meets legal requirements and best practices.
  • Establish clear protocols: Create procedures for monitoring and reporting on AI systems to ensure accountability.
  • HR upskilling: Upskill the HR team on the importance of documentation and compliance to ensure everyone understands their role in upholding standards.
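As referenced above, comprehensive records can start small. This is a minimal sketch of an audit-trail record for an AI-assisted decision; the field names are assumptions, and actual documentation duties depend on the jurisdiction and should be set with legal counsel:

```python
# A minimal sketch of an audit-trail record for an AI-assisted HR decision.
# Field names are illustrative assumptions; actual requirements depend on
# the jurisdiction (e.g., GDPR, NYC Local Law 144).
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    model_name: str
    model_version: str
    decision: str
    inputs_summary: dict              # minimized: only fields the model used
    human_reviewer: Optional[str]     # None means no human sign-off yet
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    model_name="resume_screener",     # hypothetical tool name
    model_version="2.3.1",
    decision="advance_to_interview",
    inputs_summary={"years_experience": 6, "skills_matched": 4},
    human_reviewer="j.smith",
)
print(json.dumps(asdict(record), indent=2))  # append to a write-once log
```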

Next steps 

Awareness of the risks associated with AI use is the first foundational step in risk management and ensuring responsible adoption. To manage risks systematically, a clearly defined process must be developed, outlining the various steps to identify, mitigate, monitor, and audit risks.

In addition, a robust and comprehensive AI risk framework ensures that relevant risks are considered and all stakeholders are aligned with the overarching risk management strategy.
