[Image: Tablet mockup of the mentalport Research Report 2026 "Fear of AI at Work: Why AI Adoption Fails at the Human Level", a free guide for leaders and HR teams on protecting their AI investments.]

Fear of AI & FOBO: Why AI implementation fails at the human level

Tim Kleber
Apr 2026

Fear of AI and FOBO: Why your AI investment fails because of people, and how to prevent it

Reading time: approx. 10 minutes | Category: AI Transformation, Mental Health, HR Strategy

Never before have so many AI projects been launched in Germany as today. And never before has so much capital been quietly destroyed at the same time. According to BCG, global AI spending exceeded 252 billion US dollars in 2024. The share of companies that have derived tangible economic value from it: 26 percent. The rest achieved no measurable results, despite the rollout, despite tool licenses, despite a dedicated project team.

The problem isn't the technology. It sits in the minds of the workforce. More specifically: in two psychological phenomena that are still barely known at many management levels, yet already decide today whether AI initiatives generate returns or become the most expensive shelved project in the company. Fear of AI. And FOBO.

As an HR manager, you are in a unique position: you see the turnover, the absences, the managers who are quietly considering leaving. You know that numbers on a dashboard rarely show the full picture. This article gives you the scientific framework, the economic context and the concrete next steps, so that in the next budget discussion you talk not about vague sentiments, but about controllable risks.

What is behind the abbreviations

Fear of AI: The fear that grows with use

Fear of AI is not vague skepticism of technology. It is a measurable psychological construct with four clearly defined dimensions, distributed unevenly across companies: by department, management level, demographics and industry. What the research shows particularly clearly is surprising: the fear does not diminish as people gain more experience with AI. It grows.

The KPMG study “Trust, Attitudes and Use of AI” (2025, n=48,340 in 47 countries) documents exactly that: People become more suspicious of AI the more they come into contact with it. Anyone who works with AI tools on a daily basis experiences their limits, their errors and their black box logic up close. The anxiety is not diminishing. It is becoming more concrete and tangible.

In Germany, this pattern is particularly evident. According to Bitkom (2024), 44 percent of Germans are afraid of AI. 64 percent of German companies report that employees are skeptical about using AI — primarily out of fear of losing jobs. And in a YouGov survey (2024), 34 percent directly stated that they were afraid of losing their own jobs due to AI. This is not a fringe group. These are employees in your teams.

Research, in particular Giermindl et al. (2024, Journal of Business Research), distinguishes four core psychological dimensions of Fear of AI at Work. They matter for HR teams because each requires a different intervention.

The first dimension is identity and job security. Employees fear that their role will become obsolete, that their years of experience will be devalued, and that AI decisions about their careers will be made without them being able to understand or challenge them. BCG (2024, n=13,000) shows: even among regular AI users, 49 percent fear that their job could disappear within ten years, more than twice the share among non-users.

The second dimension is mistrust, control and data protection. 43 percent of German employees cite a lack of trust as the main reason for non-acceptance of AI (OTRIS/Skopos 2024). The feeling of being monitored or evaluated by AI triggers strong defensive responses — and these reactions are often simply invisible to HR teams on the surface.

The third dimension is incompetence and overload. According to Bitkom (2024), 44 percent of Germans are worried that they will no longer be able to follow technical developments. In a corporate context, only 39 percent of AI users have received training organized by the company (Microsoft Work Trend Index 2024, n=31,000). 47 percent of AI users do not know how to achieve the expected productivity gains at all.

The fourth and most dangerous dimension is rejection and avoidance, because it is the hardest to measure. 31 percent of employees actively undermine their company's AI initiatives: by refusing tools, entering deliberately poor inputs, or stalling on purpose (Writer/Workplace Intelligence 2025). No project tracking in the world captures that.

FOBO: The blind spot in AI strategy

FOBO — Fear of Becoming Obsolete — is the longer-term, existential sister of Fear of AI. While Fear of AI describes the concrete rejection of a tool, FOBO goes deeper: It is the fear that one's own expertise, one's own career, one's own professional identity no longer has a place in an AI-transformed working world. Not tomorrow. But the day after tomorrow.

FOBO is not irrational. It is a rational response to an irrational communication situation in many companies. When management teams announce efficiency gains from AI without at the same time making transparent what this actually means for roles and career paths, a perception gap opens up, and employees fill it themselves with the worst they can imagine.

BCG (2025) shows: employees in organizations with advanced AI redesign are significantly more likely to be concerned about their job security (46 percent) than those in less advanced companies (34 percent). Whoever transforms fastest produces the most FOBO. That is the paradox most AI roadmaps do not plan for.

And then there is another finding that should make you sit up: Upwork (2025, n=2,500) shows that among AI power users, those who use AI most intensively and record a 40 percent increase in productivity, 88 percent report burnout. They are twice as likely to quit as non-users. The most productive employees are also the most vulnerable. That is no accident. It is the direct result of a lack of psychological support during the introduction of AI.

What that means for your budget

Let's get specific. RAND Corporation (2024) evaluated 65 structured interviews and comes to a clear conclusion: more than 80 percent of all AI projects fail, twice as many as traditional IT projects. In one of the largest studies on AI change management (n=1,107 professionals, 2024), Prosci measured where the failure comes from: 63 percent of all implementation challenges are due to human factors. Only 16 percent are due to technical problems.

EY (2025) quantifies the economic loss: up to 40 percent of productivity gains are lost due to deficiencies in supporting people, not due to technical failure. For a company that introduces an AI solution for 500,000 euros without an adoption strategy, this means, conservatively: 350,000 to 400,000 euros of burned investment. Not through bad software. Through Fear of AI and FOBO, which no one measured and no one addressed.
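
For illustration, here is a back-of-the-envelope version of that calculation in Python. The investment figure and loss rates are the assumptions from the example above, not fixed benchmarks:

```python
# Back-of-the-envelope estimate of the investment at risk when an AI
# rollout ships without an adoption strategy. Figures follow the
# example above; adjust them to your own project.

investment_eur = 500_000        # licenses, integration, project costs
loss_rate_range = (0.70, 0.80)  # assumed share of the investment burned without adoption work

low, high = (investment_eur * r for r in loss_rate_range)
print(f"Investment at risk: {low:,.0f} to {high:,.0f} EUR")
# Output: Investment at risk: 350,000 to 400,000 EUR
```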

As an HR manager, you are usually the one who never presents this calculation, even though you have the data to do so: turnover, absenteeism, departing managers. What is missing is the system that makes the connection visible.

Psychological safety: The lever that all studies cite

Among all the factors that influence successful AI adoption, one stands out so consistently in current research that it can no longer be ignored: psychological safety. The concept, popularized by Amy Edmondson of Harvard Business School, describes the state in which employees believe they can speak up, admit mistakes and try out new things without fear of negative consequences.

Reich et al. (arXiv 2026, n=2,257 employees of a global consulting firm) show empirically: psychological safety is a reliable predictor of whether employees adopt AI tools at all, consistently across experience levels, roles and regions. The MIT Technology Review (2025) asked executives directly: 83 percent see psychological safety as a measurable success factor for AI initiatives. 84 percent have already observed concrete connections between psychological safety and AI outcomes in their organizations.

What that means for you: Psychological safety is not an HR wellness issue. It is a measurable variable that directly contributes to the adoption rate of your AI tools. And it is controllable, if you know where it is missing.

The four dimensions and their appropriate interventions

This is where one of the most important mistakes in practice happens: there is no universal measure against Fear of AI and FOBO. A town hall about AI opportunities does not help against the feeling that your own skills no longer keep up. A data protection statement does not help against fear of losing one's professional identity. If you don't know where the fear sits, you act with a watering can, spreading budget indiscriminately and burning it without measurable effect.

The following overview shows which interventions fit which dimension:

| Dimension | Typical signs in teams | Effective interventions |
| --- | --- | --- |
| Identity & job security | Questions like “Will I be replaced?”, reluctance toward AI pilots, rumors within the team | Clear communication of roles, career path discussions, employee involvement in AI design |
| Distrust & data protection | Rejection of monitoring tools, skepticism about data entry, escalations to the works council | Framework works agreement on AI, transparency about data flows, clear opt-out options |
| Incompetence & overload | Errors in tool usage, silent non-usage, avoidance of AI features | Role-specific training, peer mentoring, psychologically safe learning spaces without performance pressure |
| Rejection & avoidance | Active criticism, shadow AI usage, deliberately poor AI inputs | Co-creation instead of top-down rollout, early involvement of resisters, pilot projects with volunteers |

Microsoft (Work Trend Index 2024) puts the difference in numbers: trained users are 19 times more likely to report that AI improves their productivity. Not twice as likely. Nineteen times. The multiplier is not in the model; it lies in training, support and the feeling of doing this together instead of being left alone.

What already applies legally, and what you need to have in place by August 2026

Many HR teams still underestimate this point. Article 4 of the EU AI Act, in force since February 2, 2025 and enforced by national supervisory authorities from August 2, 2026, makes AI literacy mandatory. All companies that use AI systems must ensure that their personnel have a sufficient level of AI competence: role-specific, documented and as a continuous process, not a one-off webinar.

Anyone who ignores this risks sanctions of up to 15 million euros or 3 percent of the previous year's global turnover. For context: in mid-2024, only 24 percent of German companies had engaged with the EU AI Act at all (Bitkom 2024). The enforcement date is getting closer.
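
As a quick sketch of that ceiling, with one assumption flagged: in the EU AI Act's fine brackets for companies, the higher of the two amounts applies.

```python
# Sketch of the sanction ceiling cited above. Assumption: as in the
# EU AI Act's fine brackets for companies, the higher of the two
# amounts ("whichever is higher") applies.

def sanction_ceiling_eur(prior_year_global_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * prior_year_global_turnover_eur)

print(f"{sanction_ceiling_eur(2_000_000_000):,.0f} EUR")
# Output: 60,000,000 EUR (for 2 billion euros of prior-year turnover)
```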

At the same time, Section 90(1) No. 3 of the Works Constitution Act (BetrVG) requires that the works council be informed as early as the planning phase of new technical systems. Not after the rollout, not after the pilot. Before. Organizations that ignore this risk more than legal consequences: they undermine the trust of the very body whose active support ultimately decides adoption. In this context, an AI framework works agreement is not a bureaucratic box-ticking exercise. It is a trust architecture.

What high performers actually do differently

McKinsey (2025) distills what sets apart the 6 percent of companies with significant EBIT effects from AI into one sentence: AI high performers are 2.8 times more likely to fundamentally redesign workflows rather than simply bolting tools onto existing processes. And this difference shows up in three times higher EBIT contributions from AI.

What unites these companies: they treat AI implementation not as an IT project, but as a business transformation. They invest in change management. Prosci (2024) puts numbers on this: structured change management leads to a 2.9 times higher success rate. Projects with continuous C-suite sponsorship have a 68 percent success rate; without that sponsorship, 11 percent. The gap between value creation and capital destruction is not a question of technology. It is a question of how people are guided through the change.

For you as an HR manager, this means in concrete terms: you do not merely have a supporting role here. You are the person in the company who largely determines the difference between an 11 and a 68 percent success rate. And to make that difference, you need data about the health of your workforce, not gut feeling.

Five areas of action that are now making a difference

From the research, five areas of action can be derived that are concrete and immediately implementable, without months of preparation time.

The first and most important: measure before you act. Anyone who doesn't know where Fear of AI and FOBO sit in their own company, by department, management level and demographics, is acting blindly. The right action for the wrong problem costs as much as no action at all.

The second: give change management a real budget. 61 percent of failed AI initiatives spent less than 15 percent of the project budget on change management (Prosci 2024). The recommendation is 20 to 30 percent. What sounds like a lot is cheap compared to burning 70 to 80 percent of the total investment; the sketch below shows a quick plausibility check.
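
A minimal sketch of that check, using the thresholds cited above; the function name and example figures are illustrative assumptions:

```python
# Check a project plan against the change management budget thresholds
# cited in the text: under 15 percent is the pattern Prosci found in
# failed initiatives, 20 to 30 percent is the recommendation.

def change_budget_check(project_budget_eur: float, cm_budget_eur: float) -> str:
    share = cm_budget_eur / project_budget_eur
    if share < 0.15:
        return f"{share:.0%}: below 15%, the pattern seen in failed initiatives"
    if share < 0.20:
        return f"{share:.0%}: borderline, the recommendation is 20-30%"
    return f"{share:.0%}: within the 20-30% recommendation"

print(change_budget_check(500_000, 50_000))
# Output: 10%: below 15%, the pattern seen in failed initiatives
```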

The third: understand psychological safety as a controllable variable, not as a state that is simply there or not. Pulse checks based on scientifically validated dimensions make it possible to visualize the baseline, the progress and the impact of measures. That is what you need in order to talk to management and works council about key figures instead of sentiments.

The fourth: involve the works council at an early stage. Not because the law requires it, even though it does, but because a works council that is involved from the outset becomes an anchor of trust for the entire workforce.

The fifth: keep an eye on the burnout risk among power users. 88 percent of the most productive AI users report burnout (Upwork 2025). Anyone who builds AI champions must systematically track their psychological strain; otherwise you lose first the very people on whom the transformation depends. That is not a wellness issue. It is a strategic attrition risk.

Fear of AI and FOBO are controllable, if you know where they sit

The research is clear: AI transformation fails because of people, not because of technology. Psychological safety is the key lever. And this lever needs an instrument: a system that makes psychological stress and anxiety visible, makes them controllable and derives concrete measures.

The mentalport Fear of AI Assessment is based on the scientifically validated FAIW-10 instrument (Giermindl et al. 2024, Journal of Business Research) and measures all four dimensions — anonymously, in less than ten minutes per person. The result: individual development recommendations for employees and aggregated heat maps for HR and management teams, which show where the AI fear zones lie in the company. By department, management level and demographics.
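
How the product computes these heat maps internally is not detailed here; purely as a hypothetical sketch of the aggregation idea, anonymized scores can be rolled up per department with a minimum group size so that no individual remains identifiable. The column names, scale and threshold below are assumptions:

```python
import pandas as pd

# Hypothetical sketch: aggregate anonymized assessment scores into a
# department-level heat map. Column names, the 1-5 scale and the
# minimum group size are assumptions, not the actual implementation.

MIN_GROUP = 5  # suppress groups too small to preserve anonymity

responses = pd.DataFrame({
    "department":   ["Sales"] * 5 + ["IT"] * 2,
    "job_security": [4.2, 3.8, 4.5, 3.9, 4.1, 2.1, 2.4],  # 1 = low fear, 5 = high
    "distrust":     [3.1, 2.9, 3.4, 3.0, 3.2, 1.8, 2.0],
})

grouped = responses.groupby("department")
heatmap = grouped.mean(numeric_only=True)
heatmap.loc[grouped.size() < MIN_GROUP, :] = float("nan")  # mask small groups

print(heatmap)
# Sales shows elevated job-security fear (mean 4.1); IT is masked (n < 5).
```

The masking step is the design point: aggregated results stay anonymous only if small groups are suppressed rather than reported.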

The decisive difference from a classic employee survey: the assessment is the start of a management cycle. Measures are derived automatically from the results, for managers, for teams and for individual employees. And it becomes visible whether these measures are working. As an HR manager, this gives you what you need for budget discussions, works council meetings and transformation reviews: data that tells a story.

Anyone who starts now still has time to lay a systematic foundation before the EU AI Act enforcement date in August 2026, and to give the company's AI investments a realistic chance of a return.

The mentalport Fear of AI Assessment is free, anonymous and completed in less than ten minutes per person.

You will immediately receive an evaluation with specific measures and a monetary impact assessment — individually and at organizational level.

Now to the Fear of AI Research Report 2026

Or book an appointment directly: mentalport.health/book a consultation

Fear of AI and FOBO FAQs

What is Fear of AI at Work and how does it differ from general AI skepticism?

Fear of AI at Work is a scientifically measurable psychological construct that goes beyond general technology skepticism. It describes specific fears that employees develop towards AI systems in a professional context, divided into four dimensions: fear for identity and job security, distrust regarding control and data protection, the feeling of incompetence and overload, and active rejection and avoidance. The construct was validated by Giermindl et al. (2024, Journal of Business Research) and operationalized in the FAIW-10 instrument. The decisive difference from general AI skepticism: Fear of AI at Work is role-specific, can be measured down to the department level and requires differentiated interventions depending on the dominant dimension.

What does FOBO mean and why is it relevant for companies?

FOBO stands for “Fear of Becoming Obsolete”: the fear that one's own expertise, professional identity and career will have no place in the long term as a result of AI. In contrast to Fear of AI, which relates to specific tools and processes, FOBO is a more existential, deep-seated fear. FOBO is particularly relevant for companies because it is hard to spot: employees rarely address it directly; instead, it shows up as quiet demotivation, withdrawal from change processes or an increased willingness to leave. BCG (2025) shows that employees in organizations with advanced AI redesign are significantly more likely to be concerned about their job security than those in less transformed companies, a clear indication that FOBO grows rather than declines as AI matures.

Why do so many AI projects fail despite good technology?

Because in most cases technology isn't the problem. RAND Corporation (2024) found that more than 80 percent of all AI projects fail, twice as many as traditional IT projects. Prosci (2024, n=1,107) measured specifically where the failure comes from: 63 percent of all implementation challenges are human factors, only 16 percent are technical factors. The most common causes include a lack of change management, a lack of employee involvement, inadequate training, and loss of C-suite sponsorship after launch. EY (2025) quantifies the loss: up to 40 percent of potential productivity gains are lost due to deficiencies in supporting people, not due to poor models.

What does psychological safety have to do with AI adoption?

More than most AI strategies plan for. Reich et al. (arXiv 2026, n=2,257) empirically prove that psychological safety is a reliable predictor of whether employees will adopt AI tools at all — consistently across experience levels, roles, and regions. Teams in which employees can admit mistakes, ask questions and try out new things without fear of negative consequences experiment faster with AI, learn more efficiently and use tools more permanently. Conversely, Kim, Kim & Lee (2025, Nature Humanities and Social Sciences Communications) show the negative effect: Introduction of AI without support can actively reduce psychological safety and thus increase depression among employees. 83 percent of the managers surveyed see psychological safety as a measurable success factor for AI initiatives (MIT Technology Review 2025).

What does the EU AI Act require regarding the AI competence of employees?

Article 4 of the EU AI Act, in force since February 2, 2025 and enforced by national supervisory authorities from August 2, 2026, requires all providers and deployers of AI systems to ensure that their personnel have a sufficient level of AI competence. In concrete terms, this means companies must document, in a role-specific manner, which AI competencies exist, how they are developed and how compliance is demonstrated. A one-time webinar isn't enough; you need a continuous, documented system. Violations of key obligations can result in sanctions of up to 15 million euros or 3 percent of the previous year's global turnover. For context: as of mid-2024, only 24 percent of German companies had engaged with the EU AI Act at all (Bitkom).

How can HR measure fear of AI in their own company?

With a structured, scientifically validated assessment that differentiates the four dimensions of Fear of AI, not with a general employee survey. The FAIW-10 instrument (Giermindl et al. 2024) measures fears about identity and job security, distrust regarding data protection and control, experienced incompetence, and rejection and avoidance on a standardized scale. At the organizational level, the aggregated results provide heat maps by department, management level and demographics, and thus the basis for targeted measures instead of indiscriminate one-size-fits-all programs. The decisive advantage over qualitative formats such as town halls or feedback rounds: the assessment is anonymous, avoids social desirability effects and provides data that can be reported to management and works council. The mentalport Fear of AI Assessment is free of charge and completed in less than ten minutes per person.

Any further questions? Book a conversation directly: mentalport.health/book a consultation

Data basis: BCG (2024/2025), Bitkom (2024), EY (2025), KPMG/University of Melbourne (2025), McKinsey (2024/2025), Microsoft/LinkedIn Work Trend Index (2024), MIT Technology Review (2025), Otris/Skopos (2024), Prosci AI Adoption Research (2024), RAND Corporation (2024), Reich et al. arXiv (2026), Slack Workforce Index (2025), Upwork Research Institute (2024/2025), Writer/Workplace Intelligence (2025), YouGov/DPA (2024). Complete list of sources in mentalport Research Report 2026: Fear of AI at Work.

About the author

Tim Kleber

Tim Kleber is CEO and co-founder of mentalport. As a mechanical engineer, business psychologist and data scientist, he combines technical precision with psychological expertise. His specialization: psychological risk assessment (GBU Psyche) in accordance with §5 ArbSchG and ISO 45003-compliant implementation in companies. Drawing on his own experience as an occupational safety auditor, he and the mentalport team developed an anonymous infrastructure for mental wellbeing management, used today by over 50 companies to reduce mental-health-related absences and actively manage wellbeing.

Follow Tim on LinkedIn so you don't miss expert insights into mental health at work.
