A growing number of people now rely on generative AI to automate parts of their work, and it’s more than happy to oblige. As the sophistication of its answers has grown, people feel increasingly confident relying on it for more than simple planning or idea generation. Some now appear to trust it to keep people safe, in the form of generating risk assessments.
It’s clearly possible to create a risk assessment with AI, or people wouldn’t be doing it. But should you? The answer is generally no, for a wide range of reasons: from very real safety concerns, to potential legal liabilities, to the nuances that make each risk assessment unique to the business that creates it.
Creating a risk assessment with AI
It is possible to create a risk assessment with AI, and it will get you close to what you’re looking for. Here’s what happens when we ask ChatGPT to create a risk assessment for our business:
Risk Assessment for Health and Safety Training Provider
Company Name: [Your Company Name]
Location: [Company Address]
Date of Assessment: [Date]
Assessed By: [Name of Assessor]
Review Date: [Review Date]
1. Overview
[Your Company Name] provides health and safety training through in-person sessions at our training centre, online training via Zoom, and consultancy services from our office. This risk assessment identifies potential hazards, assesses risks, and outlines control measures to ensure the safety of employees, trainees, and visitors.
2. Identified Hazards & Control Measures
2.1 In-Person Training at the Training Centre
Hazard: Slips, trips, and falls
- Risk: Trainees and staff may trip over cables, bags, or wet floors.
- Control Measures:
  - Keep walkways clear and ensure adequate signage for wet floors.
  - Conduct regular housekeeping checks.
  - Ensure cables are safely secured and out of walkways.
Hazard: Fire Safety
- Risk: Risk of fire from electrical equipment or flammable materials.
- Control Measures:
  - Maintain fire extinguishers and conduct regular fire drills.
  - Ensure clear evacuation routes and proper signage.
  - PAT testing for electrical equipment.
Hazard: Manual Handling
- Risk: Staff or trainees may suffer injuries from lifting heavy training materials or equipment.
- Control Measures:
  - Provide manual handling training.
  - Use mechanical aids where possible.
  - Ensure lifting tasks are risk-assessed and supported by best practices.
Hazard: First Aid and Medical Emergencies
- Risk: Injury or illness occurring during training.
- Control Measures:
  - Have a first-aid trained staff member present.
  - Ensure first-aid kits are stocked and accessible.
  - Keep emergency contact information available.
2.2 Online Training via Zoom
Hazard: Digital Security and Data Protection
- Risk: Personal data of trainees being compromised.
- Control Measures:
  - Use secure Zoom links with password protection.
  - Train staff on GDPR compliance and data security.
  - Ensure anti-virus and firewall protections are in place.
Hazard: Ergonomic Risks
- Risk: Prolonged screen time leading to eye strain or musculoskeletal issues.
- Control Measures:
  - Encourage regular screen breaks.
  - Provide guidance on good workstation setup.
  - Offer support for accessibility needs.
2.3 Office-Based Consultancy Services
Hazard: Workstation Ergonomics
- Risk: Poor posture and workstation setup leading to musculoskeletal disorders.
- Control Measures:
  - Conduct Display Screen Equipment (DSE) assessments.
  - Provide adjustable chairs and monitor stands.
  - Encourage regular movement and stretching.
Hazard: Lone Working
- Risk: Employees working alone at the office being at risk of accidents.
- Control Measures:
  - Implement a lone working policy.
  - Ensure staff check-in procedures are in place.
  - Provide emergency contact procedures.
Hazard: Stress and Mental Health
- Risk: Work-related stress or mental health concerns.
- Control Measures:
  - Promote an open-door policy for support.
  - Provide mental health first aid training.
  - Encourage regular breaks and workload management.
3. Risk Assessment Review
This risk assessment will be reviewed annually or sooner if significant changes occur in operations, regulations, or after an incident.
Signed: _______________
Date: _______________
This doesn’t look terrible. It makes sense, is cogently written, and addresses many of the points we asked it to. We mentioned in the prompt that we were a safety training company in the UK, and that we provide both in-person and online training, as well as consultancy services and support for our courses. ChatGPT addressed each of these points in detail, with suggestions of the potential hazards involved and how to mitigate them.
However, the first thing to point out is that we had to prompt it for those specific details. If you gave it a more basic prompt—for example “write a risk assessment for a health and safety company”—it would likely not have included the details about online training, or specific references to our in-person training. The less specific you are in what you tell the AI, the more generic its response will be.
Is it just a case of giving the AI enough information to go on, though? Generative AI benefits from more specific prompts, but those prompts also need careful phrasing, with clear points that can’t be misconstrued or misinterpreted. It would be hard to pack enough information into a prompt for the resulting risk assessment to cover every risk factor within your business. And if you can identify all of those risk factors yourself, you’d be better off doing the risk assessment yourself.
The problem with using ChatGPT for risk assessments
Tools like ChatGPT and DeepSeek work through pattern recognition. The models behind these AI tools are trained on millions of examples of writing from all across the internet. By working through all of these examples, they learn to recognise patterns in how people write about different topics. When you ask the AI to write a risk assessment, then, it’s drawing on both other risk assessments and a range of health & safety related content.
This might make it sound pretty smart, but AI doesn’t understand writing or language in any meaningful way. It has no real concept of words or meaning, only statistical patterns. This is why, if you ever ask AI to include a bit of text in a generated image, it struggles. A tool like ChatGPT is essentially a fancier version of autocomplete, stringing words together based on how other people have used them.
By default, this means that everything AI outputs is derivative. The more specific you get with a prompt, the fewer relevant examples there will be for the AI to draw from, and the less likely you are to get a good result. When it comes to risk assessments, this means that you’re likely to get a risk assessment that looks like every other risk assessment that’s been put online. That might be good in the sense that the fundamentals will be there—but it won’t have any real specificity to your business.
An AI model can’t walk around your premises, or even (at this point) scour your website. It can’t look at a hazard and decide on the best way to protect against it; or identify a risk that you aren’t aware of and don’t tell it about. It’s also often going to struggle to give you information that’s specific to the UK, as there’s less information on UK law out there, and most AI companies are based in America.
The benefits of a proper risk assessment
The point of all of this isn’t that an AI can’t do what a human can. What it can’t do, in this case, is account for what it can’t see and what you don’t tell it. Nor can it bear any legal responsibility if things go wrong, or give you anything to fall back on. A good risk assessment should keep people safe, but if the worst does happen, you need to be able to prove that you completed it properly, and did everything you reasonably could to keep people safe.
This is the fundamental service a competent risk assessor provides. Unlike an AI, they can actually visit and see your premises and make observations about safety risks and hazards without being told to make them. They can apply knowledge, rather than just cobble together words from other risk assessments. They can help you to make positive changes to improve safety in your workplace. And they can give you a risk assessment that’s a living, breathing document that reflects the current state of your workplace, not a robotic one made by an algorithm.
AI is great at looking competent. You give it a simple command, and it produces something that looks the part to an untrained eye. It can even help people with the requisite training to speed up certain tasks. AI can be a starting point for a risk assessment if you know how to edit and improve it, and how to take that template and use it to conduct a proper risk assessment. But it isn’t competent; it just looks like it is, and it won’t pass muster if anyone ever investigates it.
–
AI tools aren’t completely without value, and they can make you more productive at certain tasks. But when it comes to safety, there shouldn’t be any compromises, least of all trusting a computer that can’t see your premises. Only a competent risk assessor is capable of writing and conducting a risk assessment that records and addresses the risks and hazards in your workplace, and makes it safer for employees and visitors.