Law firm Hill Dickinson has restricted employee access to artificial intelligence tools following a “significant increase” in usage.
The global firm, which specialises in commercial law and employs more than a thousand people worldwide, warned its workforce about the use of AI tools in an email sent by a senior director.
It highlighted that much of the staff usage was not in accordance with the firm’s AI policy and said that, in future, access to such tools would only be granted through a request process.
Hill Dickinson’s chief technology officer, who sent the email, reported that in the space of just seven days in January and February, there had been more than 32,000 hits on the ChatGPT chatbot and 3,000-plus hits on the Chinese AI service DeepSeek.
In the same timeframe, writing assistant tool Grammarly had nearly 50,000 hits.
However, the figures do not indicate how many individual employees visited ChatGPT, DeepSeek or Grammarly, as multiple hits could have been registered each time a user accessed the websites.
The email to employees, seen by the BBC, said: “We have been monitoring usage of AI tools, particularly publicly available generative AI solutions, and have noticed a significant increase in usage of, and uploading of files to, such tools.”
A spokesperson for Hill Dickinson said: “Like many law firms, we are aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients. AI can have many benefits for how we work, but we are mindful of the risks it carries and must ensure there is human oversight throughout.
“Last week, we sent an update to our colleagues regarding our AI policy, which was launched in September 2024. This policy does not discourage the use of AI, but simply ensures that our colleagues use such tools safely and responsibly – including having an approved case for using AI platforms, prohibiting the uploading of client information and validating the accuracy of responses provided by large language models.
“We are confident that, in line with this policy and the additional training and tools we are providing around AI, its usage will remain safe, secure and effective.”
The company highlighted that it has not banned the use of AI platforms, but has simply restricted access pending approval of a colleague’s request. Since the note was circulated, it has received and granted usage requests.
Commenting on the use of AI tools in law firms in general, Ian Jeffery, CEO of the Law Society of England and Wales, said: “AI is here to stay as more than half of solicitors in law firms use some form of AI. AI tools could improve the way we deliver legal services but they need human oversight and government regulation as detailed in our AI strategy.
“The Law Society also provides general guidance, including a checklist of the key things to consider when it comes to using generative AI, including how to protect client information. We hope to be able to support the legal profession and the public to navigate this brave new digital world and make justice fair and equal for all.”