The rapid pace of AI evolution has left policymakers around the world scrambling to keep up, concerned about its potential impact on employment, data privacy, security, intellectual property and a range of other issues – up to and including whether AI could pose an existential threat to the future of the human race. This in turn has prompted a rush among vendors keen to exploit the technology to try to steer regulation in a direction that won’t create unnecessary barriers to adoption or stifle useful innovation.
So it’s becoming common to see executives from tech giants like Facebook, Google and Microsoft, as well as the likes of X and OpenAI, testifying before Congress, lobbying the EU, or chatting with political leaders at events like the recent AI Safety Summit hosted by the UK at Bletchley Park. But what about enterprise technology providers that may not be as well known, but for whom AI is increasingly central to their own offerings?
At last week’s Workday Rising EMEA conference, I sat down with Chandler Morse, vice president of corporate affairs at the HCM and financial software giant, to learn more about the company’s public policy work on this very topical question. It’s clear from our conversation that, while Workday’s involvement may not make headlines, the company is taking a proactive role in building consensus on what AI regulation should look like. Morse told me:
In many ways we believe — and this is our message to regulators — that there is a path to follow, we know what the path is, we just need to put it on paper.
This is actually where Workday really shines. We’re the team that shows up with the legislative language, the amendment language, the drafted bills, to say, “Here’s a path forward.” We’re pretty proud of the impact we’re having right now.
And what is Workday’s goal? He answers:
Workday is all about AI unlocking human potential. One of the things we know is that there is a lack of trust in technology. People don’t tend to want to use technology they don’t trust. So from our perspective, it’s the issues of transparency, explainability, bias and discrimination that would most influence what people think about the use cases we would try to elevate…
Our motivation is very clear. We want our customers to use this technology, and adoption of AI in the HR context increases when people feel more comfortable using it.
Engaging with policymakers
Having been developing AI technologies for the past decade, Workday began actively engaging in HR AI policy discussions in 2019. It therefore brings a lot of experience to discussions that have become much more intense since the breakout success of ChatGPT. Lawmakers aren’t necessarily aware of how all the technologies fit together, or even what exactly Workday does. But at least, unlike some longer-standing public policy issues, this isn’t something politicians have already made up their minds about. He comments:
It is refreshing to see how policy makers are aware of how much they need to learn.
The main task is to move the discussions towards concrete measures, and policymakers welcome the guidance that Workday can offer, he believes:
It feels like no one knows what to do. ‘What are we going to do? How are we going to do this?’ Our response is that this isn’t a particularly novel technology to regulate. It’s just another example of a technology we need to apply a regulatory regime to. It’s possible to do that in a way that isn’t cumbersome, but gives good results.
He highlights experience with data privacy regulation, where adopting a risk-based impact assessment approach has helped to clarify the way forward. He adds:
We just don’t think it’s that hard to do. And frankly, that’s what everyone should be doing anyway.
Workday’s engagement with the EU has therefore encouraged the risk-based approach, which it says has now become “table stakes” as the EU’s AI Act is adopted. The company then turned to NIST, the US standards body, becoming an early supporter of its work to create an AI Risk Management Framework, as a first step toward establishing common ground on AI regulation in the United States.
Codifying existing practice
Often the easiest way to make progress is to codify practices already adopted in the field. For example, Workday recently partnered with ADP, Indeed, and LinkedIn to produce a paper on HR AI practices as part of the Future of Privacy Forum. He says:
Our hypothesis was… that there was some consensus around HR AI best practices… The goal was to adapt to existing practices, and we found them to be incredibly easy to put in place, and we produced the document.
Globally, there are many different initiatives underway. Workday participates in regulatory processes at the government level in Canada, the United States, Singapore, Australia, the United Kingdom and the EU, and with other bodies such as the G7 and the OECD. In the United States, if Congress delays action then, as we have seen in the area of privacy protection, individual states will begin to act as they see fit, and regulatory proliferation becomes an issue. Morse comments:
We place great importance on international harmonization as the key to the success of an AI regulatory regime. I think we have a lot of work to do.
Obviously, I think the European AI law will be the first domino to fall. Much like privacy, I think it will have an outsized impact on a global scale. In the environment we’re in now, there’s no framework to go back to and say, “Well, are we going to do this or something else?” Right now it’s a sort of free-for-all, where everyone is trying to figure it out. Once the EU AI Act is adopted, it will be the starting point for conversations around the question: “Do we want to differ from this? Do we want to be like that?” …
We’re not trying to say that everyone should adopt the same thing. We’re saying, first, that we must prioritize regulations that build trust but also support innovation. Workability and trust are therefore key elements. But so is interoperability. Is there at least a way to make the frameworks work together?
Social impact of AI
At the same time, the impact of AI on the workforce is also attracting the attention of policymakers, and Workday recently appeared before the Senate Health Committee on this topic. Here, it is important to highlight the technology’s positive impact as well as its downside risks. He says:
We are still in the early stages. But AI is an exciting technology in that it will not only drive change in the future of work, it also provides the tools needed to mitigate these problems – to the extent that you can take a skills-based approach to talent and evolve with AI tools such as Workday’s Skills Cloud, which can provide additional capacity to drive career reskilling, power talent marketplaces and support career planning.
This type of discussion is far more relevant to real-world outcomes than talk of an existential threat posed by AI, which he describes as “a distraction.” He continues:
It distracts us from real-world issues and concerns that are going to affect more people in the short term…
Our goal is to ensure that the benefits of these technologies, when implemented responsibly, can be harnessed. In many ways, our regulatory view is on a shorter horizon in terms of the impact we think AI is going to have. We believe this will have an impact on the workforce, and we believe AI can support tools that can help workers and employers address these impacts. But to get there, we’re going to need to put safeguards in place so that people have confidence and can feel good about how they use it.
That’s the context we’re operating in, with concrete proposals, with clear language. There’s no bill we won’t question, there’s no conversation we won’t participate in, again, around this idea of, how can we do something? In many ways, the existential conversation doesn’t help with that.
My take
It’s interesting to see how much work is being done away from the headlines to advance the regulatory regime around AI and, as Morse puts it, “do something.” For Workday, it’s all about establishing a responsible regulatory foundation that will build public trust in the technology and enable businesses to continue to harness its potential.