This is Allens

Bill Tanner

Bill is Allens' Chief Information Officer and a key member of the firm's Generative AI Steering Group.


ChatGPT burst onto the scene in November 2022. We'd been following developments closely, but in February 2023 we decided Allens needed to have a broader conversation about what generative AI could mean for our clients, our firm and our people.

We were alive to the fact that AI was high in the hype cycle at that time. We were eager to understand the risks associated with the technology and to get a handle on how much of the hype would translate into real-world change. Throughout our exploration, it's been important to align our own use and experimentation with the advice we're giving clients. So, in August we started 'eating our own dog food' (to use a famed IT development phrase) with the release of Airlie – our own internal equivalent of ChatGPT – built in line with our AI governance framework and launched alongside an internal idea-generation campaign to explore potential use cases.

Airlie was met with strong internal take-up and intense interest from clients. Since its launch, we've met with around 30 clients who are keen to learn more about how we've approached generative AI. Experimenting early – with real client and firm information, in an environment with the right controls – has been an important stepping stone. From there, we can explore everyday AI, and venture into more custom-built AI for specific problems, which is the next horizon of AI integration at Allens.

Looking ahead, trust is the fundamental challenge with AI. At its core, generative AI is a statistical model: it's built to predict what comes next, not to arrive at the correct answer, and it has no inherent sense of what is right. So we're now seeing a shift towards constraining responses to a trusted data set, using a technique called 'retrieval augmented generation' (RAG). That is, when you run your query, the system first retrieves relevant material from a trusted repository, and the model then generates its answer from that material alone. I think that is where we're going to see real value, and a higher trust factor, in the responses we get from AI.
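To make that pattern concrete, here is a minimal sketch of the retrieve-then-generate flow in Python. It's an illustration under stated assumptions only: keyword-overlap scoring stands in for the embedding search a production system would use, and the repository contents and function names (`retrieve`, `build_grounded_prompt`) are hypothetical, not part of any real system at Allens.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: a toy in-memory store and keyword-overlap scoring stand in
# for the embedding model and vector database a production system would
# use. All names and documents here are illustrative only.
import re

TRUSTED_REPOSITORY = [
    "Airlie is the firm's internal generative AI assistant.",
    "All AI experimentation must align with the AI governance framework.",
    "Client engagement letters must be approved by a partner.",
]

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query; return the top k."""
    return sorted(
        TRUSTED_REPOSITORY,
        key=lambda doc: len(tokens(query) & tokens(doc)),
        reverse=True,
    )[:k]

def build_grounded_prompt(query: str) -> str:
    """Constrain the model to answer only from the retrieved passages."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say you don't know.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What is Airlie?"))
```

The design point is the ordering: the trusted repository is consulted first, and the model is then asked to answer only from what was retrieved, which makes its response auditable against that data set.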

Regulating for trust is also important. How do you ensure that organisations aren't leveraging AI in a way that can break trust or confidentiality, and how do you reduce the chances of it being used in a malicious way? It will be critical to consider how we create a regulatory environment that can minimise potential harm.

As with any new technology, generative AI creates a need for new skills and ways of thinking in order to fully harness its potential. For example, prompt engineering – the ability to effectively interact with the system by providing detailed instructions and sufficient context – is a critical skill that we are prioritising in training our people.
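As a hypothetical illustration of what that kind of training covers, the snippet below contrasts an under-specified prompt with one that supplies a role, context, a task, constraints and an output format. The scenario and wording are invented for the example; they aren't firm guidance or an actual Allens prompt.

```python
# Illustrative contrast between a vague prompt and an engineered one.
# Scenario, wording and field labels are hypothetical examples only.

vague_prompt = "Summarise this contract."

engineered_prompt = """\
Role: you are a commercial lawyer reviewing a supply agreement.
Context: the client is the purchaser; the governing law is New South Wales.
Task: summarise the key obligations and termination rights.
Constraints:
- Cite the clause number for every point you make.
- If a clause is ambiguous, flag it rather than guessing.
Output format: a bulleted list, grouped by party.
"""

print(engineered_prompt)
```

The extra structure gives the model the context it would otherwise have to guess at, which is typically what separates a usable answer from a generic one.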

I'm excited by the investments we're making in equipping our firm and our people for the future. Generative AI is the next wave of disruptive digital change, but it's coming at us at a highly accelerated pace, driven by the unprecedented, low-barrier access that people and organisations have to the technology. We need to be building organisations and developing professionals that are resilient and highly adaptive to change. The days of doing the same thing for your entire career are gone, and employers have a duty to give their people the skills and knowledge they'll need to thrive and adapt.

Three moments shaping the future of AI

 

1. Enterprise adoption of AI

We expect AI to enable us to deliver more value, enhance customer experience, improve efficiency and drive innovation. But it also requires us to rethink our business models, processes, roles and capabilities. How do we leverage AI to augment human intelligence and creativity, rather than replace it? How do we ensure we have the right skills and mindset to work alongside AI, and not against it? How do we foster a culture of experimentation and learning, where we can quickly test and scale AI solutions while managing the risks and uncertainties? These are some of the questions we need to explore and answer as we embark on our AI journey. We have a unique opportunity to shape the future of AI in a way that aligns with our purpose, values and vision, and a responsibility to use AI ethically, responsibly and inclusively – and to help our clients do the same.

2. Increased AI capacity in the market

One of the key enablers of AI adoption is the availability and accessibility of computing resources and infrastructure. 

However, increased AI capacity in the market also creates challenges and tensions, especially in relation to environmental, social and governance (ESG) considerations. AI is a powerful technology that can help us achieve positive outcomes for society and the environment, such as improving health and education, and enhancing social justice and inclusion. But AI can also have negative impacts – exacerbating inequalities, creating ethical dilemmas, and increasing energy consumption and e-waste through the powerful processing infrastructure it requires. We need to balance the benefits and risks of AI, and ensure we use it in a sustainable and responsible manner. We also need to collaborate with our clients, third-party vendors and our people to develop and implement ESG standards and best practices for AI.

3. The rise of AI safety and ethical AI as a priority for organisations and society

AI safety and ethical AI are not just technical issues but also social, legal and philosophical ones. They require us to address questions such as: How can we ensure that AI systems are reliable, secure, transparent, explainable, fair and accountable? How can we prevent or mitigate the potential harms and risks of AI, such as bias, discrimination, manipulation, exploitation, deception, coercion or violence? How can we foster a culture of trust, responsibility, and collaboration among all stakeholders involved in the AI ecosystem?

These questions are not easy to answer, and they pose significant challenges for organisations and society. However, answering them is essential for ensuring that AI is a force for good.