Unravelled: Considering Robo-advice
17 April 2019
Other articles in this edition of Unravelled:
- Trustees, start your reviewing! Super funds and the 'Member Outcomes Act': what needs doing, and when
- Design and distribution obligations – products caught (or excluded)
- AFCA: The first six months – and the year ahead
Written by Partner Simun Soljo and Senior Associate George Blades
'Robo-advice' is emerging as an important way of providing financial product advice, particularly as the advice industry continues its post-Royal Commission transformation and younger generations more comfortable with technology seek wealth solutions they can trust. One question, however, looms large over the advice industry and financial product distributors alike: 'what legal risks will inform my robo-advice investment?'
One fundamental area of uncertainty remains the definition of 'personal advice' in the Corporations Act 2001 (Corporations Act). In recent years the issue has attracted the attention of ASIC's investigators, resulting in various enforcement outcomes including two enforceable undertakings and, now, a Federal Court judgment.1 But while the judgment has provided some guidance on the meaning of one of the more important terms in the definition – the word 'considering' – does this guidance help with the more fundamental issue that the law was not drafted for robots?
This article revisits the theory of robo-advice in light of this new judicial guidance.
The Corporations Act draws a line between the two types of financial product advice, calling one 'personal advice' and the other 'general advice'. It is 'personal' if it is given to a client (including by 'electronic means') in circumstances where the provider of the advice has 'considered' the client's personal information.2 All the rest is 'general'.3 The distinction matters because a 'personal' adviser owes more onerous obligations, including an obligation to act in the client's best interests.4
Accordingly, whether the client's personal information has been 'considered' is a very important threshold question.
Unsurprisingly, ASIC appreciates the importance of this dividing line and the consequences that flow when the line is unclear.5 Its guidance on the word 'considered', however, clarifies little. RG 244 says ASIC will not take action where personal advice is provided merely because the provider of the advice 'uses' personal information about a client's relevant circumstances 'to choose general advice that is relevant and useful' to the client, but only if the provider of advice does not 'consider' the client's personal information when they 'prepare the advice'.6 If that is not hard enough to follow, ASIC's enforcement action (resulting in the two enforceable undertakings and Federal Court decision) throws further uncertainty into the mix – it was taken in relation to pre-prepared statements about single financial products.
The Federal Court has now provided some clarification about the meaning of the word 'considered'. The judge said it refers to 'an active process of evaluating or reflecting upon', and 'an intellectual engagement' with, the subject matter of the consideration.7 '[A]ctive listening' by the adviser, without more, is not enough.8
So, is intellectual engagement of this quality possible for a robo-adviser? This question prompts another: 'who is the robo-adviser?'
Who is the robo-adviser?
It's a good question. It might be tempting to think of the robo-adviser as the person or team who assembled the 'decision tree' or rules of the robo-advice tool which govern the advice. They, after all, probably did most of the thinking to identify what personal information should result in what advice. But the personal advice definition in the Corporations Act speaks of the 'provider of the advice'9, and in the majority of cases the provider of the advice will not be the same as the person or team who assembled the robo-advice tool.
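To illustrate the kind of tool the article has in mind, here is a minimal, purely hypothetical sketch of a rules-based robo-advice 'decision tree'. The rules, inputs and advice statements below are invented for illustration; the point is that the design team does its thinking up front, and at runtime the tool simply maps a client's inputs to a pre-prepared statement.

```python
# Hypothetical sketch of a rules-based robo-advice tool. The 'thinking' is
# embedded in the rules at design time; at runtime the tool only maps a
# client's limited personal information to a pre-prepared statement.

def robo_advice(age: int, risk_tolerance: str) -> str:
    """Map limited personal information to a pre-prepared advice statement."""
    if risk_tolerance == "low":
        return "Consider a conservative, capital-stable option."
    if age < 40 and risk_tolerance == "high":
        return "Consider a growth option with a higher equity allocation."
    return "Consider a balanced option."

# The 'considering' question in the article: the rules above were evaluated
# and reflected upon by their human authors at design time - but did anyone
# 'consider' this particular client's information when the advice was given?
print(robo_advice(age=30, risk_tolerance="high"))
```

The sketch makes the article's point concrete: the intellectual engagement happens when the tree is built, not when any individual client's information passes through it.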
So who is the provider of the advice? In the human-advice context, it is the human who provides the advice who is very likely the provider, rather than their employer or any licensee who has authorised them to provide the advice.10 In the robo-advice context, however, it is less clear. The law has clearly been drafted with a human in mind. We think the best view is that it is the person who 'offers' the advice through the robo-advice tool who provides the advice.11 So, in a web-based service, the 'robo-adviser' is likely to be the person who owns and runs the website, or who puts themselves forward (such as through branding or explicit statements in disclaimers) as providing the advice contained on a website. Robots do not yet have legal personality.
Can a robo-adviser 'consider'?
Turning, then, to whether that person in fact 'considers' the client's personal circumstances in the context of the robo-advice: before the recent Federal Court decision, the industry was likely to have taken an anthropomorphic view of the robo-advice tool, essentially treating its computational effect as the robo-adviser's 'considering'. This seems to be ASIC's approach, too: in its robo-advice guidance, RG 255, ASIC's 'Digital X' fintech company example is said to have considered the client's personal information '[t]hrough its algorithm'.12
In the more common scenario where the provider of the robo-advice is a company, it's possible a familiarity with the fiction of 'corporate knowledge' and other corporate states of mind might result in a casual acceptance of this kind of view of the robo-advice tool as the company's 'considering'. But what about the less common scenario, where the provider of the robo-advice is a human? Do we so readily impute the computational effect of the robo-advice tool to the human mind? We think the answer, perhaps surprisingly, is much less clear.
And following the recent Federal Court decision, does anything change in considering 'considering'?
This is another interesting question. As we have said, even before the decision, there was a question whether an advice provider (let's say a human, to make matters more interesting) who was providing robo-advice could be said to have 'considered' a client's personal information. The court's stated requirement to 'evaluate', 'reflect upon' and 'intellectually engage' with the personal information may indeed assist an argument that, on the current state of the law, 'considering' is a distinctly human concept which cannot be done by a robot without agency or, in any event, requires the robo-adviser – him, her or itself – to have engaged in that process personally, rather than through robotic imputation.
As we say, though, the more common scenario is where the robo-adviser is a company. In most cases, if a court were asked to consider whether a corporate advice provider considered the client's personal circumstances through robo-advice computation, we think it would likely adopt that anthropomorphic view and say yes. The Federal Court decision isn't likely to change this. There is probably nothing more human about the word 'intellectual' than there is about 'considering'.
For the time being, the industry – twice shy from the bite of compliance failures – waits for clarity on the legal risks informing investment decisions. And ASIC's robo-advice guidance doesn't explain how the robo-adviser 'considers'.
- Australian Securities & Investments Commission v Westpac Securities Administration Limited [2018] FCA 2078 (OSAD Case).
- Corporations Act 2001 (Cth), s766B(3). This article has only referred to the 'subjective test' aspect of the definition. But one should not forget the 'objective test' aspect of it.
- Corporations Act 2001 (Cth), s766B(4).
- Corporations Act 2001 (Cth), s961B(1).
- According to ASIC Deputy Chair Daniel Crennan QC, '[t]he dividing line between personal and general advice is one of the most important provisions within the Financial Services Laws. It directly impacts the standard of advice received by consumers', ASIC Media Release, 19 February 2019.
- ASIC RG 244.48–49; see also ASIC RG 244.34.
- OSAD Case [2018] FCA 2078 (Gleeson J).
- OSAD Case [2018] FCA 2078.
- Corporations Act 2001 (Cth), s766B(3)(a).
- This view is consistent with the Corporations Act's view on who, in a 'human' personal advice context, owes the best interests obligations to the client: Corporations Act 2001 (Cth), s961(2). Another indicator that this is the better view is the Corporations Act's contrasting use of the term 'providing entity'. ASIC also takes this view: see, eg, ASIC RG 244.51.
- This view is consistent with the Corporations Act's view on who, in a personal robo-advice context, owes the best interests obligations to the client: Corporations Act 2001 (Cth), s961(6).
- ASIC RG 255.28. In any event, in describing the advice as being provided 'without the direct involvement of a human adviser', ASIC certainly does not attempt the retrospective mental gymnastics that would be required to have a human 'consider' the client's personal circumstances.
Simun Soljo, Partner
Ph: +61 2 9230 4635