Google had a laugh on April Fool’s Day, releasing a joke ad for a “human autocompleter” in place of the computer algorithms presently used. It’s good to see a corporate entity willing to take the mickey out of itself, as only days earlier, the Milan Court of Appeal handed down a decision that held Google Italy liable for defamation because of the way its autocomplete feature linked a businessman’s name with the words for “fraud” and “conman”.
Google’s April Fool’s Day ad for a “human autocompleter” suggested successful applicants would have:
- a typing speed of at least 32,000 words per minute;
- a willingness to sit and watch user searches come in, predict what they would be searching for, and type it in as quickly as they could; and
- a certificate in psychic reading (not a prerequisite, but strongly preferred).
However, it was the very “human” element of the autocomplete algorithms that led Judge Roberto Bichi and a panel of two other judges to rule that the autocomplete suggestions were produced by Google Italy itself.
When users typed in the name of an Italian businessman, the autocomplete feature offered to complete the search with “truffatore” (con man/crook) or “truffa” (fraud). This caused trouble for the entrepreneur, a financial services educator, as current or potential clients might search for his name in connection with trading. The court ruled that the association of the plaintiff’s name with the word “fraud” was likely to lead users “to doubt the moral integrity of the individual” and “to suspect him of illicit conduct.”
Google argued that it should not be held liable for terms that appear in autocomplete, as the terms are predicted by computer algorithms based on searches made by previous users. Google also argued that, as a hosting provider, it should be protected from liability by the safe harbour provisions of the European Union’s e-Commerce Directive. The court rejected these arguments, holding that the content was produced by Google, albeit through automated means, and that the safe harbour provisions did not apply.
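To see why both sides could characterise the feature so differently, it helps to picture how a frequency-based suggester works. The toy sketch below (not Google’s actual system; real autocomplete also weighs freshness, location and many other signals, and applies filters) simply indexes past user queries by prefix and returns the most frequent matches — so a defamatory completion can surface purely because many earlier users typed it:

```python
from collections import Counter, defaultdict

def build_suggester(past_queries):
    """Index past queries by every prefix, counting how often each
    full query was typed. A deliberately minimal illustration."""
    index = defaultdict(Counter)
    for query in past_queries:
        q = query.lower()
        for i in range(1, len(q) + 1):
            index[q[:i]][q] += 1

    def suggest(prefix, n=3):
        # Return the n most frequent past queries starting with the prefix.
        return [q for q, _ in index[prefix.lower()].most_common(n)]

    return suggest

# Hypothetical search log: suggestions mirror what earlier users
# searched for, with no human review of individual completions.
suggest = build_suggester([
    "john smith fraud",
    "john smith fraud",
    "john smith trading course",
])
print(suggest("john smith"))  # most frequent completions first
```

On this model, the “defamatory statement” is generated mechanically from aggregate user behaviour — which is precisely the point the Milan court found did not excuse the operator once it was on notice.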
It is notable that, in his blog, Carlo Piana, lawyer for the plaintiff, said that the judgment “is by no means an endorsement to censorship”, as Google was given notice, the complaints were discussed and the request was for removal of only two limited search suggestions.
While Google Italy “review their options”,* it is interesting for us to review how this situation may be treated under Australian defamation law.
Having notice and knowledge of a defamatory statement, and having the opportunity to remove it but failing to do so, amounts to publication under the law of defamation. However, in 2009, the High Court of England and Wales (Queen’s Bench Division) held that there was no human input involved in the generation of a list of Google search results, and that Google was merely facilitating the publication of others’ content.** Although Google had been notified of the existence of the offending content, Eady J held that it did not have the power to control what items came up in a search and so was not liable.
In terms of autocomplete, it is likely that Google would be seen as having:
- notice and knowledge of the defamatory statement (at least once it was informed of the situation); and
- the ability to remove the offending content (as it has now done following the court case, and as it was able to do at the time in the same way it filtered out racist or sexist search suggestions and terms known to be used to distribute copyright-infringing material).
Google’s failure to remove the content in these circumstances would likely mean it would be viewed as the “publisher” of the defamatory material.
But what of the defences which may be available to Google Italy?
The defence of “innocent dissemination” would likely be lost as soon as Google was informed of the possible defamatory nature of the content. Similarly, the protection offered to Internet Content Hosts under clause 91, Schedule 5 of the Broadcasting Services Act 1992 (Cth) would be limited to content hosted in Australia and would cease to be of assistance once Google was alerted to the situation. This clause is similar to the EU’s safe harbour provisions, but is more narrowly drafted, and does not provide such broad protection against legal liability.
The defence of “honest opinion” would also be unlikely to assist unless Google could convince a court that suggested search terms were a genuine expression of its opinion rather than a statement of fact. This seems doubtful given that Google’s own website states that autocomplete suggests search terms “based on a number of purely objective factors”.
The difficulty with the defence of truth is that a publisher is required to prove the underlying allegation in order to avail itself of the protection. It is questionable whether the underlying allegation is that the businessman was a “fraud” or a “con man”, or merely that other users had searched for those terms in conjunction with his name.
Finally, fair comment. To avail itself of the defence of fair comment, Google would have to prove that the defamatory content was (a) fair; (b) on a matter of public interest; (c) based on fact; and (d) recognisable as a comment. Google would be likely to struggle to make out the elements of the defence, especially as the autocomplete suggestions are so short. Without any further facts identified in the autocomplete suggestion itself, the comment may only be capable of being understood as a statement of fact.
It will be interesting to see how Australian courts approach these defences to defamation in the context of the internet, especially with the rise of social media. In the meantime, yours truly will be busy increasing my typing speed and getting my psychic reading skills certified …
* Thanks to Danny Sullivan of searchengineland who contacted Google for comments on the ruling.
** Metropolitan International Schools Ltd v Designtechnica Corporation [2009] EWHC 1765 (QB).