Google’s Web Cache Is An Essential Facility

As part of our interaction with the House Antitrust Subcommittee, we were able to submit several questions to the committee, which the committee later sent to Google to answer. Reproduced below is one of the most important questions we asked, along with Google’s answer to it:

3. Has Google ever been approached by another business about giving access to its cache of web crawl data? If so, what was Google’s response?

Like many other firms that create and maintain indexes for web search, Google offers syndication options that allow other providers to show search results on their own sites that customize or extend Google’s search results. Google’s syndication program, known as Google Programmable Search Engine (formerly known as Google Custom Search), is based on Google’s core search technology and allows providers to create a search engine for individual sites, collections of sites, or the whole web. Providers can select and customize the categories of content they receive from Google’s indexes, allowing them to create custom “topical” search engines that focus on specific sources or kinds of content (https://developers.google.com/custom-search/docs/topical). Google also offers APIs for certain kinds of content, such as Google Maps (https://cloud.google.com/maps-platform) and Google News (https://newsapi.org/s/google-news-api). Providers can customize the look and feel of the search results they receive from Google and can supplement those results with additional information or promotions from other sources. For partners that want even greater control and customization, Google also individually negotiates syndication agreements. For more information, please see the guide for users available at https://developers.google.com/custom-search/docs/overview.

Page A-2 from “GOOGLE’S SUBMISSION IN RESPONSE TO SUBCOMMITTEE QUESTIONS FOR THE RECORD FOLLOWING JULY 29, 2020 HEARING”, question 3 from Chairman Cicilline.

The response from Google here is an attempt at misdirection, hoping that the reader will confuse Google’s syndication offering with access to the vast amounts of data that Google has collected over the years. The question specifically asks about access to the cache of web crawl data and not about access to the things that have been built on top of that data. A locked down derivative product built on top of the data is obviously no replacement for the data itself. It’s an insult to Congress that Google thought they could get away with this and thought that no one would notice that they obviously did not answer the question.

The reason this question is so important, and why we need to know the answer, is something called the essential facilities doctrine. Of all the potential antitrust actions that could be pursued against Google, the essential facilities doctrine stands out the most to us. Here’s Wikipedia on the definition of the doctrine as well as the criteria for its application:

The essential facilities doctrine (sometimes also referred to as the essential facility doctrine) is a legal doctrine which describes a particular type of claim of monopolization made under competition laws. In general, it refers to a type of anti-competitive behavior in which a firm with market power uses a “bottleneck” in a market to deny competitors entry into the market. It is closely related to a claim for refusal to deal.

Under the essential facilities doctrine, a monopolist found to own “a facility essential to other competitors” is required to provide reasonable use of that facility, unless some aspect of it precludes shared access. The basic elements of a legal claim under this doctrine under United States antitrust law, which a plaintiff is required to show to establish liability, are: 1) control of the essential facility by a monopolist, 2) a competitor’s inability to practically or reasonably duplicate the essential facility, 3) the denial of the use of the facility to a competitor, and 4) the feasibility of providing the facility to competitors. The U.S. Supreme Court’s ruling in Verizon v. Trinko, 540 U.S. 398 (2004), in effect added a fifth element: absence of regulatory oversight from an agency (the Federal Communications Commission, in that case) with power to compel access.

At first blush, a claim by a new search engine under this doctrine against Google looks pretty straightforward.

  • The first criterion is met because Google controls a saved cache of web crawls and is a monopolist with regard to search.
  • The second criterion is met because crawling the web is a natural monopoly. New search engines cannot crawl some parts of the web, and therefore cannot build their own complete caches, because of the rational and legal restrictions website operators have put in place, which we have previously discussed.
  • The fourth criterion is met because it’s possible for Google to copy the data stored in the cache or otherwise grant access to competitors. Google runs the Google Cloud Platform, a service that provides companies with information technology services. It would be perfectly feasible to offer access to the crawled cache as part of this offering.
  • The fifth criterion is met because there is currently no agency set up to provide regulatory oversight to internet crawlers.

The reason we submitted that question about whether another business had asked for access to the web crawl data is that if another business had asked and been refused, that refusal would satisfy the third criterion and could form the basis of an antitrust lawsuit under the essential facilities doctrine. We are not currently aware of any business that has asked Google for access to the cache and been denied. However, that does not mean such a request has never been made or that it couldn’t be made in the future. There is also a private right of action for antitrust here in the United States, so private competitors that have been refused access to the web cache could sue Google themselves instead of waiting for the government to take action.

The courts have been increasingly hostile to antitrust lawsuits, so there’s no guarantee that a lawsuit like this would go anywhere. However, Google executives and employees have a bad habit of saying dumb things via email, so if a lawsuit like this ever reached the discovery stage, some fun stuff would likely come out, to say the very least. Plus, among the recommendations Congress proposed was a revival of the essential facilities doctrine that would clear out any excuse the courts have to ignore it, so by the time a case filed in the next few months actually reached trial, Congress might already have cleared the way for it to succeed.

In summary, to fuck with Google, we recommend companies start suing the living shit out of Google under the essential facilities doctrine for access to the web cache. Google is in so many markets now that almost every company is a competitor of theirs, so almost any company could potentially sue them for access. This sort of lawsuit is an avenue available to anyone and has a decent chance to really mess things up for Google, either by forcing them to open up the cache generally or by revealing documents via discovery that have further antitrust ramifications later on. So, if you are a competitor of Google who has ever been refused access to the web cache, or one who expects to be refused access in the future, and you want to make Google hurt, hit us up at [email protected] and let’s see what we can do.
