AE Monthly

Articles - November - 2006 Issue

Google Loses A Skirmish In A Belgian Court

Belgian Google posts the court order: the usually clean Google page cluttered with copy.


By Michael Stillman

An interesting skirmish took place in a Belgian court a few weeks ago that may have implications for the battle between search giant Google and publishers unhappy with Google Book Search. Google Book Search is the search engine's service that is copying texts from millions of old books and making them available through the internet. While there is no issue over books for which copyrights have expired, some of what Google has copied is still within copyright. Google only makes "snippets" of books under copyright available, a sentence or two around the search terms, versus the entire text for out-of-copyright books. It has also agreed to remove books completely upon request by the publisher. However, this has not been sufficient for some publishers, who have focused not on the snippets shown (arguably minimal "fair use" copying, like quotations in a book review) but on the fact that Google has copied the entire protected text. Additionally, they feel that Google should be asking permission, rather than making it incumbent on the publisher to request removal.

The Belgian case dealt with a complaint by Copiepresse, a firm that represents the owners of copyrighted press articles from French- and German-language newspapers published in Belgium. Their complaint is targeted particularly at Google News, which provides snippets of these articles along with a link to the source site. Copiepresse convinced the Belgian court that this violated their copyrights. The court ordered Google to stop posting these snippets, with a hefty daily fine if it refused. Google has complied while it appeals. Additionally, the court demanded that Google post its order on the Belgian site for five days, with which Google reluctantly complied. All of this applies only to Google's Belgian site, and Belgians can still readily access Google's other locations to find this material, but Copiepresse would like to see the ruling extended to all of Google's sites. As long as they are opening the copyright can of worms, why not open the bucket of worms of one country's laws determining what viewers in other countries may read?

Google is not permitted to comment publicly about the case during the appeals, but Rachel Whetstone, European Director of Communications for Google, did post on Google's official blog that publishers can use the universal "robots.txt" standard to prevent their sites from being indexed by Google and other search engines. Place such a file at the root of a site, and Google will not display its pages in search results. However, this does not appeal to the publishers, who have something of a schizophrenic relationship with Google. They may not like Google posting their content, but they love the traffic Google brings to their sites. So, rather than taking the easy step of keeping Google off their sites, they would prefer to set the rules by which Google will be allowed to visit. While it is a bit unclear exactly what Copiepresse wants, an interview with Margaret Boribon of that organization published on Groklaw.net suggests they want Google to license their content. In other words, Google should pay a fee for displaying this information, or provide some other benefit. Of course, if this happens, all kinds of fees could suddenly be introduced into the searching process, and much of the free nature of the internet could disappear.
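For readers curious about the "robots.txt" standard mentioned above: it is simply a plain-text file placed at the root of a website (for example, www.example.com/robots.txt, a hypothetical address used here for illustration). A minimal sketch of a file asking Google's crawler, and then all other crawlers, to stay out of an entire site might look like this:

```
# Ask Google's crawler (Googlebot) to skip the whole site
User-agent: Googlebot
Disallow: /

# Ask all other crawlers to do the same
User-agent: *
Disallow: /
```

Crawlers that honor the standard, including Google's, read this file before visiting and skip the paths listed under "Disallow"; a publisher could instead list only specific directories, keeping the rest of the site indexed.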
