Google



Discussion 1: Google as a Business

Mark Choate on Google's Competitive Advantage

Mark Choate:

"Google’s success is dependent upon the assumption that a link from one site to another implies some level of endorsement, such that a page with lots of links to it must be better in some way than a page with only a few links to it. Google’s PageRank algorithm (which is how the search results are prioritized) is based in part upon how many other sites link to a given page. If you have two separate pages, both with similar content (as ascertained by word count and position), favor is given to the page to which more sites link.
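
The link-endorsement idea can be illustrated with a minimal power-iteration sketch of PageRank in Python. This is only an illustration of the published concept, not Google's proprietary implementation, which weighs many additional signals; the example web graph is invented:

 # Minimal PageRank sketch: rank flows along links, so pages with more
 # inbound links (endorsements) accumulate a higher score.
 def pagerank(links, damping=0.85, iterations=50):
     """links maps each page to the list of pages it links to."""
     pages = list(links)
     n = len(pages)
     rank = {p: 1.0 / n for p in pages}
     for _ in range(iterations):
         new_rank = {p: (1.0 - damping) / n for p in pages}
         for page, outlinks in links.items():
             if not outlinks:
                 continue  # dangling page; its rank mass is dropped for brevity
             share = damping * rank[page] / len(outlinks)
             for target in outlinks:
                 new_rank[target] += share  # each inbound link is an endorsement
         rank = new_rank
     return rank

 # Two pages with similar content: the one with more inbound links ranks higher.
 web = {"a": ["popular"], "b": ["popular"], "c": ["obscure"],
        "popular": [], "obscure": []}
 print(pagerank(web))  # "popular" scores above "obscure"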

When a person participates in a transaction online, a residue is left behind that is meaningful. When one page links to another page, information is embedded in that link, and when a user makes a choice and clicks one link instead of another, there is information about the choice embedded in the action. There is no such residue on books, other than dog-eared pages, smudges and coffee stains; nothing that can compare with the useful information that can be gleaned from these fingerprints left online.

This has proven to be a remarkably effective strategy for Google, so effective that Google was able to enter the Internet market rather late in the game and very quickly become a leading online business. When Google was launched, Yahoo! was the leading search engine. History has demonstrated that Yahoo’s competitive advantage was not sustainable at all. Why is Google’s story different?

Every day Google learns more about the content that is distributed on the Internet. This knowledge is a consequence of the steady aggregation of links created by human beings. As a result, Google’s site makes it easier for me to find the information I am looking for. As Google aggregates this data, the search engine continues to improve. For this reason, it will be very difficult for a new entrant in the market to compete with Google. Even if a new entrant were able to reconstruct Google’s software and technical infrastructure, it would not have the years of aggregated data documenting user behavior and capturing those moments of human judgment that Google has already acquired and will continue to acquire.

But what is interesting about Google is that it isn’t transparent in the same way that Wikipedia is. In fact, Google’s core search algorithm is proprietary, even though proponents of the Enterprise 2.0 label often say that Enterprise 2.0 firms are moving away from proprietary technology in favor of open standards.

The source of Google’s success is its masterful understanding of the value of the online transaction. Google knows the value of information and has taken creative steps to measure those moments of information exchange that take place so frequently online. Google has not opened up its API to outside developers because of a belief on its part in some intrinsic value of openness. Quite the opposite: it has opened up its API because it has discovered that it is in its best interest to encourage as many transactions with Google as possible, so that Google and Google alone can sleuth through the evidence that is left behind.

Google also isn’t collaborative – at least not in the traditional sense of the word. Google really operates just like any other business. In exchange for a service (like receiving a list of search results), the user gives Google a little piece of information. Google aggregates that information and uses it to improve the search service as well as to create revenue through the sale of advertising. This isn’t collaboration; it’s capitalism.” (http://www.cutter.com/offers/enterprise2.html)


Google as an enclosure of the common

Toni Prug:

"Google is a tool for better utilization of the commons, engineered for vast private profits, whilst relying on the common production and utilization of what it provides. The larger the common, the more websites that Google can access for free and provide as searchable, the better the sales pitch to advert buyers and Google users, and larger the profits. Google utilizes the labour of the common without privatizing it. Yet, as we have seen with the most funding for technology coming from state funds in USA, Google’s PageRank patent – a concept whose history of has recently been developed (Franceschet 2010) – is held by Stanford university who also got a large number of shares in the company. While the commons are open, the source on which Google built its empire, the algorithm producing their presentation to the users, is closed. Google’s use of the data it stores on its users is also entirely opaque. Their book digitizing is another project where Google used commons to create a vast catalogue of commodities. Again, like in the case of their search system, it uses what the common produces, adds value to it by making the access easier, and repackages it into forms which accommodate profit streams. You cannot copy and paste books that google scans and provides on their website, although they might be copyright free. All of the Google’s processing power is proudly done on cheap hardware running versions of Free Software operating systems, another commons on which Google business model entirely depends.

Google confirms the thesis that ‘capitalist abstraction rests on the common and cannot survive without it, but can only instead constantly try to mystify it’ (Hardt and Negri 2009, 159). The example of estate agents is another illustration of it: ‘location, location, location’ is a name for the proximity of the property to the common, to the quality of the neighbourhood. It is commons like parks, cultural events, libraries, recreation, education, child care, health and transport facilities that give value to private property (2009, 156). Google is like the estate agents: it places its services in the midst of the best common it can find. In both cases, the larger and better the common, the larger their profits can grow, once they are embedded into the flow of the common being produced and utilized." (http://hackthestate.org/2010/03/05/series-on-commuonism-open-process-the-organizational-spirit-of-the-internet-model-2/)


Discussion 2: What's Wrong with Google Search?

We need Open Process Search Systems to replace Google

Toni Prug:

"Search systems, as several participants at the Deep Search conference noted, is an essential component of the Web. And given the importance of the Web, and its embeddedness into multiple key aspects of life, the society cannot do without one. The architecture and protocols of the Internet and the Web might be open, developed by IETF via open process, running mostly Free Software, but the architecture of search systems remains closed. This is not good enough. As part of the democratic practice of the common, we have to have search systems built on the basis of IETF and Free Software principles. We need Open Process search systems.

Search systems have four distinct components: Crawler, Index, Search and Rank, and GUI. We could and should build a public infrastructure where the first two components are shared, and, on top of the indexed Web, open interfaces to various Search and Rank algorithms and user interfaces are provided (Rieder 2008). There are different ways this could be done. One is through the existing grid systems used in academia: this infrastructure is already distributed, staffed with highly skilled people and, like the rest of the Web, mostly built using Free Software. Another option is to internationalize Google. A worldwide public organization could demand that the USA break Google’s search system away from the rest of the company, release all knowledge of how it operates (technical documentation) into the common, and make it into a separate, globally owned company. Democratic ownership would also ensure accountability in dealing with user data, something Google arrogantly refuses to provide. The form of such global ownership, the model of the new management of the commons, remains an issue to solve. Google uses Free Software to utilize the commons (the Web) as its core profit stream. Yet neither belongs to any single nation.
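
A minimal sketch of the modular separation proposed above, with a shared crawler and index and pluggable Search and Rank algorithms behind an open interface, might look as follows. All names and interfaces here are hypothetical illustrations, not any existing system's API:

 # Hypothetical open search architecture: shared Crawler and Index,
 # with interchangeable Search-and-Rank components behind an open interface.
 from typing import Callable, Dict, List

 Index = Dict[str, List[str]]                 # term -> URLs containing it
 RankFn = Callable[[List[str]], List[str]]    # the open ranking interface

 def crawl(pages: Dict[str, str]) -> Index:
     """Shared components 1 and 2: fetch pages and build an inverted index."""
     index: Index = {}
     for url, text in pages.items():
         for term in set(text.lower().split()):
             index.setdefault(term, []).append(url)
     return index

 def search(index: Index, query: str, rank: RankFn) -> List[str]:
     """Component 3: retrieve matches, then delegate ordering to any ranker."""
     return rank(index.get(query.lower(), []))

 # Two interchangeable rankers exposed through the same open interface.
 alphabetical: RankFn = sorted
 shortest_url_first: RankFn = lambda urls: sorted(urls, key=len)

 pages = {"http://zz.example": "open search", "http://a-long.example": "closed search"}
 idx = crawl(pages)  # the shared public infrastructure
 print(search(idx, "search", alphabetical))        # ranking algorithm A
 print(search(idx, "search", shortest_url_first))  # ranking algorithm B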

Hence, the solution for how to manage it should not belong to any single nation’s economic and legal system – regardless of where the Google corporation, or any other entity utilizing the commons for profit, is legally based. Indeed, in the discussion on the patenting of biological material, the question of disclosing the origin of the material that is part of a patent application is one of the key political issues (Howard 2008). When a seed of Brazilian or Indian origin is to be patented, mandating disclosure of the origin in the application can be used to deny the bio-piracy, by more developed economies, of biological material originating in less developed countries. In a similar way, who gives Google the right to utilize what is common to the world, the Web, for private profit and without global accountability?

Why would we allow Google to be subject to the laws of any single state? The French state’s attempt to control what Google does within its web-territory renders visible the tension between the commons, for-profit organizations and the state.

The question, then, is: why do other organizations, in other states, not do what Google does, and why not use them instead? They might in future do it better than Google does, and thus become the predominantly used system, but that is beside the point. They would fall under the same logic presented here, regardless of their location. Furthermore, I can limit my website’s exposure to Google by denying its spiders access to it (the mechanism is sketched below). That still does not address the core issues at stake here. Google would still be utilizing everything that belongs to the economic system of which I am part, which at minimum, in the narrowest sense, is the national economy to which I pay taxes, in which I live and work, in which I produce and consume. As a member of such an entity, as a citizen of a state, I want to assert the ability to dictate the conditions under which anyone, including Google, utilizes anything produced by any member of the state I live in.
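
The spider-exclusion mechanism mentioned above is the Robots Exclusion Protocol: a site owner publishes a robots.txt file that well-behaved crawlers consult before fetching. A minimal sketch using Python's standard library follows; the domain and policy are invented, and compliance is voluntary, which is exactly why it does not address the core issue:

 # Denying Google's spiders access via robots.txt, parsed here with
 # Python's standard urllib.robotparser. Domain and policy are hypothetical.
 from urllib.robotparser import RobotFileParser

 robots_txt = [
     "User-agent: Googlebot",  # applies only to Google's crawler
     "Disallow: /",            # deny it the entire site
     "",
     "User-agent: *",          # every other crawler
     "Allow: /",
 ]

 parser = RobotFileParser()
 parser.parse(robots_txt)

 print(parser.can_fetch("Googlebot", "http://example.org/page"))  # False
 print(parser.can_fetch("OtherBot", "http://example.org/page"))   # True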

In other words, a state ought to control its economic affairs. Yet, with the Web, such affairs, economic activity, cannot be fully geographically located. Although I work in London/UK, the product of my work is text-based, and as such can be hosted by any server I choose, in a large number of states worldwide. Who should have a say in the economic benefits derived from what I produce? The state does so by having me immediately pay taxes on what I earn from it. Institutions which might impose and enforce copyright or patents over it might benefit from it in the long term too. Yet organizations such as Google benefit economically from it as well. While the state and the institutions I work for have more direct and historical claims over my work, and while these relations are known, regulated and even democratically controlled to a very limited extent, entities like Google derive economic benefits from it without any regulation or democratic control.

Any organization that seeks to utilize the commons, and that does so on a large scale, should be under some form of democratic management of the commons. No entity should be allowed to utilize the commons without a form of such control. In order to give credit to the remaining Google company and to keep it developing, part of the revenue from the ads would have to go to the company. The difference would be that in this case an accountable organization would decide what kinds of adverts to accept or reject, instead of relying on a couple of super-rich people and their sense of good and evil. That said, banning adverts for guns is a welcome decision (Lowe 2009, 140).

In short, the issue of the utilization of the commons ought not to be left to the capitalist corporations. First the states, as the French are trying to do now, and then we, the political multitude in becoming, should intervene. The disruptions that Google’s projects introduce into the sectors adopting the possibilities of new technologies, mass book scanning for example, are welcome. But not under rules chosen by Google’s board." (http://hackthestate.org/2010/03/05/series-on-commuonism-open-process-the-organizational-spirit-of-the-internet-model-2/)


Weakness of Google Search results

Robin Good [1]:

"1. Quality of Search Results

Depending on the topic or industry you happen to be searching in, your mileage may vary, but in my personal experience the quality of Google search results cannot really be rated as ideal.

Often the results are more representative of the big brands and companies operating in the space than they are a true collection of the best resources on the topic.

Here are some relevant comments and opinions:


"...higher quality sites are often further down in the search results because they're not as popular as the sites that are ranked higher". Source: Is Google Dumbing Down Search Results? by Chris Crum, September 2013 - WebProNews


"...This is part of why I think I've developed a reflex, after searching Google, to skip over the first few results after the sponsored links and start looking near the middle of the page. W3Schools, Wikipedia, and a few others. And it's a great example of the central failure of the pagerank idea: if the strongest signal is popularity measured through linkage, the highest quality results will rarely be at or even near the top".

Source: Lee Philips's comment on the article Why I'm Planning to Kill W3Schools, September 2013 - YCombinator


"Google ranks results by popularity (by how many sites link to each result). This isn't necessarily the same as quality.

Google individualizes search results and may connect you to information that fits with your past searches rather than providing a balanced view of a topic."

Source: Finding Quality on the Internet, September 2013 - Laurier Library



"...what is authority to the Google algo, is not what is authoritative to a human, and what its measure of quality is not human either". By Graeme_p

"The search engine may or may not come up with the best site... Google are delivering what will satisfy the majority of searchers and not the most accurate, practical or relevant information". By EditorialGuy

"It really doesn't matter whether Google can measure quality or not, because it's only a very minor factor in their current ranking system. They had to de-emphasize it because it was preventing them from getting big brands and big organizations to the top of their search results." By Aakk9999

Source: Authority and Quality: Google Definitions vs Common Sense, July 2013 - WebMasterWorld


To check the original sources for the quotes: http://www.masternewmedia.org/future-of-search/


Credibility Issues

By Robin Good [2]:

"Credibility

a) Automation

Google relies heavily on automation and algorithms to index, organize and rank search results; it would not otherwise be possible for it to scale to the amount of content existing online.

At the same time, automation and algorithms are generally considered to produce lower-quality judgments than human analysis, especially when it comes to evaluating the quality and credibility of an information object. In this respect, Google may already have reached the peak of what can be done by solely automating the organization and ranking of search results.

And one of the reasons why the Google superpower may soon be losing its mojo is that people are increasingly less interested in pure listings of relevant web pages created by an algorithm that tries to guess what's best, and much more interested in finding bundles of selected resources suggested to them by trusted references, possibly human beings they know (directly or not).


b) Conflict of Interest

Google is in control of what information you see when you search for something and of how these bits of information are ranked and organized. Whether you like it or not, it helps you build a world view and may influence it by selecting for you the criteria by which its results are ranked.


My question then is:

How can Google remain credible if,

a) It is totally secretive about how its results are ranked, even in the face of its own mistakes?

b) It is the world's dominant search advertising platform, which pivots 100% around the Google search engine and generates the majority of Google's earnings?


That is: if you have a monopoly on the search market and control the search results, whether for good or ill, you can steer the advertising market to your own benefit.

Then, while you could be the saintliest of saints, how credible can you be in such a situation?


Link to check the original sources of the quotes: http://www.masternewmedia.org/future-of-search/


Secrecy Issues

By Robin Good [3]:

"Secretness

Google has always been very secretive about its search ranking algorithm, to prevent unscrupulous marketers from exploiting it to gain visibility in search results. While one can understand the logic of this approach, the results of adopting it are also plain for everyone to see.


a) There are probably more people investing very significant time and resources in gaming Google than there are people who do not.

b) Google search results do not offer a quality search experience, as they are often dominated by big brands rather than by sites and pages that provide true, valuable information.

c) Google keeps investing large amounts of resources to counter this gigantic and ever-increasing effort to spam and "game the system", with limited (in my opinion) results so far.

d) Due to the items listed above, the vast damage done to quality content web sites, and the limited results in improving the quality of search results, public trust in Google's ability to truly distinguish high-value content from spam and web-site scrapers is generally quite low.


"While this is presumably done to prevent people from gaming the system (or competitors from copying features), it makes it a lot harder to determine whether Google is unfairly penalizing websites..."

"As the Electronic Frontier Foundation points out in a blog post criticizing the move, Google's search algorithms are opaque by design, and so there is no way of knowing what kind of criteria they will be using to decide which sites to penalize and which to leave untouched."

Source: Should We Trust Google When It Comes to Piracy and Search? by Mathew Ingram, August 2012 - Gigaom


When so much of our life depends on having instant access to the right information, don't you think it would be very risky to depend on a centralized and secret system, driven exclusively by financial gains, to continuously influence how information is organized, ranked and classified?

In my view everything should be transparent. No company or brand should be able to game the system without being vulnerable to everyone seeing it.


Link to check the original sources of the quotes: http://www.masternewmedia.org/future-of-search/


Discussion 3: Google and the Environment

Over the last half decade alone, Google’s gross carbon emissions have more than doubled

Nafeez Ahmed:

"We might refer to Google’s oft-stated declarations of reducing its carbon footprint to zero while transitioning to 100% renewable energy by 2018.

Yet according to Lux Research, Google uses an obsolete tool to calculate its data center emissions from the electricity it purchases from the power grid. Consequently, in four out of seven data centers, Google underestimates its dependence on coal by 30 percent or more.

And then there is Google's primary technique for reducing its footprint to zero: buying carbon offsets, that is, investing in outside green energy projects, which allows Google to claim the equivalent in 'emissions reductions' on its own books even though its actual real-world emissions have not been reduced at all.

While Google has been trumpeting its zero carbon trajectory — receiving accolades from Greenpeace along the way — its gross carbon emissions have actually increased.

Over the last half decade alone, Google’s gross carbon emissions have more than doubled. In 2011, Google recorded its gross CO2 emissions at 1,677,423 metric tons.

In 2012, the company reported a 9% drop in its gross emissions, to 1.5 million metric tons. Yet even here, the drop was achieved not by a real material fall in emissions, but by factoring in deductions from Google's power purchase agreements (PPAs). The real gross emissions figure for that year, calculated in the same way the 2011 figure was calculated, was 2,024,444 metric tons.
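
The accounting can be made explicit with the figures quoted above. Under offset accounting, the reported number is gross emissions minus PPA deductions, so a reported "drop" can coexist with rising real emissions. In this sketch the 1.5 million figure is approximated and the PPA deduction is inferred from the gap between the two 2012 numbers:

 # Offset accounting with the figures quoted above (metric tons of CO2).
 gross_2011 = 1_677_423      # reported gross emissions, 2011
 gross_2012 = 2_024_444      # real 2012 gross, calculated as in 2011
 reported_2012 = 1_526_000   # approximation of the reported "1.5 million"

 ppa_deduction = gross_2012 - reported_2012            # ~498,000 t via PPAs
 real_change = (gross_2012 - gross_2011) / gross_2011
 reported_change = (reported_2012 - gross_2011) / gross_2011

 print(f"{ppa_deduction:,} t deducted")      # the paper reduction
 print(f"real: {real_change:+.0%}")          # about +21%: emissions actually rose
 print(f"reported: {reported_change:+.0%}")  # about -9%: the claimed drop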

By 2016, Google’s gross carbon emissions had grown to 2.9 million metric tons according to the company’s 2017 progress report.

The net result?

Google’s actual carbon footprint is growing exponentially. Yet environmental certifications, such as that produced by Greenpeace, are being used to sanitize and legitimize this growth.

Google now claims that “because of our renewable energy and carbon offset programs, our net operational carbon emissions were zero. Because of our emissions-reduction efforts, our carbon intensity has steadily decreased even as our company has grown and our energy use has correspondingly increased.”

All this is true, but it is ultimately a clever carbon-accounting trick that allows Google's real-world carbon emissions to continue accelerating. That is why, despite all the self-congratulatory nonsense about the 'greening' of the internet, our current global emissions trajectory is so bad it could end up heading towards an uninhabitable planet, 8°C warmer, by the end of the century.

Google hosted a fundraiser for notorious climate-denying Senator James Inhofe; donated $50,000 for a fundraising dinner for the Competitive Enterprise Institute, an ultra-conservative outfit that attempts to sue climate scientists for fraud; and is a member of the US Chamber of Commerce, which consistently lobbies to block action on climate change and promotes fossil fuels.

But it’s all okay, because Google got an ‘A’ certification from Greenpeace.

In other words, such sustainability metrics might be good for business; but for the planet, they are meaningless.

Their application isn’t slowing the pace of fossil fuel extraction — they are accelerating extraction under the cover of saving the climate.

And thus, with the help of misleading number crunching, an exponentially increasing carbon footprint is misreported as a decreasing carbon footprint.

Unfortunately, you won’t find any dissecting of Google’s grand claims from the mainstream liberal press. Instead, Huffington Post — Otto Scharmer’s media partner of choice to ‘transform capitalism’ — bravely reported Google’s acclaimed clean carbon footprint trajectory without any investigation of the facts." (https://medium.com/insurge-intelligence/google-huffpo-greenpeace-the-liberal-progressive-resistance-are-lying-to-you-45c3fd7c920b)


More Information