Title: The Digital Economy
Author: Tim Jordan
Publisher: John Wiley & Sons Limited
ISBN: 9781509517596
The third set of intersecting activities, or third point of view, making up the economic practices of Google search is that of Google itself. These activities split between the structures set up by the company that allow it to offer services and mediate between search users and advertisers, and the implementation of those structures in the software/hardware that allows practices to be automated. This connection transformed Google and established one kind of digital economic practice as a money gusher, as noted earlier in the company's turn from loss to profit once AdWords was implemented. To sustain this, Google's economic practices have a dual character, with a never-ending process of improving search alongside never-ending developments in advertising.
We have followed a single search from the point of view of the individual searcher, but from Google's point of view things appear differently. Instead of the individual who searches, Google has first to see the collective and its social relations, which it can read to judge what search results to deliver. From this point of view, a search question is the last point of a search enquiry; it is the work that leads up to the delivery of certain results in a certain order that determines whether a search engine will be good or bad. This also highlights a recurrent frustration in trying to follow digital economic practices, as the algorithms and programs that fuel search engines are generally industry (or government) secrets. In the case of Google, however, the broad principles are known because its theoretical foundation, the PageRank algorithm, is publicly available (Page et al. 1999).
PageRank was the first method Google used to generate search results and was the basis of its early success, on which everything else depended. The fundamental insight was that the World Wide Web could be read through techniques modelled on academic citation practices. Citations are a means of judging how important an article is by measuring how many people cite that article in later papers; a citation is in this sense a 'backlink' because the links, here in the form of citations, appear after the article is published. To read the World Wide Web in this way, Google's founders Larry Page and Sergey Brin developed a model that treats the links from one website to another as backlinks similar to academic citations and then judges the importance of a site in relation to a particular subject by the number of backlinks. Further, they created a recursion through which, having worked out which sites were important on a particular topic (by reading the numbers of links to those sites), they could weight those sites more heavily. This meant that their model generated complexity: a site with many links from unimportant sites might be balanced by a site with only a few links, if those few links came from important sites (Page et al. 1999).
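The recursion can be sketched in a few lines of Python against a toy link graph. The site names, starting weights and iteration count below are illustrative assumptions, a simplification of the published PageRank model rather than Google's actual implementation.

```python
# Toy link graph: each site lists the sites it links to (invented names).
toy_web = {
    "surf-mag.example":   ["board-shop.example", "wave-blog.example"],
    "board-shop.example": ["surf-mag.example"],
    "wave-blog.example":  ["surf-mag.example", "board-shop.example"],
}

def backlink_scores(links, iterations=50):
    """Score each site by the importance flowing in through its backlinks:
    every site shares its current score out across its outgoing links, so
    a few links from important sites can outweigh many links from
    unimportant ones. Repeating the pass lets the scores settle."""
    sites = list(links)
    score = {site: 1.0 / len(sites) for site in sites}  # start all sites equal
    for _ in range(iterations):
        incoming = {site: 0.0 for site in sites}
        for source, targets in links.items():
            for target in targets:
                incoming[target] += score[source] / len(targets)
        score = incoming
    return score

print(backlink_scores(toy_web))
```

On this toy graph surf-mag.example comes out highest, not because it has more backlinks than board-shop.example (both have two) but because the links pointing to it carry more of their sources' weight, which is the sense in which the model generates complexity rather than simply counting links.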
To fully grasp the significance of this use of the World Wide Web we need to remember that what Google were (and are) reading through PageRank is a collectively created store of information to which anyone with access to the internet can add on topics of their choosing, including linking as website creators feel is appropriate. The WWW is created by following a set of formal standards that define how information has to be formatted and loaded onto a networked computer for it to be visible to other sites (as will be discussed further in Chapter 5). Once a website is visible, other sites can link to it, just as anyone can link to theirs. The standards were released to be freely available and are maintained by a not-for-profit consortium. Much of the content was created freely by ordinary users with internet access and computing resources, though over time corporate and government sites run by paid employees have played a greater role. The WWW is then a collective creation formed of a series of groups that link to each other because they choose to do so in order to ensure that relevant information is connected and available. Although it was heavily commercialised once it became popular, the WWW preceded the birth of Google, and remains a space in which groups of people with similar interests can generate and share information resources (Berners-Lee 2000; Gillies and Cailliau 2000).
PageRank was a means of reading these linked groups and their social relations. Once PageRank had read, for example, the sites devoted to surfing, it had evidence of the most important sites as judged by those who loved surfing enough to create sites on the subject, including what those people considered the most important sites and topics. This was the key work done in the initial Google search engine, which can be drawn on when someone makes a surf-related search query. In this sense, any search query comes last in the practices of answering it, after the work has been done to read the relevant topics represented on the WWW.
The PageRank algorithm did not, however, last long in its original form. As Google gained a reputation as a good search engine and traffic to it began to increase, it became possible to raise a site up the search rankings by adding fake links to it. Large farms of sites which did nothing but try to game Google's rankings by faking links appeared in the first rounds of the then emerging and now never-ending struggle between Google's attempts to deliver the search results it deems best and the attempts of individual sites to ensure they are returned as high as possible in the results. As one information expert in search put it: 'there's definitely a kind of, ah, a kind of a war going on between the search engine and the marketers, marketers are pressuring the search engines to be more crafty, more authentic in how they rank' (cited in Mager 2012: 777). Google therefore has to commit considerable labour to constantly monitoring and upgrading its search mechanisms, which in turn feeds through to changes in advertising. This leads to the second set of practices necessary to understand Google search: the elaboration of the original algorithm with more algorithms (Hillis et al. 2012).
One of the best-known early additions to PageRank was the Random Surfer Model, which injected, as its name implies, randomness by assuming that at certain points anyone following web links would randomly jump to some other site. Further improvements were made, some in response to attempts to game the system and others to improve search results. For example, the Hilltop algorithm aims to divide the Web up into thematic sections and then judge whether a site has links to it from experts who are not connected to that site. If many independent experts link to the site, then it is deemed an authority in its thematic area and can be used to judge the importance of other sites. Hilltop thus builds on citation practices while developing them in a specific direction. The algorithm was initially developed independently of Google, which bought it and integrated it into its own set of tools. There are no doubt many other adjustments and wholly new algorithms integrated into PageRank, and because of trade secrecy there will be more than we know about. But these examples are enough to establish the basic principle that, however it is implemented, Google's successful search – successful both in terms of delivering useful results and in terms of popularity – derives from reading the creations of the pre-existing community of the World Wide Web (Turow 2011: 64–8; Vaidhyanathan 2012: 60–4; Hillis et al. 2012).
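As a rough sketch of what the Random Surfer adjustment does to the earlier recursion (again an illustrative simplification; the commonly cited damping value of 0.85 is assumed here rather than taken from Google), the scoring can be damped so that a share of every site's score comes from random jumps rather than from followed links:

```python
def damped_scores(links, damping=0.85, iterations=50):
    """Backlink scoring with the Random Surfer assumption: with probability
    `damping` the surfer follows a link from the current site; otherwise
    they jump to any site at random. The random jumps give every site a
    small baseline score and stop closed loops of pages from soaking up
    all the weight."""
    sites = list(links)
    n = len(sites)
    score = {site: 1.0 / n for site in sites}
    for _ in range(iterations):
        new_score = {site: (1.0 - damping) / n for site in sites}  # random jumps
        for source, targets in links.items():
            for target in targets:
                new_score[target] += damping * score[source] / len(targets)
        score = new_score
    return score

# The same invented toy graph as in the earlier sketch.
print(damped_scores({
    "surf-mag.example":   ["board-shop.example", "wave-blog.example"],
    "board-shop.example": ["surf-mag.example"],
    "wave-blog.example":  ["surf-mag.example", "board-shop.example"],
}))
```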
The second key area of search development was opened up by Google only after the first algorithms for reading the WWW proved successful. This second area was personalisation, which only became possible once Google grew big enough to start collecting significant datasets on those using its search engine. Exploring these datasets enabled the targeting of search results, with different users receiving different search results. This is particularly the case if the searcher uses other Google services, such as Gmail, and has a Google account. Personalisation appears to many to be the process whereby Google judges whether a searcher who uses a term like 'surf' is interested in surfing on water, musical channels, or the Web and so on. It also seems to identify users individually, each having a certain age, location, gender, race and so on, bringing users the results judged appropriate to their demographics. However, to read personalisation in this way is to read it from the point of view of the user's practices rather than Google's. For the latter, the key is not so much each individual but the correlations between many individuals; it is the inter-relations that are key to producing a useful result for an individual, not the other way around. This is because the inference constantly has to be made that if many individuals of a certain type favour a particular search result then that result can be delivered to other individuals who fit the same type. It is these kinds of mass correlations that allow for the targeting of particular groups of people – assuming, for example,
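A crude sketch of what inference from mass correlations might look like is given below; the searcher profiles, queries and clicks are invented, and the logic is a stand-in for illustration, not Google's personalisation system. Results for an individual are ranked by what searchers who fit the same broad type clicked before.

```python
from collections import Counter

# Hypothetical click log: (searcher profile, query, result clicked).
# Profiles, queries and site names are invented for illustration.
click_log = [
    (("coastal", "18-25"), "surf", "wavecam.example"),
    (("coastal", "18-25"), "surf", "wavecam.example"),
    (("inland", "40-60"), "surf", "surf-rock-radio.example"),
    (("coastal", "18-25"), "surf", "boardshop.example"),
]

def results_for_group(log, profile, query):
    """Rank results for a query by how often searchers with the same profile
    clicked them: the individual receives results inferred from the group
    they are correlated with, not from their own history alone."""
    counts = Counter(
        result for p, q, result in log if p == profile and q == query
    )
    return [result for result, _ in counts.most_common()]

print(results_for_group(click_log, ("coastal", "18-25"), "surf"))
# ['wavecam.example', 'boardshop.example']
```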