
GameSpot recently re-published a delicate, if not superficial, interpretation of their scoring measures.

http://www.gamespot.com/articles/gamespots-complete-list-of-10-10-reviews-and-how-t/1100-6422955/

According to a certain sentiment, videogames.com once had a prestigious mode of evaluating games. Eight years ago, differentiating between a 9.1 and a 9.2 was a big deal. A lot of thought went into discerning that decimal. From 2007 onwards, however, the scale was revamped to use intervals of .5. Committing to this new method made higher scores more telling. There were no more 9.7s. No 9.9s. As nearly perfect no longer existed, obtaining a 10 was even more significant.

The jump was from 9.5 to 10, instead of a possible 9.9 to 10. I’m uncertain whether GameSpot still awards .5 scores. In any case, distinguishing between a 9 and a 9.5, or even a 10, is a lot more intelligible than 9.5 to 9.6, or 9.6 to another 9.6, for that matter. Not to mention, of course, the even more nebulous 4.3 to 4.4. It’s all very subjective. Very technical. It’s sort of like inside baseball, except in baseball there is a direct correlation between play and the scores you earn. I’m referring to both the development of and activity in a game.

Typically, how well you play corresponds to the order of your results––unless you’re playing some old Madden or Mario Kart (not too old, though). Critical analysis relies more heavily on causation, which is to say that if a game ticks certain boxes without ticking you off, it’s probably doing something right. It will be received well. All told, however, each of us responds differently to content. Some people like the ‘challenge’ of rubber-band A.I. You can’t standardize the collective reactions to a game, but, hopefully, you can standardize your own.

Do you really want a peek behind the curtain? Do you want to see what’s under the hood?

The GameSpot revisions served to make the grading process more intuitive for writers and readers alike. And that’s really what you want on a video game review site––synergy, symbiosis…purity of essence. Instead of estimating down to a single tenth, reviewers have the slightly more objective task of rounding up or down to the nearest half point, if that. Changing the scale has not altered very much. Getting a 10 is still truly uncommon. With that said, however, you have to wonder what ‘that’ really means.
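To make the arithmetic of the two scales concrete, here is a minimal sketch. This is purely illustrative: the `round_to_step` function and the sample “raw impression” value are my own assumptions, not anything GameSpot actually computes.

```python
def round_to_step(score, step):
    """Round a raw quality estimate to the nearest scoring increment."""
    return round(score / step) * step

# Hypothetical raw impression of a game, somewhere between "great" and "nearly perfect".
raw = 9.37

old_scale = round_to_step(raw, 0.1)  # pre-2007 tenths scale
new_scale = round_to_step(raw, 0.5)  # post-2007 half-point scale
```

On the tenths scale the reviewer must commit to 9.4 over 9.3, a distinction the article argues is barely intelligible; on the half-point scale the same impression simply lands on 9.5, and the next stop above it is 10.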

http://static.tvtropes.org/pmwiki/pub/images/9_5_game_534.png

The infrequency suggests rarity, if not something qualitatively rarefied. Sometimes, however, it’s the other way around. Lower scores can be inexplicable. On these occasions, it’s great to have another resource––like IGN, which is more respectful, if not reflective, of fanfare. Excitement. This is partly demonstrated by the category afforded by their highest accolades: a 10 on GameSpot is “Essential,” while a 10 on IGN is a “Masterpiece.” IGN still uses the 100-point scale, although it has flirted with a 20-point one––and it is possible to get lower than a .5.

IGN also labels 0-.9 a “Disaster.” Scores at either extreme transcend their medium and indicate a place in history, if not existence. I think the decimal gives IGN a little more artistic license than critical license. They actually describe their scoring methods pretty well. GameSpot isn’t entirely opaque, either. Instead of comparing separate organizations, though, let’s stick with the most contentious one. What does it mean to get a 10 on GameSpot? What does it mean to them and what does it mean for us?

As a case study, I think we should look at the 2014 Wii U Game of the Year. These thoughts are based on some I expressed on the forum at the time. There were three competitive candidates: Super Smash Bros. Wii U, Mario Kart 8, and Bayonetta 2. Each of these games is objectively solid. If Smash Bros. had Tourney mode when it was released, it might have won. Regardless, it received a 9––Mario Kart got an 8 (it’s almost onomatopoeic), and, hold your horses, Bayonetta had a 10. Naturally, Mario Kart 8 won.

Wii U Game of the Year:

Here’s the problem: you look down the list of reviews and they all say, “GameSpot.” There is a score beneath that word. What they are saying here is that we should believe that number reflects a review that exists on or is sponsored by GameSpot, but does not necessarily reflect the opinion of ‘GameSpot.’ By ‘GameSpot,’ I mean, apparently, the majority of GameSpot writers and editors.

Then again, if they would have us believe that the opinion of one staff member does not reflect the opinion of GameSpot, as it does not here, at what point does the same logic stop? Instead of saying Mario Kart is GameSpot’s Wii U Game of the Year, what they should really do is list all the writers and editors who felt this way, as we know at least one other preferred Bayonetta. Maybe it was just roulette that determined whether the assigned reviewer was someone who would give it a 10.

You might say that the majority consensus entails the official GameSpot seal. At the same time, when you have a single review, you should clarify that this review does not actually reflect GameSpot’s seal of approval. What does it mean for a review to be on GameSpot? It seems like it means one guy they hired felt a certain way about a game. In that sense, we are not looking to trust GameSpot as an entity that has some consistency over time.

Instead, we should evaluate each review according to the specific individual responsible for it. What does this mean? It means that every 10 they have given out is completely relative, unless it was the same person on multiple occasions. Even then, it depends on how they were feeling. It is that kind of reservation that results in the alleged phenomenon of lowballing an 8.8.

“Sometimes you get the feeling that 8.8 situations are simply the fans are making a mountain out of a molehill.”

http://www.gamespot.com/reviews/metal-gear-solid-4-guns-of-the-patriots-review/1900-6192543/

You’d think when somebody finishes their report and tells everyone, “Hey, guys, I’m about to give out a 10, which only seven other games (up to that point) have received,” the other people––if not the high-level editors themselves––would say, “We don’t give those out very regularly, so let’s have somebody else check it out and we can come to a consensus. This way we can all feel good about doing something that has a significant bearing on the orientation of our reviewing platform.”

Take this as really salty, if not just with a grain of salt:

There is a significant difference between an 8 and a 10 (and especially a 10, as opposed to a 7 and a 9 or even a 7.5 and a 9.5, because there is an historical element). It doesn’t matter what kind of game is involved. If enough GameSpot editors felt the scores of Bayonetta and Mario Kart should have been closer to reversed, what does that tell you about the meaning of the reviews of Bayonetta and Mario Kart?

It tells you the reviews don’t, or may not, actually tell you what GameSpot ‘proper’ thinks about each game. If this is true, they should either give a second opinion or remove numbers altogether. 8 and 10 are meaningless unless they synthesize the value of each experience––you can look at them and know something about their subjects. Even though they relate to their respective reviews, they are not communicating with each other.

Of course, getting an 8 or above anywhere suggests a certain quality.

You might say, “Well, of course they’re not communicating with each other. They’re apples and oranges. You can’t really compare Mario Kart and Bayonetta.” Yet that is exactly what they are doing here. The entire premise of selecting one game out of the nominees is that you can determine which one is the best and, more precisely, better than the other ones.

I can see how this is where ideas about accessibility and replayability factor in for some people. It is thought, ‘If you can’t compare these games based on their scores, then there must be some other criterion for assessment. We do know they are all at least good games. Beyond that, the aforementioned (accessibility, replayability) features could be a means of reaching a conclusion.’

One such feature is the legendary water of Pokémon Omega Ruby and Alpha Sapphire.

The problem here is we are ultimately talking about the better game. We are not talking about the more featured game or the more technical game. We are discussing which game the GameSpot staff rates highest (maybe you say, “Clearly not, since the 8 was selected over the 9 and the 10.”). When I say highest, I don’t mean numerically––though, again, if the number doesn’t mean as much here, it is questionable what it does mean.

You have to figure that these adjacent components (accessibility, replayability, etc.) played a role in their initial evaluation. If you really want to say GameSpot is figuring out the best game in terms of these variables, then you have to allow that the various reviews really don’t matter because they kind of cancel each other out––they’re all good, but, here, we are describing something that transcends the gaming experience. We are talking about packaging and presentation––or…I don’t know. I really have no idea, because I believe this line of thought is false.

Another commentator suggested the following hypothetical explanation: the vote is mistakenly identified as recognizing the “best” game; rather, it is supposed to reflect “the game that best represents the year of gaming as a whole.” Secondary factors like “polish,” “popularity,” “sales,” and “well-received DLC” are most reflective of 2014 in gaming. Of course, with Smash Bros.’ interim downloadable content, this argument is less salient now. Nevertheless, this explanation would have made a lot more sense.

Kotaku uses a different system where you know if the game is recommended, what it’s about, pros, cons, and technical features. Kotaku is not the ideal model for organization, but when they review something, they call it “The Kotaku Review.” At least then you can trust you are receiving some familiar information. Use judgment. Ultimately, enjoyment arises from how everything feels. Critics like GameSpot may proffer some score, if not settle the scores. Any review is the lay of the land, not the law. A map. Reading reviews is helpful, but you know better. Everyone’s a critic.

After picking up on some of their thought processes, we can see the difficulty of establishing a standard that meets everyone’s expectations and yet defies them delightfully. Almost no one does this consistently…you can’t please ‘everyone in their mother’s basement’ and their mother. Anyway, I’m done with games for now. And now that we have cut GameSpot some slack, how about they return the favor? Will videogames.com spot me a 10? I’m saving up for MGS V.

http://i.imgur.com/biKPTjG.jpg