Archive October 2011
Sunday 30 October 2011
A quite fun free PC game: Beret. It's a puzzle-platformer, more puzzle than platform. There are occasional moments of careful jumping, but the game's single-keystroke "save this position" and "restore that position" mechanism removes the bulk of the horribleness of "timing puzzles". There's also a backup "restore the saved position before that" option, in case you accidentally save in a deathly precarious position, and always the option to just start a room again. Pleasingly, all these things also rewind the clock, so you're allowed to rewind as much as you need for the speed challenges, should you decide to do them.

Also fun is that a level's speed challenge (running from start to finish) is frequently around 45 seconds, but doing all the other subtasks (kill all monsters, collect 100 small easy fragments, collect 4 medium-difficulty fragments, and collect 1 large extra-difficult medallion) is more like a ten-minute task per level. Since you only need a subset of medallions to move on, you can mostly play in your preferred style - good design there.

This was recommended by the Caravel Games Newsletter - it's not one of their games, but it is kind of like a platform game version of their DROD series (or this link includes the free, oldest one), which I also partially recommend (I don't recommend the RPG one). [21:02] [1 comment]


Saturday 29 October 2011
I like the Professor Layton games, but they are always really annoying with the inconsistent puzzle logic. One puzzle will say "you must take two adjacent objects each turn" and mean that if you had four objects in a row and took the middle two, the remaining two are now adjacent. Another will say "you must take the two books to the right" and mean the two books that were originally to the right, so if one of them has been removed you can't make that move, even if afterwards there are still two other books (further) to the right. It would be slightly annoying to have the ambiguity in the puzzle wording at all, but having it ambiguous and inconsistent is really not good.

One puzzle goes "ha ha, tricked you, you didn't think of it that way!" and the next puzzle goes "no, your answer is wrong because we didn't think of it that way this time!" [22:39] [1 comment]


Monday 10 October 2011
I was thinking about this study, and a similar one from 30 years ago where Monsanto pretended Roundup didn't cause health problems; I was trying to figure out why such studies would exist. Specifically, studies where the numbers show one thing and then huge error bars and fudging "lead to" the conclusion that was paid for.

I'm not questioning why bad conclusions exist, obviously that's money, but why would they perform an actual study and then fudge with error bars rather than, say, fudging the numbers so they actually look how you want, or, even cleverer, fudging the experiment (perhaps even without the scientists' knowledge) so that the results look like what you want? (e.g. for the HFCS experiment, to rig it simply provide HFCS as the sugar syrup, or sugar syrup as the HFCS, and ta-da, genuinely identical results!)

I really doubt that the scientists think that fudged error bars and a false conclusion are significantly more ethical than fudged numbers for the same false conclusion, and I'm pretty sure fudging numbers or fudging the experiment would be easier than fudging error bars, as well as producing a more convincing study, so why would they do it the more difficult, stupider-seeming way?

One possibility, of course, is "idiots", always a good answer to a "why do people do something" question. But I find it hard to imagine idiocy that endorses doing something more difficult for worse results, when the extra difficulty and the worse results are both obvious. (Jokes about Microsoft Access notwithstanding.)

Then another possibility struck me, and if this is the case it's fucking amazingly brilliant and Machiavellian: if you had a study in which the numbers falsely showed HFCS and sugar to behave identically, and someone else performed a 'verification' study whose numbers differed, it would be scandalous, terrible publicity. But if you have a study with real numbers and a bullshit conclusion, then any scientist who might believe otherwise comes along, looks at the study, and goes "hey, that doesn't show what the conclusion says." There's no point in him performing a study to see if the numbers show otherwise, because the numbers already show otherwise.

You know what makes news? "Hey, we did this study and it shows that other study to be completely fraudulent, and also this stuff that's in everything is terribly poisonous."

You know what doesn't make news? "Hey, the conclusion of this study from 5 years ago doesn't match with its results."

Genius. Evil, evil genius. [00:04] [1 comment]