There are multi-layered flaws and fallacies in your remarks here... I'll work with it over the coming days. It's twisted beyond all reasonable hope of recovery, beginning with your crazy remark that it's not possible to sum something up in CF --- that you have to sift through posts... I'm hearing an insult there... an insult buried so deeply and cunningly that I know this is not going to go well. I have to say up front, I do not trust you.... how am I supposed to sum up philosophy in a CF post? You can't make short points and say "Gotcha!" You need to have details, or else we're just going to keep needlessly responding back and forth. Gary's book doesn't "sum up philosophy" either, but it's better than a CF post. He goes on at length about the "universal principle" of one-boxing Newcomb's Problem, and even though I feel some of it is superfluous (and I disagree to an extent), it's still refreshing to read the whole thing.
By the way, just because he has a background in AI doesn't mean anything with regard to his arguments. And what do you mean that you "reject AI"? Obviously, you can't mean weak AI, because then you wouldn't have been able to do that Google search on Gary. So I guess you mean strong AI. I know Yudkowsky isn't exactly popular here, but I do wonder how many people have actually read his essay: http://yudkowsky.net/singularity/ai-risk
My ultimate point is this: cryonicists are not showing "unprincipled behavior" (cryonicists aren't in this for a popularity contest). You can most certainly disagree with the principles they abide by, but nevertheless, they are not "unprincipled."
Thanks for responding, though, and don't let up on me. I'm simply curious, so please point me to some books you recommend. Knowing where you are coming from will save me time, as opposed to sifting through your posts.