New fourteen

Joined: August 31st, 2007, 2:14 pm

September 26th, 2010, 3:23 pm #1

In recent discussions on Cryonet and on the CI Yahoo groups board I had some success--nearly unprecedented--in persuading a few uploaders to modify their positions. This shows both that existing misunderstandings are stubborn and that these misunderstandings are not exempt from change. I think it is also true that the uploading stance tends to damage cryonics by diverting attention and through guilt by association.

What I am working on is a revision of my "fourteen points" for the next edition of YOUNIVERSE. These will probably be sent to Cryonet, the CI Yahoo groups list, Cold Filter, and possibly others--one at a time, not all at once. If the moderators think they are too far off topic, so be it. In any case, readers will be warned by the subject line and can just delete without reading if they choose. The first one should appear within a week or two.



Robert Ettinger

Joined: May 17th, 2009, 5:13 pm

September 27th, 2010, 8:34 pm #2

Here's a great article on the problem of identity, and how it leads to people opting out of cryonics and hence allowing themselves to be killed.

http://lesswrong.com/lw/qx/timeless_identity/

I'd enjoy watching you two have a video debate on this topic sometime. EY has done some great things so far on carrying the cryonics torch forward to the next generation.

Joined: October 2nd, 2004, 8:27 pm

September 28th, 2010, 6:07 am #3

I'm probably missing something buried either shallowly in your post or deeply in the site you reference, but I'm having trouble understanding how Yudkowsky and his several websites "has done some great things so far on carrying the cryonics torch forward" while at the same time you contradict that with "how it leads to people opting out of cryonics and hence allowing themselves to be killed".

You may explain, if you wish, and if there is an explanation. That there is a satisfactory one I doubt, since Y advocates and promotes an upcoming Singularity AI, with his "Singularity Institute" as its promo vehicle and no controls on the AI, which logically would evolve to have no regard or care for humans, cryopreserved or otherwise, uploaded or otherwise. Waste of planet space for two, waste of bandwidth for the other.

And I have heard he is signed up for cryonics. For what reason, I cannot figure.

Cheers,

FD

Joined: October 9th, 2009, 9:26 pm

September 28th, 2010, 6:43 am #4

Just to clarify, the article is arguing in favor of cryonics. Luke was stating that there are people who opt out of cryonics due to their own conceptions of identity, and the article rails against those people. That said, I have a feeling they're not really opting out due to sincere concern over identity... they're simply using identity as a convenient excuse.

Joined: May 17th, 2009, 5:13 pm

September 28th, 2010, 2:44 pm #5

EY promotes rationality in general, and tucks cryonics in a commonsense corner of the mix -- where it should be.

So many people have a million different weird excuses for rejecting cryonics, one of which is alleged concern about whether their real identity would survive the journey. This concern is understandably higher in the case of uploading, but one sees it in reference to a brain reconstructed by nanobots, or even (for those unaware of observed phenomena like recovery from deep hypothermia) the cessation of electrical activity.

Of course, we all should know by now that belief in uploading and belief in cryonics are two separate circles on the Venn diagram of general futuristic beliefs. Plenty of cryonicists are not in favor of uploading -- even granting that it would "work" to create a functional copy, they reject the notion that they personally would survive the process.

This is ultimately a question of philosophy and meta-ethics. Where do you get your sense of self that you work to defend from death?

Personally I think the sense of self comes from a variety of fundamentally different sources, and survival is a spectrum. It's better for there to be an upload than no you at all -- but being a flesh and blood continuation of your original meatbag self is even better. If it comes down to it, I would prefer my genes and ideas to be passed on rather than nothing at all -- but uploading is a definite step above that.

Joined: May 17th, 2009, 5:13 pm

September 29th, 2010, 1:25 pm #6

FD wrote:

I'm probably missing something buried either shallowly in your post or deeply in the site you reference, but I'm having trouble understanding how Yudkowsky and his several websites "has done some great things so far on carrying the cryonics torch forward" while at the same time you contradict that with "how it leads to people opting out of cryonics and hence allowing themselves to be killed".

You may explain, if you wish, and if there is an explanation. That there is a satisfactory one I doubt, since Y advocates and promotes an upcoming Singularity AI, with his "Singularity Institute" as its promo vehicle and no controls on the AI, which logically would evolve to have no regard or care for humans, cryopreserved or otherwise, uploaded or otherwise. Waste of planet space for two, waste of bandwidth for the other.

And I have heard he is signed up for cryonics. For what reason, I cannot figure.

Cheers,

FD

You seem really confused about EY's position on AI. Read his paper here: http://yudkowsky.net/singularity/ai-risk

Basically the problem he's trying to fix is that the universe doesn't have any built-in virus protection. You seem to be saying the answer to that problem is "don't make viruses". But your solution won't work, because there are idiots out there who will (accidentally or on purpose) make viruses the moment you hand them enough computing power to do so. The solution is to create good virus protection.

Joined: October 2nd, 2004, 8:27 pm

September 29th, 2010, 5:23 pm #7

Brevity is not one of his obvious skills; one can more easily determine his position by reading the conclusion to the article you cite. He discourages discussion of same by making it impossible to copy and paste quotations from his text. Nonetheless, he seems to me to conclude that the only way to solve the world's problems is to create an entity smarter than ourselves, admits there are risks in doing so, and then says to charge ahead and do it. No mention of any plug to pull to shut it down if it misbehaves. If he says somewhere how he proposes to fix that "viral" situation, perhaps you could attempt to quote for us the part you think addresses it. Certainly there is no mention of it in his conclusion.

And that omission is reprehensibly irresponsible.

Sorry,

FD

Joined: January 25th, 2007, 2:45 pm

September 29th, 2010, 8:38 pm #9

FD wrote:

Brevity is not one of his obvious skills; one can more easily determine his position by reading the conclusion to the article you cite. He discourages discussion of same by making it impossible to copy and paste quotations from his text. Nonetheless, he seems to me to conclude that the only way to solve the world's problems is to create an entity smarter than ourselves, admits there are risks in doing so, and then says to charge ahead and do it. No mention of any plug to pull to shut it down if it misbehaves. If he says somewhere how he proposes to fix that "viral" situation, perhaps you could attempt to quote for us the part you think addresses it. Certainly there is no mention of it in his conclusion.

And that omission is reprehensibly irresponsible.

Sorry,

FD

The basic idea is to put our resources into studying how best to create a "friendly" artificial intelligence before somebody else stumbles onto one accidentally or even intentionally creates a malignant version. He believes that there is no way to stop every person on the planet from ever pursuing this technology, and I tend to agree with that. I'm not sure his solution is the best answer, but then I'm not sure that there is an answer. Short of ushering in a technological standstill, I don't see how you can really stop some sort of singularity from happening.

Joined: October 2nd, 2004, 8:27 pm

September 30th, 2010, 1:58 am #10

FD wrote:

Brevity is not one of his obvious skills; one can more easily determine his position by reading the conclusion to the article you cite. He discourages discussion of same by making it impossible to copy and paste quotations from his text. Nonetheless, he seems to me to conclude that the only way to solve the world's problems is to create an entity smarter than ourselves, admits there are risks in doing so, and then says to charge ahead and do it. No mention of any plug to pull to shut it down if it misbehaves. If he says somewhere how he proposes to fix that "viral" situation, perhaps you could attempt to quote for us the part you think addresses it. Certainly there is no mention of it in his conclusion.

And that omission is reprehensibly irresponsible.

Sorry,

FD

It appears you guys are reading into Yudkowsky what you want to see. Although he mentions the idea of a "friendly AI" in passing, I've yet to read anything as to

1) any strong statement from him advocating the necessity of either creating a friendly AI or keeping future AIs friendly,

2) any official program by the Singularity Institute or any other entity he is involved with, to effect the above, and last but not least,

3) any recognition of the need for, much less program to develop, effective safeguards against supercomputers gone berserk (i.e., an "OFF" button, if necessary one loaded with explosives).

Instead, in his conclusion to the original article Luke referenced, I read his plan to actively develop a Singularity AI, with no mention of friendly or otherwise; I suppose we can assume he hopes it will be friendly, as he does admit such AIs could come in many unpredictably different forms (part of his "they do not think like humans" rant). I think it should be tediously obvious to anyone that such entities would not think like humans, which should give humans cause for extreme concern.

FD