mattyaneo

  1. Nick Bostrom, Professor, Faculty of Philosophy, Oxford University, 2002, "Existential Risks: Analyzing Human Extinction Scenarios," Journal of Evolution and Technology, http://www.nickbostrom.com/existential/risks.html

Previous sections have argued that the combined probability of the existential risks is very substantial. Although there is still a fairly broad range of differing estimates that responsible thinkers could make, it is nonetheless arguable that because the negative utility of an existential disaster is so enormous, the objective of reducing existential risks should be a dominant consideration when acting out of concern for humankind as a whole. It may be useful to adopt the following rule of thumb for moral action; we can call it Maxipok: Maximize the probability of an okay outcome, where an "okay outcome" is any outcome that avoids existential disaster. At best, this is a rule of thumb, a prima facie suggestion, rather than a principle of absolute validity, since there clearly are other moral objectives than preventing terminal global disaster. Its usefulness consists in helping us to get our priorities straight. Moral action is always at risk to diffuse its efficacy on feel-good projects[24] rather than on serious work that has the best chance of fixing the worst ills. The cleft between the feel-good projects and what really has the greatest potential for good is likely to be especially great in regard to existential risk. Since the goal is somewhat abstract and since existential risks don't currently cause suffering in any living creature[25], there is less of a feel-good dividend to be derived from efforts that seek to reduce them. This suggests an offshoot moral project, namely to reshape the popular moral perception so as to give more credit and social approbation to those who devote their time and resources to benefiting humankind via global safety compared to other philanthropies.
Maxipok, a kind of satisficing rule, is different from Maximin ("Choose the action that has the best worst-case outcome.")[26]. Since we cannot completely eliminate existential risks (at any moment we could be sent into the dustbin of cosmic history by the advancing front of a vacuum phase transition triggered in a remote galaxy a billion years ago), using maximin in the present context has the consequence that we should choose the act that has the greatest benefits under the assumption of impending extinction. In other words, maximin implies that we should all start partying as if there were no tomorrow.

Also inherency:
  • https://www.rt.com/usa/263745-patriot-act-expiration-surveilance/
  • http://csis.org/publication/electronic-surveillance-after-section-215-usa-patriot-act
  • http://www.newsweek.com/nsa-destroy-data-collected-mass-phone-surveillance-357500

The White terror DA seems good against structural violence. Alternatively, run a PIC to exclude white supremacist organizations.