News

lesswrong.com
lesswrong.com > posts > L2gGnmiuJq7FXQDhu > total-utilitarianism-is-fine

Total utilitarianism is fine — LessWrong

1+ week, 10+ hour ago   (538+ words) There are two ways to interpret the words "the action of a super-agent": (a) is always (b): if ∑wᵢUᵢ is maximized, then it is not possible for some Uⱼ to be increased without decreasing the other Uᵢs; as that would increase the…

lesswrong.com
lesswrong.com > posts > ZvAQfLQMSuKJFsion > the-reality-of-wholes-why-the-universe-isn-t-just-a-cellular

The Reality of Wholes: Why the Universe Isn’t Just a Cellular Automaton — LessWrong

1+ week, 2+ day ago   (1680+ words) ~Qualia of the Day: PageRank Monadology~ Before diving in, let me be explicit about what a successful theory of consciousness needs to explain, at minimum (cf. Breaking Down the Problem of Consciousness): The framework I'm sketching here, building on David…

lesswrong.com
lesswrong.com > posts > sCPtkhs4FhhEjjFP9 > moral-epistemic-scrupulosity-a-cross-framework-failure-mode

Moral-Epistemic Scrupulosity: A Cross-Framework Failure Mode of Truth-Seeking

1+ week, 6+ day ago   (500+ words) If you do this correctly, you'll be safe from error. This has come with some heavy costs. At the same time, I couldn't allow myself to drop any tradition before going deep enough into it (in thought or practice), which…

lesswrong.com
lesswrong.com > posts > HQjExpjZrrjBEi7xZ > uploaded-human-intelligence

Uploaded Human Intelligence

3+ week, 6+ day ago   (348+ words) If you take a drop of water out of the Earth's ocean, put it on a slide, and then view it under a microscope, you will have a high probability of seeing at least one microorganism. If it is a…

lesswrong.com
lesswrong.com > posts > Hxg7XomcPmGvDaBGo > human-values

Human Values — LessWrong

4+ week, 2+ day ago   (806+ words) [This is an entry for lsusr's write-like-lsusr competition.] "I solved the alignment problem," said Qianyi. "You what?" said postdoc Timothy. It was late at the university computer laboratory and Timothy's skepticism was outvoted by his eagerness to think about anything…

lesswrong.com
lesswrong.com > posts > FxoiGnY3pDDTjhPot > leading-by-example

Leading by example — LessWrong

1+ mon, 4+ day ago   (310+ words) With a few exceptions, individual actions have negligible large-scale impact, provided everything else remains unchanged. Then, what does it mean to lead by example and is it actually a valuable strategy? Near the end of 2021, I got moderately sick from…

lesswrong.com
lesswrong.com > posts > 2BtTe4jSLDLoAMArK > information-in-circulation-is-self-organised-critical-small

Information in circulation is self-organised critical. Small changes in environment can make large, discontinuous changes in the information space. — LessWrong

1+ mon, 1+ week ago   (396+ words) Any information that remains in circulation stays at R0=1[1], analogous to an endemic infectious disease[2]. This is an example of self-organised criticality, where the system as a whole tunes itself to the critical point without requiring external intervention. For individual ideas:…

lesswrong.com
lesswrong.com > posts > EdPzyBwzMyJrJCTJs > a-falsifiable-causal-argument-for-substrate-independence

A Falsifiable Causal Argument for Substrate Independence — LessWrong

1+ mon, 2+ week ago   (545+ words) Here's a deceptively simple argument that derives an empirically falsifiable conclusion from two uncontroversial premises. No logical leaps. No metaphysics or philosophy. Just premises, deduction, and a clear way to falsify. I'll present the argument first, then defend each piece…

lesswrong.com
lesswrong.com > posts > hq9bbAiaCrN3TGnRY > zen-wisdom-diffused

Zen Wisdom, Diffused — LessWrong

1+ mon, 2+ week ago   (1352+ words) I did it. I built an oracle AI. Or at least, I did for one definition of "oracle." It's called Kaku-Ora, and it's an AI divination oracle inspired by the likes of the I Ching, but trained on Zen koans…

lesswrong.com
lesswrong.com > posts > nMoF3bWpKKcaWfQJu > 10-aphorisms-from

10 Aphorisms from 𝘛𝘩𝘦 𝘉𝘦𝘥 𝘰𝘧 𝘗𝘳𝘰𝘤𝘳𝘶𝘴𝘵𝘦𝘴 — LessWrong

1+ mon, 3+ week ago   (203+ words) Never explain why something important is important. If you must explain it, don't. Knowing stuff others don't know is most effective when others don't know you know stuff they don't know. Language is largely made to show off, gossip, confuse…