If It's Worth Arguing, It's Worth Arguing With Whiteboards – Less Wrong
4 hours, 54 minutes ago (261 words) It's easy to disagree with people. You just say, "That's wrong" and decline to elaborate. But that's not very interesting. If you want to be making progress – instead of ragebaiting – it usually helps to find a way for your disagreement…
The value of moral diversity – Less Wrong
3 days, 15 hours ago (1152 words) Concentrated power likely means fewer value systems among the people who collectively shape the future – that is, reduced moral diversity among powerholders. Moral diversity has both costs and benefits: it enables moral trade and plausibly improves reflection, but also raises the…
Can AI make advancements in moral philosophy by writing proofs? – Less Wrong
4 days, 10 hours ago (846 words) Cross-posted from my website. If civilization advances its technological capabilities without advancing its wisdom, we may miss out on most of the potential of the long-term future. Unfortunately, it's likely that ASI will have a comparative disadvantage at philosophical…
Kegan, Teach, Rao: Stages of Moral Development – Less Wrong
4 days, 12 hours ago (1035 words) I recently read Chapman's texts on Robert Kegan's levels of moral development and meaning-making, namely: Developing ethical, social, and cognitive competence and the more psychedelic What is stage five (like)?. Scott Alexander also has some interesting thoughts on the first…
Morale – Less Wrong
5 days, 14 hours ago (699 words) One particularly pernicious condition is low morale. Morale is, roughly, "the belief that if you work hard, your conditions will improve." If your morale is low, you can't push through adversity. It's also very easy to accidentally drop your morale…
The Unintelligibility is Ours: Notes on Chain-of-Thought – Less Wrong
1 week, 16 hours ago (1287 words) Many people seem to think that the chains-of-thought in RL-trained LLMs are under a great deal of "pressure" to cease being English. The idea is that, as LLMs solve harder and harder problems, they will eventually slide into inventing a…
Foundational Beliefs – Less Wrong
1 week, 19 hours ago (632 words) I see a lot of AI safety strategies that don't fully engage with the complexity of the real world – and therefore are unlikely to succeed in the real world. My thinking about AI safety strategy is anchored by six foundational beliefs…
Inside Omega – Less Wrong
1 week, 4 days ago (741 words) This is a philosophical thought experiment which aims to explore what I consider to be the crux of many alignment problems: the unrescuability of moral internalism, which basically says we have not been able to rescue the philosophical…
My forays into cyborgism: theory, pt. 1 – Less Wrong
1 week, 5 days ago (820 words) In this post, I share the thinking that lies behind the Exobrain system I have built for myself. In another post, I'll describe the actual system. I think the standard way of relating to LLM/AIs is as an external…
Does consciousness and suffering even matter: LLMs and moral relevance – Less Wrong
2 weeks, 1 day ago (1791 words) (This is a light edit of a real-time conversation Victors and I had. The topic of consciousness and whether it was the right frame at all often came up when talking together, and we wanted to document all the frequent…