I was reading something that suggested that trauma "tries" to spread itself, i.e., that the reason why intergenerational trauma is a thing is that the traumatized part in a parent will take action to recreate that trauma in the child.
(More like, "here's a story for how this could work.")
Why on earth would trauma be agenty in that way? It sounds like too much to swallow.
If some traumas try to replicate themselves in other minds, but most don't, pretty soon the world will be awash in the replicator type.
It's unclear how high the fidelity of transmission is.
Thinking through this has given me a new appreciation of what @DavidDeutschOxf calls "anti-rational memes". I think he might be on to something: that they are more-or-less at the core of all our problems on Earth.
More from Eli Tyre
I started by simply stating that I thought the arguments I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.
CritRats!

I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.

Anyone want to hash this out with me? https://t.co/Sdm4SSfQZv

— Eli Tyre (@EpistemicHope) December 26, 2020
So far, a few people have engaged pretty extensively with me, for instance by scheduling video calls to talk about some of the stuff or having long private chats.
(Links to some of those that are public at the bottom of the thread.)
But in addition to that, there has been a much more sprawling conversation happening on twitter, involving a much larger number of people.
Having talked to a number of people, I then offered a paraphrase of the basic counter that I was hearing from people of the Crit Rat persuasion.
ELI'S PARAPHRASE OF THE CRIT RAT STORY ABOUT AGI AND AI RISK
— Eli Tyre (@EpistemicHope) January 5, 2021
There are two things that you might call "AI".
The first is non-general AI, which is a program that follows some pre-set algorithm to solve a pre-set problem. This includes modern ML.
In general, I am super up for short (1 to 10 hour) adversarial collaborations.

If you think I'm wrong about something, and want to dig into the topic with me to find out what's up / prove me wrong, DM me.

— Eli Tyre (@EpistemicHope) December 23, 2020
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says, "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")
More from Culture
For three years I have wanted to write an article on moral panics. I have collected anecdotes and similarities between today's moral panic and those of the past - particularly the Satanic Panic of the 80s.

This is my finished product: https://t.co/otcM1uuUDk

— Ashe Schow (@AsheSchow) September 29, 2018
The 3 big things that made the 1980s/early 1990s surreal for me:
1) Satanic Panic - satanism in the day cares ahhhh!
2) "Repressed memory" syndrome
3) Facilitated Communication [FC]
All 3 led to massive abuse.
"Therapists" -and I use the term to describe these quacks loosely - would hypnotize people & convince they they were 'reliving' past memories of Mom & Dad killing babies in Satanic rituals in the basement while they were growing up.
Other 'therapists' would badger kids until they invented stories about watching alligators eat babies dropped into a lake from a hot air balloon. Kids would deny anything happened for hours until the therapist 'broke through' and 'found' the 'truth'.
FC was a movement that started with the claim that severely handicapped individuals were able to 'type' legible sentences & communicate if a 'helper' guided their hands over a keyboard.
You May Also Like
Ironies of Luck https://t.co/5BPWGbAxFi
— Morgan Housel (@morganhousel) March 14, 2018
"Luck is the flip side of risk. They are mirrored cousins, driven by the same thing: You are one person in a 7 billion player game, and the accidental impact of other people\u2019s actions can be more consequential than your own."
I’ve always felt that the luckiest people I know had a talent for recognizing circumstances, not of their own making, that were conducive to a favorable outcome, and for quickly taking advantage of them.
In other words, dumb luck was just that: it required no awareness on the person’s part, whereas “smart” luck involved awareness followed by action before the circumstances changed.
So, was I “lucky” to be born when I was (nothing I had any control over) and to come of age just as huge databases and computers were advancing to the point where I could use those tools to write “What Works on Wall Street?” Absolutely.
Was I lucky to start my stock market investments near the peak of interest rates, which allowed me to spend the majority of my adult life in a falling-rate environment? Yup.