A Brazilian Butt Lift has a mortality rate of 1 in 3000. Beauty is subjective, yet it matters enough to human beings that we give our lives for it. If we consider masks to be ugly, then that can be sufficient reason not to wear them. Aesthetics matter.

Cave diving ends in death once every 3286 dives. We don't prohibit it, we let people decide for themselves whether this existentially meaningful experience is worth the risk to them. It's not the role of public health to decide which risks human beings are allowed to take.
This, my friends, is the real issue to comprehend. "Masks don't work" and "lockdowns don't work" are fine arguments, but ultimately secondary ones. A cabal of pencil-necked conscientious technocrats should not have the right to make these kinds of decisions for us.
If you go down the road of "masks and lockdowns don't work" or "the virus is not very deadly," you're not hitting the core of the problem: they have no right to decide for us what the human experience should entail. We want freedom because we want freedom, not because of a pie chart.
"But what about seatbelts?!?" There's a difference between natural rights and legal rights. Natural rights are derived from observing nature. There are no cars in nature. What the public health class wants is to violate natural rights.
Freedom of assembly, freedom of religion, medical self-determination: these products of natural rights, derived from observing how the natural world functions, are now under threat.


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1/12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet an underused part of these datasets is the rich natural language annotations accompanying each video. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
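The conditioning/generation trade-off described in (4/12) can be sketched as a single weighted objective. This is a minimal illustrative sketch, not Voltron's actual code: the stub encoder, the loss functions, and the weight `alpha` are all hypothetical names standing in for a real masked-autoencoder backbone and its heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frames, lang_emb=None):
    """Stub visual encoder (stand-in for a masked autoencoder backbone).
    Optionally conditions on a language embedding."""
    z = frames.mean(axis=0)
    if lang_emb is not None:
        z = z + 0.1 * lang_emb  # language conditioning shifts the visual code
    return z

def recon_loss(z, target_frame):
    """Conditioning branch: reconstruct the (masked) frame given language."""
    return float(np.mean((z - target_frame) ** 2))

def caption_loss(z, lang_emb):
    """Generation branch: predict the language annotation from the visual code."""
    return float(np.mean((z - lang_emb) ** 2))

def balanced_objective(frames, target_frame, lang_emb, alpha=0.5):
    """Trade off conditioning (low-level) against generation (high-level).

    alpha = 1.0 -> pure language-conditioned reconstruction
    alpha = 0.0 -> pure language generation
    """
    z_cond = encode(frames, lang_emb)   # language-conditioned branch
    z_plain = encode(frames)            # unconditioned branch for captioning
    return (alpha * recon_loss(z_cond, target_frame)
            + (1 - alpha) * caption_loss(z_plain, lang_emb))

# Toy data: 8 frames of 16-dim features, plus a 16-dim language embedding.
frames = rng.normal(size=(8, 16))
target = rng.normal(size=16)
lang = rng.normal(size=16)

losses = {a: balanced_objective(frames, target, lang, alpha=a)
          for a in (0.0, 0.5, 1.0)}
print(losses)
```

The point of the sketch is only the shape of the objective: sliding `alpha` toward 1 emphasizes pixel-level reconstruction (low-level features), while sliding it toward 0 emphasizes describing the scene in language (high-level features).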
The best morning routine?

Starts the night before.

9 evening habits that make all the difference:

1. Write down tomorrow's 3:3:3 plan

• 3 hours on your most important project
• 3 shorter tasks
• 3 maintenance activities

Defining a "productive day" is crucial.

Or else you'll never be at peace (even with excellent output).



2. End the workday with a shutdown ritual

Create a short shutdown ritual (hat-tip to Cal Newport). Close your laptop, plug in the charger, spend 2 minutes tidying your desk. Then say, "shutdown."

Separating your life and work is key.

3. Journal 1 beautiful life moment

Delicious tacos, a presentation you crushed, a moment of inner peace. Write it down.

Gratitude programs a mindset of abundance.

4. Lay out clothes

Get exercise clothes ready for tomorrow. Upon waking up, jump rope for 2 mins. It will activate your mind + body.
