The last Hindu king of Pakistan.
The name Amarkot is known to everyone in our subcontinent as the birthplace of Emperor Akbar. When Humayun was fleeing from Sher Shah, Rana Prasad of Amarkot sheltered him, and it was there that Emperor Akbar was born.
#Thread
1/19
@talesofBharat

2/19
But Amarkot's name is inextricably linked with the history of the subcontinent for another reason as well: the Partition of the country.
Sodha is the family name of the Maharajas of..
3/19
4/19
After the capture of the fort by the
5/19
6/19
In 1946, Jawaharlal Nehru went to Amarkot to invite the then Maharaja, Rana Arjun Singh, to join the Congress. At that time Amarkot had a population of 12,000 Hindus, which was 90% of the
7/19
8/19
In fact, the reason behind the Rana joining the Muslim League was different. The Maharaja of Jodhpur had a perpetually hostile relationship with the Rana of Amarkot from the twelfth century onwards.
9/19
10/19
During the war on the western battlefield in 1971,
11/19
12/19
Rana Chander Singh Sodha is personally a very conservative man who adheres strictly to the caste system, even though he maintained perfect harmony
13/19
14/19
15/19
16/19
His son Hamir Singh Sodha also served as Minister of Agriculture in the Sindh provincial assembly and is currently a Member of Pakistan's Parliament.
17/19
How can we use language supervision to learn better visual representations for robotics?
Introducing Voltron: Language-Driven Representation Learning for Robotics!
Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z
🧵👇(1 / 12)
Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.
Yet the rich, natural language annotations accompanying each video remain an underused part of these datasets. (2/12)
The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).
The secret is *balance* (3/12)
Starting with a masked autoencoder over frames from these video clips, make a choice:
1) Condition on language and improve our ability to reconstruct the scene.
2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
Why is the ability to shape this balance important? (5/12)
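The trade-off above can be sketched as a simple convex combination of the two objectives. This is only an illustrative sketch: the `alpha` weighting, function name, and scalar-loss interface are assumptions for exposition, not Voltron's actual implementation.

```python
def balanced_loss(recon_loss: float, lang_gen_loss: float, alpha: float) -> float:
    """Blend the two Voltron-style objectives (hypothetical formulation).

    recon_loss:    language-conditioned masked-reconstruction error
    lang_gen_loss: language-generation (captioning) error
    alpha:         1.0 -> purely low-level (reconstruction),
                   0.0 -> purely high-level (language generation)
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * recon_loss + (1.0 - alpha) * lang_gen_loss


# Sliding alpha moves the representation along the low-/high-level spectrum.
print(balanced_loss(2.0, 4.0, 1.0))   # reconstruction only -> 2.0
print(balanced_loss(2.0, 4.0, 0.0))   # generation only -> 4.0
print(balanced_loss(2.0, 4.0, 0.5))   # balanced -> 3.0
```

A single scalar knob like this is what lets one training recipe target either fine-grained control tasks (favoring reconstruction) or semantic tasks like language-conditioned imitation (favoring generation).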