Latest Saves

I’m at Day 23 of my "30 posts on Object Detection in 30 days" challenge

I gathered 12 visual summaries on OD Modeling 🎁

A lot of people have found these posts helpful. Follow @ai_fast_track to catch the upcoming posts, and give this tweet a quick retweet 🙏

Summary of summaries 👇


1- Common Object Detector Architectures you should be familiar with

2- Four Feature Pyramid Network (FPN) Designs you should know

3- Seven things you should know about the Focal Loss

4- FCOS is the first anchor-free object detector that beat two-stage detectors

Let's talk about a common problem in ML - imbalanced data ⚖️

Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that achieves 99.88% accuracy. Pretty cool, right?

Actually, this model is useless ❌

Let me explain 👇


The problem is that the data is severely imbalanced - the ratio between background pixels and traffic light pixels is 800:1.

If we don't take any measures, our model will learn to classify every pixel as background. Since 800 of every 801 pixels are background, that alone gives us 99.88% accuracy. But the model is useless!
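To see where that 99.88% comes from, here is a minimal sketch in Python (the pixel counts are hypothetical, chosen only to match the 800:1 ratio):

```python
import numpy as np

# Hypothetical pixel labels at an 800:1 ratio:
# 0 = background, 1 = traffic light
labels = np.zeros(801_000, dtype=int)
labels[:1_000] = 1  # 1,000 traffic-light pixels, 800,000 background

# A degenerate "model" that predicts background everywhere
predictions = np.zeros_like(labels)

accuracy = (predictions == labels).mean()
print(f"Accuracy: {accuracy:.2%}")  # 99.88% - and it detects nothing
```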

What can we do? 👇

Let me tell you about 4 ways of dealing with imbalanced data:

▪️ Choose the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss

Let's dive in 👇

1️⃣ Evaluation metrics

Looking at the overall accuracy is a very bad idea when dealing with imbalanced data. There are other measures that are much better suited:
▪️ Precision
▪️ Recall
▪️ F1 score

I wrote a whole thread on these metrics.
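Here is a quick sketch of how these metrics expose the all-background model (using scikit-learn, with the same hypothetical traffic-light labels as above):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

labels = np.zeros(801_000, dtype=int)
labels[:1_000] = 1                    # 1 = traffic light (the rare class)
predictions = np.zeros_like(labels)   # the all-background model

print(accuracy_score(labels, predictions))                    # ~0.9988 - looks great
print(precision_score(labels, predictions, zero_division=0))  # 0.0
print(recall_score(labels, predictions, zero_division=0))     # 0.0 - finds no traffic lights
print(f1_score(labels, predictions, zero_division=0))         # 0.0 - the model is exposed
```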


2️⃣ Undersampling

The idea is to throw away samples from the overrepresented class.

One way to do this is to throw away samples at random. Ideally, however, we only want to throw away redundant samples - ones that look very similar to the samples we keep - so we lose as little information as possible.

Here is a strategy to achieve that 👇
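One such strategy (a sketch of a common approach, not necessarily the thread's exact method) is cluster-based undersampling: cluster the overrepresented class and keep one representative per cluster, so the samples we throw away are precisely the ones that look alike. All names and numbers below are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_majority, n_keep, random_state=0):
    """Cluster the majority class into n_keep groups and keep the
    sample closest to each centroid, discarding its look-alikes."""
    kmeans = KMeans(n_clusters=n_keep, n_init=10, random_state=random_state)
    kmeans.fit(X_majority)
    kept = []
    for c in range(n_keep):
        members = np.flatnonzero(kmeans.labels_ == c)
        # Distance of each cluster member to its centroid
        dists = np.linalg.norm(
            X_majority[members] - kmeans.cluster_centers_[c], axis=1
        )
        kept.append(members[np.argmin(dists)])
    return np.array(kept)

# Illustrative usage: shrink 10,000 background samples to 500 diverse ones
X_background = np.random.randn(10_000, 16)  # hypothetical feature vectors
keep_idx = cluster_undersample(X_background, n_keep=500)
X_background_small = X_background[keep_idx]
```

Random undersampling (e.g. np.random.choice without replacement) is the simpler baseline; clustering just ensures the discarded samples are the redundant ones.
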
Some great lectures on AI and ML from Oxford University. A thread... [1/32] @CompSciOxford @UniofOxford

So in the Dept of Computer Science at Oxford Uni, we run a distinguished lecture series called the Strachey Lectures, after Christopher Strachey, the first director of Oxford's computer lab:
https://t.co/kjYbG2IifM [2/32]

(Strachey was a fascinating character - he got interested in computing after writing to Turing. He died young, and for this reason his story is not so well-known outside Oxford. But this thread is not about him - another time maybe...) [3/32]

Given the boom in AI/ML, it isn't surprising that we've had many leaders in AI/ML give Strachey Lectures, and since they have been recorded, I thought I'd share them... [4/32]

First up: @demishassabis. By coincidence, Demis's lecture was in Feb 2016, just a couple of weeks before the now-famous AlphaGo match in Seoul with Lee Sedol. [5/32]

🎥 Which YouTube channels should you follow to learn about deep learning research? Here's a thread of awesome channels we've been watching for a while.

We started linking their YouTube videos to https://t.co/BrdfkD9Qof
Let us know if we missed any cool channels.

🧵👇

2/5 Yannic Kilcher @ykilcher
https://t.co/pZoXD2qN3U
⏰ ~50 min long

His explanations of the latest AI papers are awesome. The videos walk through the papers in detail with side notes and highlighting.

3/5 The AI Epiphany @gordic_aleksa
https://t.co/H2dVIkMITx
⏰ ~40 min long

He started discussing the latest research about 6 months ago. His videos walk through the papers in detail with notes. He sometimes does coding projects as well.

4/5 Henry AI Labs @labs_henry
https://t.co/LTqwfpFbh8
⏰ ~15 min long

They have a mix of paper explanations, Keras tutorials, and weekly updates on newly published papers. Paper explanations use slides and cover the important points of the paper.

5/5 Two Minute Papers @twominutepapers
https://t.co/UatgFONZO3
⏰ ~8 min long

They cover highlights of papers in interesting short videos that are easy to understand. They seem to focus on vision- and graphics-related research.