Prashant 📚
@capeandcode
Ever heard of Autoencoders?

The first time I saw a Neural Network with more output neurons than hidden-layer neurons, I couldn't figure out how it would work!

#DeepLearning #MachineLearning
Here's a little something about them: 🧵👇


Autoencoders are unsupervised neural networks whose architecture you can picture as two funnels connected at their narrow ends.

These networks are primarily used for data compression tasks in Machine Learning.

We feed them the data so that they can learn the most important features: a smaller representation that keeps the integrity of the data.

Later, whenever it's needed, you can just take that small representation and recreate the original, just like a zip file. 📥
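
To make the two-funnel picture concrete, here is a minimal sketch in PyTorch. The framework choice and the layer sizes (784 → 128 → 32 and back, as for flattened 28×28 images) are my own illustrative assumptions, not from the thread:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Two funnels joined at the narrow end: the encoder squeezes, the decoder expands."""

    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder funnel: wide input -> small representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
        )
        # Decoder funnel: small representation -> back to the original size
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # the compressed "zip file" of the input
        return self.decoder(z)   # the reconstruction
```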

Being unsupervised, they require no labels.
Our inputs and outputs are the same, and a simple Euclidean distance can be used as the loss function for measuring the reconstruction.

Of course, we wouldn't expect a perfect reconstruction.
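
As an assumed illustration (reusing the Autoencoder sketch above), "the input is also the target" plus a squared Euclidean reconstruction loss looks like this:

```python
import torch
import torch.nn.functional as F

x = torch.rand(64, 784)        # a batch of unlabeled inputs (no labels needed)
model = Autoencoder()          # the sketch defined above
x_hat = model(x)               # the network tries to reproduce its own input

# Mean squared error = squared Euclidean distance, averaged over the batch.
reconstruction_loss = F.mse_loss(x_hat, x)
```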

We can think of an autoencoder as having two components, an encoder and a decoder, represented by the equations below:
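
The equations in the original tweet were shared as an image; a standard autoencoder formulation of the same idea is written out here for reference (the symbols are my own, not the thread's):

```latex
\begin{aligned}
z            &= f_{\theta}(x)                                   &&\text{(encoder)} \\
\hat{x}      &= g_{\phi}(z)                                     &&\text{(decoder)} \\
L(x,\hat{x}) &= \lVert x - g_{\phi}(f_{\theta}(x)) \rVert^{2}   &&\text{(reconstruction loss)}
\end{aligned}
```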

We are just trying to minimize L here. All the backpropagation rules still hold.
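
Since L is an ordinary differentiable loss, training is the usual gradient-descent loop. A hedged sketch continuing the snippets above (the optimizer, learning rate, and step count are assumptions):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer and learning rate

for step in range(1000):                           # illustrative number of steps
    x_hat = model(x)                               # forward pass: reconstruct the input
    loss = torch.nn.functional.mse_loss(x_hat, x)  # the L we are minimizing
    optimizer.zero_grad()
    loss.backward()                                # plain backpropagation, nothing special
    optimizer.step()
```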