10 PYTHON 🐍 libraries for machine learning.

Retweets are appreciated.
[ Thread ]

1. NumPy (Numerical Python)

- The most powerful feature of NumPy is the n-dimensional array.

- It contains basic linear algebra functions, Fourier transforms, and tools for integration with low-level languages like C, C++, and Fortran.

Ref: https://t.co/XY13ILXwSN
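
A minimal sketch of the n-dimensional array plus the built-in linear algebra and FFT routines (the array values below are arbitrary):

```python
import numpy as np

# Build a 2-D array, then use the bundled linear algebra and FFT routines.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)        # solve the linear system A @ x = b
eigenvalues = np.linalg.eigvals(A)
spectrum = np.fft.fft(np.sin(np.linspace(0.0, 2.0 * np.pi, 8)))

print(x, eigenvalues, spectrum.shape)
```
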
2. SciPy (Scientific Python)

- SciPy is built on NumPy.

- It provides a variety of high-level science and engineering modules, such as the discrete Fourier transform, linear algebra, optimization, and sparse matrices.

Ref: https://t.co/ALTFqM2VUo
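
A quick sketch touching three of those modules (the numbers are arbitrary toy inputs):

```python
import numpy as np
from scipy import optimize, sparse
from scipy.fft import fft

# Minimize a simple quadratic, build a sparse matrix, and take a DFT.
result = optimize.minimize(lambda v: (v[0] - 3.0) ** 2 + 1.0, x0=[0.0])
M = sparse.csr_matrix(np.eye(4))                 # sparse 4x4 identity matrix
spectrum = fft(np.array([0.0, 1.0, 0.0, -1.0]))  # discrete Fourier transform

print(result.x, M.nnz, spectrum)
```
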
3. Matplotlib

- Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python.

- You can also use LaTeX commands to add math to your plots.

- Matplotlib makes easy things easy and hard things possible.

Ref: https://t.co/zodOo2WzGx
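
A small static example with LaTeX-style math text in the labels:

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot a sine wave and use LaTeX math text in the title, label, and legend.
x = np.linspace(0.0, 2.0 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label=r"$\sin(x)$")
ax.set_xlabel(r"$x$ (radians)")
ax.set_title(r"$f(x) = \sin(x)$")
ax.legend()
plt.show()
```
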
4. Pandas

- Pandas is for structured data operations and manipulations.

- It is extensively used for data munging and preparation.

- Pandas was added to the Python ecosystem relatively recently and has been instrumental in boosting Python's usage.

Ref: https://t.co/IFzikVHht4
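
A taste of typical munging steps (the data below is made up):

```python
import pandas as pd

# Impute a missing value, filter rows, and aggregate by group.
df = pd.DataFrame({
    "city": ["Lima", "Lima", "Quito"],
    "temp": [19.0, None, 22.5],
})
df["temp"] = df["temp"].fillna(df["temp"].mean())   # fill the missing value
warm = df[df["temp"] > 20]                          # boolean filtering
means = df.groupby("city")["temp"].mean()           # group-wise aggregation

print(warm, means, sep="\n")
```
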
5. Scikit-learn

- Built on NumPy, SciPy, and matplotlib, this library contains a lot of efficient tools for machine learning and statistical modeling including classification, regression, clustering, and dimensionality reduction.

Ref: https://t.co/TCaQXPvKkk
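
A quick sketch of a classification workflow using the bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split the data, train a simple classifier, and measure its accuracy.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```
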
6. Statsmodels

- Statsmodels for statistical modeling.

- Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.

Ref: https://t.co/5CXswFvpPx
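
For example, an ordinary least squares fit on synthetic data (a minimal sketch):

```python
import numpy as np
import statsmodels.api as sm

# OLS on synthetic data; .summary() reports coefficients, standard errors,
# p-values, and confidence intervals.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

X = sm.add_constant(x)          # add an intercept column
model = sm.OLS(y, X).fit()
print(model.summary())
```
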
7. Seaborn

- Seaborn for statistical data visualization.

- Seaborn is a library for making attractive and informative statistical graphics in Python. It is based on matplotlib.

- Seaborn aims to make visualization a central part of exploring and understanding data.

Ref: https://t.co/cSxJlr09mq
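
A short example using one of seaborn's bundled demo datasets:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Scatter plots with regression fits, split by a categorical column.
tips = sns.load_dataset("tips")   # small example dataset shipped with seaborn
sns.lmplot(data=tips, x="total_bill", y="tip", hue="smoker")
plt.show()
```
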
8. Blaze

- Blaze for extending the capabilities of NumPy and Pandas to distributed and streaming datasets.

- It can be used to access data from a multitude of sources including Bcolz, MongoDB, SQLAlchemy, Apache Spark, PyTables, etc.

Ref: https://t.co/5NhpM0reaH
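
A rough sketch of Blaze's expression-style API, assuming a local sales.csv file with region and amount columns (the file and column names are hypothetical, and the API may differ between Blaze versions):

```python
from blaze import data, by

# Wrap a CSV file in a Blaze expression; the same code could point at
# MongoDB, SQLAlchemy, or Spark sources via a URI string instead.
sales = data("sales.csv")

# The computation is described lazily and pushed down to the backend.
totals = by(sales.region, total=sales.amount.sum())
print(totals)
```
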
9. Scrapy

- Scrapy for web crawling.

- It is a very useful framework for extracting specific patterns of data.

- It can start at a website's home URL and then dig through the web pages within that website to gather information.

Ref: https://t.co/iEYIazAd2B
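
A minimal spider as a sketch (the start URL and file name are placeholders): it begins at a homepage, follows links, and yields the title of every page it visits.

```python
import scrapy

class TitleSpider(scrapy.Spider):
    name = "titles"
    start_urls = ["https://example.com/"]

    def parse(self, response):
        # Record this page, then follow every link found on it.
        yield {"url": response.url, "title": response.css("title::text").get()}
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)

# Run it with:  scrapy runspider title_spider.py -o titles.json
```
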
10. SymPy

- SymPy for symbolic computation.

- It has wide-ranging capabilities from basic symbolic arithmetic to calculus, algebra, discrete mathematics, and quantum physics.

- It can also format the results of computations as LaTeX code.

Ref: https://t.co/hesVmRJLVj
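
A small sketch of symbolic calculus with LaTeX output:

```python
import sympy as sp

x = sp.symbols("x")
expr = sp.sin(x) * sp.exp(x)

derivative = sp.diff(expr, x)      # symbolic differentiation
integral = sp.integrate(expr, x)   # symbolic integration

print(derivative)                  # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.latex(integral))          # the antiderivative rendered as LaTeX code
```
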
Additional libraries you might need:

- os for operating system and file operations.

- NetworkX for graph-based data manipulation.

- Regular expressions (re) for finding patterns in text data.

- BeautifulSoup for scraping the web.
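
One line or two of each, as a quick sketch (inputs are made up):

```python
import os
import re

import networkx as nx
from bs4 import BeautifulSoup

# os: operating system / file operations
print(os.listdir("."))

# re: find patterns in text
print(re.findall(r"\d+", "order 66 shipped in 3 days"))    # ['66', '3']

# networkx: build a small graph and query it
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c")])
print(nx.shortest_path(G, "a", "c"))                       # ['a', 'b', 'c']

# BeautifulSoup: parse an HTML snippet
soup = BeautifulSoup("<p class='greeting'>hello</p>", "html.parser")
print(soup.find("p", class_="greeting").text)              # hello
```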

More from Machine learning

This is a Twitter series on #FoundationsOfML.

โ“ Today, I want to start discussing the different types of Machine Learning flavors we can find.

This is a very high-level overview. In later threads, we'll dive deeper into each paradigm... 👇🧵

Last time we talked about how Machine Learning works.

Basically, it's about having some source of experience E for solving a given task T, which allows us to find a program P that is (hopefully) optimal w.r.t. some metric.

According to the nature of that experience, we can define different formulations, or flavors, of the learning process.

A useful distinction is whether we have an explicit goal or desired output, which gives rise to the definitions of 1️⃣ Supervised and 2️⃣ Unsupervised Learning 👇

1๏ธโƒฃ Supervised Learning

In this formulation, the experience E is a collection of input/output pairs, and the task T is defined as a function that produces the right output for any given input.

👉 The underlying assumption is that there is some correlation (or, in general, a computable relation) between the structure of an input and its corresponding output, and that it is possible to infer that function or mapping from a sufficiently large number of examples.
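
As a toy illustration (a sketch using scikit-learn's LinearRegression on made-up data):

```python
from sklearn.linear_model import LinearRegression

# Supervised learning in miniature: the experience E is a set of (input, output)
# pairs, and we look for a function that maps new inputs to the right outputs.
E = [([1.0], 2.0), ([2.0], 4.0), ([3.0], 6.0)]   # toy examples of y = 2x
X, y = zip(*E)

f = LinearRegression().fit(list(X), list(y))     # infer the mapping from examples
print(f.predict([[4.0]]))                        # approximately [8.0]
```
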
