Notes on the Nash embedding theorem

Terence Tao's notes on the Nash embedding theorem.

What's new

Throughout this post we shall always work in the smooth category, thus all manifolds, maps, coordinate charts, and functions are assumed to be smooth unless explicitly stated otherwise.

A (real) manifold $M$ can be defined in at least two ways. On one hand, one can define the manifold extrinsically, as a subset of some standard space such as a Euclidean space ${\bf R}^d$. On the other hand, one can define the manifold intrinsically, as a topological space equipped with an atlas of coordinate charts. The fundamental embedding theorems show that, under reasonable assumptions, the intrinsic and extrinsic approaches give the same classes of manifolds (up to isomorphism in various categories). For instance, we have the following (special case of the) Whitney embedding theorem:

Theorem 1 (Whitney embedding theorem) Let $M$ be a compact manifold. Then there exists an embedding $u: M \rightarrow {\bf R}^d$ …

View original post 4,211 more words
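A small illustration of the intrinsic/extrinsic distinction in the excerpt above (my own example, not part of Tao's post): the unit circle admits both descriptions. Extrinsically it is the subset $S^1 = \{(x,y) \in {\bf R}^2 : x^2 + y^2 = 1\}$ of the Euclidean plane; intrinsically it is a topological space covered by two angle charts whose transition maps are smooth on the overlap. The map $u(\theta) = (\cos \theta, \sin \theta)$ then embeds the intrinsic circle back into ${\bf R}^2$, which is exactly the kind of embedding Theorem 1 provides for a general compact manifold.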

Homomorphism vs Homeomorphism

PERPETUAL ENIGMA

Did you get the joke in the picture to the left? If not, you will in a few minutes. I was recently reading an article and came across the terms mentioned in the title. At first glance, they look very close to each other, right? In many fields within mathematics, we talk about objects and the maps between them. Now you may ask why we would want to do that. Well, transformation is one of the most fundamental notions in any field. For example, how do we transform a line into a circle, fuel into mechanical energy, or words into numbers? There are infinitely many types of transformations, and we obviously cannot account for every single one, so we limit ourselves to only the interesting ones. So what exactly is this all about? How does it relate to the title of…

View original post 998 more words
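For readers who stop at the excerpt, a one-line summary of the distinction behind the title (my own paraphrase, not text from the original post): a homomorphism is a map between algebraic structures that preserves their operations, e.g. a map $f$ between groups satisfying $f(xy) = f(x)f(y)$, whereas a homeomorphism is a continuous bijection between topological spaces whose inverse is also continuous. The first preserves algebraic structure, the second topological structure, despite the nearly identical names.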

Tensor Methods in Machine Learning

From http://www.offconvex.org/2015/12/17/tensor-decompositions/

 

Tensors are high-dimensional generalizations of matrices. In recent years, tensor decompositions have been used to design learning algorithms for estimating the parameters of latent variable models such as Hidden Markov Models, Mixtures of Gaussians, and Latent Dirichlet Allocation (many of these works were considered examples of "spectral learning"; read on to find out why). In this post I will briefly describe why tensors are useful in these settings. Continue reading
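A minimal sketch of the kind of object the linked post is about (my own illustration in Python with NumPy, not code from the article; the dimensions, weights, and vectors are made up): for a simple latent variable model with K hidden components, each with a parameter vector a_k and mixing weight w_k, the third-order moment tensor is a sum of K rank-one terms, and it is this sum that tensor decomposition methods factor apart.

    import numpy as np

    # Toy setup (made up for illustration): K latent components in d dimensions,
    # component k having parameter vector a_k (column k of A) and weight w_k.
    rng = np.random.default_rng(0)
    d, K = 5, 3
    A = rng.normal(size=(d, K))
    w = np.array([0.5, 0.3, 0.2])

    # Third-order moment tensor T = sum_k w_k * (a_k outer a_k outer a_k):
    # a d x d x d array whose rank-one (CP) decomposition recovers the a_k and w_k.
    T = np.einsum('k,ik,jk,lk->ijl', w, A, A, A)

    print(T.shape)  # (5, 5, 5)

Methods such as tensor power iteration or simultaneous diagonalization factor a tensor of this form back into its rank-one pieces, generalizing the eigendecompositions used in classical spectral methods.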