Do you want to start learning Machine Learning but keep hearing buzzwords related to Linear Algebra? Vector, matrix, tensor? These are basic Linear Algebra definitions.
Perhaps you want to learn Linear Algebra for Machine Learning and are looking for a place to start?
In one of the previous articles, we discussed the importance of Linear Algebra in Machine Learning. Here, we introduce some of the most commonly used Linear Algebra definitions.
This article aims to accomplish the following:
- Provide the basic definitions in Linear Algebra that we hear every day while dealing with Machine Learning.
- Describe the connection between those basic concepts.
- Demonstrate the commonly used notations.
If you already know Linear Algebra and/or are familiar with the basic definitions, you may still find some aspects of this article useful, such as the presentation of the commonly used notations.
Let’s get started!
Before You Move On
You may find the following resource helpful for better understanding the concepts in this article:
- The Remarkable Importance of Linear Algebra in Machine Learning: This article talks about why you should care about Linear Algebra if you want to master Machine Learning.
Scalar and Vector
By the mathematical definition, a scalar is “an element of a field, which is used to define a vector space, usually the field of real numbers.” Simply speaking, a scalar is just a number. We usually denote a scalar with a lowercase symbol, such as a.
It is worth comparing scalars and vectors. You are familiar with measures such as the velocity of a moving car, which has both a magnitude and a direction. Such elements are called vectors, as opposed to scalars, which have magnitude only. We denote vectors with bold lowercase symbols, such as x.
Subscripts usually denote the elements of a vector. For example, the second element of the vector x is denoted as x_2. In general, we show a k-dimensional vector as an array of k elements:

x = [x_1, x_2, ..., x_k]
NOTATION: We denote a k-dimensional vector as x ∈ ℝ^k.
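As a quick illustration (using NumPy, which the article itself does not introduce, so treat the library choice as an assumption), here is how a scalar and a vector look in code:

```python
import numpy as np

# A scalar is just a single number.
s = 5.0

# A vector is an ordered array of numbers; here, a 4-dimensional vector.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Math notation is 1-based (x_2 is the second element),
# while NumPy indexing is 0-based, so x_2 corresponds to x[1].
second_element = x[1]

print(x.shape)         # (4,)  -> a k-dimensional vector with k = 4
print(second_element)  # 2.0
```

Note the off-by-one difference between the 1-based mathematical subscripts and 0-based array indexing; it is a common source of bugs when translating formulas into code.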
Matrix
In mathematics, a matrix is a rectangular array of elements, such as numbers, organized in rows and columns. Matrices are usually denoted with bold uppercase symbols, such as A. An example of a matrix with 2 rows and 3 columns is below:

A = [[1, 2, 3],
     [4, 5, 6]]

We denote the matrix elements with subscripts. In the matrix A, the element in row i and column j is denoted as A_{i,j}. So for the example above, you can easily say A_{1,2} = 2 and A_{2,3} = 6.
Another important notation is used when we want to refer to a whole row or column, i.e., all elements in that particular row/column. In this case, we use the “:” sign in the subscript. For example, assume we want to refer to all elements of row i of matrix A. We denote that row with A_{i,:}. For the matrix A above, A_{2,:} is as below:

A_{2,:} = [4, 5, 6]
NOTE: We usually represent a vector as one column with multiple rows of elements. Hence, we can regard a vector of size m as an m × 1 matrix. In general, we can informally say vectors are a special kind of matrix with a single column.
NOTATION: We denote a matrix as A ∈ ℝ^(m × n), in which m and n are the number of rows and columns, respectively.
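The element and row/column notations above map directly onto array indexing and slicing. A minimal sketch (NumPy is my choice here, not the article's; the matrix values are illustrative):

```python
import numpy as np

# A matrix with m = 2 rows and n = 3 columns.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Element in row i, column j: math's A_{1,2} is A[0, 1] in 0-based NumPy.
print(A[0, 1])  # 2

# The ":" subscript selects a whole row or column.
row_2 = A[1, :]  # all elements of the second row
col_3 = A[:, 2]  # all elements of the third column

# A vector stored as a column is an m x 1 matrix.
v = np.array([[1], [2], [3]])
print(v.shape)  # (3, 1)
```

The `:` slice in NumPy plays exactly the role of the “:” subscript in the mathematical notation: it stands for “all indices along this axis.”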
Tensor
In machine learning, we usually represent data numerically, and tensors are used for such numerical representation. A tensor is simply a container that holds data in N-dimensional space; in particular, a tensor can host and represent multiple matrices.
You can picture a simple 3D tensor as a data cube: each slice of the cube is a matrix, and the concatenation of those matrices forms the tensor.
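To make the “stack of matrices” picture concrete, here is a small sketch (again assuming NumPy; the values are arbitrary):

```python
import numpy as np

# Two 2x2 matrices.
M1 = np.array([[1, 2], [3, 4]])
M2 = np.array([[5, 6], [7, 8]])

# Stacking the matrices along a new axis produces a 3D tensor ("data cube").
T = np.stack([M1, M2])
print(T.shape)  # (2, 2, 2): 2 matrices, each 2x2

# Slicing along the first axis recovers one of the original matrices.
print(np.array_equal(T[0], M1))  # True
```

In this picture, a vector is a 1D tensor, a matrix is a 2D tensor, and higher-dimensional tensors simply add more axes to the container.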
Conclusion
In this post, we described some of the most commonly used Linear Algebra definitions in Machine Learning: scalar, vector, matrix, and tensor are the terms you will hear most often. It was crucial to address them before we proceed to explain other concepts in Linear Algebra.
P.S. Please share your thoughts with me by commenting below. I might be wrong in what I say, and I would love to know when I am. Furthermore, your questions might be my questions as well. It’s always good to become better, even if being the best is impossible. So let’s help each other become better.