Synthetic data is widely used across many domains. Many modern algorithms require large amounts of data for effective training, while collecting and labeling real data is a time-consuming, error-prone process. Furthermore, some real-world data is confidential by nature and cannot be shared.
Some methods, such as generative adversarial networks (GANs)¹, have been proposed to generate time series data. However, GANs are hard to train and can be unstable; moreover, they themselves require a large volume of data for effective training.
This article introduces tsBNgen, a Python library for generating synthetic time series…
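tsBNgen generates time series by sampling from a user-specified Bayesian network structure. As a rough illustration of that underlying idea (this is not tsBNgen's actual API; the model and parameters below are made-up examples), here is a minimal sampler for a two-node dynamic Bayesian network: a discrete hidden state that evolves according to a Markov transition matrix, and a continuous observation emitted at each time step:

```python
import numpy as np

def sample_dbn(T, P0, A, emission_means, emission_std, rng):
    """Sample one length-T series from a simple two-node DBN:
    a discrete state X_t with initial distribution P0 and
    transition matrix A, and a continuous observation
    Y_t ~ N(emission_means[X_t], emission_std)."""
    states = np.empty(T, dtype=int)
    states[0] = rng.choice(len(P0), p=P0)
    for t in range(1, T):
        # Next state depends only on the previous state (Markov property)
        states[t] = rng.choice(A.shape[1], p=A[states[t - 1]])
    # Emit one Gaussian observation per time step
    obs = rng.normal(emission_means[states], emission_std)
    return states, obs

rng = np.random.default_rng(0)
P0 = np.array([0.6, 0.4])                      # initial state distribution
A = np.array([[0.9, 0.1],                      # state transition matrix
              [0.2, 0.8]])
states, obs = sample_dbn(100, P0, A, np.array([0.0, 5.0]), 1.0, rng)
```

Each call produces one correlated series; repeating the call yields a synthetic dataset whose temporal structure is controlled entirely by `A` and the emission parameters.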
This is the third part of the article series “Comprehensive Study of Least Square Estimation”. In the first two parts, I discussed the ordinary least-squares and multi-objective least-squares problems. If you have not already read the first two parts, you can check them out here:
In this article, I will talk about the constrained least-squares problem, which arises frequently in machine learning, control, signal processing, and statistics. …
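The constrained least-squares problem minimizes ‖Ax − b‖² subject to linear equality constraints Cx = d, and its optimality (KKT) conditions form a single linear system in x and the Lagrange multipliers. A minimal sketch of that solve (the matrices in the example are illustrative, not taken from the article):

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||Ax - b||^2 subject to Cx = d by solving the
    KKT system  [2 A^T A  C^T] [x]   [2 A^T b]
                [   C      0 ] [z] = [   d   ]."""
    n = A.shape[1]
    p = C.shape[0]
    KKT = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((p, p))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]                      # drop the multipliers z

# Example: closest point to b whose entries sum to zero
x = constrained_lstsq(np.eye(3), np.array([1.0, 2.0, 3.0]),
                      np.array([[1.0, 1.0, 1.0]]), np.array([0.0]))
```

For this example the solution is b minus its mean, i.e. [-1, 0, 1], and the constraint `x.sum() == 0` holds exactly.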
In the first part, I discussed the ordinary least-squares problem in detail. In this part (part 2), I will go over multi-objective least-squares problems. If you have not already read the first part, please refer to the following article for more information:
For more information and a more detailed explanation, please refer to my YouTube channel.
Suppose we are looking for a solution that makes more than one objective (cost) function small. This problem is referred to as a multi-objective optimization problem. …
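A standard way to trade off several quadratic objectives is to minimize the weighted sum Σᵢ λᵢ‖Aᵢx − bᵢ‖², which reduces to one ordinary least-squares problem after scaling each block by √λᵢ and stacking. A minimal sketch (the matrices and weights below are illustrative assumptions, not data from the article):

```python
import numpy as np

def multi_objective_lstsq(As, bs, weights):
    """Minimize sum_i w_i * ||A_i x - b_i||^2 by stacking
    sqrt(w_i) * A_i and sqrt(w_i) * b_i into one LS problem."""
    A = np.vstack([np.sqrt(w) * Ai for Ai, w in zip(As, weights)])
    b = np.concatenate([np.sqrt(w) * bi for bi, w in zip(bs, weights)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: a data-fit objective plus a "keep x small" objective
# (weighting the second one recovers Tikhonov regularization)
rng = np.random.default_rng(0)
A1 = rng.normal(size=(20, 3))
b1 = rng.normal(size=20)
A2 = np.eye(3)
b2 = np.zeros(3)
x = multi_objective_lstsq([A1, A2], [b1, b2], [1.0, 5.0])
```

With these two objectives the stacked solution matches the closed form (A₁ᵀA₁ + 5I)⁻¹A₁ᵀb₁, which is exactly ridge regression with λ = 5.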
Least-squares estimation is one of the most widely used techniques in machine learning, signal processing, and statistics. It is the standard way of solving linear regression, which is widely used to model continuous outcomes.
It can be modeled as an MMSE estimator or as a Bayes estimator with a quadratic cost. I have already written comprehensive articles about various estimators used in machine learning and signal processing. Please check the following article for more information.
I have also made a series of YouTube videos to go more in-depth about various estimation techniques. …
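For concreteness, here is a minimal ordinary least-squares fit on synthetic data (generated only for illustration), solving the normal equations and cross-checking against NumPy's SVD-based solver:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))                   # design matrix
x_true = np.array([2.0, -1.0, 0.5])            # ground-truth coefficients
b = A @ x_true + 0.01 * rng.normal(size=50)    # noisy observations

# Normal equations: solve (A^T A) x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Numerically preferred route: SVD-based least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Both routes give the same minimizer here; for ill-conditioned design matrices the SVD/QR route is the safer choice.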
This is a two-part article. In part 2 (this part), I will go over random processes (stochastic processes), their properties, and their response to a linear time-invariant (LTI) channel. In part 1, I discussed probabilities, random variables, and their properties. If you have not read the first part yet, please read it here first:
The first part is fundamental to understanding this part, since random processes are the general extension of random variables and random vectors.
Random variables and random processes play important roles in the real world. …
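One concrete property of a random process passing through an LTI system: for a white-noise input with variance σ², the output variance is σ² Σₙ h[n]². A quick numerical sanity check (the FIR filter taps below are an arbitrary example chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
x = rng.normal(0.0, sigma, size=200_000)   # white-noise input process
h = np.array([0.5, 0.3, 0.2])              # FIR impulse response (example)
y = np.convolve(x, h, mode="valid")        # LTI system output

# Theory: output variance = sigma^2 * sum(h[n]^2) for white-noise input
theory = sigma**2 * np.sum(h**2)
```

The sample variance of `y` converges to `theory` (0.38 here) as the number of samples grows; the output is no longer white, since the filter introduces correlation over the length of `h`.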
This is a two-part article. In part 1 (this part), I will go over random variables, random vectors, and their properties. In part 2, I will discuss random processes and their properties.
Random variables and random processes play important roles in the real world. They are used extensively in fields such as machine learning, signal processing, digital communication, and statistics.
The following is the outline of this article:
There are many programming languages for different applications, such as data science, machine learning, signal processing, numerical optimization, and web development. It is therefore essential to know how to decide which language is most suitable for your application.
In this article, I will discuss the advantages and disadvantages of Python, R, and Matlab, and explain when and for which applications each language is most suitable. I organize the outline around popular research areas and work done extensively in the real world.
Following is the outline of this article:
Parameter estimation plays a vital role in machine learning, statistics, communication systems, radar, and many other domains. For example, in a digital communication system, you sometimes need to estimate the parameters of the fading channel, the variance of the AWGN (additive white Gaussian noise), IQ (in-phase/quadrature) imbalance parameters, frequency offset, etc. In machine learning and statistics, you constantly need to estimate and learn the parameters of probability distributions. For example, in Bayesian and causal networks, this corresponds to estimating the CPT (conditional probability table) for discrete nodes and the mean and variance for continuous nodes. …
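For the discrete-node case, the maximum-likelihood estimate of each CPT entry is simply a normalized co-occurrence count. A minimal sketch for one parent and one child node (the sample data below is made up for illustration):

```python
import numpy as np

def estimate_cpt(parent, child, n_parent, n_child):
    """Maximum-likelihood estimate of P(child | parent) from paired
    discrete samples: count co-occurrences, then normalize each row."""
    counts = np.zeros((n_parent, n_child))
    for p, c in zip(parent, child):
        counts[p, c] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy paired samples of (parent state, child state)
parent = [0, 0, 0, 0, 1, 1, 1, 1]
child  = [0, 0, 1, 1, 1, 1, 1, 0]
cpt = estimate_cpt(parent, child, 2, 2)
# Row i of cpt is the estimated distribution P(child | parent = i)
```

In practice one often adds a small pseudo-count (Laplace smoothing) to each cell so that unobserved parent–child combinations do not get probability exactly zero.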
PhD student in the ECE department at UCLA, doing research in machine learning, probabilistic graphical models, and causal inference.