# Supervised Learning

In supervised learning, every example in the dataset is labelled, which means the 'correct answer' has been given to the computer. The algorithm then learns to predict those answers from the given dataset. If you want to predict a target value, supervised learning is the approach to look into. Supervised learning covers two main tasks: classification and regression. In a classification problem, the prediction output is a discrete value; in a regression problem, it is a continuous value. This is the main difference between the two problems [1].

Once supervised learning has been chosen for a classification or regression problem, the next step is to train the algorithm and let it learn from the dataset. The data used to train the algorithm is called the training set. A training set is a collection of training examples, each with several features and one labelled value (the target variable) [1]. In general, the total dataset is divided into three parts: the training set, the cross-validation set and the test set, occupying 60%, 20% and 20% of the total size, respectively. How to use these three sets will be discussed later. The learning algorithm uses the training examples to produce a hypothesis, which is the resulting function. This function takes new data as input and predicts the output. There are many powerful supervised learning algorithms, and they cannot all be covered in this report. In this part, linear regression, logistic regression, neural networks and Support Vector Machines (SVMs) are introduced.
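The 60/20/20 split described above can be sketched as follows; the function name `split_dataset` and the fixed seed are illustrative choices, not from any particular library:

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Shuffle the examples, then split 60% train / 20% validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # random order of example indices
    n_train = int(0.6 * len(X))
    n_val = int(0.2 * len(X))
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

X = np.arange(20).reshape(10, 2)           # 10 examples, 2 features each
y = np.arange(10)                          # one target value per example
train, val, test = split_dataset(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 6 2 2
```

Shuffling before splitting matters: if the dataset is ordered (for example by class), an unshuffled split would give the three sets very different distributions.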

The linear regression model is one of the simplest supervised learning algorithms. Its hypothesis is $$h_\theta(x) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n$$ where $x_i$ is the value of feature $i$ and the $\theta_i$ are the parameters of the model.
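The hypothesis above is just an intercept plus a dot product, which a short sketch makes concrete (the parameter values here are arbitrary, chosen only for the example):

```python
import numpy as np

def hypothesis(theta, x):
    """h(x) = theta_0 + theta_1*x_1 + ... + theta_n*x_n.
    theta has n+1 entries (theta_0 is the intercept); x has n features."""
    return theta[0] + np.dot(theta[1:], x)

theta = np.array([1.0, 2.0, 3.0])   # theta_0 = 1, theta_1 = 2, theta_2 = 3
x = np.array([4.0, 5.0])            # features x_1 = 4, x_2 = 5
print(hypothesis(theta, x))          # 1 + 2*4 + 3*5 = 24.0
```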

[1]P. Harrington, Machine learning in action. Shelter Island, N.Y.: Manning Publications Co., 2012.

# The Introduction of Machine Learning

Machine learning is a popular topic in computer science, but it can be applied in many other areas. For example, it can predict the probability that it will rain, or classify the species of a bird. Machine learning is a powerful technique that will be widely used in our future lives, and it is a good tool for gaining insight into data: it asks the computer to help us understand the meaning of the data more deeply [1]. According to Mitchell, machine learning is defined as follows: "a computer program is said to learn from experience E, with respect to some task T, and some performance measure P, if its performance on T as measured by P improves with experience E." The three features of this definition are the class of tasks, the measure of performance to be improved, and the source of experience [2]. The experience E is what the machine learns from; in general, a machine learning algorithm learns from a dataset. The task T is what we want the machine to do after training. The performance measure P may be the probability that the machine completes the task, or the accuracy of the algorithm's results. In the training process, a computer is more patient than a person: it can repeat the analysis of the same data many times and may find a better answer than a person would.

There are two general classes of machine learning. One is supervised learning and the other is unsupervised learning. The difference between the two is whether people teach the computer how to do something, that is, whether we tell the machine which choice is right. In other words, when the data is labelled, it is supervised learning. In unsupervised learning, there is no labelled data; the computer does not know what is right and must find the answer by itself [1]. The details of supervised and unsupervised learning will be discussed in a series of later articles.

Moreover, how to choose and improve the algorithm is another important consideration when designing a machine learning system.

[1]P. Harrington, Machine learning in action. Shelter Island, N.Y.: Manning Publications Co., 2012.

[2]T. Mitchell, Machine learning. Singapore: McGraw-Hill, 1997, pp. 1-4.

# Fast Fourier Transform

The Fast Fourier Transform (FFT) is an algorithm that simplifies the calculation of the Discrete Fourier Transform (DFT). Computed directly, the DFT is expensive and uses much of the computer's memory. In digital signal processing, the superposition principle is important for tackling such problems, and the FFT uses this principle well: it decomposes the original DFT into much shorter DFTs which, after calculation, can be combined back into the original DFT. After decomposition, these shorter DFTs may be computed directly or decomposed further into even shorter DFTs. This makes the calculation of the DFT faster and easier. There are two basic classes of FFT algorithm: decimation-in-time and decimation-in-frequency. The basic principle of each is discussed below.

The principle of the decimation-in-time FFT is easy to understand. The basic operation divides the N-point DFT sequence into its odd-numbered and even-numbered points. After calculation, the two half-length DFTs can easily be recombined into the original DFT by re-interleaving the two sequences. An example makes this process easy to follow. The mathematics is not presented in this article, but the flow graph of an 8-point DFT using the butterfly computation is shown in the figure, which illustrates the process of the FFT.
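The even/odd split described above can be written directly as a recursive radix-2 decimation-in-time FFT, a minimal sketch for power-of-two lengths (the function name `fft_dit` is mine; real libraries use optimised iterative versions):

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return x
    even = fft_dit(x[0::2])          # DFT of the even-numbered samples
    odd = fft_dit(x[1::2])           # DFT of the odd-numbered samples
    # Twiddle factors combine the two half-length DFTs (the butterfly step)
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.array([1, 2, 3, 4, 0, 0, 0, 0], dtype=complex)
print(np.allclose(fft_dit(x), np.fft.fft(x)))  # True
```

Each level of recursion halves the sequence length, which is exactly the decomposition that reduces the direct DFT's O(N²) work to O(N log N).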

The decimation-in-time FFT algorithm divides the input sequence into smaller sequences, while the decimation-in-frequency FFT instead divides the output sequence into smaller subsequences. The difference between the two methods is which side is divided into smaller sequences.

# A Day Trip to Alton Tower

Alton Tower is one of the eleven most-visited theme parks in Europe and one of the ten most famous amusement parks in the world, located in Staffordshire, England. It is a theme park centred on thrill rides, and here you can not only experience advanced amusement facilities but also enjoy beautiful natural scenery, with towering old trees, babbling streams, and troops of squirrels.

If it is not crowded, Alton Tower really is a perfect spot for a day trip. We went one week after the university exam season ended, and there really were not many people. It may also have been because of the weak UK economy.

# Introduction of convolution code

A convolutional code, which adds parity bits to the data, is a powerful technique for correcting random errors. In the convolution process a shift register is used, so the output is a function of the current and previous data. The constraint length K is defined as the degree of memory a code includes. A rate of 1/2, as used in WSPR, means two output bits are generated for each input bit. Fig. 2 shows a simple convolutional encoder example.

In the beginning, the shift register is loaded with 111 and the input bit stream is 11011. Input bits enter the shift register from the left, and the output bits O1 and O2 are calculated by modulo-2 adders. From Table I, the output is 10 01 00 01 10 10 10 for the input bits 11011. This covers the encoding process of the convolutional code. To understand how memory helps it correct random errors, the state diagram of this example is shown in Fig. 3. There are four states of R1 and R2; R0 is the input bit, and the number adjacent to each arrow is the code output. According to Fig. 3, some transitions are not allowed because of the memory of the system, for example state 10 to 00. The output data stream corresponds to a set of allowed transitions between the states. If an error is introduced during transmission, the decoding process produces non-permitted transitions, which reveals that an error has occurred. The convolutional decoder then uses soft decisions to determine the correct message, and after this process the random error is corrected.
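The shift-register-plus-modulo-2-adder structure can be sketched in a few lines. Note the generator taps below (the textbook 7, 5 octal pair) and the all-zero initial state are assumptions for illustration; the actual connections and initial load of Fig. 2 may differ, so this sketch will not reproduce the exact output of Table I:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1), state=(0, 0)):
    """Rate-1/2, K=3 convolutional encoder.

    g1, g2 are generator taps over the window (input, R1, R2); the
    classic (7, 5) octal generators are assumed here.  Returns a list
    of (O1, O2) output pairs, one pair per input bit.
    """
    r1, r2 = state
    out = []
    for b in bits:
        window = (b, r1, r2)
        o1 = sum(t * w for t, w in zip(g1, window)) % 2  # modulo-2 adder 1
        o2 = sum(t * w for t, w in zip(g2, window)) % 2  # modulo-2 adder 2
        out.append((o1, o2))
        r1, r2 = b, r1                                   # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 1]))
# → [(1, 1), (0, 1), (0, 1), (0, 0), (0, 1)]
```

Because the output depends on the register contents as well as the current bit, only certain state transitions can occur, which is exactly the structure the decoder exploits to detect and correct errors.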

# Notes on Building the Site

NB: set up your categories properly from the start, and make good use of js files. A js file can save you a lot of tedious, repetitive work and can also add a lot to your site, so it is well worth studying.