Basic Data Analysis in R


This article walks through basic statistical data analysis in R, covering basic plotting, linear regression, logistic regression, bootstrap sampling, and ANOVA, with an implementation and an application for each.

Without further ado, here is the code; explanations are in the comments.

1. Basic plots (box plot, Q-Q plot)

# basic plots
x <- rnorm(50)  # example data so the snippet runs
y <- rnorm(50)
boxplot(x)      # box-and-whisker plot of x
qqplot(x, y)    # quantile-quantile plot comparing the two samples
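A closely related plot (a small optional addition, not in the original) checks a single sample against the theoretical normal distribution; qqnorm() appears again in the ANOVA section below:

qqnorm(x)  # sample quantiles of x against theoretical normal quantiles
qqline(x)  # reference line through the first and third quartiles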

2. Linear regression

# linear regression
n = 10
x1 = rnorm(n)    # predictor 1
x2 = rnorm(n)    # predictor 2
y = rnorm(n) * 3
mod = lm(y ~ x1 + x2)
model.matrix(mod)  # extract the design matrix of mod
plot(mod)          # diagnostic plots: residuals vs. fitted, Q-Q plot, scale-location, Cook's distance
summary(mod)       # summary statistics of the fitted model
hatvalues(mod)     # leverage values; very important for detecting abnormal samples
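As a sketch of how hatvalues() is often used in practice (the 2p/n cutoff below is a common rule of thumb, assumed here rather than taken from the original):

# flag observations whose leverage exceeds twice the average leverage p/n,
# where p is the number of estimated coefficients (a common rule of thumb)
h <- hatvalues(mod)
p <- length(coef(mod))
which(h > 2 * p / length(h))  # indices of potentially abnormal samples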

3. Logistic regression

# logistic regression
x <- c(0, 1, 2, 3, 4, 5)
y <- c(0, 9, 21, 47, 60, 63)  # the number of successes
n <- 70                       # the number of trials at each x
z <- n - y                    # the number of failures
b <- cbind(y, z)              # column-bind successes and failures
fitx <- glm(b ~ x, family = binomial)  # a particular type of generalized linear model
print(fitx)
plot(x, y, xlim = c(0, 5), ylim = c(0, 65))  # plot the observed points (x, y)
beta0 <- fitx$coef[1]
beta1 <- fitx$coef[2]
fn <- function(x) n * exp(beta0 + beta1 * x) / (1 + exp(beta0 + beta1 * x))
curve(fn, 0, 5, add = TRUE)  # overlay the fitted logistic curve on the same axes
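To check the fit numerically (a minimal sketch, not in the original), predict() with type = "response" returns the fitted success probabilities, which can be compared against the observed proportions y/n:

p.hat <- predict(fitx, type = "response")  # fitted P(success) at each x
cbind(x, observed = y / n, fitted = p.hat)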

4. Bootstrap sampling

# bootstrap
# Application: resample rows at random, compute the share of the total
# eigenvalue sum taken by the largest eigenvalue, and plot its distribution.
dat = matrix(rnorm(100 * 5), 100, 5)
no.samples = 200  # number of bootstrap resamples
# theta = matrix(rep(0, no.samples * 5), no.samples, 5)
theta = rep(0, no.samples)
for (i in 1:no.samples)
{
  j = sample(1:100, 100, replace = TRUE)  # draw 100 row indices with replacement
  datrnd = dat[j, ]                        # the resampled data matrix
  lambda = princomp(datrnd)$sdev^2         # eigenvalues (principal component variances)
  # theta[i, ] = lambda
  theta[i] = lambda[1] / sum(lambda)       # share taken by the largest eigenvalue
}
# hist(theta[, 1])  # histogram of the first (largest) eigenvalue itself
hist(theta)  # distribution of the largest eigenvalue's share
sd(theta)    # bootstrap standard error of that share

To plot the distribution of the largest eigenvalue itself rather than its share, uncomment the commented-out statements above and comment out the statement that follows each of them.
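A natural follow-up (a minimal sketch, not in the original, which only plots the histogram) is to turn the resampled ratios in theta into a percentile bootstrap confidence interval:

# 95% percentile bootstrap confidence interval for the ratio
quantile(theta, c(0.025, 0.975))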

5. ANOVA (analysis of variance)

# Application: test whether a single factor has an effect. (Suppose we feed
# three kinds of vitamins to pigs, three pigs per vitamin, and want to know
# whether the vitamins matter.)
y = rnorm(9)  # weight gain of pig j under treatment i (Y_ij); normally entered by the user
# y = matrix(c(1,10,1,2,10,2,1,9,1), 9, 1)
Treatment <- factor(c(1,2,3,1,2,3,1,2,3))  # each value in {1,2,3} labels a group
mod = lm(y ~ Treatment)  # linear model on the group factor
print(anova(mod))

# Explanation of the ANOVA table:
# Df: degrees of freedom
# Sum Sq: sum of squares (for Treatment and for the Residuals)
# Mean Sq: mean square (for Treatment and for the Residuals);
#          compare the contribution of Treatment against that of the Residuals
# F value: Mean Sq(Treatment) / Mean Sq(Residuals)
# Pr(>F): the p-value. Use it to decide whether to accept the null hypothesis
#         H0: the population means of all groups are equal (significance level 0.05).
qqnorm(mod$residuals)  # Q-Q plot of the residuals of mod
# If the Q-Q plot of the residuals looks like a straight line, the residuals follow
# a normal distribution; in that case Treatment contributes little, i.e., the
# vitamins bring no benefit (feeding more or less vitamin makes no difference).

The two figures below show the results with (left) y = matrix(c(1,10,1,2,10,2,1,9,1),9,1) and (right) y = rnorm(9). As they show, once the data include pigs whose weight gain under vitamin 2 stands out, the residuals in the Q-Q plot no longer lie on a straight line; in other words, the residuals are no longer normally distributed, i.e., the vitamin does affect the pigs' weight.
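When anova() does reject H0, a common follow-up (a sketch under the assumption that pairwise comparisons are wanted; not part of the original) is Tukey's honest significant difference test, which shows which pairs of treatments differ:

fit <- aov(y ~ Treatment)  # same model as lm(), in the form TukeyHSD() expects
TukeyHSD(fit)              # confidence intervals for all pairwise group differences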

The Beauty of Data Analysis: Implementing Decision Trees in R

1. Prepare the data


>install.packages("tree")
>library(tree)
>library(ISLR)
>attach(Carseats)
>High=ifelse(Sales<=8,"No","Yes")    # derive a binary label from Sales to classify on
>Carseats=data.frame(Carseats,High)  # add the High column to the data frame
>fix(Carseats)
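A quick optional sanity check (not in the original) before growing the tree is to look at the class balance of the new label:

>table(High)  # counts of "No" and "Yes"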

2. Grow the decision tree


>tree.carseats=tree(High~.-Sales,Carseats)

>summary(tree.carseats)


# the output reports a training error rate of 9%

Classification tree:

tree(formula = High ~ . - Sales, data = Carseats)

Variables actually used in tree construction:

[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"

[6] "Advertising" "Age" "US"

Number of terminal nodes: 27

Residual mean deviance: 0.4575 = 170.7 / 373

Misclassification error rate: 0.09 = 36 / 400

3. Plot the decision tree


>plot(tree.carseats)
>text(tree.carseats,pretty=0)

4. Test error


# prepare training data and test data
# We begin by using the sample() function to split the set of observations into
# two halves, selecting a random subset of 200 of the original 400 observations.

>set.seed(1)
>train=sample(1:nrow(Carseats),200)
>Carseats.test=Carseats[-train,]
>High.test=High[-train]
# fit the tree model on the training data only
>tree.carseats=tree(High~.-Sales,Carseats,subset=train)
# get the test error from the tree model, the test data, and the predict method;
# predict() is a generic function for predictions from various fitted models
>tree.pred=predict(tree.carseats,Carseats.test,type="class")
>table(tree.pred,High.test)

         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57

>(86+57)/200
[1] 0.715
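Equivalently (a minimal alternative, not in the original transcript), the test accuracy can be computed directly from the predictions; by the confusion table above this is (86+57)/200:

>mean(tree.pred==High.test)
[1] 0.715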

5. Prune the decision tree


/**
Next, we consider whether pruning the tree might lead to improved results. The
function cv.tree() performs cross-validation in order to determine the optimal
level of tree complexity; cost complexity pruning is used in order to select a
sequence of trees for consideration.
For regression trees, only the default, deviance, is accepted. For
classification trees, the default is deviance and the alternative is misclass
(the number of misclassifications, or total loss).
We use the argument FUN=prune.misclass in order to indicate that we want the
classification error rate to guide the cross-validation and pruning process,
rather than the default for the cv.tree() function, which is deviance.
If the tree were a regression tree, the analogous plot would be:
>plot(cv.boston$size,cv.boston$dev,type="b")
*/

>set.seed(3)
>cv.carseats=cv.tree(tree.carseats,FUN=prune.misclass,K=10)
# cv.tree() reports the number of terminal nodes of each tree considered (size),
# the corresponding error rate (dev), and the value of the cost-complexity
# parameter used (k, which corresponds to alpha).
>names(cv.carseats)
[1] "size"   "dev"    "k"      "method"
>cv.carseats

$size   # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1

$dev    # the corresponding error rate
[1] 55 55 53 52 50 56 69 65 80

$k      # the value of the cost-complexity parameter used
[1]       -Inf  0.0000000  0.6666667  1.0000000  1.7500000  2.0000000  4.2500000
[8]  5.0000000 23.0000000

$method  # "misclass" for a classification tree
[1] "misclass"

attr(,"class")
[1] "prune"         "tree.sequence"


# plot the error rate against tree size to see which node size is best
>plot(cv.carseats$size,cv.carseats$dev,type="b")

/**

Note that, despite the name, dev corresponds to the cross-validation error rate in this instance. The tree with 9 terminal nodes results in the lowest cross-validation error rate, with 50 cross-validation errors. We plot the error rate as a function of both size and k.

*/
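As the note says, both views can be drawn side by side (a small sketch using the names reported by cv.tree() above):

>par(mfrow=c(1,2))
>plot(cv.carseats$size,cv.carseats$dev,type="b")
>plot(cv.carseats$k,cv.carseats$dev,type="b")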

>prune.carseats=prune.misclass(tree.carseats,best=9)
>plot(prune.carseats)
>text(prune.carseats,pretty=0)
# compute the test error again to see how the pruned tree performs on the test set
>tree.pred=predict(prune.carseats,Carseats.test,type="class")
>table(tree.pred,High.test)

         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60

>(94+60)/200
[1] 0.77
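With nine terminal nodes, the pruned tree classifies 77% of the test observations correctly, up from 71.5% for the unpruned tree: pruning has produced both a more interpretable tree and a higher test accuracy.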