R Learning Notes: Cluster Analysis



Packages required for k-means clustering:

factoextra

cluster

# load the packages
library(factoextra)
library(cluster)

# Data preparation

We use the built-in R dataset USArrests.

#load the dataset

data("USArrests")

# remove any missing values (i.e., NA values for "not available")
# that might be present in the data
USArrests <- na.omit(USArrests)

# view the first 6 rows of the data
head(USArrests, n = 6)

In this dataset, the columns are variables and the rows are observations.

Before clustering, it is useful to run some basic checks on the data, i.e., descriptive statistics such as the mean and standard deviation.

desc_stats <- data.frame(
  Min  = apply(USArrests, 2, min),    # minimum
  Med  = apply(USArrests, 2, median), # median
  Mean = apply(USArrests, 2, mean),   # mean
  SD   = apply(USArrests, 2, sd),     # standard deviation
  Max  = apply(USArrests, 2, max)     # maximum
)

desc_stats <- round(desc_stats, 1) # keep one decimal place
head(desc_stats)

When the variables have very different means and variances, they should be standardized before clustering.

df <- scale(USArrests)
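As a quick sanity check (a minimal sketch, not part of the original notes), the scaled columns should now have mean 0 and standard deviation 1:

# illustrative check: each scaled column has mean ~0 and sd 1
round(colMeans(df), 10)
apply(df, 2, sd)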

# Assessing clustering tendency

Use get_clust_tendency() to compute the Hopkins statistic.

res <- get_clust_tendency(df, 40, graph = TRUE)

res$hopkins_stat

## [1] 0.3440875

#Visualize the dissimilarity matrix

res$plot

The Hopkins statistic is below 0.5, indicating that the data is highly clusterable. The dissimilarity plot also suggests that the data can be clustered.

# Estimating the number of clusters

Since k-means clustering requires the number of clusters to be specified in advance, we use clusGap() to compute the gap statistic, which estimates the optimal number of clusters. The function fviz_gap_stat() visualizes the result.

set.seed(123)

## Compute the gap statistic

gap_stat <- clusGap(df, FUN = kmeans, nstart = 25, K.max = 10, B = 500)

# Plot the result

fviz_gap_stat(gap_stat)

The plot indicates that the optimal number of clusters is four (k = 4).
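As an optional cross-check (an addition, not in the original notes), factoextra also provides fviz_nbclust(), which estimates the number of clusters with other criteria such as the average silhouette width:

# optional cross-check of the cluster number (average silhouette method)
fviz_nbclust(df, kmeans, method = "silhouette")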

# Running the k-means clustering

set.seed(123)

km.res <- kmeans(df, 4, nstart = 25)

head(km.res$cluster, 20)

# Visualize clusters using factoextra

fviz_cluster(km.res, USArrests)
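To help interpret the clusters, a common follow-up (a hedged addition, not in the original code) is to compute the mean of each variable within each assigned cluster:

# mean of each variable by assigned cluster
aggregate(USArrests, by = list(cluster = km.res$cluster), mean)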

# Inspecting the cluster silhouette plot

Recall that the silhouette width S_i measures how similar an object i is to the other objects in its own cluster compared with those in the neighboring cluster. S_i ranges from 1 to -1:

A value of S_i close to 1 indicates that the object is well clustered; in other words, object i is similar to the other objects in its group.

A value of S_i close to -1 indicates that the object is poorly clustered, and that assigning it to some other cluster would probably improve the overall result.
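For reference, the standard definition (not spelled out in the original notes) is S_i = (b_i - a_i) / max(a_i, b_i), where a_i is the average distance from observation i to the other members of its own cluster and b_i is the smallest average distance from i to the members of any other cluster.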

sil <- silhouette(km.res$cluster, dist(df))

rownames(sil) <- rownames(USArrests)

head(sil[, 1:3])

#Visualize

fviz_silhouette(sil)

The plot shows some negative silhouette values; the silhouette() output can be used to identify which observations they belong to.

neg_sil_index <- which(sil[, "sil_width"] <0)

sil[neg_sil_index, , drop = FALSE]

##          cluster    neighbor     sil_width

## Missouri    3          2        -0.07318144

# eclust(): enhanced clustering analysis

Compared with other clustering approaches, eclust() has the following advantages:

It simplifies the clustering workflow.

It can be used for both hierarchical and partitioning clustering.

It automatically estimates the optimal number of clusters.

It automatically provides a silhouette plot.

It integrates with ggplot2 to produce attractive graphics.

# K-means clustering with eclust()

# Compute k-means

res.km <- eclust(df, "kmeans")

# Gap statistic plot

fviz_gap_stat(res.km$gap_stat)

# Silhouette plot
fviz_silhouette(res.km)

##    cluster size ave.sil.width

## 1     1     13      0.31

## 2     2     29      0.38

## 3     3      8      0.39

# Hierarchical clustering with eclust()

# Enhanced hierarchical clustering

res.hc <- eclust(df, "hclust") # compute hclust

fviz_dend(res.hc, rect = TRUE) # dendrogram

# The following R code produces the silhouette plot and the scatter plot for the hierarchical clustering.

fviz_silhouette(res.hc) # silhouette plot

##   cluster size ave.sil.width

## 1    1     19      0.26

## 2    2     19      0.28

## 3    3     12      0.43

fviz_cluster(res.hc) # scatter plot
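A small follow-up sketch, assuming (as in current factoextra versions) that the eclust() result stores the cut-tree assignment in a cluster component:

# cluster memberships after cutting the dendrogram (assumes res.hc$cluster exists)
head(res.hc$cluster, 10)
table(res.hc$cluster)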

# Info

This analysis has been performed using R software (R version 3.3.2)

Basic Data Analysis in R

This section uses R for basic statistical analysis, covering basic plotting, linear regression, logistic regression, bootstrap sampling, and ANOVA.

Without further ado, here is the code; explanations are given in the comments.

1. Basic plots (boxplot, Q-Q plot)

# basic plots (x and y are illustrative example data, assumed here for demonstration)
x <- rnorm(100)
y <- rnorm(100)

boxplot(x)   # boxplot of x

qqplot(x, y) # quantile-quantile plot of x against y

2. Linear regression

#linear regression

n = 10

x1 = rnorm(n)#variable 1

x2 = rnorm(n)#variable 2

y = rnorm(n)*3

mod = lm(y~x1+x2)

model.matrix(mod) # extract the design matrix of the model

plot(mod) # diagnostic plots: residuals vs fitted, Q-Q plot, scale-location, residuals vs leverage (Cook's distance)

summary(mod) # summary statistics of the fitted model

hatvalues(mod) # leverage values, useful for detecting unusual observations
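As a small hedged extension (not in the original code), the fitted model can also be used for prediction on new data:

# predict the response for a new observation (illustrative x1/x2 values)
predict(mod, newdata = data.frame(x1 = 0.5, x2 = -0.2), interval = "confidence")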

3. Logistic regression

#logistic regression

x <- c(0, 1, 2, 3, 4, 5)

y <- c(0, 9, 21, 47, 60, 63) # the number of successes

n <- 70 # the number of trials at each x

z <- n - y #the number of failures

b <- cbind(y, z) # column bind

fitx <- glm(b~x, family = binomial) # binomial generalized linear model (logistic regression)

print(fitx)

plot(x,y,xlim=c(0,5),ylim=c(0,65)) #plot the points (x,y)

beta0 <- fitx$coef[1]

beta1 <- fitx$coef[2]

fn <- function(x) n*exp(beta0+beta1*x)/(1+exp(beta0+beta1*x))

par(new = TRUE)

curve(fn, 0, 5, ylim = c(0, 65)) # overlay the fitted logistic curve (same ylim as the plotted points)
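A hedged follow-up (not part of the original code): the fitted model can return predicted success probabilities directly:

# predicted probability of success at an illustrative dose x = 2.5
predict(fitx, newdata = data.frame(x = 2.5), type = "response")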

4. Bootstrap sampling

# bootstrap

# Application: resample the data, compute the ratio of the largest eigenvalue to the sum of all eigenvalues, and plot its distribution

dat = matrix(rnorm(100*5),100,5)

no.samples = 200 #sample 200 times

# theta = matrix(rep(0,no.samples*5),no.samples,5)

theta = rep(0, no.samples) # one ratio per resample (fixed length so hist() is not padded with zeros)

for (i in 1:no.samples)

{

j = sample(1:100, 100, replace = TRUE) # resample 100 row indices with replacement

datrnd = dat[j, ] # the bootstrap sample (resampled rows)

lambda = princomp(datrnd)$sdev^2 # eigenvalues (component variances)

# theta[i,] = lambda

theta[i] = lambda[1]/sum(lambda) # proportion of variance explained by the largest eigenvalue

}

# hist(theta[, 1]) # histogram of the largest eigenvalue when the matrix version above is used

hist(theta) # distribution of the proportion explained by the largest eigenvalue

sd(theta) # bootstrap standard deviation of that proportion

# To plot the distribution of the largest eigenvalue itself, uncomment the commented-out
# lines above and comment out the active line that follows each of them.
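A hedged addition (not in the original notes): the bootstrap distribution can also be summarized with a simple percentile confidence interval:

# 95% percentile bootstrap interval for the explained-variance ratio
quantile(theta, c(0.025, 0.975))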

5. ANOVA (analysis of variance)

# Application: test whether a single factor has an effect
# (suppose we feed 3 kinds of vitamins to pigs, 3 pigs per vitamin, and want to know whether the vitamin matters)

y = rnorm(9) # weight gain of each pig (Yij: i = treatment, j = pig id); normally entered by the user

#y = matrix(c(1,10,1,2,10,2,1,9,1),9,1)

Treatment <- factor(c(1,2,3,1,2,3,1,2,3)) #each {1,2,3} is a group

mod = lm(y~Treatment) #linear regression

print(anova(mod))

# Explanation of the ANOVA table:
# Df: degrees of freedom
# Sum Sq: sum of squares (for Treatment and for Residuals)
# Mean Sq: mean square, i.e. Sum Sq / Df (for Treatment and for Residuals);
#          compare the contribution of Treatment with that of the Residuals
# F value: Mean Sq(Treatment) / Mean Sq(Residuals)
# Pr(>F): p-value; based on it we decide whether to accept H0: the group population means are equal (significance level 0.05)

qqnorm(mod$residuals) # normal Q-Q plot of the model residuals

# If the Q-Q plot of the residuals looks like a straight line, the residuals are normally distributed, meaning the contribution of Treatment is small, i.e. the treatment brings no benefit (feeding more or less vitamin makes no difference).

The two figures below show the results for (left) y = matrix(c(1,10,1,2,10,2,1,9,1),9,1) and (right) y = rnorm(9). When the data make the pigs fed vitamin 2 stand out in weight gain, the residuals in the Q-Q plot no longer lie on a straight line; in other words, the residuals are no longer normally distributed, i.e., the vitamin does affect the pigs' weight.
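A hedged follow-up sketch (not in the original notes), using only base R: the normality of the residuals can be tested formally, and the treatment means compared pairwise:

# formal normality test of the residuals
shapiro.test(residuals(mod))

# pairwise comparison of treatment means (requires an aov fit)
TukeyHSD(aov(y ~ Treatment))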

Method 1: min-max normalization (also called deviation standardization) is a linear transformation of the original data that maps the values into the interval [0, 1].

Method 2: z-score standardization. This method standardizes the data based on the mean and standard deviation of the original values: an original value x of attribute A is transformed to x' via the z-score, x' = (x - mean) / sd.
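A minimal sketch of both methods in R (illustrative; x is assumed here to be a numeric vector, taken from the USArrests data used earlier):

x <- USArrests$Murder # example variable

# Method 1: min-max normalization to [0, 1]
x_minmax <- (x - min(x)) / (max(x) - min(x))

# Method 2: z-score standardization (equivalent to scale(x))
x_z <- (x - mean(x)) / sd(x)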