R Learning Notes: Cluster Analysis
Packages required for k-means clustering:
factoextra
cluster
# load the packages
library(factoextra)
library(cluster)
# Data preparation
We use the built-in R dataset USArrests.
# load the dataset
data("USArrests")
# remove any missing values (i.e., NA values for not available)
# that might be present in the data
USArrests <- na.omit(USArrests)
# view the first 6 rows of the data
head(USArrests, n = 6)
In this dataset, the columns are variables and the rows are observations.
Before clustering, it is worth running some basic checks on the data, i.e. descriptive statistics such as the mean and standard deviation:
desc_stats <- data.frame(
  Min  = apply(USArrests, 2, min),    # minimum
  Med  = apply(USArrests, 2, median), # median
  Mean = apply(USArrests, 2, mean),   # mean
  SD   = apply(USArrests, 2, sd),     # standard deviation
  Max  = apply(USArrests, 2, max)     # maximum
)
desc_stats <- round(desc_stats, 1) # round to one decimal place
head(desc_stats)
When the variables have very different means and variances, they need to be standardized first:
df <- scale(USArrests)
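scale() centers each column and divides it by its standard deviation. A minimal base-R sketch of the same z-score transformation (standardize is a hypothetical helper, not part of any package):

```r
# Sketch: what scale() does to each column (base R only)
standardize <- function(x) (x - mean(x)) / sd(x)  # z-score a single vector
m <- matrix(c(1, 2, 3, 10, 20, 30), ncol = 2)     # toy data with different scales
z <- apply(m, 2, standardize)                     # standardize column by column
colMeans(z)        # each column now has mean 0
apply(z, 2, sd)    # and standard deviation 1
```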
# Assessing clustering tendency
Use get_clust_tendency() to compute the Hopkins statistic:
res <- get_clust_tendency(df, 40, graph = TRUE)
res$hopkins_stat
## [1] 0.3440875
# Visualize the dissimilarity matrix
res$plot
The Hopkins statistic is well below 0.5, indicating that the data are highly clusterable. The dissimilarity-matrix plot also suggests that the data can be clustered.
# Estimating the number of clusters
Since k-means clustering requires the number of clusters to be specified in advance, we use clusGap() to compute the gap statistic, which estimates the optimal number of clusters. The function fviz_gap_stat() visualizes the result.
set.seed(123)
## Compute the gap statistic
gap_stat <- clusGap(df, FUN = kmeans, nstart = 25, K.max = 10, B = 500)
# Plot the result
fviz_gap_stat(gap_stat)
The plot shows that the optimal number of clusters is four (k = 4).
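As a quick cross-check on the gap statistic, the elbow heuristic plots the total within-cluster sum of squares against k; the toy data below are made up for illustration:

```r
# Sketch: elbow heuristic with base R's kmeans() (a cross-check, not the gap statistic)
set.seed(123)
x <- rbind(matrix(rnorm(50, mean = 0), ncol = 2),  # toy data: two well-separated blobs
           matrix(rnorm(50, mean = 5), ncol = 2))
wss <- sapply(1:6, function(k) kmeans(x, centers = k, nstart = 10)$tot.withinss)
wss  # drops sharply up to the true number of clusters, then flattens
```

On real data you would run this on df and look for the k where the curve bends.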
# Performing the clustering
set.seed(123)
km.res <- kmeans(df, 4, nstart = 25)
head(km.res$cluster, 20)
# Visualize clusters using factoextra
fviz_cluster(km.res, data = df)
# Inspecting the cluster silhouette plot
Recall that the silhouette width Si measures how similar an object i is to the other objects in its own cluster compared with those in the neighboring cluster. Si ranges from 1 to -1:
A value of Si close to 1 indicates that the object is well clustered; in other words, object i is similar to the other objects in its group.
A value of Si close to -1 indicates that the object is poorly clustered, and that assigning it to some other cluster would probably improve the overall result.
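Concretely, Si = (bi - ai) / max(ai, bi), where ai is the average distance from object i to the other members of its own cluster and bi is the average distance to the members of the nearest other cluster. A minimal hand computation on toy data (sil_width is a hypothetical helper; silhouette() from the cluster package computes this for every observation):

```r
# Sketch: silhouette width of one observation, computed by hand
sil_width <- function(i, d, cl) {
  d <- as.matrix(d)
  a <- mean(d[i, cl == cl[i] & seq_along(cl) != i])  # avg distance within own cluster
  b <- min(sapply(setdiff(unique(cl), cl[i]),        # avg distance to nearest other cluster
                  function(k) mean(d[i, cl == k])))
  (b - a) / max(a, b)
}
x  <- c(0, 0.1, 0.2, 10, 10.1, 10.2)  # two obvious groups on a line
cl <- c(1, 1, 1, 2, 2, 2)
sil_width(1, dist(x), cl)  # close to 1: the first point is well clustered
```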
sil <- silhouette(km.res$cluster, dist(df))
rownames(sil) <- rownames(USArrests)
head(sil[, 1:3])
#Visualize
fviz_silhouette(sil)
The plot shows some negative silhouette values; the output of silhouette() tells us which observations these are:
neg_sil_index <- which(sil[, "sil_width"] <0)
sil[neg_sil_index, , drop = FALSE]
## cluster neighbor sil_width
## Missouri 3 2 -0.07318144
# eclust(): enhanced clustering
Compared with the standard clustering functions, eclust() has the following advantages:
It simplifies the clustering workflow.
It can compute both hierarchical and partitioning clustering.
It automatically estimates the optimal number of clusters.
It automatically provides a silhouette plot.
It draws elegant plots with ggplot2.
# K-means clustering with eclust()
# Compute k-means
res.km <- eclust(df, "kmeans")
# Gap statistic plot
fviz_gap_stat(res.km$gap_stat)
# Silhouette plot
fviz_silhouette(res.km)
## cluster size ave.sil.width
## 1 1 13 0.31
## 2 2 29 0.38
## 3 3 8 0.39
# Hierarchical clustering with eclust()
# Enhanced hierarchical clustering
res.hc <- eclust(df, "hclust") # compute hclust
fviz_dend(res.hc, rect = TRUE) # dendrogram
# The following R code produces the silhouette plot and the scatter plot for the hierarchical clustering.
fviz_silhouette(res.hc) # silhouette plot
## cluster size ave.sil.width
## 1 1 19 0.26
## 2 2 19 0.28
## 3 3 12 0.43
fviz_cluster(res.hc) # scatter plot
# Infos
This analysis was performed with R (version 3.3.2).
tidyr: an evolution of the reshape2 package
The tidyr package was written by Hadley Wickham and is often used together with dplyr.
This article demonstrates the following four tidyr functions:
gather: converts wide data to long data, similar to melt in the reshape2 package
spread: converts long data to wide data, similar to cast in the reshape2 package
unite: combines several columns into one
separate: splits one column into several
The examples below use the mtcars dataset from the datasets package.
library(tidyr)
library(dplyr)
head(mtcars)
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
For convenience, we add a car column to the dataset:
mtcars$car <- rownames(mtcars)
mtcars <- mtcars[, c(12, 1:11)]
gather
The calling convention of gather is:
gather(data, key, value, ..., na.rm = FALSE, convert = FALSE)
Here, ... specifies the columns to gather.
As with melt from the reshape2 package, we get:
mtcarsNew <- mtcars %>% gather(attribute, value, -car)
head(mtcarsNew)
car attribute value
1 Mazda RX4 mpg 21.0
2 Mazda RX4 Wag mpg 21.0
3 Datsun 710 mpg 22.8
4 Hornet 4 Drive mpg 21.4
5 Hornet Sportabout mpg 18.7
6 Valiant mpg 18.1
tail(mtcarsNew)
car attribute value
347 Porsche 914-2 carb 2
348 Lotus Europa carb 2
349 Ford Pantera L carb 4
350 Ferrari Dino carb 6
351 Maserati Bora carb 8
352 Volvo 142E carb 2
As you can see, every column except car has been gathered into two columns, named attribute and value.
A nice feature of tidyr is that you can gather only some columns and leave the rest untouched. If you want to gather all the columns between mpg and gear while keeping carb and car unchanged, do the following:
mtcarsNew <- mtcars %>% gather(attribute, value, mpg:gear)
head(mtcarsNew)
car carb attribute value
1 Mazda RX4 4 mpg 21.0
2 Mazda RX4 Wag 4 mpg 21.0
3 Datsun 710 1 mpg 22.8
4 Hornet 4 Drive 1 mpg 21.4
5 Hornet Sportabout 2 mpg 18.7
6 Valiant 1 mpg 18.1
spread
The calling convention of spread is:
spread(data, key, value, fill = NA, convert = FALSE, drop = TRUE)
As with cast from the reshape2 package, we get:
mtcarsSpread <- mtcarsNew %>% spread(attribute, value)
head(mtcarsSpread)
car carb mpg cyl disp hp drat wt qsec vs am gear
1 AMC Javelin 2 15.2 8 304 150 3.15 3.435 17.30 0 0 3
2 Cadillac Fleetwood 4 10.4 8 472 205 2.93 5.250 17.98 0 0 3
3 Camaro Z28 4 13.3 8 350 245 3.73 3.840 15.41 0 0 3
4 Chrysler Imperial 4 14.7 8 440 230 3.23 5.345 17.42 0 0 3
5 Datsun 710 1 22.8 4 108 93 3.85 2.320 18.61 1 1 4
6 Dodge Challenger 2 15.5 8 318 150 2.76 3.520 16.87 0 0 3
unite
The calling convention of unite is:
unite(data, col, ..., sep = "_", remove = TRUE)
Here, ... represents the columns to unite, and col is the name of the new combined column.
Let's first make up some data:
set.seed(1)
date <- as.Date('2016-01-01') + 0:14
hour <- sample(1:24, 15)
min <- sample(1:60, 15)
second <- sample(1:60, 15)
event <- sample(letters, 15)
data <- data.frame(date, hour, min, second, event)
data
date hour min second event
1 2016-01-01 7 30 29 u
2 2016-01-02 9 43 36 a
3 2016-01-03 13 58 60 l
4 2016-01-04 20 22 11 q
5 2016-01-05 5 44 47 p
6 2016-01-06 18 52 37 k
7 2016-01-07 19 12 43 r
8 2016-01-08 12 35 6 i
9 2016-01-09 11 7 38 e
10 2016-01-10 1 14 21 b
11 2016-01-11 3 20 42 w
12 2016-01-12 14 1 32 t
13 2016-01-13 23 19 52 h
14 2016-01-14 21 41 26 s
15 2016-01-15 8 16 25 o
Now we want to combine the date, hour, min and second columns into a new datetime column. Datetimes in R are usually formatted as "Year-Month-Day Hour:Min:Second".
dataNew <- data %>%
unite(datehour, date, hour, sep = ' ') %>%
unite(datetime, datehour, min, second, sep = ':')
dataNew
datetime event
1 2016-01-01 7:30:29 u
2 2016-01-02 9:43:36 a
3 2016-01-03 13:58:60 l
4 2016-01-04 20:22:11 q
5 2016-01-05 5:44:47 p
6 2016-01-06 18:52:37 k
7 2016-01-07 19:12:43 r
8 2016-01-08 12:35:6 i
9 2016-01-09 11:7:38 e
10 2016-01-10 1:14:21 b
11 2016-01-11 3:20:42 w
12 2016-01-12 14:1:32 t
13 2016-01-13 23:19:52 h
14 2016-01-14 21:41:26 s
15 2016-01-15 8:16:25 o
separate
The calling convention of separate is:
separate(data, col, into, sep = "[^[:alnum:]]+", remove = TRUE,
convert = FALSE, extra = "warn", fill = "warn", ...)
We can use separate to restore the data to its original form, as follows:
data1 <- dataNew %>%
separate(datetime, c('date', 'time'), sep = ' ') %>%
separate(time, c('hour', 'min', 'second'), sep = ':')
data1
date hour min second event
1 2016-01-01 7 30 29 u
2 2016-01-02 9 43 36 a
3 2016-01-03 13 58 60 l
4 2016-01-04 20 22 11 q
5 2016-01-05 5 44 47 p
6 2016-01-06 18 52 37 k
7 2016-01-07 19 12 43 r
8 2016-01-08 12 35 6 i
9 2016-01-09 11 7 38 e
10 2016-01-10 1 14 21 b
11 2016-01-11 3 20 42 w
12 2016-01-12 14 1 32 t
13 2016-01-13 23 19 52 h
14 2016-01-14 21 41 26 s
15 2016-01-15 8 16 25 o
First, datetime is separated into date and time columns; then time is separated into hour, min and second columns.
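For intuition, unite and separate behave like paste and strsplit in base R; the sketch below is a rough analogue, not what tidyr does internally:

```r
# Sketch: base-R analogues of unite() and separate()
hour <- c(7, 9); min <- c(30, 43); second <- c(29, 36)
time <- paste(hour, min, second, sep = ":")    # "unite": e.g. "7:30:29"
parts <- do.call(rbind, strsplit(time, ":"))   # "separate": back to three columns
colnames(parts) <- c("hour", "min", "second")
parts
```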