R Learning Notes: Cluster Analysis


Packages required for k-means clustering: factoextra and cluster.

# Load the packages
library(factoextra)
library(cluster)

# Data preparation

We use the built-in R dataset USArrests.

# Load the dataset
data("USArrests")

# Remove any missing values (i.e., NA values for "not available")
# that might be present in the data
USArrests <- na.omit(USArrests)

# View the first 6 rows of the data
head(USArrests, n = 6)

In this dataset, the columns are variables and the rows are observations.

Before clustering, we can run some basic checks on the data, i.e., descriptive statistics such as the mean and standard deviation.

desc_stats <- data.frame(
  Min  = apply(USArrests, 2, min),    # minimum
  Med  = apply(USArrests, 2, median), # median
  Mean = apply(USArrests, 2, mean),   # mean
  SD   = apply(USArrests, 2, sd),     # standard deviation
  Max  = apply(USArrests, 2, max)     # maximum
)

desc_stats <- round(desc_stats, 1)  # keep one decimal place
head(desc_stats)

When the variables have very different means and variances, the data should be standardized first:

df <- scale(USArrests)

# Assessing clustering tendency

Use get_clust_tendency() to compute the Hopkins statistic:

res <- get_clust_tendency(df, 40, graph = TRUE)

res$hopkins_stat

## [1] 0.3440875

#Visualize the dissimilarity matrix

res$plot

The Hopkins statistic is below 0.5, indicating that the data is highly clusterable. The dissimilarity-matrix plot also suggests that the data can be clustered.

# Estimating the number of clusters

Since k-means clustering requires the number of clusters to be specified in advance, we use clusGap() to compute the gap statistic, which estimates the optimal number of clusters. fviz_gap_stat() visualizes the result.

set.seed(123)

## Compute the gap statistic

gap_stat <- clusGap(df, FUN = kmeans, nstart = 25, K.max = 10, B = 500)

# Plot the result

fviz_gap_stat(gap_stat)

The plot indicates that the optimal number of clusters is four (k = 4).

# Performing the clustering

set.seed(123)

km.res <- kmeans(df, 4, nstart = 25)

head(km.res$cluster, 20)

# Visualize clusters using factoextra

fviz_cluster(km.res, USArrests)

# Examining the cluster silhouette plot

Recall that the silhouette width S_i measures how similar an object i is to the other objects in its own cluster versus those in the neighboring cluster. S_i values range from 1 to -1:

A value of S_i close to 1 indicates that the object is well clustered. In other words, object i is similar to the other objects in its group.

A value of S_i close to -1 indicates that the object is poorly clustered, and that assigning it to some other cluster would probably improve the overall results.

sil <- silhouette(km.res$cluster, dist(df))

rownames(sil) <- rownames(USArrests)

head(sil[, 1:3])

#Visualize

fviz_silhouette(sil)

The plot shows some negative silhouette widths; the silhouette() output lets us identify which observations these are:

neg_sil_index <- which(sil[, "sil_width"] <0)

sil[neg_sil_index, , drop = FALSE]

##          cluster    neighbor     sil_width

## Missouri    3          2        -0.07318144

# eclust(): enhanced cluster analysis

Compared with the standard clustering functions, eclust() has the following advantages:

It simplifies the cluster analysis workflow.

It can compute both hierarchical and partitioning clustering.

eclust() automatically computes the optimal number of clusters.

It automatically provides a silhouette plot.

It can be combined with ggplot2 to produce elegant graphics.

# K-means clustering with eclust()

# Compute k-means

res.km <- eclust(df, "kmeans")

# Gap statistic plot

fviz_gap_stat(res.km$gap_stat)

# Silhouette plot
fviz_silhouette(res.km)

##    cluster size ave.sil.width

## 1     1     13      0.31

## 2     2     29      0.38

## 3     3      8      0.39

# Hierarchical clustering with eclust()

# Enhanced hierarchical clustering

res.hc <- eclust(df, "hclust") # compute hclust

fviz_dend(res.hc, rect = TRUE) # dendrogram

The following R code produces the silhouette plot and a scatter plot of the hierarchical clustering.

fviz_silhouette(res.hc) # silhouette plot

##   cluster size ave.sil.width

## 1    1     19      0.26

## 2    2     19      0.28

## 3    3     12      0.43

fviz_cluster(res.hc) # scatter plot

#Infos

This analysis has been performed using R software (R version 3.3.2)

Bayes discriminant analysis is a discrimination method based on the Bayes criterion, using quantitative indicators. Its decision rule is similar to those of maximum-likelihood discrimination and Bayes-formula discrimination: all assign objects according to the magnitude of probabilities, and all require each class to be approximately multivariate normal.

1. Bayes criterion: find a decision rule such that a sample belonging to class k attains its maximum posterior probability in class k.

Based on this criterion, suppose the individuals fall into g known classes with prior probabilities P(Yk), each class approximately multivariate normal. When the class covariance matrices are equal, g linear discriminant functions Y1, Y2, ..., Yg can be built from the m indicators, each giving the discriminant score for its class:

Yk = C0k + C1k·X1 + C2k·X2 + ... + Cmk·Xm

where the Cjk are the discriminant coefficients, obtained by substituting the pooled covariance matrix, and the constant term C0k additionally incorporates the prior probability P(Yk).

2. Determining the prior probabilities: when the class priors are unknown, one commonly uses:

(1) Equal priors (prior ignorance): P(Yk) = 1/g (all groups equal).

(2) Frequencies: P(Yk) = nk/N (computed from the sample sizes; appropriate when the sample is large and free of selection bias).

3. Decision rule:

(1) Compute each sample's discriminant score for every class and assign the object to the class with the largest Y value.

(2) From the Y values, we can further compute the posterior probability of belonging to each class k, then assign the object to the class with the largest posterior probability.

These two decision rules give exactly the same result.
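As a hedged sketch of decision rule (2): the lda() function in the MASS package implements exactly this equal-covariance Bayes rule. The iris data and equal priors below are illustrative assumptions, not part of the original notes.

```r
# Sketch of decision rule (2) using MASS::lda(), which implements the
# equal-covariance Bayes discriminant rule described above.
library(MASS)

fit  <- lda(Species ~ ., data = iris, prior = rep(1/3, 3))  # equal priors, rule 2(1)
pred <- predict(fit, iris)

head(pred$posterior)  # posterior probability of each class for each sample
head(pred$class)      # assigned class = class with the largest posterior
```

The posterior probabilities in each row sum to 1, and the assigned class is always the column with the largest posterior, so the two formulations of the rule agree.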

Function overview

Bayes discriminant analysis can be carried out with the NaiveBayes() function in the klaR package, called as:

NaiveBayes(x, grouping, prior, usekernel = FALSE, fL = 0, ...)


Here x is a matrix or data frame of training samples, and grouping gives the class of each training sample. prior specifies the prior probability of each class; by default, the sample proportion of each class is used. usekernel selects the density-estimation method: by default the standard (normal) density estimate is used, and when set to TRUE, kernel density estimation is used instead. fL controls the Laplace correction: by default no correction is applied, but when the sample is small, setting fL to 1 applies a Laplace correction.

Example: Bayes discrimination with the iris dataset

install.packages("klaR")
library(klaR)

X <- iris[1:100, 1:4]
G <- as.factor(gl(2, 50))
x <- NaiveBayes(X, G)
predict(x)

$class

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54

1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2

55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72

2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90

2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

91 92 93 94 95 96 97 98 99 100

2 2 2 2 2 2 2 2 2 2


The results show that the decision rule built from the labeled training samples misclassified 0 samples; the resubstitution accuracy is 100%.
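The 0-error claim can be checked numerically by cross-tabulating the predicted classes against the true groups; a sketch, assuming klaR is installed:

```r
# Resubstitution check for the NaiveBayes example above: compare the
# predicted classes with the true groups G.
library(klaR)

X <- iris[1:100, 1:4]
G <- as.factor(gl(2, 50))
fit <- NaiveBayes(X, G)

pred <- predict(fit)$class  # predictions on the training data itself
table(pred, G)              # off-diagonal cells count misclassifications
mean(pred == G)             # resubstitution accuracy (here 1, i.e. 100%)
```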

My junior labmate recently started learning R too, and every day it was "Shijie, shijie...", "How do I do this...", "Why did I get an error again...", "Shijie, shijie..."... He nearly drove me crazy, and that is how this article came about.

Beginners learning R will inevitably run into all kinds of problems, enough to leave you tearing your hair out.

But don't be afraid or tempted to give up: using R is precisely a process of hitting errors and hunting down the causes, over and over. When a problem appears, your first reflex should be to search online for an answer, not to immediately ask the people around you and skip the thinking. Learning bioinformatics is, at its heart, a long process of self-study.

Recommended search engine: Bing, Bing, Bing! Please stop using Baidu! And if you can find a way to use Google, even better.

Searching can solve well over ninety percent of problems, and when it can't, it may be that your search skills aren't strong enough yet. For a newcomer, the process of searching, trying solutions, and thinking things through is itself a major gain; the improvement in your search skills alone is a huge reward.

If you have tried for a long time and truly cannot solve a problem, then... go ask an experienced senior.

In fact, I learned this mindset of searching and solving problems independently at Tongji University, in the group of Professor Xiaole Liu, a leading bioinformatician. Her group runs a month-long bioinformatics training course every year, and I was fortunate to study there for a while. The trainees are given a large set of bioinformatics problems together with some teaching videos, and most of the training time is actually spent on the assignments, finding your own way to a solution. The senior students accompanied us throughout, but they would not tell us the answers directly; instead, they guided us to think and solve the problems ourselves. At the time it was genuinely overwhelming, because we knew nothing at all, and a whole day could pass without answering a single question.

But looking back now, I truly benefited enormously. I gradually learned to think independently, and these days, with a search engine at hand, I can handle R-related problems entirely on my own.

This is probably what "give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime" means.

R is simple: as long as you want to learn it, you can.

Below are the first-week beginner exercises used by Professor Xiaole Liu's group at Tongji University during the training. I hope they help.

First, you need to install a few of the most commonly used data-processing packages.

You can use the mean() function to compute the mean of a vector like so:

However, this does not work if the vector contains NAs:
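The two code snippets referenced above are missing from these notes; a minimal reconstruction might look like this:

```r
# A vector without missing values: mean() works as expected.
v <- c(1, 2, 3, 4)
mean(v)               # 2.5

# A vector containing an NA: mean() propagates the missing value.
v_na <- c(1, 2, 3, NA)
mean(v_na)            # NA
```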

Please use R documentation to find the mean after excluding NA's (hint: ?mean )

In this question, we will practice data manipulation using a dataset collected by Francis Galton in 1886 on the heights of parents and their children. This is a very famous dataset, and Galton used it to come up with regression and correlation.

The data is available as GaltonFamilies in the HistData package. Here, we load the data and show the first few rows. To find out more information about the dataset, use ?GaltonFamilies .
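The loading snippet itself is not preserved in these notes; it presumably looked like the following (assuming the HistData package is installed):

```r
# Load the Galton height data from the HistData package and preview it.
library(HistData)
data(GaltonFamilies)
head(GaltonFamilies)
```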

a. Please report the height of the 10th child in the dataset.

b. What is the breakdown of male and female children in the dataset?

c. How many observations are in Galton's dataset? Please answer this question without consulting the R help.

d. What is the mean height for the 1st child in each family?

e. Create a table showing the mean height for male and female children.

f. What was the average number of children each family had?

g. Convert the children's heights from inches to centimeters and store it in a column called childHeight_cm in the GaltonFamilies dataset. Show the first few rows of this dataset.

In the code above, we generate ngroups groups of N observations each. In each group, we have X and Y, where X and Y are independent normally distributed data and have 0 correlation.

a. Find the correlation between X and Y for each group, and display the highest correlations.

Hint: since the data is quite large and your code might take a few moments to run, you can test your code on a subset of the data first (e.g. you can take the first 100 groups like so):
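The generation and subsetting code is missing from these notes, so the sketch below is self-contained: ngroups and N are placeholder values, and the data layout (a group column plus X and Y) is an assumption.

```r
# Placeholder reconstruction: ngroups groups of N independent normal
# observations each, so X and Y have zero true correlation in every group.
set.seed(1)
ngroups <- 1000   # placeholder; the original value is not shown
N <- 10           # placeholder; the original value is not shown
dat <- data.frame(
  group = rep(seq_len(ngroups), each = N),
  X = rnorm(ngroups * N),
  Y = rnorm(ngroups * N)
)

# Test the code on the first 100 groups first, as the hint suggests.
dat_small <- subset(dat, group <= 100)

# Per-group correlation between X and Y; display the highest values.
cors <- sapply(split(dat_small, dat_small$group),
               function(d) cor(d$X, d$Y))
head(sort(cors, decreasing = TRUE))
```

With many groups and few observations per group, some sample correlations will be large by chance even though the true correlation is zero, which is the point of part b below.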

In general, this is good practice whenever you have a large dataset: if you are writing new code and it takes a while to run on the whole dataset, get it to work on a subset first. By running on a subset, you can iterate faster.

However, please do run your final code on the whole dataset.

b. The highest correlation is around 0.8. Can you explain why we see such a high correlation when X and Y are supposed to be independent and thus uncorrelated?

Show a plot of the data for the group that had the highest correlation you found in Problem 4.

We generate some sample data below. The data is numeric, and has 3 columns: X, Y, Z.

a. Compute the overall correlation between X and Y.

b. Make a plot showing the relationship between X and Y. Comment on the correlation that you see.

c. Compute the correlations between X and Y for each level of Z.

d. Make a plot showing the relationship between X and Y, but this time, color the points using the value of Z. Comment on the result, especially any differences between this plot and the previous plot.