Common operations for displaying phylogenetic trees with ggtree in R


Suppose you already have a phylogenetic tree file in Newick (nwk) format.

After reading it in, all of the tree's information is stored in the variable tree.

Adding tip labels uses geom_tiplab().

You can first add the theme_tree2() function to display the axis and its range, and then use the xlim() function to change that range.
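Putting these pieces together, a minimal sketch might look like this (the file name tree.nwk and the xlim() bounds are placeholders for your own data):

library(ggtree)   # tree visualization, built on ggplot2
library(ape)      # read.tree() parses Newick files

tree <- read.tree("tree.nwk")   # all tree information now lives in `tree`

ggtree(tree) +
  geom_tiplab() +   # label the tips
  theme_tree2() +   # show the x axis so the branch-length scale is visible
  xlim(0, 10)       # widen the axis so long tip labels are not clipped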

The layout parameters are not covered one by one here; see https://yulab-smu.top/treedata-book/chapter4.html for details.
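As a single illustration, a layout is selected via the layout argument ("circular" is one of the documented options):

ggtree(tree, layout = "circular") +   # circular instead of the default rectangular layout
  geom_tiplab()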

How the rpart package does it: it evaluates every predictor and every candidate split point, and the best choice is the split that makes the data within each resulting group more "pure" (homogeneous), meaning the response values within a group vary little. rpart's default measure of this purity is the Gini index. Many parameters control when splitting stops (see rpart.control), and setting them is important and subtle: the finer the splits, the more complex the model and the more prone it is to overfitting, while splits that are too coarse underfit. The usual remedy is pruning: first grow a finely split, fairly complex tree, then use cross-validation to estimate each model's error under different pruning levels, and keep the tree with the smallest error.

How the party package does it: its underlying theory is that of conditional inference trees, which choose predictors and split points via statistical tests. Start from the null hypothesis that every predictor is independent of the response; run chi-square independence tests, admit into the model the predictors whose p-values fall below a threshold, and use the most strongly associated predictor for the first split. Once the predictor is chosen, a permutation test selects the split point. Trees built with party need no pruning, because the threshold itself determines model complexity, which makes choosing the threshold parameter crucial (see ctree_control). A popular practice is to cross-validate over different parameter values and pick the one with the smallest error.
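To make the two approaches concrete, here is a minimal sketch on R's built-in iris data (the cp value and mincriterion below are illustrative placeholders, not tuned recommendations):

library(rpart)
library(party)

# rpart: grow a deliberately complex tree (small cp), then prune it back
# using the cross-validated error (xerror) from the cp table.
fit <- rpart(Species ~ ., data = iris, method = "class",
             control = rpart.control(cp = 0.001))
printcp(fit)   # cross-validated error for each value of cp
best.cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned <- prune(fit, cp = best.cp)   # keep the subtree with the lowest xerror

# party: conditional inference tree; the significance threshold
# (mincriterion = 1 - alpha) controls complexity, so no pruning is needed.
cfit <- ctree(Species ~ ., data = iris,
              controls = ctree_control(mincriterion = 0.95))
plot(cfit)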

The Beauty of Data Analysis: Decision Trees in R

Implementing a decision tree in R

1. Prepare the data


>install.packages("tree")

>library(tree)

>library(ISLR)

>attach(Carseats)

>High=ifelse(Sales<=8,"No","Yes")   # derive a binary High label from Sales to classify

>Carseats=data.frame(Carseats,High) # add the High column to the data set

>fix(Carseats)

2. Build the decision tree


>tree.carseats=tree(High~.-Sales,Carseats)

>summary(tree.carseats)


# output: the training error rate is 9%

Classification tree:

tree(formula = High ~ . - Sales, data = Carseats)

Variables actually used in tree construction:

[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"

[6] "Advertising" "Age" "US"

Number of terminal nodes: 27

Residual mean deviance: 0.4575 = 170.7 / 373

Misclassification error rate: 0.09 = 36 / 400
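As a quick sanity check on these two numbers (ISLR defines the residual mean deviance as the total deviance divided by n minus the number of terminal nodes):

$$
\frac{170.7}{400 - 27} = \frac{170.7}{373} \approx 0.4575,
\qquad
\frac{36}{400} = 0.09
$$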

3. Plot the decision tree


>plot(tree.carseats)

>text(tree.carseats, pretty=0)

4. Test error


# prepare the training data and the test data

# We begin by using the sample() function to split the set of observations into two halves, by selecting a random subset of 200 observations out of the original 400 observations.

>set.seed(1)

>train=sample(1:nrow(Carseats),200)

>Carseats.test=Carseats[-train,]

>High.test=High[-train]

# fit the tree model on the training data

>tree.carseats=tree(High~.-Sales, Carseats, subset=train)

# estimate the test error using the tree model, the test data, and predict()

# predict() is a generic function for predictions from the results of various model-fitting functions.

>tree.pred=predict(tree.carseats, Carseats.test, type="class")

>table(tree.pred, High.test)

         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57

>(86+57)/200

[1] 0.715

In other words, the unpruned tree classifies 71.5% of the test observations correctly, so its test error rate is 1 − 0.715 = 28.5%.

5. Prune the decision tree


# Next, we consider whether pruning the tree might lead to improved results. The function cv.tree() performs cross-validation in order to determine the optimal level of tree complexity; cost-complexity pruning is used in order to select a sequence of trees for consideration.

# For regression trees, only the default, deviance, is accepted. For classification trees, the default is deviance and the alternative is misclass (number of misclassifications or total loss).

# We use the argument FUN=prune.misclass in order to indicate that we want the classification error rate to guide the cross-validation and pruning process, rather than the default for the cv.tree() function, which is deviance.

# If the tree were a regression tree, the analogous plot would be:
# >plot(cv.boston$size, cv.boston$dev, type="b")

>set.seed(3)

>cv.carseats=cv.tree(tree.carseats, FUN=prune.misclass, K=10)

# The cv.tree() function reports the number of terminal nodes of each tree considered (size), the corresponding error rate (dev), and the value of the cost-complexity parameter used (k, which corresponds to α).

>names(cv.carseats)

[1] "size"   "dev"    "k"      "method"

>cv.carseats

$size   # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1

$dev    # the corresponding cross-validation error
[1] 55 55 53 52 50 56 69 65 80

$k      # the value of the cost-complexity parameter used
[1]      -Inf 0.0000000 0.6666667 1.0000000 1.7500000 2.0000000 4.2500000
[8] 5.0000000 23.0000000

$method # "misclass" for a classification tree
[1] "misclass"

attr(,"class")
[1] "prune"         "tree.sequence"


# plot the cross-validation error against tree size to see which size is best

>plot(cv.carseats$size, cv.carseats$dev, type="b")

# Note that, despite the name, dev corresponds to the cross-validation error rate in this instance. The tree with 9 terminal nodes results in the lowest cross-validation error rate, with 50 cross-validation errors. We plot the error rate as a function of both size and k, as sketched below.
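The two-panel plot mentioned in the comment can be drawn like this (the xlab/ylab labels are added here for readability):

par(mfrow=c(1,2))   # two panels side by side
plot(cv.carseats$size, cv.carseats$dev, type="b", xlab="size", ylab="CV error")
plot(cv.carseats$k, cv.carseats$dev, type="b", xlab="k", ylab="CV error")
# the k = -Inf entry has no finite position, so it simply is not drawn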

>prune.carseats=prune.misclass(tree.carseats, best=9)

>plot(prune.carseats)

>text(prune.carseats, pretty=0)

# compute the test error again to see how this pruned tree performs on the test data set

>tree.pred=predict(prune.carseats, Carseats.test, type="class")

>table(tree.pred, High.test)

         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60

>(94+60)/200

[1] 0.77

The pruned tree is therefore not only simpler but also more accurate: 77% of the test observations are now classified correctly, up from 71.5% for the unpruned tree.