How is lambda.1se in R's cv.glmnet computed?



Both methods add a penalty on the coefficients of the explanatory variables to the cost function and minimize the result, which in essence punishes having too many parameters. The difference between the two methods lies in the penalty function: ridge regression penalizes the sum of squared coefficients, while the LASSO penalizes the sum of their absolute values.

Yet this small difference gives the LASSO several desirable properties; in particular, it can select and shrink parameters at the same time.
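As for the question in the title: cv.glmnet stores the mean cross-validated error for each lambda in cvm and its standard error in cvsd, and lambda.1se is simply the largest lambda whose cvm lies within one standard error of the minimum. A minimal sketch with simulated data (the names x, y, fit, idmin and thresh are illustrative; the cv.glmnet fields are real):

>library(glmnet)
>set.seed(1)
>x <- matrix(rnorm(100*20),100,20)          # simulated predictors
>y <- rnorm(100)                            # simulated response
>fit <- cv.glmnet(x,y)                      # 10-fold cross-validation by default
>idmin <- which.min(fit$cvm)                # index of the smallest mean CV error
>thresh <- fit$cvm[idmin]+fit$cvsd[idmin]   # minimum error plus one standard error
>max(fit$lambda[fit$cvm<=thresh])           # should equal fit$lambda.1se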

A function body can be a series of expressions, which must be enclosed in braces:

function(param1, ..., paramN) {
    expr1
    .
    .
    .
    exprM
}

Discussion

A function definition tells R "how to compute". For example, R has no built-in function for the coefficient of variation, so you can define one yourself:

>cv <- function(x) sd(x)/mean(x)

>cv(1:10)

[1] 0.5504819

The first line defines a function named cv; the second line calls it with 1:10 as the value of its argument x. The function evaluates the body expression sd(x)/mean(x) on the argument and returns the result.

Once a function is defined, we can use it anywhere a function is expected, for example as the second argument of lapply (see Recipe 6.2):

>cv <- function(x) sd(x)/mean(x)

>lapply(lst, cv)

If the function body contains multiple expressions, braces are required to mark where the body begins and ends. The following function computes the greatest common divisor of two integers using Euclid's algorithm:

>gcd <- function(a,b) {

+ if (b == 0) return(a)

+ else return(gcd(b, a %% b))

+ }
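A quick check of the definition (Euclid's algorithm reduces gcd(14, 21) step by step down to gcd(7, 0)):

>gcd(14, 21)
[1] 7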

R also allows anonymous functions: functions without a name, which are handy in one-line statements. In the earlier example we passed cv as an argument to lapply; using an anonymous function directly as the argument collapses the command into a single line:

>lapply(lst, function(x) sd(x)/mean(x))

Since this book's focus is not the R programming language itself, we will not go into the finer points of programming R functions. A few things worth noting:

Return value

All functions have a return value: the value of the last expression in the function body. You can also specify the return value explicitly with return(expr).

Call by value

Function arguments are passed by value: if you change an argument's value inside the function, the change is local and does not affect the variable the caller supplied.

Local variables

You create a local variable simply by assigning to it; the variable disappears when the function ends. (Call by value and local variables are both illustrated in the sketch after this list.)

Conditional execution

R syntax includes an if statement; see help(Control) for details.

Loops

R also provides looping constructs such as for, while, and repeat; help(Control) covers these as well.
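A small sketch of call by value and local variables (the function f and all names here are made up for illustration):

>f <- function(x) {
+ x <- x*2          # changes only the local copy of x
+ tmp <- "local"    # a local variable; it vanishes when f returns
+ x                 # the last expression is the return value
+ }
>v <- 10
>f(v)
[1] 20
>v
[1] 10

The caller's v is unchanged: the function doubled only its own copy.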

The Beauty of Data Analysis: Decision Trees in R

Implementing a decision tree in R

1. Prepare the data


>install.packages("tree")
>library(tree)
>library(ISLR)
>attach(Carseats)
>High=ifelse(Sales<=8,"No","Yes")     # classify: High is "Yes" when Sales > 8
>Carseats=data.frame(Carseats,High)   # add the High column to the data frame
>fix(Carseats)
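One caveat: since R 4.0, data.frame() no longer converts character columns to factors by default, and tree() requires a factor response for classification. On a current R installation, a safer variant of the last two assignments is the following sketch:

>High=as.factor(ifelse(Sales<=8,"No","Yes"))   # make High an explicit factor
>Carseats=data.frame(Carseats,High)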

2. Grow the tree


>tree.carseats=tree(High~.-Sales,Carseats)

>summary(tree.carseats)


# output: the training misclassification error is 9% (36/400); in the deviance
# line below, the denominator 373 is n - |T0| = 400 - 27

Classification tree:

tree(formula = High ~ . - Sales, data = Carseats)

Variables actually used in tree construction:

[1] "ShelveLoc" "Price" "Income" "CompPrice" "Population"

[6] "Advertising" "Age" "US"

Number of terminal nodes: 27

Residual mean deviance: 0.4575 = 170.7 / 373

Misclassification error rate: 0.09 = 36 / 400
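The 36/400 training errors reported by summary() can also be reproduced directly from the fitted tree; a quick sketch, assuming the objects above are still in the workspace:

>train.pred=predict(tree.carseats,Carseats,type="class")
>mean(train.pred!=Carseats$High)   # should print 0.09, matching the summary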

3. Plot the tree


>plot(tree.carseats)
>text(tree.carseats,pretty=0)

4. Test error


# prepare the train data and test data
# We begin by using the sample() function to split the set of observations
# into two halves, selecting a random subset of 200 of the original 400 observations.
>set.seed(1)
>train=sample(1:nrow(Carseats),200)
>Carseats.test=Carseats[-train,]
>High.test=High[-train]
# fit the tree model on the training data
>tree.carseats=tree(High~.-Sales,Carseats,subset=train)
# get the test error with the tree model, the test data and predict();
# predict is a generic function for predictions from various fitted models
>tree.pred=predict(tree.carseats,Carseats.test,type="class")
>table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  86  27
      Yes 30  57
>(86+57)/200   # fraction of correct test predictions
[1] 0.715
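The same accuracy can be computed in one line instead of reading it off the table (using the objects above):

>mean(tree.pred==High.test)
[1] 0.715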

5. Prune the tree


# Next, we consider whether pruning the tree might lead to improved results.
# The function cv.tree() performs cross-validation in order to determine the
# optimal level of tree complexity; cost-complexity pruning is used to select
# a sequence of trees for consideration.
# For regression trees, only the default, deviance, is accepted. For
# classification trees, the default is deviance and the alternative is
# misclass (the number of misclassifications, or total loss).
# We use the argument FUN=prune.misclass in order to indicate that we want the
# classification error rate, rather than the default deviance, to guide the
# cross-validation and pruning process.
# For a regression tree one would plot, for example:
#   plot(cv.boston$size,cv.boston$dev,type="b")

>set.seed(3)
>cv.carseats=cv.tree(tree.carseats,FUN=prune.misclass,K=10)

# cv.tree() reports the number of terminal nodes of each tree considered (size),
# the corresponding error rate (dev), and the value of the cost-complexity
# parameter used (k, which corresponds to alpha).

>names(cv.carseats)
[1] "size"   "dev"    "k"      "method"
>cv.carseats
$size     # the number of terminal nodes of each tree considered
[1] 19 17 14 13  9  7  3  2  1

$dev      # the corresponding cross-validation error rate
[1] 55 55 53 52 50 56 69 65 80

$k        # the value of the cost-complexity parameter used
[1]      -Inf  0.0000000  0.6666667  1.0000000  1.7500000  2.0000000  4.2500000
[8]  5.0000000 23.0000000

$method   # misclass for a classification tree
[1] "misclass"

attr(,"class")
[1] "prune"         "tree.sequence"


# plot the error rate against tree size to see which size is best
>plot(cv.carseats$size,cv.carseats$dev,type="b")

# Note that, despite the name, dev corresponds to the cross-validation error
# rate in this instance. The tree with 9 terminal nodes results in the lowest
# cross-validation error rate, with 50 cross-validation errors. We plot the
# error rate as a function of both size and k.
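Following the last remark in the comment above, dev can be plotted against both size and k side by side; a sketch reusing cv.carseats (the par() layout is just one choice):

>par(mfrow=c(1,2))
>plot(cv.carseats$size,cv.carseats$dev,type="b")
>plot(cv.carseats$k,cv.carseats$dev,type="b")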

>prune.carseats=prune.misclass(tree.carseats,best=9)
>plot(prune.carseats)
>text(prune.carseats,pretty=0)

# compute the test error again to see how the pruned tree performs on the test set

>tree.pred=predict(prune.carseats,Carseats.test,type="class")
>table(tree.pred,High.test)
         High.test
tree.pred No Yes
      No  94  24
      Yes 22  60
>(94+60)/200   # 77% correct, up from 71.5% before pruning
[1] 0.77
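To gauge how sensitive the test error is to the chosen tree size, the pruning step can be repeated with another size from cv.carseats$size; a sketch (output not shown here):

>prune.carseats=prune.misclass(tree.carseats,best=14)   # 14 is another size reported by cv.tree
>tree.pred=predict(prune.carseats,Carseats.test,type="class")
>mean(tree.pred==High.test)                             # compare with the 0.77 for best=9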