Here is a supplementary MATLAB implementation:
function [cid,nr,centers] = cskmeans(x,k,nc)
% CSKMEANS K-Means clustering - general method.
%
% This implements the more general k-means algorithm, where
% HMEANS is used to find the initial partition and then each
% observation is examined for further improvements in minimizing
% the within-group sum of squares.
%
% [CID,NR,CENTERS] = CSKMEANS(X,K,NC) Performs K-means
% clustering using the data given in X.
%
% INPUTS: X is the n x d matrix of data,
% where each row indicates an observation. K indicates
% the number of desired clusters. NC is a k x d matrix for the
% initial cluster centers. If NC is not specified, then the
% centers will be randomly chosen from the observations.
%
% OUTPUTS: CID provides a set of n indexes indicating cluster
% membership for each point. NR is the number of observations
% in each cluster. CENTERS is a matrix, where each row
% corresponds to a cluster center.
%
% See also CSHMEANS
% W. L. and A. R. Martinez, 9/15/01
% Computational Statistics Toolbox
warning off
[n,d] = size(x);
if nargin < 3
    % Then pick some observations to be the cluster centers.
    ind = ceil(n*rand(1,k));
    % We will add some noise to make it interesting.
    nc = x(ind,:) + randn(k,d);
end
% Set up storage.
% Integer 1,...,k indicating cluster membership.
cid = zeros(1,n);
% Make this different to get the loop started.
oldcid = ones(1,n);
% The number in each cluster.
nr = zeros(1,k);
% Set up maximum number of iterations.
maxiter = 100;
iter = 1;
while ~isequal(cid,oldcid) && iter < maxiter
    % Implement the hmeans algorithm.
    % Remember the previous assignment so the convergence test can succeed.
    oldcid = cid;
    % For each point, find the distance to all cluster centers.
    for i = 1:n
        dist = sum((repmat(x(i,:),k,1) - nc).^2, 2);
        [m,ind] = min(dist);  % assign it to this cluster center
        cid(i) = ind;
    end
    % Find the new cluster centers.
    for i = 1:k
        % Find all points in this cluster.
        ind = find(cid == i);
        % Find the centroid (mean along dim 1, correct even for one point).
        nc(i,:) = mean(x(ind,:), 1);
        % Find the number in each cluster.
        nr(i) = length(ind);
    end
    iter = iter + 1;
end
% Now check each observation to see if the error can be minimized some more.
maxiter = 2;
iter = 1;
move = 1;
while iter < maxiter && move ~= 0
    move = 0;
    % Loop through all points.
    for i = 1:n
        % Find the distance to all cluster centers.
        dist = sum((repmat(x(i,:),k,1) - nc).^2, 2);
        r = cid(i);  % this is the cluster id for x
        dadj = nr./(nr+1).*dist';  % all adjusted distances
        [m,ind] = min(dadj);  % minimum should be the cluster it belongs to
        if ind ~= r
            % If not, then move x and update both affected clusters.
            cid(i) = ind;
            ic = find(cid == ind);
            io = find(cid == r);
            nc(ind,:) = mean(x(ic,:), 1);
            if ~isempty(io), nc(r,:) = mean(x(io,:), 1); end
            nr(ind) = nr(ind) + 1;
            nr(r) = nr(r) - 1;
            move = 1;
        end
    end
    iter = iter + 1;
end
centers = nc;
if move == 0
    disp('No points were moved after the initial clustering procedure.')
else
    disp('Some points were moved after the initial clustering procedure.')
end
warning on
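The least obvious step in the refinement pass is the adjusted distance `nr./(nr+1).*dist'`: if a point at squared distance d from a cluster center joins a cluster of size n, that cluster's sum of squares increases by n/(n+1)·d, so comparing these adjusted values picks the move that grows the total error the least. A minimal Python illustration of that test, with made-up numbers (not part of the MATLAB function above):

```python
import numpy as np

# Hypothetical tiny example of the adjusted-distance test: joining a
# cluster of size n at squared distance d adds n/(n+1) * d to its SSE.
x = np.array([0.9, 0.0])                      # the point being examined
centers = np.array([[0.0, 0.0], [2.0, 0.0]])  # two cluster centers
nr = np.array([5, 2])                         # current size of each cluster

dist = ((centers - x) ** 2).sum(axis=1)       # squared Euclidean distances
dadj = nr / (nr + 1.0) * dist                 # SSE increase for each cluster
best = int(np.argmin(dadj))                   # cluster with the smallest increase
```

Here the raw distances are 0.81 and 1.21, and the size adjustment leaves cluster 0 as the cheaper choice.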
K-means is a hard clustering algorithm and a typical representative of prototype-based, objective-function clustering methods: some distance from each data point to a prototype serves as the objective function being optimized, and the iterative update rules are derived by seeking the extrema of that function. K-means uses Euclidean distance as its similarity measure and, for a given initial set of cluster centers V, seeks the optimal partition, i.e. the one minimizing the evaluation criterion J. The algorithm uses the sum-of-squared-errors criterion as its clustering criterion function.
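The criterion J mentioned above is just the within-group sum of squared Euclidean distances from each point to its assigned center. A short sketch of computing it (the arrays here are hypothetical, chosen only to illustrate the formula):

```python
import numpy as np

def sse(x, cid, centers):
    """Within-group sum of squares J = sum_i ||x_i - c_{cid[i]}||^2.
    x: (n, d) data; cid: (n,) cluster index per point; centers: (k, d)."""
    diff = x - centers[cid]          # each point minus its own center
    return float((diff ** 2).sum())

# Hypothetical example: two points in cluster 0, one in cluster 1.
x = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
cid = np.array([0, 0, 1])
centers = np.array([[0.5, 0.0], [10.0, 0.0]])
```

Each of the first two points contributes 0.25, the third contributes 0, so J = 0.5 for this assignment.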
Clusters are usually defined in terms of some distance or similarity between samples: similar (nearby) samples are grouped into the same cluster, while dissimilar (distant) samples are placed in other clusters.
The clustering problem can be stated as follows: given a set D of elements, each with n observable attributes, use some algorithm to partition D into k subsets such that the dissimilarity between elements within each subset is as low as possible, while the dissimilarity between elements of different subsets is as high as possible. Each such subset is called a cluster.
k-means is a very common clustering algorithm. Its basic idea is to iteratively search for a partition into k clusters such that the total error incurred when each cluster's mean is used to represent its samples is minimized.
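The iterative scheme just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the MATLAB routine above: initialization is a plain random sample of the observations, and an empty cluster simply keeps its old center.

```python
import numpy as np

def kmeans(x, k, max_iter=100, seed=0):
    """Minimal k-means sketch: alternate assignment and centroid update
    until the assignment stops changing. x: (n, d); returns (cid, centers)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    centers = x[rng.choice(n, size=k, replace=False)]  # k observations as seeds
    cid = np.full(n, -1)
    for _ in range(max_iter):
        # Assignment step: nearest center by squared Euclidean distance.
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_cid = d2.argmin(axis=1)
        if np.array_equal(new_cid, cid):
            break                     # partition unchanged: converged
        cid = new_cid
        # Update step: each center becomes the mean of its cluster.
        for j in range(k):
            pts = x[cid == j]
            if len(pts):              # keep the old center if the cluster is empty
                centers[j] = pts.mean(axis=0)
    return cid, centers
```

On two well-separated blobs this recovers the obvious partition; for real use, multiple random restarts are the usual remedy for a bad initialization.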
This works reasonably well in practice.
Typical applications include segmenting a company's customers so that different business strategies can be applied to different customer groups, or, in e-commerce, analyzing product similarity in order to group products and apply a different sales strategy to each group, and so on.