Using the networking classes that the JDK provides, you can download the HTML source of the page behind a URL.
From that HTML, regular expressions are enough to pull out the content we care about.
For example, if we want every piece of text on a page that contains the keyword "java", we can run a regular expression over the page source line by line, strip the HTML tags and any irrelevant content, and keep only the lines that contain "java".
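A minimal sketch of that idea follows; the target URL and the tag-stripping regex are placeholders chosen for illustration, not taken from the original answer:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class KeywordGrabber {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/"); // placeholder page
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // strip the html tags, then keep only lines that mention "java"
                String text = line.replaceAll("<[^>]+>", "").trim();
                if (text.toLowerCase().contains("java")) {
                    System.out.println(text);
                }
            }
        }
    }
}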
Crawling images from a page follows essentially the same flow as crawling text, with one extra step.
First match the img tags with a regular expression, then use a second expression on the src attribute to get the image URL inside each img tag, read the image bytes from that URL through a buffered input stream, and write them to a local file with a file output stream.
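A rough sketch of that extra step, assuming the page source is already in a String named html; the regexes here are simplified for illustration and will not cover every form of img markup:

import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ImageGrabber {
    public static void saveImages(String html) throws Exception {
        // 1) match the img tags, 2) pull the src attribute out of each tag
        Pattern imgTag = Pattern.compile("<img[^>]+>", Pattern.CASE_INSENSITIVE);
        Pattern srcAttr = Pattern.compile("src\\s*=\\s*\"([^\"]+)\"", Pattern.CASE_INSENSITIVE);
        Matcher tagMatcher = imgTag.matcher(html);
        int count = 0;
        while (tagMatcher.find()) {
            Matcher srcMatcher = srcAttr.matcher(tagMatcher.group());
            if (!srcMatcher.find()) {
                continue;
            }
            String imageUrl = srcMatcher.group(1);
            // 3) read the image bytes through a buffered input stream and write them to disk
            try (BufferedInputStream in = new BufferedInputStream(new URL(imageUrl).openStream());
                 FileOutputStream out = new FileOutputStream("image_" + (count++) + ".jpg")) {
                byte[] buffer = new byte[4096];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
            }
        }
    }
}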
A web crawler is a program that automatically fetches web pages; it downloads pages from the World Wide Web for a search engine and is an essential part of one.
A traditional crawler starts from the URLs of one or more seed pages, collects the URLs found on those pages, and while crawling keeps extracting new URLs from the current page and adding them to a queue until some stop condition is met. For vertical search, a focused crawler, i.e., one that only crawls pages on a specific topic, is a better fit.

The core code of a simple crawler implemented in Java follows:

public void crawl() throws Throwable {
    while (continueCrawling()) {
        CrawlerUrl url = getNextUrl(); // take the next URL from the crawl queue
        if (url != null) {
            printCrawlInfo();
            String content = getContent(url); // fetch the text content of the URL

            // a focused crawler only keeps pages relevant to its topic; a simple regex match is used here
            if (isContentRelevant(content, this.regexpSearchPattern)) {
                saveContent(url, content); // save the page locally

                // extract the links in the page content and add them to the crawl queue
                Collection urlStrings = extractUrls(content, url);
                addUrlsToUrlQueue(url, urlStrings);
            } else {
                System.out.println(url + " is not relevant ignoring ...");
            }

            // pause between requests so the target site does not block us
            Thread.sleep(this.delayBetweenUrls);
        }
    }
    closeOutputStream();
}

private CrawlerUrl getNextUrl() throws Throwable {
    CrawlerUrl nextUrl = null;
    while ((nextUrl == null) && (!urlQueue.isEmpty())) {
        CrawlerUrl crawlerUrl = this.urlQueue.remove();
        // doWeHavePermissionToVisit: whether we are allowed to visit this URL; a polite crawler follows the rules in the site's "robots.txt"
        // isUrlAlreadyVisited: whether the URL has been visited before; large search engines usually deduplicate with a BloomFilter, a HashMap is enough here
        // isDepthAcceptable: whether the depth limit has been reached; crawlers usually work breadth-first, and a depth limit helps avoid crawler traps (auto-generated invalid links that pull the crawler into an endless loop)
        if (doWeHavePermissionToVisit(crawlerUrl)
                && (!isUrlAlreadyVisited(crawlerUrl))
                && isDepthAcceptable(crawlerUrl)) {
            nextUrl = crawlerUrl;
            // System.out.println("Next url to be visited is " + nextUrl);
        }
    }
    return nextUrl;
}

private String getContent(CrawlerUrl url) throws Throwable {
    // HttpClient 4.1 is invoked differently from earlier versions
    HttpClient client = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(url.getUrlString());
    StringBuffer strBuf = new StringBuffer();
    HttpResponse response = client.execute(httpGet);
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            if (entity.getContentLength() > 0) {
                strBuf = new StringBuffer((int) entity.getContentLength());
                while ((line = reader.readLine()) != null) {
                    strBuf.append(line);
                }
            }
        }
        if (entity != null) {
            entity.consumeContent();
        }
    }
    // mark the URL as visited
    markUrlAsVisited(url);
    return strBuf.toString();
}

public static boolean isContentRelevant(String content,
        Pattern regexpPattern) {
    boolean retValue = false;
    if (content != null) {
        // does the content match the regular expression?
        Matcher m = regexpPattern.matcher(content.toLowerCase());
        retValue = m.find();
    }
    return retValue;
}

public List extractUrls(String text, CrawlerUrl crawlerUrl) {
    Map urlMap = new HashMap();
    extractHttpUrls(urlMap, text);
    extractRelativeUrls(urlMap, text, crawlerUrl);
    return new ArrayList(urlMap.keySet());
}

private void extractHttpUrls(Map urlMap, String text) {
    // httpRegexp: the Pattern field for absolute http links; the field name was lost in the original and is assumed here
    Matcher m = httpRegexp.matcher(text);
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            // System.out.println("Term = " + term);
            if (term.startsWith("http")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                urlMap.put(term, term);
                System.out.println("Hyperlink: " + term);
            }
        }
    }
}

private void extractRelativeUrls(Map urlMap, String text,
        CrawlerUrl crawlerUrl) {
    Matcher m = relativeRegexp.matcher(text);
    URL textURL = crawlerUrl.getURL();
    String host = textURL.getHost();
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("a href=\"");
        for (String term : terms) {
            if (term.startsWith("/")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                String s = "http://" + host + term;
                urlMap.put(s, s);
                System.out.println("Relative url: " + s);
            }
        }
    }
}

public static void main(String[] args) {
    try {
        String url = ""; // starting URL (left empty in the original)
        Queue urlQueue = new LinkedList();
        String regexp = "java";
        urlQueue.add(new CrawlerUrl(url, 0));
        NaiveCrawler crawler = new NaiveCrawler(urlQueue, 100, 5, 1000L,
                regexp);
        // boolean allowCrawl = crawler.areWeAllowedToVisit(url);
        // System.out.println("Allowed to crawl: " + url + " " + allowCrawl);
        crawler.crawl();
    } catch (Throwable t) {
        System.out.println(t.toString());
        t.printStackTrace();
    }
}

1. Approach: identify the information to crawl
Analyze the page structure
Analyze the crawl flow
Optimize
2. Identify the information to crawl
Job title
Salary
Job description
Company name
Company homepage
Detail page URL
Analyze the page structure
3. Target site: Lagou (lagou.com)
The site exchanges data as JSON, so analyze the JSON responses to find the key fields we need.
Check where the required information sits in the page, and use Jsoup to parse the HTML.
4. Analyze the crawl flow
1. Collect all positionId values, build the detail-page URLs from them, and store them in a URL list List<String> joburls
2. Fetch each detail page and parse it into a Job object, producing a list List<Job> jobList
3. Write List<Job> jobList into an Excel spreadsheet
Working with Excel from Java here relies on the jxl library; a sketch of creating the workbook with its header row follows below.
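The insertExcel method in section 5 appends rows to an existing workbook, so the file with its header row has to be created once beforehand. A minimal sketch with jxl; the file name, sheet name, and column headers are assumptions for illustration:

import java.io.File;
import java.io.IOException;

import jxl.Workbook;
import jxl.write.Label;
import jxl.write.WritableSheet;
import jxl.write.WritableWorkbook;
import jxl.write.WriteException;

public class ExcelInit {
    // creates the workbook and writes the header row; "jobs.xls" and the column titles are assumed names
    public static void createWorkbook(String filename) throws IOException, WriteException {
        WritableWorkbook book = Workbook.createWorkbook(new File(filename));
        WritableSheet sheet = book.createSheet("jobs", 0);
        String[] headers = {"Job title", "Salary", "Job description",
                "Company", "Company homepage", "Detail page"};
        for (int col = 0; col < headers.length; col++) {
            sheet.addCell(new Label(col, 0, headers[col]));
        }
        book.write();
        book.close();
    }
}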
5. Key code implementation
public List<String> getJobUrls(String gj, String city, String kd) {
    String pre_url = "https://www.lagou.com/jobs/";
    String end_url = ".html";
    String url;
    if (gj.equals("")) {
        url = "http://www.lagou.com/jobs/positionAjax.json?px=default&city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
    } else {
        url = "https://www.lagou.com/jobs/positionAjax.json?gj=" + gj + "&px=default&city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
    }
    String rs = getJson(url);
    System.out.println(rs);
    int total = JsonPath.read(rs, "$.content.positionResult.totalCount"); // total number of positions
    int pagesize = total / 15;
    if (pagesize >= 30) {
        pagesize = 30;
    }
    System.out.println(total);
    // System.out.println(rs);
    List<Integer> posid = JsonPath.read(rs, "$.content.positionResult.result[*].positionId"); // position ids on the first page
    for (int j = 1; j <= pagesize; j++) { // collect the position ids of the remaining pages
        pn++; // advance the page number
        url = "https://www.lagou.com/jobs/positionAjax.json?gj=" + gj + "&px=default&city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
        String rs2 = getJson(url);
        List<Integer> posid2 = JsonPath.read(rs2, "$.content.positionResult.result[*].positionId");
        posid.addAll(posid2); // merge the parsed ids into the first list
    }
    List<String> joburls = new ArrayList<>();
    // build the detail-page URL list
    for (int id : posid) {
        String url3 = pre_url + id + end_url;
        joburls.add(url3);
    }
    return joburls;
}
public Job getJob(String url) { // fetch the details of one job posting
    Job job = new Job();
    Document document = Jsoup.parse(getJson(url));
    job.setJobname(document.select(".name").text());
    job.setSalary(document.select(".salary").text());
    String joball = HtmlTool.tag(document.select(".job_bt").select("div").html()); // strip the html tags
    job.setJobdesc(joball); // the description also contains the requirements
    job.setCompany(document.select(".b2").attr("alt"));
    Elements elements = document.select(".c_feature");
    // System.out.println(document.select(".name").text());
    job.setCompanysite(elements.select("a").attr("href")); // company homepage
    job.setJobdsite(url);
    return job;
}
void insertExcel(List<Job> jobList) throws IOException, BiffException, WriteException {
    int row = 1;
    Workbook wb = Workbook.getWorkbook(new File(JobCondition.filename));
    WritableWorkbook book = Workbook.createWorkbook(new File(JobCondition.filename), wb);
    WritableSheet sheet = book.getSheet(0);
    for (int i = 0; i < jobList.size(); i++) { // write each job in the list to one row of the sheet
        sheet.addCell(new Label(0, row, jobList.get(i).getJobname()));
        sheet.addCell(new Label(1, row, jobList.get(i).getSalary()));
        sheet.addCell(new Label(2, row, jobList.get(i).getJobdesc()));
        sheet.addCell(new Label(3, row, jobList.get(i).getCompany()));
        sheet.addCell(new Label(4, row, jobList.get(i).getCompanysite()));
        sheet.addCell(new Label(5, row, jobList.get(i).getJobdsite()));
        row++;
    }
    book.write();
    book.close();
}
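The methods above all rely on a getJson(String url) helper that is not shown in this snippet. A minimal sketch, assuming it simply returns the response body as a String via Jsoup; the User-Agent and Referer headers are illustrative, and the real Lagou endpoint may additionally require cookies or a POST request:

private String getJson(String url) {
    try {
        // fetch the raw response body (JSON or HTML) as a String
        // assumes org.jsoup.Jsoup and java.io.IOException are imported
        return Jsoup.connect(url)
                .ignoreContentType(true)            // the positionAjax endpoint returns JSON, not HTML
                .userAgent("Mozilla/5.0")           // illustrative header
                .referrer("https://www.lagou.com/") // illustrative header
                .timeout(10_000)
                .execute()
                .body();
    } catch (IOException e) {
        e.printStackTrace();
        return "";
    }
}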