How to scrape data with a Python crawler after logging into Zhihu


Simulated login

Many websites, such as Zhihu, Weibo, and Douban, only show certain content after you log in, so to scrape them you must first simulate a login. A relatively simple way to do this is to use the site's cookie. A cookie works like a strongbox holding the user's basic information for that site: after one login the site remembers you by storing this information in the cookie, so that the next visit can log in automatically. The strategy for scraping such sites is therefore: log in manually once and save the cookie, then on later visits load the cookie from that first login to log in automatically.
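Saved cookies eventually expire, so it is worth verifying that a reused cookie still logs you in before starting a crawl. A minimal sketch, assuming the cookies were saved to cookies.pkl with pickle as in the scripts later in this article; the URL is a hypothetical page that only logged-in users can reach:

import pickle
import requests

def cookies_still_valid():
    # Load the cookies saved by an earlier manual login (see the scripts below).
    cookies = pickle.load(open('cookies.pkl', 'rb'))
    s = requests.Session()
    for cookie in cookies:
        s.cookies.set(cookie['name'], cookie['value'])
    # 'http://test.com/settings' is a placeholder for a page that redirects
    # to the login form when the visitor is not logged in.
    r = s.get('http://test.com/settings', allow_redirects=False)
    # A 3xx redirect to the login page usually means the cookie has expired.
    return r.status_code == 200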

Dynamic scraping

When scraping a Zhihu question, you have to scroll the mouse wheel to the bottom of the page before new answers are shown. A static scraping approach cannot do this; the selenium library solves the problem by simulating a person browsing the site and performing actions, and it is simple and easy to understand.
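As a minimal sketch of that idea (assuming Chrome with a matching chromedriver, and a hypothetical question URL), the usual trick is to run a snippet of JavaScript that jumps to the bottom of the page, wait for the new answers to load, and repeat until the page height stops growing:

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.zhihu.com/question/12345')  # hypothetical question URL

last_height = driver.execute_script('return document.body.scrollHeight')
while True:
    # Scroll to the bottom so the site loads the next batch of answers.
    driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
    time.sleep(3)  # give the new content time to load
    new_height = driver.execute_script('return document.body.scrollHeight')
    if new_height == last_height:
        break  # page height stopped growing: no more answers to load
    last_height = new_height

html = driver.page_source  # now contains all loaded answers
driver.quit()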

scrapy.FormRequest

login.py

import scrapy

class LoginSpider(scrapy.Spider):
    name = 'login_spider'
    start_urls = ['hin.com']

    def parse(self, response):
        return [
            scrapy.FormRequest.from_response(
                response,
                # change 'username' and 'password' to match the name
                # attributes of the form fields on the actual page
                formdata={'username': 'your_username', 'password': 'your_password'},
                callback=self.after_login)]

    def after_login(self, response):
        # code to run after logging in
        pass
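FormRequest.from_response builds the POST request from the login form it finds in the page, so hidden form fields such as CSRF tokens are carried along automatically; only the fields passed in formdata are overridden. A common follow-up, along the lines of the login example in the Scrapy documentation, is to check in after_login that the login actually succeeded before crawling on (the error string and follow-up URL here are placeholders to adapt):

def after_login(self, response):
    # Check that the login succeeded before going on; 'authentication failed'
    # stands in for whatever error text the site actually shows.
    if 'authentication failed' in response.text:
        self.logger.error('Login failed')
        return
    # Logged in: continue with the pages that require authentication.
    yield scrapy.Request('hin.com/some_page', callback=self.parse_data)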

Logging in with selenium to get cookies

get_cookie_by_selenium.py

import pickle
import time
from selenium import webdriver

def get_cookies():
    url = 'httest.com'
    web_driver = webdriver.Chrome()
    web_driver.get(url)
    # Fill in the login form (the element ids depend on the actual page).
    username = web_driver.find_element_by_id('login-email')
    username.send_keys('username')
    password = web_driver.find_element_by_id('login-password')
    password.send_keys('password')
    login_button = web_driver.find_element_by_id('login-submit')
    login_button.click()
    time.sleep(3)  # wait for the login to complete
    cookies = web_driver.get_cookies()
    web_driver.close()
    return cookies

if __name__ == '__main__':
    cookies = get_cookies()
    pickle.dump(cookies, open('cookies.pkl', 'wb'))
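One caveat: the find_element_by_id helpers were removed in Selenium 4, so on a current install the same lookups are written with the By locator instead:

from selenium.webdriver.common.by import By

username = web_driver.find_element(By.ID, 'login-email')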

Getting cookies from the browser (Firefox on Ubuntu as an example)

get_cookie_by_firefox.py

import sqlite3
import pickle

def get_cookie_by_firefox():
    # Path to the cookie database of the Firefox profile (name varies per machine).
    cookie_path = '/home/name/.mozilla/firefox/bqtvfe08.default/cookies.sqlite'
    with sqlite3.connect(cookie_path) as conn:
        sql = 'select name, value from moz_cookies where baseDomain="test.com"'
        cur = conn.cursor()
        cookies = [{'name': name, 'value': value}
                   for name, value in cur.execute(sql).fetchall()]
    return cookies

if __name__ == '__main__':
    cookies = get_cookie_by_firefox()
    pickle.dump(cookies, open('cookies.pkl', 'wb'))
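Two caveats, both assumptions to check against your own setup: the profile directory name (bqtvfe08.default) differs on every machine, and the moz_cookies schema has changed across Firefox versions, so if the baseDomain column is missing, filtering on the host column is a likely substitute:

sql = 'select name, value from moz_cookies where host like "%test.com"'

Firefox may also hold a lock on cookies.sqlite while it is running, so close the browser (or work on a copy of the file) first.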

Using the saved cookies in scrapy

cookies = pickle.load(open('cookies.pkl', 'rb'))
yield scrapy.Request(url, cookies=cookies, callback=self.parse)
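In context, these two lines would typically live in a spider's start_requests method, so that the crawl starts from the logged-in session. A minimal sketch, with the spider name and URL as placeholders:

import pickle
import scrapy

class CookieSpider(scrapy.Spider):
    name = 'cookie_spider'

    def start_requests(self):
        # Load the cookies saved by one of the scripts above.
        cookies = pickle.load(open('cookies.pkl', 'rb'))
        yield scrapy.Request('hin.com', cookies=cookies, callback=self.parse)

    def parse(self, response):
        # parse the logged-in page here
        pass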

Using the saved cookies in requests

import pickle
import requests

cookies = pickle.load(open('cookies.pkl', 'rb'))
s = requests.Session()
for cookie in cookies:
    s.cookies.set(cookie['name'], cookie['value'])
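After the loop the session carries the login cookies, so any request made through it is sent as the logged-in user, e.g. (the URL is a placeholder):

r = s.get('http://test.com/some_page')  # fetched with the saved login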

Using the saved cookies in selenium

import pickle
from selenium import webdriver

cookies = pickle.load(open('cookies.pkl', 'rb'))
w = webdriver.Chrome()
# Adding the cookies directly raises an error; the workaround below is
# one solution, there may be better ones.
# -- start --
w.get('hww.test.com')
w.delete_all_cookies()
# -- end --
for cookie in cookies:
    w.add_cookie(cookie)
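The detour exists because add_cookie only accepts cookies for the domain of the page currently loaded, so the site has to be visited once (and the fresh cookies it sets cleared) before the saved ones can be injected. After the loop, load the page again and it should render as the logged-in user:

w.get('hww.test.com')  # reload: the site now sees the saved login cookies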