S04 - Pagination Parameter Analysis and Page-by-Page Scraping
# coding=utf-8
import requests
from lxml import etree
import re

base_url = 'https://spiderbuf.cn/web-scraping-practice/web-pagination-scraper?pageno=%d'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}
...
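The listing above stops at the request headers. A minimal sketch of the remaining pagination loop, assuming five pages and a plain //table//tr row structure (both are placeholders, not taken from the original script):

# coding=utf-8
import requests
from lxml import etree

base_url = 'https://spiderbuf.cn/web-scraping-practice/web-pagination-scraper?pageno=%d'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}

# Assumed page count and XPath; adjust both to what the target page actually shows.
for pageno in range(1, 6):
    html = requests.get(base_url % pageno, headers=myheaders).text

    # Save each page locally, mirroring the file-saving pattern used in S01.
    with open('04_%d.html' % pageno, 'w', encoding='utf-8') as f:
        f.write(html)

    root = etree.HTML(html)
    # Hypothetical extraction: print the cell text of every table row on the page.
    for row in root.xpath('//table//tr'):
        print([t.strip() for t in row.xpath('./td/text()')])

The key idea is that pageno is the only thing that changes between requests, so a single URL template plus a loop covers every page.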
S03 - Advanced lxml Syntax and Parsing Practice
# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/lxml-xpath-advanced'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}
...
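This listing is cut off in the same way. A minimal sketch of a possible continuation, assuming the exercise targets a table; the contains() and position() predicates are only there to illustrate the "advanced" XPath syntax the title refers to, not copied from the original script:

# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/lxml-xpath-advanced'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}

html = requests.get(url, headers=myheaders).text
root = etree.HTML(html)

# Illustrative "advanced" XPath: an attribute predicate with contains() plus a
# position() filter that skips the header row. The class name is an assumption.
rows = root.xpath('//table[contains(@class, "table")]//tr[position() > 1]')
for row in rows:
    print(row.xpath('./td/text()'))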
S02 - HTTP Request Analysis and Header Construction
# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/scraper-http-header'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}
...
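Again the listing breaks off right after the headers are built. A minimal sketch of how the constructed headers would be sent, assuming a simple //td/text() extraction step (an assumption, not the original code):

# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/scraper-http-header'
myheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.164 Safari/537.36'}

# Send the constructed headers with the request; without a browser-like
# User-Agent the server may reject or alter the response.
response = requests.get(url, headers=myheaders)
print(response.status_code)

root = etree.HTML(response.text)
# Hypothetical extraction step: print the text of every table cell.
print(root.xpath('//td/text()'))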
S01 - Getting Started with the requests and lxml Libraries
# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/requests-lxml-for-scraping-beginner'
html = requests.get(url).text
f = open('01.html', 'w', encoding='utf-8')
f.write(html)
f.close()
root = etree.HTML(html)
...
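The listing is truncated at the parsing step. A minimal sketch of a possible extraction, using a generic //h1 | //td XPath as a stand-in; the real exercise targets specific elements on the page:

# coding=utf-8
import requests
from lxml import etree

url = 'https://spiderbuf.cn/web-scraping-practice/requests-lxml-for-scraping-beginner'
html = requests.get(url).text

# Save the raw page so it can be inspected offline (same as the original script).
with open('01.html', 'w', encoding='utf-8') as f:
    f.write(html)

root = etree.HTML(html)
# Hypothetical XPath: grab heading and table-cell text and print it line by line.
for text in root.xpath('//h1/text() | //td/text()'):
    print(text.strip())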
Scraping Web Pages with Selenium in Python
# coding=utf-8
from selenium import webdriver

if __name__ == '__main__':
    url = 'http://www.example.com'
    client = webdriver.Chrome()
    client.get(url)
    html = client.page_source
    print(html)
    client.quit()
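Since page_source is just an HTML string, it can be handed to the same lxml parsing used in S01 to S04. A minimal sketch, with //title/text() as a stand-in XPath:

# coding=utf-8
from selenium import webdriver
from lxml import etree

if __name__ == '__main__':
    url = 'http://www.example.com'
    client = webdriver.Chrome()
    client.get(url)
    html = client.page_source
    client.quit()

    # Parse the browser-rendered HTML with lxml, just like the requests-based scripts.
    root = etree.HTML(html)
    print(root.xpath('//title/text()'))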