

Downloading and Saving Images with Scrapy in Python

Published: 2022-02-22 12:47 | Source: 源碼之家

In day-to-day scraping practice, the data we crawl usually needs to be persisted. For images, Scrapy provides the ImagesPipeline class for exactly this; it is already bundled with Scrapy, so we can use it directly.

When using ImagesPipeline to download image data, we need to override three of its pipeline methods:

— get_media_requests: issues a request for each image URL

— file_path: returns the file name to save the image under

— item_completed: returns the item, handing it on to the next pipeline class to be executed

So what does the code actually look like? First, in pipelines.py, import the ImagesPipeline class and override the three methods described above:

from scrapy.pipelines.images import ImagesPipeline
import scrapy


class ImgsPipLine(ImagesPipeline):
    def get_media_requests(self, item, info):
        # request each image URL, passing the item along in meta
        yield scrapy.Request(url=item['img_src'], meta={'item': item})

    # just return the image file name
    def file_path(self, request, response=None, info=None):
        item = request.meta['item']
        print('########', item)  # debug output
        return item['img_name']

    def item_completed(self, results, item, info):
        # hand the item on to the next pipeline
        return item
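As a side note: on newer Scrapy releases (2.4 and later) file_path receives the item directly as a keyword argument, so the meta round-trip above is not needed. A minimal sketch, assuming such a version:

from scrapy.pipelines.images import ImagesPipeline
import scrapy


class ImgsPipLine(ImagesPipeline):
    def get_media_requests(self, item, info):
        yield scrapy.Request(url=item['img_src'])

    # Scrapy >= 2.4 passes the originating item in as a keyword argument
    def file_path(self, request, response=None, info=None, *, item=None):
        return item['img_name']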

Once the methods are defined, two things need to be set in the settings.py configuration file: the directory images are saved to, IMAGES_STORE = 'D:\\ImgPro', and the 'ImgsPipLine' pipeline itself, which has to be enabled:

ITEM_PIPELINES = {
    'imgPro.pipelines.ImgsPipLine': 300,  # 300 is the priority; lower numbers run earlier
}
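The hard-coded Windows path works, but it is not portable. A small sketch of a platform-neutral alternative, assuming the images should land in an imgs folder under the directory the crawl is launched from:

import os
IMAGES_STORE = os.path.join(os.getcwd(), 'imgs')  # resolved at startup, works on any OS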

With that configured, running the program saves the downloaded images under 'D:\\ImgPro'.
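One prerequisite worth mentioning: ImagesPipeline relies on the Pillow library for image handling, so it must be installed first or the pipeline will fail to load:

pip install Pillow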

The complete code is as follows:

Spider file code:

# -*- coding: utf-8 -*-
import scrapy
from imgPro.items import ImgproItem


class ImgSpider(scrapy.Spider):
    name = 'img'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    def parse(self, response):
        # parse the image URL and image name out of each list entry
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            item = ImgproItem()
            item['img_src'] = 'http://www.521609.com/' + li.xpath('./a[1]/img/@src').extract_first()
            item['img_name'] = li.xpath('./a[1]/img/@alt').extract_first() + '.jpg'
            yield item
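With the spider in place, the crawl can be started from the project root; since the spider's name attribute is 'img', the command is:

scrapy crawl img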

items.py file:

import scrapy


class ImgproItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    img_src = scrapy.Field()
    img_name = scrapy.Field()

pipelines.py file:

from scrapy.pipelines.images import ImagesPipeline
import scrapy


class ImgsPipLine(ImagesPipeline):
    def get_media_requests(self, item, info):
        # request each image URL, passing the item along in meta
        yield scrapy.Request(url=item['img_src'], meta={'item': item})

    # just return the image file name
    def file_path(self, request, response=None, info=None):
        item = request.meta['item']
        print('########', item)  # debug output
        return item['img_name']

    def item_completed(self, results, item, info):
        # hand the item on to the next pipeline
        return item
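item_completed also receives the download results, which makes it a natural place to discard items whose image could not be fetched. A minimal sketch of that variant, assuming failed items should simply be dropped:

from scrapy.exceptions import DropItem


class ImgsPipLine(ImagesPipeline):
    # ... get_media_requests and file_path as above ...

    def item_completed(self, results, item, info):
        # results is a list of (success, info_or_failure) tuples, one per request
        if not any(ok for ok, _ in results):
            raise DropItem('image download failed: %s' % item.get('img_src'))
        return item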

settings.py file:

import random

BOT_NAME = 'imgPro'

SPIDER_MODULES = ['imgPro.spiders']
NEWSPIDER_MODULE = 'imgPro.spiders'

IMAGES_STORE = 'D:\\ImgPro'  # directory images are saved to
LOG_LEVEL = "WARNING"
ROBOTSTXT_OBEY = False

# pool of User-Agent strings; one is chosen at random below
USER_AGENTS_LIST = [
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
  "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
  "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
  "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
  "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
USER_AGENT = random.choice(USER_AGENTS_LIST)  # chosen once, at startup
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': USER_AGENT,
}
 
# enable the item pipeline
ITEM_PIPELINES = {
    'imgPro.pipelines.ImgsPipLine': 300,
}
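Note that random.choice above runs only once, when the settings module is loaded, so every request in the crawl shares the same User-Agent. If per-request rotation is wanted, a small downloader middleware does the job; a minimal sketch, assuming it lives in imgPro/middlewares.py:

import random
from imgPro.settings import USER_AGENTS_LIST


class RandomUserAgentMiddleware:
    def process_request(self, request, spider):
        # overwrite the User-Agent header of every outgoing request
        request.headers['User-Agent'] = random.choice(USER_AGENTS_LIST)

It would then be enabled in settings.py:

DOWNLOADER_MIDDLEWARES = {
    'imgPro.middlewares.RandomUserAgentMiddleware': 543,
}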

That is how to download and save images with ImagesPipeline. (A half-joking thought that struck me while writing this: as the saying goes, scrape too well and the prison food tastes better, so do mind the legal side of crawling.)
