Source code and architecture analysis of colly, a Go web-scraping framework

I stumbled across colly by accident. I had always done my crawling in Python, and while learning Go I wrote a small crawler framework demo modeled on scrapy's architecture. I used to assume Go was not a great fit for crawlers and that its home turf was backend services, but a quick search for colly showed it is actually quite popular. I happen to enjoy crawling: the data on the web is effectively a public API, and a crawler simply calls those interfaces to fetch it. Of course, I stick to the gentleman's agreement (robots.txt).

OK, let's get to the point and introduce colly.

Introducing colly

Lightning Fast and Elegant Scraping Framework for Gophers

Colly provides a clean interface to write any kind of crawler/scraper/spider.
That is the official pitch: gocolly is fast and elegant, able to issue more than 1K requests per second on a single core; it exposes its functionality as a set of callbacks, which is enough to implement any kind of crawler; and through the goquery dependency you can select page elements much like jQuery.

Installation and usage

colly official site

go get -u github.com/gocolly/colly/...
import "github.com/gocolly/colly"

Architecture highlights

Anyone familiar with crawlers knows the lifecycle of a single crawl request:

  1. Build the request
  2. Send the request
  3. Receive the document or data
  4. Parse the document / clean the data
  5. Process or persist the data

scrapy's design philosophy is to pull each of these steps out into its own component and let a scheduler wire them together into a pipeline.
Below is a quick look at scrapy's architecture diagram; I only touch on it briefly here and may cover scrapy in depth another time.


In that diagram, the downloader fetches pages, the spiders contain the concrete document-parsing logic, and the item pipeline does the final data processing. In between sit middlewares that decorate the flow with extra capabilities such as proxies and request throttling.

Now for colly's architecture.
colly reads much more like procedural code: it simply walks the lifecycle above as a pipeline, and at each stage it runs whatever callbacks you have registered for that stage.
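
To make the stage-to-callback mapping concrete, here is a minimal sketch (mine, not from the colly docs) that registers one hook per lifecycle stage; the URL and selectors are placeholders.

package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    c := colly.NewCollector()

    // stages 1-2: the request has been built and is about to be sent
    c.OnRequest(func(r *colly.Request) { fmt.Println("request:", r.URL) })
    // error path: fired whenever the request or response fails
    c.OnError(func(r *colly.Response, err error) { fmt.Println("error:", err) })
    // stage 3: the raw response has arrived
    c.OnResponse(func(r *colly.Response) { fmt.Println("status:", r.StatusCode) })
    // stage 4: document parsing, HTML and/or XML
    c.OnHTML("title", func(e *colly.HTMLElement) { fmt.Println("title:", e.Text) })
    c.OnXML("//title", func(e *colly.XMLElement) { fmt.Println("xml title:", e.Text) })
    // stage 5: everything for this page is done
    c.OnScraped(func(r *colly.Response) { fmt.Println("scraped:", r.Request.URL) })

    c.Visit("https://example.com/") // placeholder URL
}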


The rest of this article follows the same order.

Source code analysis

Let's start with an example.


package main

import (
    "fmt"

    "github.com/gocolly/colly"
)

func main() {
    // Instantiate default collector
    c := colly.NewCollector(
        // Visit only domains: hackerspaces.org, wiki.hackerspaces.org
        colly.AllowedDomains("hackerspaces.org", "wiki.hackerspaces.org"),
    )

    // On every a element which has href attribute call callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print link
        fmt.Printf("Link found: %q -> %s\n", e.Text, link)
        // Visit link found on page
        // Only those links are visited which are in AllowedDomains
        c.Visit(e.Request.AbsoluteURL(link))
    })

    // Before making a request print "Visiting ..."
    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL.String())
    })

    // Start scraping on https://hackerspaces.org
    c.Visit("https://hackerspaces.org/")
}

This is the official example. colly.NewCollector creates a collector, and all of colly's processing revolves around this Collector.

Here is the definition of the Collector struct:

// Collector provides the scraper instance for a scraping job
type Collector struct {
    // UserAgent is the User-Agent string used by HTTP requests
    UserAgent string
    // MaxDepth limits the recursion depth of visited URLs.
    // Set it to 0 for infinite recursion (default).
    MaxDepth int
    // AllowedDomains is a domain whitelist.
    // Leave it blank to allow any domains to be visited
    AllowedDomains []string
    // DisallowedDomains is a domain blacklist.
    DisallowedDomains []string
    // DisallowedURLFilters is a list of regular expressions which restricts
    // visiting URLs. If any of the rules matches to a URL the
    // request will be stopped. DisallowedURLFilters will
    // be evaluated before URLFilters
    // Leave it blank to allow any URLs to be visited
    DisallowedURLFilters []*regexp.Regexp
    // URLFilters is a list of regular expressions which restricts
    // visiting URLs. If any of the rules matches to a URL the
    // request won't be stopped. DisallowedURLFilters will
    // be evaluated before URLFilters

    // Leave it blank to allow any URLs to be visited
    URLFilters []*regexp.Regexp

    // AllowURLRevisit allows multiple downloads of the same URL
    AllowURLRevisit bool
    // MaxBodySize is the limit of the retrieved response body in bytes.
    // 0 means unlimited.
    // The default value for MaxBodySize is 10MB (10 * 1024 * 1024 bytes).
    MaxBodySize int
    // CacheDir specifies a location where GET requests are cached as files.
    // When it's not defined, caching is disabled.
    CacheDir string
    // IgnoreRobotsTxt allows the Collector to ignore any restrictions set by
    // the target host's robots.txt file.  See http://www.robotstxt.org/ for more
    // information.
    IgnoreRobotsTxt bool
    // Async turns on asynchronous network communication. Use Collector.Wait() to
    // be sure all requests have been finished.
    Async bool
    // ParseHTTPErrorResponse allows parsing HTTP responses with non 2xx status codes.
    // By default, Colly parses only successful HTTP responses. Set ParseHTTPErrorResponse
    // to true to enable it.
    ParseHTTPErrorResponse bool
    // ID is the unique identifier of a collector
    ID uint32
    // DetectCharset can enable character encoding detection for non-utf8 response bodies
    // without explicit charset declaration. This feature uses https://github.com/saintfish/chardet
    DetectCharset bool
    // RedirectHandler allows control on how a redirect will be managed
    RedirectHandler func(req *http.Request, via []*http.Request) error
    // CheckHead performs a HEAD request before every GET to pre-validate the response
    CheckHead         bool
    store             storage.Storage
    debugger          debug.Debugger
    robotsMap         map[string]*robotstxt.RobotsData
    htmlCallbacks     []*htmlCallbackContainer
    xmlCallbacks      []*xmlCallbackContainer
    requestCallbacks  []RequestCallback
    responseCallbacks []ResponseCallback
    errorCallbacks    []ErrorCallback
    scrapedCallbacks  []ScrapedCallback
    requestCount      uint32
    responseCount     uint32
    backend           *httpBackend
    wg                *sync.WaitGroup
    lock              *sync.RWMutex
}

I won't go through every field; the comments explain them well enough.
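
Most of these fields can also be set through the functional options passed to NewCollector; a minimal sketch (the values below are placeholders, not recommendations):

    c := colly.NewCollector(
        colly.UserAgent("my-crawler/0.1"), // UserAgent field
        colly.MaxDepth(2),                 // MaxDepth field
        colly.Async(true),                 // Async field
        colly.CacheDir("./colly_cache"),   // CacheDir field
        colly.IgnoreRobotsTxt(),           // IgnoreRobotsTxt field
    )

    // ... register callbacks, then start crawling ...
    c.Visit("https://example.com/") // placeholder URL
    // with Async(true) Visit returns immediately, so wait for all requests to finish
    c.Wait()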
Let me first annotate the example from above and then trace the source from there.

    // Create a Collector
    c := colly.NewCollector(
        // Visit only domains: hackerspaces.org, wiki.hackerspaces.org
        colly.AllowedDomains("hackerspaces.org", "wiki.hackerspaces.org"),
    )

    // Register an HTML callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print link
        fmt.Printf("Link found: %q -> %s\n", e.Text, link)
        // Visit link found on page
        // Only those links are visited which are in AllowedDomains
        c.Visit(e.Request.AbsoluteURL(link))
    })

    // Register a request callback
    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL.String())
    })

    // Start crawling
    c.Visit("https://hackerspaces.org/")

How are the callbacks used, and what do they do? Hold that thought for a moment: c.Visit("https://hackerspaces.org/") is the entry point, so let's start with it.

// Visit starts Collector's collecting job by creating a
// request to the URL specified in parameter.
// Visit also calls the previously provided callbacks
func (c *Collector) Visit(URL string) error {
    if c.CheckHead {
        if check := c.scrape(URL, "HEAD", 1, nil, nil, nil, true); check != nil {
            return check
        }
    }
    return c.scrape(URL, "GET", 1, nil, nil, nil, true)
}

Visit hands the work off to another method, scrape:

func (c *Collector) scrape(u, method string, depth int, requestData io.Reader, ctx *Context, hdr http.Header, checkRevisit bool) error {
    // Validate the request (revisit check, depth limit, etc.)
    if err := c.requestCheck(u, method, depth, checkRevisit); err != nil {
        return err
    }
    // Parse the URL
    parsedURL, err := url.Parse(u)
    if err != nil {
        return err
    }
    if parsedURL.Scheme == "" {
        parsedURL.Scheme = "http"
    }
    if !c.isDomainAllowed(parsedURL.Hostname()) {
        return ErrForbiddenDomain
    }
    // robots.txt check
    if method != "HEAD" && !c.IgnoreRobotsTxt {
        if err = c.checkRobots(parsedURL); err != nil {
            return err
        }
    }
     // headers
    if hdr == nil {
        hdr = http.Header{"User-Agent": []string{c.UserAgent}}
    }
    rc, ok := requestData.(io.ReadCloser)
    if !ok && requestData != nil {
        rc = ioutil.NopCloser(requestData)
    }
    // The Go HTTP API ignores "Host" in the headers, preferring the client
    // to use the Host field on Request.
    host := parsedURL.Host
    if hostHeader := hdr.Get("Host"); hostHeader != "" {
        host = hostHeader
    }
    // Build the http.Request
    req := &http.Request{
        Method:     method,
        URL:        parsedURL,
        Proto:      "HTTP/1.1",
        ProtoMajor: 1,
        ProtoMinor: 1,
        Header:     hdr,
        Body:       rc,
        Host:       host,
    }
    // Attach requestData as the request body (io.ReadCloser)
    setRequestBody(req, requestData)
    u = parsedURL.String()
    c.wg.Add(1)
    // Async mode: fetch in a separate goroutine
    if c.Async {
        go c.fetch(u, method, depth, requestData, ctx, hdr, req)
        return nil
    }
    return c.fetch(u, method, depth, requestData, ctx, hdr, req)
}

Most of that method is validation; we are still in the request phase and there is no response yet. The real work happens in c.fetch.

fetch is the core of colly:

func (c *Collector) fetch(u, method string, depth int, requestData io.Reader, ctx *Context, hdr http.Header, req *http.Request) error {
    defer c.wg.Done()
    if ctx == nil {
        ctx = NewContext()
    }
    request := &Request{
        URL:       req.URL,
        Headers:   &req.Header,
        Ctx:       ctx,
        Depth:     depth,
        Method:    method,
        Body:      requestData,
        collector: c, // the Collector is attached to the Request so follow-up requests can be issued through it
        ID:        atomic.AddUint32(&c.requestCount, 1),
    }
    // Run the request callbacks
    c.handleOnRequest(request)

    if request.abort {
        return nil
    }

    if method == "POST" && req.Header.Get("Content-Type") == "" {
        req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
    }

    if req.Header.Get("Accept") == "" {
        req.Header.Set("Accept", "*/*")
    }

    origURL := req.URL
    // Perform the actual network request; internally this ends up calling http.Client.Do
    response, err := c.backend.Cache(req, c.MaxBodySize, c.CacheDir)
    if proxyURL, ok := req.Context().Value(ProxyURLKey).(string); ok {
        request.ProxyURL = proxyURL
    }
    // Run the error callbacks
    if err := c.handleOnError(response, err, request, ctx); err != nil {
        return err
    }
    if req.URL != origURL {
        request.URL = req.URL
        request.Headers = &req.Header
    }
    atomic.AddUint32(&c.responseCount, 1)
    response.Ctx = ctx
    response.Request = request

    err = response.fixCharset(c.DetectCharset, request.ResponseCharacterEncoding)
    if err != nil {
        return err
    }
    // Run the response callbacks
    c.handleOnResponse(response)
    
    // Run the HTML callbacks
    err = c.handleOnHTML(response)
    if err != nil {
        c.handleOnError(response, err, request, ctx)
    }
    // Run the XML callbacks
    err = c.handleOnXML(response)
    if err != nil {
        c.handleOnError(response, err, request, ctx)
    }
    // Run the scraped callbacks
    c.handleOnScraped(response)

    return err
}


That is the complete flow. Now, what do the callback handlers actually do?

func (c *Collector) handleOnRequest(r *Request) {
    if c.debugger != nil {
        c.debugger.Event(createEvent("request", r.ID, c.ID, map[string]string{
            "url": r.URL.String(),
        }))
    }
    for _, f := range c.requestCallbacks {
        f(r)
    }
}

The core is just the loop for _, f := range c.requestCallbacks { f(r) }. Below I go through each callback pair in turn.

Callbacks

They are introduced here in lifecycle order.

1. OnRequest & handleOnRequest

// OnRequest registers a function. Function will be executed on every
// request made by the Collector
// Registers the callback into requestCallbacks
func (c *Collector) OnRequest(f RequestCallback) {
   c.lock.Lock()
   if c.requestCallbacks == nil {
       c.requestCallbacks = make([]RequestCallback, 0, 4)
   }
   c.requestCallbacks = append(c.requestCallbacks, f)
   c.lock.Unlock()
}


// The first handler invoked inside fetch
func (c *Collector) handleOnRequest(r *Request) {
   if c.debugger != nil {
       c.debugger.Event(createEvent("request", r.ID, c.ID, map[string]string{
           "url": r.URL.String(),
       }))
   }
   for _, f := range c.requestCallbacks {
       f(r)
   }
}
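
Because requestCallbacks is a plain slice, every registered callback runs in registration order before the request is sent. A small sketch (the header value and the skip rule are made up, and the strings package is assumed to be imported):

    c.OnRequest(func(r *colly.Request) {
        // runs first: decorate every outgoing request
        r.Headers.Set("X-Requested-With", "colly-demo")
    })
    c.OnRequest(func(r *colly.Request) {
        // runs second: drop requests we are not interested in
        if strings.Contains(r.URL.Path, "/logout") {
            r.Abort()
        }
    })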

2. OnResponse & handleOnResponse

// OnResponse registers a function. Function will be executed on every response
func (c *Collector) OnResponse(f ResponseCallback) {
    c.lock.Lock()
    if c.responseCallbacks == nil {
        c.responseCallbacks = make([]ResponseCallback, 0, 4)
    }
    c.responseCallbacks = append(c.responseCallbacks, f)
    c.lock.Unlock()
}


func (c *Collector) handleOnResponse(r *Response) {
    if c.debugger != nil {
        c.debugger.Event(createEvent("response", r.Request.ID, c.ID, map[string]string{
            "url":    r.Request.URL.String(),
            "status": http.StatusText(r.StatusCode),
        }))
    }
    for _, f := range c.responseCallbacks {
        f(r)
    }
}
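
A typical use of OnResponse is to work with the raw body before any parsing happens, for example dumping every fetched page to disk. A minimal sketch (the output directory is made up and must already exist; fmt and log are assumed to be imported):

    c.OnResponse(func(r *colly.Response) {
        fmt.Println("got", r.StatusCode, "from", r.Request.URL)
        // Response.Save writes r.Body to a file
        if err := r.Save("./pages/" + r.FileName()); err != nil {
            log.Println("save failed:", err)
        }
    })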

3. OnHTML & handleOnHTML

// OnHTML registers a function. Function will be executed on every HTML
// element matched by the GoQuery Selector parameter.
// GoQuery Selector is a selector used by https://github.com/PuerkitoBio/goquery
func (c *Collector) OnHTML(goquerySelector string, f HTMLCallback) {
    c.lock.Lock()
    if c.htmlCallbacks == nil {
        c.htmlCallbacks = make([]*htmlCallbackContainer, 0, 4)
    }
    c.htmlCallbacks = append(c.htmlCallbacks, &htmlCallbackContainer{
        Selector: goquerySelector,
        Function: f,
    })
    c.lock.Unlock()
}

// Parsing the HTML involves a bit more logic
func (c *Collector) handleOnHTML(resp *Response) error {
    if len(c.htmlCallbacks) == 0 || !strings.Contains(strings.ToLower(resp.Headers.Get("Content-Type")), "html") {
        return nil
    }
    doc, err := goquery.NewDocumentFromReader(bytes.NewBuffer(resp.Body))
    if err != nil {
        return err
    }
    if href, found := doc.Find("base[href]").Attr("href"); found {
        resp.Request.baseURL, _ = url.Parse(href)
    }
    for _, cc := range c.htmlCallbacks {
        i := 0
        doc.Find(cc.Selector).Each(func(_ int, s *goquery.Selection) {
            for _, n := range s.Nodes {
                e := NewHTMLElementFromSelectionNode(resp, s, n, i)
                i++
                if c.debugger != nil {
                    c.debugger.Event(createEvent("html", resp.Request.ID, c.ID, map[string]string{
                        "selector": cc.Selector,
                        "url":      resp.Request.URL.String(),
                    }))
                }
                cc.Function(e)
            }
        })
    }
    return nil
}
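
handleOnHTML calls your function once per node matched by the goquery selector, and the HTMLElement it passes in offers helpers such as ChildText, ChildAttr and ForEach for digging into that node. A sketch with made-up selectors:

    c.OnHTML("div.product", func(e *colly.HTMLElement) {
        name := e.ChildText("h2.name")                 // text of a child element
        price := e.ChildAttr("span.price", "data-usd") // attribute of a child element
        fmt.Println(name, price)

        // iterate over repeated children of this element
        e.ForEach("li.tag", func(_ int, tag *colly.HTMLElement) {
            fmt.Println("  tag:", tag.Text)
        })
    })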

4. OnXML & handleOnXML

// OnXML registers a function. Function will be executed on every XML
// element matched by the xpath Query parameter.
// xpath Query is used by https://github.com/antchfx/xmlquery
func (c *Collector) OnXML(xpathQuery string, f XMLCallback) {
    c.lock.Lock()
    if c.xmlCallbacks == nil {
        c.xmlCallbacks = make([]*xmlCallbackContainer, 0, 4)
    }
    c.xmlCallbacks = append(c.xmlCallbacks, &xmlCallbackContainer{
        Query:    xpathQuery,
        Function: f,
    })
    c.lock.Unlock()
}



func (c *Collector) handleOnXML(resp *Response) error {
    if len(c.xmlCallbacks) == 0 {
        return nil
    }
    contentType := strings.ToLower(resp.Headers.Get("Content-Type"))
    isXMLFile := strings.HasSuffix(strings.ToLower(resp.Request.URL.Path), ".xml") || strings.HasSuffix(strings.ToLower(resp.Request.URL.Path), ".xml.gz")
    if !strings.Contains(contentType, "html") && (!strings.Contains(contentType, "xml") && !isXMLFile) {
        return nil
    }

    if strings.Contains(contentType, "html") {
        doc, err := htmlquery.Parse(bytes.NewBuffer(resp.Body))
        if err != nil {
            return err
        }
    if e := htmlquery.FindOne(doc, "//base"); e != nil {
            for _, a := range e.Attr {
                if a.Key == "href" {
                    resp.Request.baseURL, _ = url.Parse(a.Val)
                    break
                }
            }
        }

        for _, cc := range c.xmlCallbacks {
            for _, n := range htmlquery.Find(doc, cc.Query) {
                e := NewXMLElementFromHTMLNode(resp, n)
                if c.debugger != nil {
                    c.debugger.Event(createEvent("xml", resp.Request.ID, c.ID, map[string]string{
                        "selector": cc.Query,
                        "url":      resp.Request.URL.String(),
                    }))
                }
                cc.Function(e)
            }
        }
    } else if strings.Contains(contentType, "xml") || isXMLFile {
        doc, err := xmlquery.Parse(bytes.NewBuffer(resp.Body))
        if err != nil {
            return err
        }

        for _, cc := range c.xmlCallbacks {
            xmlquery.FindEach(doc, cc.Query, func(i int, n *xmlquery.Node) {
                e := NewXMLElementFromXMLNode(resp, n)
                if c.debugger != nil {
                    c.debugger.Event(createEvent("xml", resp.Request.ID, c.ID, map[string]string{
                        "selector": cc.Query,
                        "url":      resp.Request.URL.String(),
                    }))
                }
                cc.Function(e)
            })
        }
    }
    return nil
}
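
OnXML works well for XML documents such as sitemaps; as the code above shows, handleOnXML recognizes them by Content-Type or by a .xml/.xml.gz path suffix. A sketch, assuming the target site exposes a standard sitemap.xml:

    c.OnXML("//urlset/url/loc", func(e *colly.XMLElement) {
        // e.Text is the text content of the matched node, i.e. a page URL
        fmt.Println("sitemap entry:", e.Text)
        e.Request.Visit(e.Text)
    })

    c.Visit("https://example.com/sitemap.xml") // placeholder URL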



5. OnError & handleOnError

This one can fire at several points: whenever err != nil, i.e. whenever the crawl hits a problem, it is invoked.

// OnError registers a function. Function will be executed if an error
// occurs during the HTTP request.
func (c *Collector) OnError(f ErrorCallback) {
    c.lock.Lock()
    if c.errorCallbacks == nil {
        c.errorCallbacks = make([]ErrorCallback, 0, 4)
    }
    c.errorCallbacks = append(c.errorCallbacks, f)
    c.lock.Unlock()
}


func (c *Collector) handleOnError(response *Response, err error, request *Request, ctx *Context) error {
    if err == nil && (c.ParseHTTPErrorResponse || response.StatusCode < 203) {
        return nil
    }
    if err == nil && response.StatusCode >= 203 {
        err = errors.New(http.StatusText(response.StatusCode))
    }
    if response == nil {
        response = &Response{
            Request: request,
            Ctx:     ctx,
        }
    }
    if c.debugger != nil {
        c.debugger.Event(createEvent("error", request.ID, c.ID, map[string]string{
            "url":    request.URL.String(),
            "status": http.StatusText(response.StatusCode),
        }))
    }
    if response.Request == nil {
        response.Request = request
    }
    if response.Ctx == nil {
        response.Ctx = request.Ctx
    }
    for _, f := range c.errorCallbacks {
        f(response, err)
    }
    return err
}
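
A typical OnError callback simply logs the failure; note that StatusCode can be zero when the request never reached the server. A minimal sketch (log is assumed to be imported):

    c.OnError(func(r *colly.Response, err error) {
        log.Println("request to", r.Request.URL, "failed with status", r.StatusCode, ":", err)
    })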

6. OnScraped & handleOnScraped

The last callback in the pipeline.

// OnScraped registers a function. Function will be executed after
// OnHTML, as a final part of the scraping.
func (c *Collector) OnScraped(f ScrapedCallback) {
    c.lock.Lock()
    if c.scrapedCallbacks == nil {
        c.scrapedCallbacks = make([]ScrapedCallback, 0, 4)
    }
    c.scrapedCallbacks = append(c.scrapedCallbacks, f)
    c.lock.Unlock()
}

func (c *Collector) handleOnScraped(r *Response) {
    if c.debugger != nil {
        c.debugger.Event(createEvent("scraped", r.Request.ID, c.ID, map[string]string{
            "url": r.Request.URL.String(),
        }))
    }
    for _, f := range c.scrapedCallbacks {
        f(r)
    }
}

A few other callback-registration methods are not listed here; have a look at the source if you are curious.

With all that covered, let's look back at the example:

    // On every a element which has href attribute call callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print link
        fmt.Printf("Link found: %q -> %s\n", e.Text, link)
        // Visit link found on page
        // Only those links are visited which are in AllowedDomains
        c.Visit(e.Request.AbsoluteURL(link))
    })

    // Before making a request print "Visiting ..."
    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL.String())
    })

Document parsing usually lives in the HTML and XML callbacks.

Following links to other pages

There are generally two cases: pages that share the same logic (for example, the next page of a listing) and pages with different logic (for example, a detail or sub page).

  1. Same-logic pages: after parsing the HTML/XML, build a new request. Here is the same-page case:
   // On every a element which has href attribute call callback
   c.OnHTML("a[href]", func(e *colly.HTMLElement) {
       // If attribute class is this long string return from callback
       // As this a is irrelevant
       if e.Attr("class") == "Button_1qxkboh-o_O-primary_cv02ee-o_O-md_28awn8-o_O-primaryLink_109aggg" {
           return
       }
       link := e.Attr("href")
       // If link start with browse or includes either signup or login return from callback
       if !strings.HasPrefix(link, "/browse") || strings.Index(link, "=signup") > -1 || strings.Index(link, "=login") > -1 {
           return
       }
       // start scaping the page under the link found
       e.Request.Visit(link)
   })

This HTML callback parses the page, extracts a URL, and calls e.Request.Visit(link), which is really e.Request.collector.Visit(link).
Let me explain:

func (c *Collector) fetch(u, method string, depth int, requestData io.Reader, ctx *Context, hdr http.Header, req *http.Request) error {
    defer c.wg.Done()
    if ctx == nil {
        ctx = NewContext()
    }
    request := &Request{
        URL:       req.URL,
        Headers:   &req.Header,
        Ctx:       ctx,
        Depth:     depth,
        Method:    method,
        Body:      requestData,
        collector: c, // as discussed above
        ID:        atomic.AddUint32(&c.requestCount, 1),
    }
    ...
}


// Visit continues Collector's collecting job by creating a
// request and preserves the Context of the previous request.
// Visit also calls the previously provided callbacks
func (r *Request) Visit(URL string) error {
    return r.collector.scrape(r.AbsoluteURL(URL), "GET", r.Depth+1, nil, r.Ctx, nil, true)
}

This pattern comes up constantly in real projects.
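
The classic same-logic case is a next-page link: keep revisiting with the same Collector and callbacks, and let MaxDepth (or your own counter) stop the recursion. A sketch with a made-up selector:

    c := colly.NewCollector(
        colly.MaxDepth(5), // stop following next-page links after 5 hops
    )

    c.OnHTML("a.next-page", func(e *colly.HTMLElement) {
        // same Collector, same callbacks; the depth is incremented automatically
        e.Request.Visit(e.Attr("href"))
    })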

  2. Sub pages with different logic
    colly is centred on the Collector, and everything is handled through its callbacks; a sub page needs a different set of callbacks, so it needs its own Collector:
    // Instantiate default collector
    c := colly.NewCollector(
        // Visit only domains: coursera.org, www.coursera.org
        colly.AllowedDomains("coursera.org", "www.coursera.org"),

        // Cache responses to prevent multiple download of pages
        // even if the collector is restarted
        colly.CacheDir("./coursera_cache"),
    )

    // Create another collector to scrape course details
    detailCollector := c.Clone()

    // Before making a request print "Visiting ..."
    c.OnRequest(func(r *colly.Request) {
        log.Println("visiting", r.URL.String())
    })

    // On every a HTML element which has name attribute call callback
    c.OnHTML(`a[name]`, func(e *colly.HTMLElement) {
        // Activate detailCollector if the link contains "coursera.org/learn"
        courseURL := e.Request.AbsoluteURL(e.Attr("href"))
        if strings.Index(courseURL, "coursera.org/learn") != -1 {
            // sub page (or any other page type)
            detailCollector.Visit(courseURL)
        }
    })
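
The cloned detailCollector keeps the shared settings (allowed domains, cache dir) but starts with empty callback lists, so the detail pages can be parsed with completely different logic. A sketch with a made-up selector for the course page:

    // callbacks registered on detailCollector only fire for URLs it visits
    detailCollector.OnHTML("div.course", func(e *colly.HTMLElement) {
        title := e.ChildText("h1")
        fmt.Println("course:", title, "->", e.Request.URL)
    })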

Persistence

The Collector has a store storage.Storage field, but that back-end holds the collector's own bookkeeping (visited requests and cookies) rather than scraped data.
If you want to persist scraped data, for example into a database, the easiest place is inside a callback.

Here is an example:

    c.OnHTML("#currencies-all tbody tr", func(e *colly.HTMLElement) {
        mysql.WriteObjectStrings([]string{
            e.ChildText(".currency-name-container"),
            e.ChildText(".col-symbol"),
            e.ChildAttr("a.price", "data-usd"),
            e.ChildAttr("a.volume", "data-usd"),
            e.ChildAttr(".market-cap", "data-usd"),
            e.ChildAttr(".percent-change[data-timespan=\"1h\"]", "data-percentusd"),
            e.ChildAttr(".percent-change[data-timespan=\"24h\"]", "data-percentusd"),
            e.ChildAttr(".percent-change[data-timespan=\"7d\"]", "data-percentusd"),
        })
    })
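
If you want the collector's own state (visited URLs, cookies) to survive restarts or be shared between processes, you can swap the default in-memory store with SetStorage. A sketch, assuming the github.com/gocolly/redisstorage adapter (imported as redisstorage) and a Redis server on localhost:

    storage := &redisstorage.Storage{
        Address:  "127.0.0.1:6379",
        Password: "",
        DB:       0,
        Prefix:   "my_crawler", // key prefix, made up for this example
    }
    if err := c.SetStorage(storage); err != nil {
        log.Fatal(err)
    }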

Summary

That's all. I have not shown how to use colly in a project, and I did not write any code of my own here; what I wanted to share is this style of architecture and its design patterns, which you can borrow in your own work. Most frameworks are written with this kind of thinking.
The picture below sums it up nicely: a crawler framework really boils down to these pieces.

General crawler framework architecture
