Slide 1

Slide 1 text

Maintaining 200+ crawlers and still having time to sleep, by Victor "Frodo" Martinez

Slide 2

Slide 2 text

Victor "Frodo" Martinez, Software Developer. Twitter: vcrmartinez | Email: [email protected] | Blog: victormartinez.github.io

Slide 3

Slide 3 text

No content

Slide 4

Slide 4 text

No content

Slide 5

Slide 5 text

Information. People. Lawyers.

Slide 6

Slide 6 text

JUSTICE GAP?

Slide 7

Slide 7 text

Jusbrasil has been revolutionizing access to legal information

Slide 8

Slide 8 text

160M documents in total: 128M Official Gazette (D.O.) pages | 23M court decisions | 7M news articles | 975K laws

Slide 9

Slide 9 text

Crawler Team

Slide 10

Slide 10 text

Court decisions (jurisprudência): ~200 | Official gazettes: 3 | Legislation: ~180 | News: ~105

Slide 11

Slide 11 text

No content

Slide 12

Slide 12 text

200K news items extracted per month, 17K news items persisted per month

Slide 13

Slide 13 text

7K news items extracted per day, 700 news items persisted per day

Slide 14

Slide 14 text

Maintaining 200+ crawlers and still having time to sleep

Slide 15

Slide 15 text

No content

Slide 16

Slide 16 text

1. Spiders   2. Tests and Integration   3. Execution   4. Persistence

Slide 17

Slide 17 text

1. Spiders   2. Tests and Integration   3. Execution   4. Persistence, with Metrics and Monitoring across all stages

Slide 18

Slide 18 text

[Architecture diagram: Spiders go through tests and build, run on Scrapinghub, and emit items 1..n into the NewsPipeline inside the Jusbrasil network, with Monitoring and Metrics covering every stage]

Slide 19

Slide 19 text

1. Spiders

Slide 20

Slide 20 text

An open-source, collaborative framework for extracting data from websites in a fast, simple, and extensible way. scrapy.org
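
To make the idea concrete, here is a minimal sketch of a Scrapy spider; the spider name, URL, and CSS selectors are illustrative, not from the talk:

import scrapy


class NewsSpider(scrapy.Spider):
    # Hypothetical spider: crawls a news listing page and yields one dict per article.
    name = 'news'
    start_urls = ['http://example.com/news']

    def parse(self, response):
        for article in response.css('article'):
            yield {
                'title': article.css('h2::text').extract_first(),
                'url': article.css('a::attr(href)').extract_first(),
            }

Running it with $ scrapy crawl news sends each yielded dict down the item pipeline.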

Slide 21

Slide 21 text

No content

Slide 22

Slide 22 text

Scrapy architecture: Spiders, Items, Downloader, Item Pipeline

Slide 23

Slide 23 text

$ scrapy startproject semcomp

semcomp/
├── scrapy.cfg
└── semcomp/
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py
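
Inside the generated project, items.py declares the fields a spider can yield; this sketch uses hypothetical field names, since the talk does not show Jusbrasil's actual schema:

import scrapy


class NewsItem(scrapy.Item):
    # Illustrative fields for a news item.
    title = scrapy.Field()
    url = scrapy.Field()
    published_at = scrapy.Field()
    body = scrapy.Field()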

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

Pipelines are AWESOME

Slide 26

Slide 26 text

Check for duplicates
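
A duplicate-filtering pipeline can be a few lines, following the pattern from the Scrapy documentation; keying on the item URL and the in-memory set are assumptions of this sketch:

from scrapy.exceptions import DropItem


class DuplicatesPipeline(object):
    def __init__(self):
        # URLs seen so far in this crawl (in-memory; a shared store would be
        # needed to deduplicate across runs).
        self.seen = set()

    def process_item(self, item, spider):
        if item['url'] in self.seen:
            raise DropItem('Duplicate item found: %s' % item['url'])
        self.seen.add(item['url'])
        return item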

Slide 27

Slide 27 text

Save to MongoDB
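
Persisting items is another small pipeline; this sketch follows the MongoDB example from the Scrapy docs, with connection details and collection name as assumptions:

import pymongo


class MongoPipeline(object):
    def open_spider(self, spider):
        # Hypothetical connection settings; in practice these come from settings.py.
        self.client = pymongo.MongoClient('mongodb://localhost:27017')
        self.db = self.client['news']

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db['items'].insert_one(dict(item))
        return item

Pipelines are enabled and ordered through ITEM_PIPELINES in settings.py.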

Slide 28

Slide 28 text

No content

Slide 29

Slide 29 text

Concurrent Requests | Enable/Disable Cookies | Default Request Headers | AutoThrottle | Spider Middlewares | Downloader Middlewares | …
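
All of these knobs live in the project's settings.py; the values below are illustrative defaults, not the ones used at Jusbrasil:

# settings.py
CONCURRENT_REQUESTS = 16        # parallel requests Scrapy will issue
COOKIES_ENABLED = False         # disable cookies for stateless crawls
DEFAULT_REQUEST_HEADERS = {
    'Accept-Language': 'pt-BR',
}
AUTOTHROTTLE_ENABLED = True     # back off automatically when the server slows down
SPIDER_MIDDLEWARES = {}         # hooks around spider input/output
DOWNLOADER_MIDDLEWARES = {}     # hooks around requests/responses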

Slide 30

Slide 30 text

2. Tests and Integration

Slide 31

Slide 31 text

How do we ship this code to production?

Slide 32

Slide 32 text

1. $ git push origin master
2. $ git pull
3. Unit and regression tests: $ docker-compose -f docker-compose-test.yml rm
4. Build and push: $ docker-compose -f docker-compose-deploy.yml up

Slide 33

Slide 33 text

Unit and Regression Tests

Slide 34

Slide 34 text

3. Execution

Slide 35

Slide 35 text

No content

Slide 36

Slide 36 text

No content

Slide 37

Slide 37 text

No content

Slide 38

Slide 38 text

No content

Slide 39

Slide 39 text

No content

Slide 40

Slide 40 text

Scrapinghub Command Line Client

$ shub login
$ shub deploy

scrapinghub.yml:
projects:
  default: 12345
  prod: 33333
apikeys:
  default: 0bbf4f0f691e0d9378ae00ca7bcf7f0c

https://doc.scrapinghub.com/shub.html

Slide 41

Slide 41 text

scrapinghub/python-scrapinghub

Slide 42

Slide 42 text

>>> from scrapinghub import Connection
>>> conn = Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn
Connection('1q2w3e4r54t56ydy87u89u8')

Slide 43

Slide 43 text

>>> from scrapinghub import Connection
>>> conn = Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn
Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn.project_ids()
[123, 456]

Slide 44

Slide 44 text

>>> from scrapinghub import Connection
>>> conn = Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn
Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn.project_ids()
[123, 456]
>>> project = conn[123]
>>> job = project.job(u'123/1/2')

Slide 45

Slide 45 text

>>> from scrapinghub import Connection
>>> conn = Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn
Connection('1q2w3e4r54t56ydy87u89u8')
>>> conn.project_ids()
[123, 456]
>>> project = conn[123]
>>> job = project.job(u'123/1/2')
>>> for item in job.items():
...     # do something with item (it's just a dict)
>>> for logitem in job.log():
...     # logitem is a dict with logLevel, message, time

Slide 46

Slide 46 text

Scrapinghub API: https://doc.scrapinghub.com/api/overview.html

Slide 47

Slide 47 text

All items of a job:
$ curl -u APIKEY: https://storage.scrapinghub.com/items/53/34/7

All items of a project:
$ curl -u APIKEY: https://storage.scrapinghub.com/items/53

Slide 48

Slide 48 text

CLOUD-BASED CRAWLING, FREE: $0. Unlimited team members | unlimited projects | unlimited requests | 24-hour max job run time | 1 concurrent crawl | 7-day data retention | no credit card required

Slide 49

Slide 49 text

No content

Slide 50

Slide 50 text

4. Persistence

Slide 51

Slide 51 text

[Diagram: Scrapinghub emits items 1..n into the NewsPipeline inside the Jusbrasil network]

Slide 52

Slide 52 text

Extraction, Consumption, Query, Persistence

Slide 53

Slide 53 text

http://docs.celeryproject.org/en/latest/index.html

Slide 54

Slide 54 text

Task?

Slide 55

Slide 55 text

Tasks! Consumption, Query, Persistence

Slide 56

Slide 56 text

Celery and RabbitMQ: the client sends a message (task) to the message broker (RabbitMQ); workers pick up tasks and send their results to the backend, from which the client retrieves them
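
A minimal sketch of that setup, assuming RabbitMQ as the broker; the backend choice and the task body are illustrative, not from the talk:

from celery import Celery

app = Celery('news_pipeline',
             broker='amqp://guest@localhost//',   # RabbitMQ (message broker)
             backend='rpc://')                    # where workers send results


@app.task
def persist(item):
    # Placeholder body: the real task would write the item to storage.
    return item['url']

Calling persist.delay(item) publishes the task to RabbitMQ; a worker started with $ celery -A news_pipeline worker picks it up, and .get() on the returned AsyncResult fetches the result from the backend.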

Slide 57

Slide 57 text

[Diagram: Scrapinghub emits items 1..n into the NewsPipeline inside the Jusbrasil network]

Slide 58

Slide 58 text

Primitives: chain, group, chord, map & starmap, chunks
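
For instance, the consumption, query, and persistence stages can be wired together with chain and group; the tasks below are hypothetical stand-ins for the real ones:

from celery import Celery, chain, group

app = Celery('news_pipeline', broker='amqp://guest@localhost//')


@app.task
def consume(item):
    return item


@app.task
def query(item):
    return item


@app.task
def persist(item):
    return item


items = [{'url': 'http://example.com/1'}, {'url': 'http://example.com/2'}]

# chain runs the three stages in sequence, feeding each result to the next task;
# group runs one chain per item in parallel across the workers.
workflow = group(chain(consume.s(item), query.s(), persist.s()) for item in items)
result = workflow.apply_async()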

Slide 59

Slide 59 text

Metrics

Slide 60

Slide 60 text

No content

Slide 61

Slide 61 text

TIME SERIES?

Slide 62

Slide 62 text

Anatomy of a time-series point: measurement (cpu) | tag(s), key-value strings (region=us_west) | field (value=0.64) | time (2015-10-21T19:28:07.580664347Z)
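
That anatomy matches InfluxDB's data model; here is a sketch of writing such a point with the influxdb-python client, assuming InfluxDB is the store (host, port, and database name are illustrative):

from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='crawlers')

# One point: measurement + tags + fields + timestamp, as on the slide.
client.write_points([{
    'measurement': 'cpu',
    'tags': {'region': 'us_west'},
    'fields': {'value': 0.64},
    'time': '2015-10-21T19:28:07.580664347Z',
}])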

Slide 63

Slide 63 text

TIME

Slide 64

Slide 64 text

No content

Slide 65

Slide 65 text

Monitoring

Slide 66

Slide 66 text

Crawler status

Slide 67

Slide 67 text

Publication frequency
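
One hedged sketch of such a check: flag a source as stale when nothing has been persisted within its expected publication interval (the threshold, and fetching last_published from the metrics store, are assumptions):

from datetime import datetime, timedelta


def is_stale(last_published, expected_interval=timedelta(days=1)):
    # last_published would come from the metrics store (e.g. the latest point's time).
    return datetime.utcnow() - last_published > expected_interval


if is_stale(datetime(2016, 5, 1, 12, 0)):
    print('ALERT: source has not published within the expected window')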

Slide 68

Slide 68 text

What we have learned so far

Slide 69

Slide 69 text

1. Adopt the technology that fits your problem.

Slide 70

Slide 70 text

2. Metrics are important!

Slide 71

Slide 71 text

3. Monitor! Always!

Slide 72

Slide 72 text

4. Set priorities

Slide 73

Slide 73 text

5. Understand the impacts

Slide 74

Slide 74 text

6. DRY: Don't Repeat Yourself

Slide 75

Slide 75 text

7. Think about layout changes

Slide 76

Slide 76 text

8. Create a validation process

Slide 77

Slide 77 text

9. Be realistic! Your crawler will break at some point

Slide 78

Slide 78 text

10. Have fun!

Slide 79

Slide 79 text

Thank you! Victor "Frodo" Martinez, Software Developer. vcrmartinez | [email protected] | victormartinez.github.io