Slide 1

Scraping the Web with Scrapinghub for Startups

Slide 2

“Getting information off the Internet is like taking a drink from a fire hydrant.” – Mitchell Kapor

Slide 3

Who Uses Web Scraping

Web scraping is used by everyone from individuals to multinational companies:
● Monitor your competitors’ prices by scraping product information
● Detect fraudulent reviews and sentiment changes by scraping product reviews
● Track online reputation by scraping social media profiles
● Create apps that use public data
● Track SEO by scraping search engine results

Slide 4

Web Scraping Traffic

Slide 5

Scrapinghub

Our products empower our users to scrape data quickly and effectively using open source technologies. We offer:
● A cloud-based platform to help you scale your crawlers
● A smart proxy rotator to crawl the web even faster
● Professional Services to handle web scraping and data mining for you
● Off-the-shelf datasets so you can get data hassle-free

Slide 6

Scrapy

Scrapy is a web scraping framework that gets the dirty work of web crawling out of your way.

Benefits
● No platform lock-in: open source
● Very popular (13k+ ★ on GitHub)
● Battle tested
● Highly extensible
● Great documentation

Slide 7

Portia

Portia is a visual scraping tool that lets you extract data without writing any code.

Benefits
● No platform lock-in: open source
● Handles dynamic content generated by JavaScript
● Ideal for non-developers
● Extensible
● As easy as annotating a page

Slide 8

Portia

Slide 9

Large Scale Infrastructure

Meet Scrapy Cloud, our PaaS for web crawlers:
● Scalable: crawlers run on EC2 instances or dedicated servers
● Crawlera add-on
● Control your spiders from the command line, the API, or the web UI
● Machine learning integration: BigML and MonkeyLearn, among others
● No lock-in: use scrapyd to run Scrapy spiders on your own infrastructure

Slide 10

Broad Crawls

Frontera allows us to build large scale web crawlers in Python:
● Scrapy support out of the box
● Distribute and scale custom web crawlers across servers
● Crawl frontier framework: large scale URL prioritization logic
● Aduana prioritizes URLs based on link analysis (PageRank, HITS)
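Frontera is driven by a settings module. A minimal single-process configuration might look like the sketch below; the in-memory FIFO backend and the numeric values are assumptions for illustration, so check the Frontera documentation for the options in your version:

```python
# frontera_settings.py -- minimal single-process Frontera configuration
# (illustrative sketch; values are examples, not recommendations)

# Which crawl-frontier backend stores and prioritizes URLs.
# The in-memory FIFO backend is only suitable for small test crawls.
BACKEND = 'frontera.contrib.backends.memory.FIFO'

# Stop the crawl after this many requests (handy while testing)
MAX_REQUESTS = 2000

# How many URLs the frontier hands to the fetcher per call
MAX_NEXT_REQUESTS = 256
```

For a distributed broad crawl, the in-memory backend would be swapped for a persistent one and the components run as separate processes.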

Slide 11

Web Scraping Pitfalls

Slide 12

Bot Countermeasures

Websites use increasingly sophisticated techniques to protect themselves against bad bots. Unfortunately, these same technologies often block harmless bots from scraping content as well. Common countermeasures include:
● IP address-based bans
● JavaScript- and session-based countermeasures
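On the crawler side, the best first defense against being mistaken for a bad bot is simply to crawl politely. In Scrapy this is a matter of settings; the setting names below are standard Scrapy options, while the values are illustrative:

```python
# settings.py -- polite-crawling options in Scrapy (example values)

# Identify your bot honestly so site owners can contact you
USER_AGENT = 'mybot (+http://www.example.com/bot-info)'

# Respect robots.txt exclusion rules
ROBOTSTXT_OBEY = True

# Limit pressure on any single site
CONCURRENT_REQUESTS_PER_DOMAIN = 2
DOWNLOAD_DELAY = 1.0

# Let Scrapy adapt the delay to the server's response latency
AUTOTHROTTLE_ENABLED = True
```

Polite settings like these reduce the load you place on target sites, which in turn makes IP bans far less likely.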

Slide 13

Blocked Crawlers

Servers identify and block crawlers that continuously fire many requests at a website.

Solution: Meet Crawlera, our smart proxy rotator for web crawlers.
● Routes requests through a pool of 50k+ IPs
● Detects, logs, and handles bans
● Polite scraping: automatically throttles requests to websites
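With the scrapy-crawlera middleware package installed, routing a Scrapy project's traffic through Crawlera takes only a few settings; the API key below is a placeholder, and the middleware path follows the scrapy-crawlera plugin:

```python
# settings.py -- route a Scrapy project's requests through Crawlera
# (requires the scrapy-crawlera package; the API key is a placeholder)

DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawlera.CrawleraMiddleware': 610,
}

CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<your-api-key>'
```

Once enabled, requests transparently rotate through the Crawlera IP pool with no changes to the spider code itself.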

Slide 14

JavaScript in Web Pages

Websites often use JavaScript-generated dynamic content to render the page (single-page applications) or to avoid being scraped by naive crawlers. For simple cases, you can emulate the AJAX requests directly in Scrapy. For complex cases, you can use Splash:
● Works through an HTTP API
● Lua scripts simulate user interaction
● No lock-in: it’s an open source project!
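A Splash script is a short Lua function executed inside the Splash service (it is not standalone Lua). The sketch below loads a page, waits for JavaScript to run, and returns the rendered HTML; `splash:go`, `splash:wait`, and `splash:html` are part of the Splash scripting API, while the 0.5-second wait is an arbitrary example value:

```lua
-- Runs inside Splash, submitted through its HTTP API
function main(splash)
  assert(splash:go(splash.args.url))  -- load the target page
  splash:wait(0.5)                    -- give JavaScript time to render
  return {html = splash:html()}       -- return the rendered DOM
end
```

The returned HTML can then be parsed with ordinary Scrapy selectors, exactly as if the page had been static.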

Slide 15

Duplicate Content

The web is full of duplicate content, which negatively impacts:
● Storage
● Re-crawl performance
● Quality of data

Efficient near-duplicate detection algorithms, such as SimHash, estimate the similarity between web pages so that duplicated content is not scraped again.
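The core of SimHash fits in a few lines of Python. This is a simplified illustration of the idea, not Scrapinghub's implementation: real systems use smarter tokenization and token weighting:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """Compute a SimHash fingerprint: similar texts get nearby fingerprints."""
    counts = [0] * bits
    for token in text.lower().split():
        # Hash each token to a stable 64-bit value
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Each fingerprint bit is the majority vote across token hashes
    return sum(1 << i for i in range(bits) if counts[i] > 0)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means near-duplicate."""
    return bin(a ^ b).count("1")
```

Two pages are flagged as near-duplicates when the Hamming distance between their fingerprints falls below a chosen threshold (a common choice is 3 for 64-bit fingerprints).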

Slide 16

Near Duplicate Detection Uses

Compare prices of products scraped from different retailers by finding near duplicates in a dataset, and merge similar items to avoid duplicate entries:

Title: ThinkPad X220 Laptop Lenovo (i7 2.8GHz, 12.5 LED, 320 GB)
Store: Acme Store
Price: 599.89

Title: Lenovo Thinkpad Notebook Model X220 (i7 2.8, 12.5’’, HDD 320)
Store: XYZ Electronics
Price: 559.95

Name: Saint Fin Barre’s Cathedral
Summary: Begun in 1863, the cathedral was the first major work of the Victorian architect William Burges…
Location: 51.8944, -8.48064

Name: St. Finbarr’s Cathedral Cork
Summary: Designed by William Burges and consecrated in 1870, ...
Location: 51.894401550293, -8.48064041137695

Slide 17

Examples of Web Scraping Usage

Slide 18

Competitor Monitoring

E-commerce companies use web scraping to monitor competitors’ price fluctuations and ratings:
● Scrape online retailers
● Structure the data in a search engine or database
● Create an interface to search for products
● Run sentiment analysis to rank products

Slide 19

Monitor Resellers

We help electronics companies monitor the activities of their resellers:
● Track and watch out for stolen goods
● Detect pricing agreement violations
● Monitor customer support responses to complaints
● Run product line quality checks

Slide 20

Lead Generation

Mine scraped data to identify who to target in a company for your outbound sales campaigns:
● Locate possible leads in your target market
● Identify the right contacts within each one
● Augment the information you already have on them
● Use data science to guess their email addresses
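Guessing an email address often starts with enumerating common corporate address patterns. This toy function, whose name and pattern list are illustrative assumptions rather than a Scrapinghub tool, sketches the idea:

```python
def candidate_emails(first: str, last: str, domain: str) -> list:
    """Enumerate common corporate email patterns for a contact."""
    f, l = first.lower(), last.lower()
    local_parts = [
        f + "." + l,   # jane.doe
        f + l,         # janedoe
        f[0] + l,      # jdoe
        f[0] + "." + l,  # j.doe
        f,             # jane
    ]
    return [p + "@" + domain for p in local_parts]
```

For example, `candidate_emails("Jane", "Doe", "example.com")` starts with `jane.doe@example.com`. In practice each candidate would then be scored against signals such as verified addresses already scraped for the same domain.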

Slide 21

Human Resources

Reduce the time spent on HR tasks by creating a select pool of applicants:
● Mine scraped data to locate candidates
● Match requisite skills and background
● Spot and retain employees who are shopping for a new job

Slide 22

Thank you!

scrapinghub.com